
OpenAI-Compatible Chat Completions

Build conversational AI applications using Eden AI’s OpenAI-compatible chat completions endpoint.

Overview

Eden AI V3 provides full OpenAI API compatibility with multi-provider support. The endpoint follows OpenAI's exact request and response format, making it a drop-in replacement.

Endpoint:
POST /v3/llm/chat/completions
Note: Streaming is optional. When enabled, responses are delivered via Server-Sent Events (SSE). See Streaming Responses for streaming examples.
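When streaming is enabled, each SSE event carries one JSON chunk on a `data:` line, terminated by `data: [DONE]`. A minimal sketch of a parser for that framing (this assumes OpenAI-style SSE framing with `choices[0].delta` chunks, as in OpenAI's streaming format; the helper name is our own):

```python
import json

def parse_sse_chunk(line: str):
    """Parse one SSE line; return the JSON payload, or None for blanks/[DONE]."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return None
    return json.loads(data)

# Hedged usage with requests (assumes stream=True delivers SSE as noted above):
# with requests.post(url, headers=headers, json={**payload, "stream": True}, stream=True) as r:
#     for raw in r.iter_lines(decode_unicode=True):
#         chunk = parse_sse_chunk(raw or "")
#         if chunk:
#             print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
```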

Model Format

Specify the model with a simple provider/model string:
provider/model
Examples:
  • openai/gpt-4
  • anthropic/claude-sonnet-4-5
  • google/gemini-pro
  • cohere/command-r-plus
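If you route requests client-side, the two halves of the model string can be separated with a one-line helper (a sketch; the function name is our own):

```python
def split_model(model_string: str):
    """Split a 'provider/model' string into its provider and model parts."""
    provider, _, model = model_string.partition("/")
    if not model:
        raise ValueError("expected 'provider/model' format")
    return provider, model
```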

Basic Chat Completion

import requests

url = "https://api.edenai.run/v3/llm/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "openai/gpt-4",
    "messages": [
        {"role": "user", "content": "Hello! How are you?"}
    ]
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(result["choices"][0]["message"]["content"])

Multi-Turn Conversations

Build conversations with message history:
import requests

url = "https://api.edenai.run/v3/llm/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "anthropic/claude-sonnet-4-5",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
        {"role": "user", "content": "What's the population?"}
    ]
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(result["choices"][0]["message"]["content"])
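Because the API is stateless, your client keeps the history and sends the full message list on every call. A sketch of that bookkeeping (the helper names are our own; url/headers as in the example above):

```python
def with_user_turn(history, user_msg):
    """Return a new message list with the next user turn appended."""
    return history + [{"role": "user", "content": user_msg}]

def with_assistant_turn(history, assistant_reply):
    """Return a new message list recording the model's reply."""
    return history + [{"role": "assistant", "content": assistant_reply}]

# Hedged loop using the helpers above:
# history = []
# for question in ["What is the capital of France?", "What's the population?"]:
#     history = with_user_turn(history, question)
#     resp = requests.post(url, headers=headers,
#                          json={"model": "anthropic/claude-sonnet-4-5", "messages": history})
#     history = with_assistant_turn(history, resp.json()["choices"][0]["message"]["content"])
```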

System Messages

Guide the model’s behavior with system messages:
import requests

url = "https://api.edenai.run/v3/llm/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "openai/gpt-4",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant that speaks like a pirate."
        },
        {
            "role": "user",
            "content": "Tell me about artificial intelligence."
        }
    ]
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(result["choices"][0]["message"]["content"])

Temperature and Parameters

Control response creativity and behavior:
import requests

url = "https://api.edenai.run/v3/llm/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "openai/gpt-4",
    "messages": [
        {"role": "user", "content": "Write a creative story about a robot."}
    ],
    "temperature": 0.9,  # Higher = more creative (0-2)
    "max_tokens": 500    # Limit response length
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(result["choices"][0]["message"]["content"])

Available Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model | string | Required | Model string (e.g., openai/gpt-4) |
| messages | array | Required | Conversation messages |
| stream | boolean | false | Enable streaming (uses SSE when true) |
| temperature | float | 1.0 | Randomness (0 to 2) |
| max_tokens | integer | - | Maximum response tokens |
| top_p | float | 1.0 | Nucleus sampling threshold |
| frequency_penalty | float | 0.0 | Penalize repeated tokens (-2 to 2) |
| presence_penalty | float | 0.0 | Penalize topic repetition (-2 to 2) |
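The documented ranges can be checked client-side before a request ever leaves your code. A hedged sketch mirroring the table above (the helper and its validation are our own, not part of the API):

```python
def build_payload(model, messages, **params):
    """Build a chat-completions payload, validating documented parameter ranges."""
    ranges = {
        "temperature": (0, 2),
        "top_p": (0, 1),
        "frequency_penalty": (-2, 2),
        "presence_penalty": (-2, 2),
    }
    for name, (lo, hi) in ranges.items():
        if name in params and not (lo <= params[name] <= hi):
            raise ValueError(f"{name} must be between {lo} and {hi}")
    return {"model": model, "messages": messages, **params}
```

This catches out-of-range values locally instead of waiting for an API error.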

Response Format

Standard JSON response:
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 15,
    "total_tokens": 27
  }
}
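The fields you typically act on are the message content, the finish reason, and token usage. A small extraction helper against the response shape above (the function name is our own):

```python
def summarize_response(result: dict) -> dict:
    """Pull the commonly used fields out of a chat-completion response."""
    choice = result["choices"][0]
    return {
        "content": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": result["usage"]["total_tokens"],
    }
```

Checking `finish_reason` is worthwhile: a value of "length" (rather than "stop") means the reply was cut off by max_tokens.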

Available Models

OpenAI

  • openai/gpt-4
  • openai/gpt-4-turbo
  • openai/gpt-3.5-turbo

Anthropic

  • anthropic/claude-sonnet-4-5
  • anthropic/claude-opus-4-5

Google

  • google/gemini-pro
  • google/gemini-1.5-pro

Cohere

  • cohere/command-r-plus
  • cohere/command-r

Meta

  • meta/llama-3-70b
  • meta/llama-3-8b

OpenAI Python SDK Integration

Use Eden AI with the OpenAI SDK:
from openai import OpenAI

# Point to Eden AI endpoint
client = OpenAI(
    api_key="YOUR_EDEN_AI_API_KEY",
    base_url="https://api.edenai.run/v3/llm"
)

# Use any provider through OpenAI SDK
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
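The same client can also stream: the OpenAI SDK yields chunks whose `choices[0].delta.content` fields are joined to form the full reply. A hedged sketch (this assumes the endpoint honors stream=True through the SDK, per the streaming note above; the helper is our own):

```python
def join_deltas(deltas):
    """Concatenate streamed content deltas (skipping None/empty) into the reply text."""
    return "".join(d for d in deltas if d)

# Hedged SDK streaming usage, with `client` configured as above:
# stream = client.chat.completions.create(
#     model="openai/gpt-4",
#     messages=[{"role": "user", "content": "Hello!"}],
#     stream=True,
# )
# parts = [chunk.choices[0].delta.content for chunk in stream]
# print(join_deltas(parts))
```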

Next Steps