Migration Tutorial: Migrating to /v2/llm/chat

Eden AI has released a new unified and OpenAI-compatible endpoint for all LLM and multimodal chat models:

πŸ”— New endpoint: https://api.edenai.run/v2/llm/chat

The previous endpoints /v2/text/chat and /v2/multimodal/chat are deprecated and will no longer be supported after June 1st, 2025.

Follow this quick guide to migrate your existing code.

βœ… Before – Old Endpoint (/v2/text/chat)

```python
import json
import requests

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer <YOUR_API_KEY>"
}

url = "https://api.edenai.run/v2/text/chat"

payload = {
    "providers": "openai/o1",
    "text": "Describe the movie Dune",
    "chat_global_action": "Act as an assistant",
    "previous_history": [],
    "temperature": 1,
    "max_tokens": 1000
}

response = requests.post(url, json=payload, headers=headers)
result = json.loads(response.text)
print(result)
```

βœ… Before – Old Endpoint (/v2/multimodal/chat)

```python
import json
import requests

headers = {
    "Authorization": "Bearer <YOUR_API_KEY>",
    "Content-Type": "application/json"
}

url = "https://api.edenai.run/v2/multimodal/chat"

image_base64 = "/9j/4AAQSkZ..."

payload = {
    "providers": "openai/gpt-4-turbo",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What's happening in this image?"
                },
                {
                    "type": "image",
                    "image": {
                        "image_base64": image_base64,
                        "filename": "filename.jpg"
                    }
                }
            ]
        }
    ],
    "temperature": 0.7,
    "max_tokens": 1000
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())
```

πŸ†• After – New Endpoint (/v2/llm/chat)

```python
import json
import requests

headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer <YOUR_API_KEY>"
}

url = "https://api.edenai.run/v2/llm/chat"

payload = {
    "model": "openai.gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Describe the movie Dune"}
    ],
    "temperature": 1,
    "max_tokens": 1000,
    "fallback_model": "mistral.mixtral-8x7b"
}

response = requests.post(url, json=payload, headers=headers)
result = json.loads(response.text)
print(result)
```

βœ… Example messages Object (with multimodal)

```json
"messages": [{
  "role": "user",
  "content": [
    {"type": "text", "text": "What's in this image?"},
    {
      "type": "image_url",
      "image_url": {
        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
      }
    }
  ]
}]
```
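Putting it together, a request equivalent to the old multimodal example can be assembled for the new endpoint. This is a sketch: the helper name and the model id below are illustrative, not taken from the official reference.

```python
def build_multimodal_request(api_key, image_url, question):
    """Build the URL, headers, and payload for a multimodal /v2/llm/chat call.

    Sketch based on the messages example in this guide; the model id
    ("openai.gpt-4-turbo") is illustrative.
    """
    url = "https://api.edenai.run/v2/llm/chat"
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    payload = {
        "model": "openai.gpt-4-turbo",  # illustrative model id
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 1000,
    }
    return url, headers, payload

# To actually send it:
# url, headers, payload = build_multimodal_request(API_KEY, IMAGE_URL, "What's in this image?")
# response = requests.post(url, json=payload, headers=headers)
```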

πŸ”„ Key Changes

| Old Parameter | New Equivalent |
| --- | --- |
| `providers` | `model` (e.g. `"openai.gpt-3.5-turbo"`) |
| `text` | `messages` array with role/content |
| `chat_global_action` | system message (in `messages`) |
| `previous_history` | Prior turns as `messages` with proper roles |
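The mapping above can be sketched as a small conversion helper. This is illustrative only: the helper name is made up, and the shape assumed for `previous_history` entries (role/message pairs) is an assumption based on this guide's examples, not a confirmed schema.

```python
def convert_payload(old):
    """Convert an old /v2/text/chat payload to the /v2/llm/chat schema.

    Sketch only: assumes the old payload uses the fields shown in this
    guide (providers, text, chat_global_action, previous_history), and
    that history entries carry "role" and "message" keys (an assumption).
    """
    messages = []
    # chat_global_action becomes a system message
    if old.get("chat_global_action"):
        messages.append({"role": "system", "content": old["chat_global_action"]})
    # previous_history turns become user/assistant messages
    for turn in old.get("previous_history", []):
        messages.append({"role": turn["role"], "content": turn["message"]})
    # the text field becomes the final user message
    messages.append({"role": "user", "content": old["text"]})

    new = {
        # old "providers" used slashes ("openai/o1"); new "model" ids use dots
        "model": old["providers"].replace("/", "."),
        "messages": messages,
    }
    for key in ("temperature", "max_tokens"):
        if key in old:
            new[key] = old[key]
    return new
```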

🟩 New Endpoint Parameters

🧠 Required Parameters

| Name | Type | Description |
| --- | --- | --- |
| `model` | string | ID of the model to use (e.g., `"gpt-4"`, `"claude-3-5-sonnet-latest"`). |
| `messages` | array of objects | List of user/assistant messages (with rich content structure, including text and media). |

🧩 messages Structure

Each message must follow this structure:

| Key | Type | Description |
| --- | --- | --- |
| `role` | string | `"user"`, `"assistant"`, or `"system"` |
| `content` | array of content blocks | Each block has `type` and content keys. See below. |

πŸ”Ή content Blocks

| `type` | Required content format |
| --- | --- |
| `text` | `{ "text": "..." }` |
| `media_url` | `{ "media_url": { "url": "..." } }` |
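For instance, a single user message mixing both block types might look like this (the URL is a placeholder):

```python
# A user message combining a text block and a media_url block,
# following the content-block table above. The URL is a placeholder.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this picture."},
        {
            "type": "media_url",
            "media_url": {"url": "https://example.com/picture.jpg"},
        },
    ],
}
```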

πŸ†• Optional Parameters (New Endpoint Only)

| Name | Type | Description |
| --- | --- | --- |
| `reasoning_effort` | string | `"low"`, `"medium"`, or `"high"` |
| `metadata` | array of objects | List of `{ key, value }` pairs to pass metadata |
| `frequency_penalty` | float (-2 to 2) | Penalizes frequent tokens to reduce repetition |
| `logit_bias` | object | Adjust likelihoods for specific token IDs |
| `logprobs` | boolean | Return log probabilities of top tokens |
| `top_logprobs` | integer (0–20) | Number of top logprobs per token to return |
| `max_completion_tokens` | integer (β‰₯1) | Limit on the number of tokens in the completion |
| `n` | integer (β‰₯1) | Number of completion alternatives to return |
| `modalities` | array of strings | List of allowed input/output formats, e.g., `["text", "image", "audio"]` |
| `prediction` | object | Custom object to store prediction data |
| `audio` | object | Audio metadata (e.g., language, format) |
| `presence_penalty` | float (-2 to 2) | Encourage/discourage new topics |
| `response_format` | object | Response structure, e.g., JSON output with schema |
| `seed` | integer | Deterministic random seed |
| `service_tier` | string | `"auto"` or `"default"` |
| `stop` | array of strings | Strings that stop the generation |
| `stream` | boolean | Enable streaming responses |
| `stream_options` | object | Streaming config, e.g., `{ "include_usage": true }` |
| `temperature` | float (0–2) | Controls creativity (higher = more random) |
| `top_p` | float (0–1) | Controls token pool diversity |
| `tools` | array of objects | Tool definitions available to the model |
| `tool_choice` | string | `"auto"`, `"none"`, or specific tool name |
| `parallel_tool_calls` | boolean | Allow multiple tools to be called at once |
| `user` | string | Optional user ID for tracking |
| `function_call` | string | `"auto"`, `"none"`, or specific function name |
| `functions` | array of objects | Function definitions for the model |
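To guard against typos in these optional fields, a request body can be assembled with a small validating helper. This is a sketch: the helper is hypothetical and whitelists only a subset of the parameters listed above.

```python
def build_chat_payload(model, messages, **options):
    """Assemble a /v2/llm/chat request body from required and optional fields.

    Hypothetical helper based on the parameter tables in this guide;
    only a subset of the optional parameters is whitelisted here.
    """
    allowed = {
        "temperature", "top_p", "max_tokens", "max_completion_tokens", "n",
        "frequency_penalty", "presence_penalty", "stop", "seed",
        "stream", "response_format", "fallback_model",
    }
    payload = {"model": model, "messages": messages}
    for key, value in options.items():
        if key not in allowed:
            raise ValueError(f"unsupported option: {key}")
        payload[key] = value
    return payload

# Illustrative values only
payload = build_chat_payload(
    "openai.gpt-3.5-turbo",
    [{"role": "user", "content": "Describe the movie Dune"}],
    temperature=0.7,
    stop=["\n\n"],
    seed=42,
)
```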

πŸ“š Additional Notes

- The new endpoint follows OpenAI’s chat format for easy migration from OpenAI tools or SDKs.
- You can now specify a fallback_model to automatically retry with another model if the primary one fails.
- It supports both text and multimodal models in a single, unified schema.

πŸ“– See full documentation: https://docs.edenai.co/reference/chat