Eden AI automatically handles fallback for LLM requests. If your primary model fails (provider outage, rate limit, error), Eden AI retries with the next model in your fallback list — no retry logic needed on your side.

Usage

Add a fallbacks array to your request with one or more backup models. If the primary model fails, Eden AI tries each fallback in order until one succeeds.
All models (primary and fallbacks) must be valid models listed on the Models page.
import requests

url = "https://api.edenai.run/v3/llm/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

payload = {
    "model": "google/gemini-2.5-pro",   # primary model
    "fallbacks": ["openai/gpt-4o"],     # tried in order if the primary fails
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "hi"}]}
    ],
    "reasoning_effort": "none",
}

response = requests.post(url, headers=headers, json=payload)

print(response.status_code)
print(response.json())
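
If you want to know whether a fallback actually served the request, you can compare the model you asked for with the model named in the response. This is a minimal sketch, assuming the response body echoes the served model in a "model" field (as OpenAI-compatible chat completion responses do); the helper name is hypothetical:

```python
def served_by_fallback(payload: dict, response_body: dict) -> bool:
    """Return True if the request was answered by a model other than the primary.

    Assumes the response body includes a "model" field naming the model
    that actually produced the completion.
    """
    served = response_body.get("model")
    return served is not None and served != payload["model"]


# Example: the primary failed and the fallback answered instead.
payload = {"model": "google/gemini-2.5-pro", "fallbacks": ["openai/gpt-4o"]}

print(served_by_fallback(payload, {"model": "openai/gpt-4o"}))          # True
print(served_by_fallback(payload, {"model": "google/gemini-2.5-pro"}))  # False
```

This check is purely informational; the retries themselves happen on Eden AI's side, so no client-side retry loop is needed.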

Next Steps

Expert Model Fallback

Fallback for Universal AI / expert model requests

Models

Browse available LLM models