Eden AI handles fallback for Universal AI / expert model requests. If your primary provider fails (outage, rate limit, or error), Eden AI retries the request with the next provider in your fallback list, so you need no retry logic on your side.

Usage

Set your primary provider in the model field and add a fallbacks array with one or more backup providers. If the primary fails, Eden AI tries each fallback in order until one succeeds.
All providers (primary and fallbacks) must support the same feature and subfeature. Use the Listing Models endpoint to discover available providers.
import requests

url = "https://api.edenai.run/v3/universal-ai"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

payload = {
    # Primary provider, tried first.
    "model": "text/moderation/microsoft",
    # Backup providers, tried in order if the primary fails.
    "fallbacks": ["text/moderation/google"],
    "input": {"text": "Text to moderate"},
}

response = requests.post(url, headers=headers, json=payload)

print(response.status_code)
print(response.json())
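Because every provider in the list must share the same feature and subfeature, it can be worth validating the list before sending the request. Below is a minimal sketch of such a check; build_fallback_payload is a hypothetical helper name (not part of any Eden AI SDK), and it assumes the feature/subfeature is simply the path prefix of the model string, as in the example above.

```python
def build_fallback_payload(model, fallbacks, text):
    """Build a Universal AI request body with fallbacks.

    Illustrative helper, not part of any Eden AI SDK. Assumes model
    strings look like "feature/subfeature/provider" and raises if a
    fallback targets a different feature/subfeature than the primary.
    """
    feature = model.rsplit("/", 1)[0]  # e.g. "text/moderation"
    for fb in fallbacks:
        if fb.rsplit("/", 1)[0] != feature:
            raise ValueError(f"fallback {fb!r} is not a {feature} model")
    return {
        "model": model,
        "fallbacks": fallbacks,
        "input": {"text": text},
    }

payload = build_fallback_payload(
    "text/moderation/microsoft",
    ["text/moderation/google"],
    "Text to moderate",
)
```

The resulting dict can then be passed as the json= argument to requests.post exactly as in the example above.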

Next Steps

LLM Fallback

Built-in fallback for LLM chat requests

Listing Models

Discover available providers for each feature