When using Universal AI / expert models, there is no built-in fallback mechanism in the model string (unlike the LLM smart routing). Instead, you implement fallback logic in your application code.

What is Fallback?

Fallback means automatically retrying a request with an alternative provider when the primary provider fails. This improves reliability by ensuring your application continues working even if one provider experiences downtime or errors.

Implementation Pattern

The approach is straightforward:
  1. Send a request to your primary provider
  2. If it fails (network error, provider outage, rate limit, etc.), catch the error
  3. Retry the same request with a fallback provider
Python
import requests

url = "https://api.edenai.run/v3/universal-ai"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

text_to_moderate = "Some text to check for harmful content."

# Define primary and fallback models
primary_model = "text/moderation/openai"
fallback_model = "text/moderation/google"

def call_universal_ai(model, text):
    response = requests.post(url, headers=headers, json={
        "model": model,
        "input": {"text": text}
    })
    response.raise_for_status()
    return response.json()

# Try primary, fall back if it fails
try:
    result = call_universal_ai(primary_model, text_to_moderate)
    print("Primary provider succeeded")
except requests.exceptions.RequestException as e:
    print(f"Primary provider failed: {e}. Trying fallback...")
    result = call_universal_ai(fallback_model, text_to_moderate)
    print("Fallback provider succeeded")

print(result)

Multiple Fallbacks

You can chain several fallback providers for even greater resilience:
Python
import requests

url = "https://api.edenai.run/v3/universal-ai"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

models = [
    "text/moderation/openai",
    "text/moderation/google",
    "text/moderation/microsoft"
]

result = None
for model in models:
    try:
        response = requests.post(url, headers=headers, json={
            "model": model,
            "input": {"text": "Text to moderate"}
        })
        response.raise_for_status()
        result = response.json()
        print(f"Success with {model}")
        break
    except requests.exceptions.RequestException as e:
        print(f"{model} failed: {e}")

if result is None:
    print("All providers failed")

Best Practices

  • Match the feature and subfeature — choose fallback providers that support the same feature and subfeature. For example, if your primary is text/moderation/openai, your fallback should also be a text/moderation/* provider. Use the Listing Models endpoint to discover which providers support a given feature.
  • Order by preference — place your most reliable or cost-effective provider first.
  • Log fallback events — track when fallbacks occur so you can identify recurring provider issues.
  • Set timeouts — add request timeouts so a slow provider triggers the fallback quickly rather than hanging.
  • Handle all providers failing — always have a final error path if every provider is unavailable.
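The practices above can be combined into a small reusable helper. This is a sketch, not part of the Eden AI API: the `call_with_fallback` and `make_caller` names and the 10-second timeout are illustrative assumptions.

```python
import requests

URL = "https://api.edenai.run/v3/universal-ai"
HEADERS = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

def call_with_fallback(callers):
    """Invoke each zero-argument callable in order; return the first success.

    Order callers by preference (most reliable or cost-effective first).
    Raises RuntimeError if every caller fails, giving the application a
    final error path.
    """
    errors = []
    for caller in callers:
        try:
            return caller()
        except Exception as exc:
            # Log the fallback event so recurring provider issues are visible.
            print(f"fallback: {getattr(caller, '__name__', caller)} failed: {exc}")
            errors.append(exc)
    raise RuntimeError(f"all providers failed: {errors}")

def make_caller(model, text, timeout=10):
    """Build a caller for one Universal AI model, with a request timeout
    so a slow provider triggers the fallback instead of hanging."""
    def caller():
        response = requests.post(
            URL, headers=HEADERS, timeout=timeout,
            json={"model": model, "input": {"text": text}},
        )
        response.raise_for_status()
        return response.json()
    caller.__name__ = model
    return caller

# With a valid API key, this returns the first provider's result and
# falls down the list on errors or timeouts:
# result = call_with_fallback([
#     make_caller("text/moderation/openai", "Text to moderate"),
#     make_caller("text/moderation/google", "Text to moderate"),
# ])
```

Because `call_with_fallback` only depends on zero-argument callables, the same helper works for any chain of providers, and the collected `errors` list makes it easy to report which providers failed and why.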