Eden AI is a unified AI gateway that gives you access to 200+ AI models from 50+ providers through a single API. Instead of integrating each AI provider separately, you connect once to Eden AI and get instant access to the entire AI ecosystem.

Two Endpoints, Full Coverage

Eden AI V3 organizes all AI capabilities under two main endpoints:
Endpoint                      | Purpose                                                       | Model Format
POST /v3/llm/chat/completions | LLMs (chat, text generation, vision, tool calling)            | provider/model
POST /v3/universal-ai         | Expert models (OCR, text analysis, image, translation, audio) | feature/subfeature/provider[/model]
Both endpoints share the same base URL and authentication.
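The two model-reference formats above are plain slash-separated strings. As an illustration of how the `feature/subfeature/provider[/model]` format breaks down, here is a small helper (the feature and provider values used below are examples, not an exhaustive list):

```python
def parse_model_ref(ref: str) -> dict:
    """Split a universal-ai reference of the form
    feature/subfeature/provider[/model] into its parts."""
    parts = ref.split("/")
    out = {"feature": parts[0], "subfeature": parts[1], "provider": parts[2]}
    if len(parts) == 4:  # the trailing model segment is optional
        out["model"] = parts[3]
    return out

# Without the optional model segment:
parse_model_ref("ocr/invoice_parser/amazon")
```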

Base URL

https://api.edenai.run/v3

Authentication

All requests require a Bearer token in the Authorization header:
Authorization: Bearer YOUR_API_KEY
Get your API key from the Eden AI dashboard.
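Since both endpoints use the same authentication, the headers can be built once and reused across requests (the key value below is a placeholder):

```python
API_KEY = "YOUR_API_KEY"  # obtained from the Eden AI dashboard

# Shared headers for every Eden AI request
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```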

Key Benefits

Single Integration

Connect once to Eden AI and access every supported AI provider. No need to manage separate SDKs or API keys for each provider.

Unified Billing

One invoice for all your AI usage. Every API response includes a cost field in USD so you always know exactly what you spent.

OpenAI-Compatible Format

The LLM endpoint follows the OpenAI chat completions format. Use any OpenAI-compatible SDK or tool and just change the base URL.

Multi-Provider Support

Switch between providers by changing a single string in your request. No code changes needed.
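A minimal sketch of what this looks like in practice: the request body stays identical and only the `provider/model` string changes (the model names below are illustrative):

```python
def chat_body(model: str, prompt: str) -> dict:
    """Build a /v3/llm/chat/completions request body.
    Swapping providers only changes the model string."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same code path, different provider -- only one string differs.
openai_body = chat_body("openai/gpt-4", "Hello!")
anthropic_body = chat_body("anthropic/claude-3-5-sonnet", "Hello!")
```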

Persistent File Storage

Upload files once and reference them by UUID across multiple requests and features.

Built-in API Discovery

Explore available models, features, and input schemas programmatically through the listing endpoints.

Pay-Per-Use Pricing

Eden AI uses a pay-per-use model. You only pay for the API calls you make. Every response includes a cost field showing the exact charge in USD:
{
  "status": "success",
  "cost": 0.0015,
  "output": { ... }
}
No minimum commitments, no upfront fees. See Plans & Pricing for details on available tiers.
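Because every response carries a `cost` field, tracking spend is a matter of summing it across calls. A sketch, using sample responses shaped like the JSON above:

```python
import json

# Sample API responses (cost is reported in USD)
raw_responses = [
    '{"status": "success", "cost": 0.0015, "output": {}}',
    '{"status": "success", "cost": 0.0008, "output": {}}',
]

# Accumulate the per-call charges
total = sum(json.loads(r)["cost"] for r in raw_responses)
print(f"total spend: ${total:.4f}")
```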

Quick Example

import requests

# Minimal LLM call through the unified gateway
response = requests.post(
    "https://api.edenai.run/v3/llm/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
response.raise_for_status()  # surface HTTP errors early
print(response.json()["choices"][0]["message"]["content"])
If you were a user before January 2026, you still have access to the previous version at old-app.edenai.run. The old version will continue to be supported until the end of 2026. Documentation for the previous version is available at old-docs.edenai.co.

Next Steps