Eden AI V3 provides two main endpoints. Both share the same base URL (https://api.edenai.run/v3) and authentication (Authorization: Bearer YOUR_API_KEY), but they serve different purposes.

At a Glance

|                       | LLM Endpoint                                 | Universal AI Endpoint                                     |
| --------------------- | -------------------------------------------- | --------------------------------------------------------- |
| URL                   | POST /v3/llm/chat/completions                | POST /v3/universal-ai                                     |
| Model Format          | provider/model                               | feature/subfeature/provider[/model]                       |
| Use Cases             | Chat, text generation, vision, tool calling  | OCR, text analysis, image processing, translation, audio  |
| Response Format       | OpenAI-compatible (choices, usage)           | Unified (status, cost, output)                            |
| Streaming             | Yes (SSE)                                    | No                                                        |
| OpenAI SDK Compatible | Yes                                          | No                                                        |

LLM Endpoint

The LLM endpoint follows the OpenAI chat completions format, making it a drop-in replacement for any OpenAI-compatible integration. Best for:
  • Conversational AI and chatbots
  • Text generation and completion
  • Vision and multimodal analysis (analyzing images with LLMs)
  • Tool/function calling
  • Any workflow that uses the OpenAI SDK
Model format: provider/model
Examples: openai/gpt-4, anthropic/claude-sonnet-4-5, google/gemini-2.5-flash
import requests

url = "https://api.edenai.run/v3/llm/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "openai/gpt-4",
    "messages": [
        {"role": "user", "content": "Summarize the benefits of cloud computing."}
    ]
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(result["choices"][0]["message"]["content"])
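
For token-by-token output, the LLM endpoint also supports streaming over SSE. The sketch below assumes the OpenAI streaming convention implied by the endpoint's OpenAI compatibility: a `"stream": True` request flag and incremental `delta` chunks terminated by a `[DONE]` sentinel. Verify the exact chunk shape against the API reference before relying on it.

```python
import json

def parse_sse_line(line: str):
    """Parse one Server-Sent Events line into a JSON chunk.

    Returns None for blank lines, comments, non-data lines, and the
    terminating "[DONE]" sentinel used by OpenAI-style streams.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return None
    return json.loads(data)

# Streaming request (sketch -- needs a valid API key to actually run):
#
# import requests
# payload = {
#     "model": "openai/gpt-4",
#     "messages": [{"role": "user", "content": "Hello"}],
#     "stream": True,  # assumed OpenAI-style flag
# }
# with requests.post(
#     "https://api.edenai.run/v3/llm/chat/completions",
#     headers={"Authorization": "Bearer YOUR_API_KEY"},
#     json=payload,
#     stream=True,
# ) as response:
#     for raw in response.iter_lines(decode_unicode=True):
#         chunk = parse_sse_line(raw)
#         if chunk:
#             print(chunk["choices"][0]["delta"].get("content", ""),
#                   end="", flush=True)
```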

Universal AI Endpoint

The Universal AI endpoint handles specialized AI tasks through a single URL. You specify the task, provider, and optionally the model in the model string. Best for:
  • Text analysis (moderation, AI detection, NER, sentiment)
  • OCR and document parsing (invoices, IDs, resumes)
  • Image processing (generation, object detection, face detection)
  • Translation
  • Audio (speech-to-text, text-to-speech)
Model format: feature/subfeature/provider[/model]
Examples: text/moderation/openai, ocr/financial_parser/google, image/generation/openai/dall-e-3
import requests

url = "https://api.edenai.run/v3/universal-ai"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "text/moderation/openai",
    "input": {
        "text": "Content to analyze for moderation"
    }
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(f"Status: {result['status']}, Cost: ${result['cost']}")
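
Because the feature/subfeature/provider[/model] string packs several values into one field, it can be worth validating it client-side before sending a request. A minimal sketch (the helper name is ours, not part of the API):

```python
def parse_universal_model(model: str) -> dict:
    """Split a Universal AI model string into its components.

    Format: feature/subfeature/provider[/model] -- the trailing
    model segment is optional.
    """
    parts = model.split("/")
    if len(parts) not in (3, 4):
        raise ValueError(
            f"Expected 3 or 4 segments, got {len(parts)}: {model!r}")
    feature, subfeature, provider = parts[:3]
    return {
        "feature": feature,
        "subfeature": subfeature,
        "provider": provider,
        "model": parts[3] if len(parts) == 4 else None,
    }

print(parse_universal_model("image/generation/openai/dall-e-3"))
# {'feature': 'image', 'subfeature': 'generation', 'provider': 'openai', 'model': 'dall-e-3'}
```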

When to Use Which

Use the LLM Endpoint when...

  • You need conversational AI or text generation
  • You want OpenAI SDK compatibility
  • You need streaming responses
  • You are doing vision analysis with an LLM
  • You need tool/function calling

Use Universal AI when...

  • You need OCR or document parsing
  • You need text moderation or analysis
  • You want image generation or detection
  • You need translation
  • You need speech-to-text or text-to-speech
Not sure which endpoint handles your task? Use the listing models endpoint to discover all available features and providers.

Response Format Comparison

LLM response (OpenAI-compatible):
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "openai/gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Cloud computing offers..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 85,
    "total_tokens": 97
  }
}
Universal AI response:
{
  "status": "success",
  "cost": 0.0001,
  "provider": "openai",
  "feature": "text",
  "subfeature": "moderation",
  "output": {
    // Feature-specific results
  }
}
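
Since the two endpoints return different shapes, code that calls both may want a single accessor. A convenience sketch based on the sample bodies above (our helper, not an official one; real error responses may carry more detail than a non-"success" status):

```python
def extract_result(response: dict):
    """Pull the useful payload out of either response shape."""
    # LLM responses (OpenAI-compatible) carry "choices".
    if "choices" in response:
        return response["choices"][0]["message"]["content"]
    # Universal AI responses carry "status" and "output".
    if response.get("status") == "success":
        return response["output"]
    raise ValueError(f"Failed or unrecognized response: {response}")
```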

Next Steps