
Getting Started with Eden AI V3 API

Welcome to the Eden AI V3 API! This guide will help you understand V3’s unified architecture and make your first API calls.

Overview

Eden AI V3 introduces a unified approach to AI API integration with:
  • Universal AI Endpoint - Single endpoint for all non-LLM features
  • OpenAI-Compatible Format - Drop-in replacement for OpenAI’s API
  • Persistent File Storage - Upload once, use in multiple requests
  • Built-in API Discovery - Explore features and schemas programmatically
  • Mandatory Streaming - All LLM responses use Server-Sent Events (SSE)
If you were a user before January 5, 2026, you still have access to the previous version at https://old-app.edenai.run/. We’ll continue supporting the old version until the end of 2026. If you’re looking for its documentation, you can find it here.

V3 Architecture

V3 uses a model string format instead of separate provider parameters:
feature/subfeature/provider[/model]
Examples:
  • text/ai_detection/openai/gpt-4
  • ocr/financial_parser/google
  • image/generation/openai/dall-e-3
  • openai/gpt-4 (for LLM endpoints)
This unified format lets a single endpoint route any request to the right feature, provider, and model.

Prerequisites

Before you start, you’ll need:
  1. API Token - Get your token from the Eden AI dashboard
  2. HTTP Client - Use cURL, Python requests, or any HTTP client
  3. Credits - Ensure your account has sufficient credits

Base URL

All V3 API endpoints are available at:
https://api.edenai.run/v3

Authentication

All requests must include your API token in the Authorization header:
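For example, with Python requests (a minimal sketch; the Bearer scheme is assumed here, and YOUR_API_TOKEN is a placeholder for the token from your dashboard):

import requests

# Every V3 request carries the same Authorization header.
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Quick sanity check against the discovery endpoint (see API Discovery below).
response = requests.get("https://api.edenai.run/v3/info", headers=headers)
print(response.status_code)  # 200 if the token is valid, 401 otherwise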

Your First API Call: Universal AI

The Universal AI endpoint handles all non-LLM features through a single endpoint. Let’s analyze some text:
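Here’s a minimal sketch with Python requests. The /v3/universal-ai path comes from the V2 vs V3 comparison below; the payload field names (model, text) are assumptions for illustration, so check the API reference for the exact schema.

import requests

url = "https://api.edenai.run/v3/universal-ai"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# The model string selects feature/subfeature/provider[/model].
payload = {
    "model": "text/ai_detection/openai/gpt-4",  # assumed field name
    "text": "This essay was definitely written by a human.",
}

response = requests.post(url, json=payload, headers=headers)
result = response.json()
print(result["output"])  # e.g. {"ai_score": 0.85, "is_ai_generated": true}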

Response Format

All V3 responses follow a consistent structure:
{
  "status": "success",
  "cost": 0.0001,
  "provider": "openai",
  "feature": "text",
  "subfeature": "ai_detection",
  "output": {
    "ai_score": 0.85,
    "is_ai_generated": true
  }
}

Your First LLM Call: OpenAI-Compatible

The LLM endpoint provides OpenAI-compatible chat completions with mandatory streaming:
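A hedged sketch with Python requests: the /v3/llm/chat/completions path is an assumption based on the OpenAI-compatible format (see the LLM reference for the exact route). Since streaming is mandatory, the body is consumed as SSE lines:

import json
import requests

url = "https://api.edenai.run/v3/llm/chat/completions"  # assumed path
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

payload = {
    "model": "openai/gpt-4",  # simplified LLM model string
    "messages": [{"role": "user", "content": "Say hello!"}],
    "stream": True,  # streaming is mandatory in V3
}

with requests.post(url, json=payload, headers=headers, stream=True) as response:
    for line in response.iter_lines():
        if not line:
            continue
        data = line.decode("utf-8").removeprefix("data: ")
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)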

Streaming Response Format

LLM responses use Server-Sent Events (SSE):
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hello"},"index":0}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[{"delta":{"content":"!"},"index":0}]}

data: [DONE]

Working with Files

V3 introduces persistent file storage. Upload files once, then reference them by ID:

Step 1: Upload a File
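A sketch assuming a multipart upload to /v3/files with a field named file; the exact route and response shape are assumptions, so verify them in the File Management reference.

import requests

url = "https://api.edenai.run/v3/files"  # assumed upload route
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Upload once; the returned ID can be reused across requests.
with open("invoice.pdf", "rb") as f:
    response = requests.post(url, headers=headers, files={"file": f})

file_id = response.json()["id"]  # assumed response field
print(file_id)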

Step 2: Use the File in Requests
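The stored file is then referenced by ID instead of re-uploading its bytes. The file_id request field is an assumption for illustration:

import requests

url = "https://api.edenai.run/v3/universal-ai"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

payload = {
    "model": "ocr/invoice_parser/microsoft",
    "file_id": "YOUR_FILE_ID",  # ID returned by the upload step above
}

response = requests.post(url, json=payload, headers=headers)
print(response.json()["output"])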

Understanding Model Strings

The model string is the key to V3’s unified architecture. Here’s the format:
feature/subfeature/provider[/model]

Breaking It Down

Component | Description | Example
feature | Category of AI capability | text, ocr, image, translation
subfeature | Specific functionality | ai_detection, moderation, ocr, generation
provider | AI provider | openai, google, amazon, anthropic
model | Specific model (optional) | gpt-4, claude-3-5-sonnet, gemini-pro
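To make the format concrete, here’s a small helper that splits a full (non-LLM) model string into its components:

def parse_model_string(model_string: str) -> dict:
    """Split feature/subfeature/provider[/model] into named parts."""
    feature, subfeature, provider, *rest = model_string.split("/")
    return {
        "feature": feature,
        "subfeature": subfeature,
        "provider": provider,
        "model": rest[0] if rest else None,  # the model segment is optional
    }

print(parse_model_string("text/ai_detection/openai/gpt-4"))
# {'feature': 'text', 'subfeature': 'ai_detection', 'provider': 'openai', 'model': 'gpt-4'}

# Note: the simplified LLM format (provider/model, e.g. openai/gpt-4) has only
# two segments and is not handled by this helper.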

Examples

Text Analysis:
  • text/ai_detection/openai/gpt-4 - Detect AI-generated text with GPT-4
  • text/moderation/openai - Content moderation with OpenAI’s default model
  • text/embeddings/cohere/embed-english-v3.0 - Generate embeddings with Cohere
OCR:
  • ocr/financial_parser/google - Extract financial information with Google DocumentAI
  • ocr/identity_parser/amazon - Parse ID documents with Amazon Textract
  • ocr/invoice_parser/microsoft - Extract invoice data with Azure
Image:
  • image/generation/openai/dall-e-3 - Generate images with DALL-E 3
  • image/object_detection/google - Detect objects with Google Vision
  • image/face_detection/amazon - Detect faces with AWS Rekognition
LLM (simplified format):
  • openai/gpt-4 - Chat with GPT-4
  • anthropic/claude-3-5-sonnet-20241022 - Chat with Claude
  • google/gemini-pro - Chat with Gemini

API Discovery

V3 includes built-in endpoints to explore available features programmatically:
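For example, a hedged sketch against the /v3/info endpoint mentioned in the V2 vs V3 comparison below; any deeper sub-paths are assumptions:

import requests

headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# List available features, subfeatures, and providers.
response = requests.get("https://api.edenai.run/v3/info", headers=headers)
print(response.json())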

Error Handling

V3 uses standard HTTP status codes with detailed error messages:
{
  "status": "error",
  "error": {
    "code": "invalid_model_string",
    "message": "Model string format must be feature/subfeature/provider[/model]",
    "details": {
      "provided": "invalid/model",
      "expected": "feature/subfeature/provider[/model]"
    }
  }
}

Common Status Codes

  • 200 - Success
  • 400 - Bad Request (invalid model string or input)
  • 401 - Unauthorized (invalid API token)
  • 402 - Payment Required (insufficient credits)
  • 404 - Not Found (feature/provider not available)
  • 422 - Validation Error (invalid request body)
  • 429 - Rate Limit Exceeded
  • 500 - Internal Server Error
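In practice you can branch on the status code and surface the structured error body. A sketch around the Universal AI endpoint:

import requests

def call_universal_ai(payload: dict, token: str) -> dict:
    """POST to the Universal AI endpoint, mapping common error codes."""
    response = requests.post(
        "https://api.edenai.run/v3/universal-ai",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    if response.status_code == 200:
        return response.json()
    error = response.json().get("error", {})
    if response.status_code == 402:
        raise RuntimeError("Insufficient credits: top up your account")
    if response.status_code == 429:
        raise RuntimeError("Rate limit exceeded: retry with backoff")
    raise RuntimeError(
        f"{response.status_code}: {error.get('code')}: {error.get('message')}"
    )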

V3 vs V2: Key Differences

Aspect | V2 | V3
Endpoints | Feature-specific (/v2/text/moderation) | Universal (/v3/universal-ai) + LLM
Provider Format | providers parameter | Model string (text/moderation/openai)
File Handling | Per-request uploads | Persistent storage with file IDs
LLM Streaming | Optional | Mandatory (always SSE)
API Discovery | Documentation only | Built-in /v3/info endpoints
OpenAI Compatibility | Custom format | Native OpenAI format

Next Steps

Now that you understand V3 basics, explore specific features:
  • Universal AI
  • OpenAI-Compatible LLM
  • File Management
  • API Discovery

Need Help?