Eden AI V3 API Documentation

Welcome to the Eden AI V3 API - our next-generation unified API with OpenAI-compatible endpoints and a revolutionary single-endpoint architecture.

What’s New in V3?

Eden AI V3 introduces a completely redesigned API architecture that simplifies AI integration while maintaining full compatibility with OpenAI’s API format.

Key Innovations

  • Universal AI Endpoint - Access all AI features through a single endpoint with a unified request format.
  • OpenAI-Compatible LLM - Drop-in replacement for OpenAI's chat completions API with multi-provider support.
  • Persistent File Storage - Upload files once, reference them in multiple requests with automatic expiration.
  • Built-in API Discovery - Explore available features, providers, and schemas programmatically.
  • Mandatory Streaming - All LLM responses use Server-Sent Events (SSE) for real-time output.
If you were a user before 2026/01/05, you still have access to the previous version at https://old-app.edenai.run/. We'll continue supporting the old version until the end of 2026, and its documentation remains available.

Core Endpoints

1. Universal AI - Single Endpoint for Everything

Access all non-LLM features through one unified endpoint:
POST /v3/universal-ai
One endpoint for:
  • Text analysis (moderation, embeddings, AI detection)
  • OCR (text extraction, invoice/ID parsing)
  • Image processing (generation, detection, analysis)
  • Translation (document translation)
Example Request:
import requests

url = "https://api.edenai.run/v3/universal-ai"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "text/ai_detection/openai/gpt-4",
    "input": {
        "text": "This is a sample text to analyze"
    }
}

response = requests.post(url, headers=headers, json=payload)
print(response.json())

2. OpenAI-Compatible LLM

OpenAI-compatible chat completions with streaming:
POST /v3/llm/chat/completions
Drop-in replacement for OpenAI API:
import requests

url = "https://api.edenai.run/v3/llm/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

payload = {
    "model": "openai/gpt-4",
    "messages": [
        {"role": "user", "content": "Hello!"}
    ],
    "stream": True
}

# Streaming response with SSE
response = requests.post(url, headers=headers, json=payload, stream=True)

for line in response.iter_lines():
    if line:
        print(line.decode('utf-8'))
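Each raw SSE line is prefixed with `data: ` and the stream ends with a `[DONE]` sentinel, following the OpenAI streaming convention this endpoint mirrors. A minimal sketch of a chunk parser (the `choices`/`delta` field names assume the OpenAI chunk format):

```python
import json

def parse_sse_chunks(response):
    """Yield content deltas from an OpenAI-style SSE stream."""
    for line in response.iter_lines():
        if not line:
            continue  # SSE events are separated by blank lines
        decoded = line.decode("utf-8")
        if not decoded.startswith("data: "):
            continue
        payload = decoded[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # some chunks carry only role metadata
            yield delta["content"]
```

Pass the streaming `response` from the example above to get plain text deltas as they arrive, e.g. `print("".join(parse_sse_chunks(response)))`.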

3. File Upload & Management

Persistent file storage for use across requests:
POST /v3/upload
Upload once, use everywhere:
import requests

# Upload file
url = "https://api.edenai.run/v3/upload"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

data = {"purpose": "ocr-processing"}

with open("document.pdf", "rb") as f:
    response = requests.post(url, headers=headers, files={"file": f}, data=data)

file_id = response.json()["file_id"]

# Use file in OCR request
ocr_response = requests.post(
    "https://api.edenai.run/v3/universal-ai",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "ocr/financial_parser/google",
        "input": {"file": file_id}
    }
)

4. API Discovery

Explore features and schemas programmatically:
GET /v3/info
GET /v3/info/{feature}
GET /v3/info/{feature}/{subfeature}
import requests

# List all features
response = requests.get(
    "https://api.edenai.run/v3/info"
)

features = response.json()
print(features)  # {"text": [...], "ocr": [...], "image": [...]}
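Since the discovery endpoints nest by feature and then subfeature, a tiny URL builder saves string juggling when drilling down. `discovery_url` is a hypothetical convenience helper, not part of any SDK:

```python
def discovery_url(feature=None, subfeature=None,
                  base="https://api.edenai.run/v3"):
    """Build a /v3/info discovery URL at the requested depth."""
    parts = [base, "info"]
    if feature:
        parts.append(feature)
        if subfeature:
            parts.append(subfeature)
    return "/".join(parts)
```

For example, `requests.get(discovery_url("ocr", "financial_parser"))` fetches the schema for a single subfeature.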

V3 vs V2: Key Differences

Aspect               | V2                                  | V3
---------------------|-------------------------------------|----------------------------------
Architecture         | Multiple feature-specific endpoints | Single universal endpoint + LLM
Provider Format      | Body parameter                      | Provider string in model field
LLM Streaming        | Optional                            | Mandatory (always SSE)
File Handling        | Inline uploads per request          | Persistent storage with file IDs
API Discovery        | Documentation only                  | Built-in /v3/info endpoints
OpenAI Compatibility | Custom format                       | Native OpenAI format

Model String Format

V3 uses a unified model string format:
feature/subfeature/provider[/model]
Examples:
  • text/ai_detection/openai/gpt-4
  • ocr/financial_parser/google
  • image/generation/openai/dall-e-3
  • translation/document_translation/deepl
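Because the model string is a plain slash-delimited path, it is easy to take apart in client code. The `parse_model_string` helper below is a hypothetical sketch; it defensively joins any remainder back into the model segment in case a model name itself contains a slash:

```python
from typing import NamedTuple, Optional

class ModelString(NamedTuple):
    feature: str
    subfeature: str
    provider: str
    model: Optional[str] = None  # optional trailing segment

def parse_model_string(s: str) -> ModelString:
    """Split a V3 model string: feature/subfeature/provider[/model]."""
    parts = s.split("/")
    if len(parts) == 3:
        return ModelString(*parts)
    if len(parts) >= 4:
        # Rejoin the tail so model names containing "/" survive intact.
        return ModelString(parts[0], parts[1], parts[2], "/".join(parts[3:]))
    raise ValueError(f"invalid model string: {s!r}")
```

For instance, `parse_model_string("text/ai_detection/openai/gpt-4").provider` yields `"openai"`.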

Available Features

Text Analysis

  • AI Detection - Detect AI-generated content
  • Moderation - Content moderation and safety
  • Embeddings - Semantic search vectors
  • Spell Check - Grammar and spelling correction
  • Named Entity Recognition - Extract entities
  • Topic Extraction - Identify main topics
  • Plagiarism Detection - Check for duplicates

OCR (Optical Character Recognition)

  • Text Extraction - Extract text from images/PDFs
  • Identity Parser - Parse ID documents and passports
  • Invoice Parser - Extract structured invoice data
  • Resume Parser - Parse CV and resume data

Image Processing

  • Generation - Create AI images
  • Object Detection - Identify objects
  • Face Detection - Detect faces
  • Face Comparison - Compare face similarity
  • Background Removal - Remove backgrounds
  • Explicit Content Detection - NSFW detection
  • AI Detection - Detect AI-generated images
  • Anonymization - Blur faces for privacy
  • Deepfake Detection - Detect manipulated images
  • Image Embeddings - Visual similarity vectors

Translation

  • Document Translation - Translate documents

Quick Start Guide

1. Get Your API Key

Sign up at Eden AI Dashboard and get your API key.

2. Choose Your Approach

Option A: Universal AI (Recommended for non-LLM)
response = requests.post(
    "https://api.edenai.run/v3/universal-ai",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "text/moderation/openai",
        "input": {"text": "Content to moderate"}
    }
)
Option B: OpenAI-Compatible LLM
response = requests.post(
    "https://api.edenai.run/v3/llm/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "openai/gpt-4",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True
    },
    stream=True
)

3. Handle File Uploads (Optional)

# Upload
with open("doc.pdf", "rb") as f:
    files_response = requests.post(
        "https://api.edenai.run/v3/upload",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        files={"file": f}
    )
file_id = files_response.json()["file_id"]

# Use in requests
response = requests.post(
    "https://api.edenai.run/v3/universal-ai",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "ocr/financial_parser/affinda",
        "input": {"file": file_id}
    }
)

Authentication

All V3 API requests require Bearer token authentication:
Authorization: Bearer YOUR_API_KEY
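With `requests`, the simplest way to avoid repeating the header on every call is a shared session. A minimal sketch:

```python
import requests

# A session attaches the Bearer header to every request made through it.
session = requests.Session()
session.headers.update({"Authorization": "Bearer YOUR_API_KEY"})

# Any call through the session is now authenticated, e.g.:
# session.get("https://api.edenai.run/v3/info")
```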

Base URL

https://api.edenai.run/v3

Response Format

All V3 responses follow a consistent structure:
{
  "status": "success",
  "cost": 0.001,
  "provider": "openai",
  "feature": "text",
  "subfeature": "ai_detection",
  "output": {
    // Feature-specific output
  }
}
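Client code usually wants just the `output` field after confirming `status`. A small unwrap helper is sketched below; the exact error payload on failure is an assumption, so consult the API reference for real error shapes:

```python
def unwrap_output(resp: dict) -> dict:
    """Return the feature-specific output, or raise on a non-success status.

    Assumes the V3 envelope: {"status", "cost", "provider", ..., "output"}.
    """
    if resp.get("status") != "success":
        raise RuntimeError(f"Eden AI request failed: {resp}")
    return resp["output"]
```

Typical use: `result = unwrap_output(response.json())`.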

Getting Started

Ready to dive in? Here are your next steps:
  1. Getting Started Guide - Learn V3 basics
  2. Universal AI Guide - Use the universal endpoint
  3. OpenAI-Compatible LLM - Chat completions
  4. File Upload Guide - Persistent file storage
  5. API Discovery - Explore programmatically

Ready to get started with V3? Jump to the Getting Started Guide or explore the Universal AI Guide.