Getting Started with Eden AI V3 API
Welcome to the Eden AI V3 API! This guide will help you understand V3’s unified architecture and make your first API calls.
Overview
Eden AI V3 introduces a revolutionary approach to AI API integration with:
- Universal AI Endpoint - Single endpoint for all non-LLM features
- OpenAI-Compatible Format - Drop-in replacement for OpenAI’s API
- Persistent File Storage - Upload once, use in multiple requests
- Built-in API Discovery - Explore features and schemas programmatically
- Mandatory Streaming - All LLM responses use Server-Sent Events (SSE)
If you were a user before 2026/01/05, you still have access to the previous version: https://old-app.edenai.run/. We’ll continue supporting the old version until the end of 2026. If you’re looking for its documentation, you can find it here.
V3 Architecture
V3 uses a model string format instead of separate provider parameters:
- `text/ai_detection/openai/gpt-4`
- `ocr/financial_parser/google`
- `image/generation/openai/dall-e-3`
- `openai/gpt-4` (for LLM endpoints)
Prerequisites
Before you start, you’ll need:
- API Token - Get your token from the Eden AI dashboard
- HTTP Client - Use cURL, Python requests, or any HTTP client
- Credits - Ensure your account has sufficient credits
Base URL
All V3 API endpoints are available at:
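The base URL itself isn’t reproduced in this excerpt. As a working assumption, the examples below use the same host as earlier Eden AI versions; verify the exact URL in your dashboard.

```python
# Assumed base URL -- verify against your Eden AI dashboard before use.
BASE_URL = "https://api.edenai.run/v3"
```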
Authentication
All requests must include your API token in the Authorization header:
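For example, with Python requests (a minimal sketch: the base URL is the assumption noted above, the Bearer scheme is assumed from Eden AI’s existing convention, and `/info` is one of the discovery endpoints described later):

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"  # from the Eden AI dashboard

# Every V3 request carries the token in the Authorization header.
headers = {"Authorization": f"Bearer {API_TOKEN}"}

response = requests.get("https://api.edenai.run/v3/info", headers=headers)
print(response.status_code)
```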
Your First API Call: Universal AI
The Universal AI endpoint handles all non-LLM features through a single endpoint. Let’s analyze some text:
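Here’s a sketch of that call with Python requests, assuming the request body carries the model string plus a text field; the exact field names are assumptions, so confirm them via API discovery:

```python
import requests

headers = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json",
}

# One endpoint for every non-LLM feature; the model string selects
# feature/subfeature/provider/model.
payload = {
    "model": "text/ai_detection/openai/gpt-4",
    "text": "The quick brown fox jumps over the lazy dog.",  # field name assumed
}

response = requests.post(
    "https://api.edenai.run/v3/universal-ai",
    headers=headers,
    json=payload,
)
print(response.json())
```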
Response Format
All V3 responses follow a consistent structure:
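The schema itself isn’t reproduced in this excerpt, so rather than guess field names, this sketch simply inspects whatever the Universal AI call above returned:

```python
# Print the top-level shape of the response without assuming field names.
data = response.json()  # `response` from the Universal AI call above
for key, value in data.items():
    print(f"{key}: {type(value).__name__}")
```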
Your First LLM Call: OpenAI-Compatible
The LLM endpoint provides OpenAI-compatible chat completions with mandatory streaming:
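Because the format is OpenAI-compatible, the official openai Python SDK should work as a drop-in client. In this sketch the base_url is an assumption, and stream=True reflects the mandatory-streaming rule:

```python
from openai import OpenAI

# Point the OpenAI SDK at Eden AI; the base_url is an assumption.
client = OpenAI(
    api_key="YOUR_API_TOKEN",
    base_url="https://api.edenai.run/v3",
)

# Streaming is mandatory in V3, so always pass stream=True.
stream = client.chat.completions.create(
    model="openai/gpt-4",  # provider/model form used by LLM endpoints
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```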
Streaming Response Format
LLM responses use Server-Sent Events (SSE):
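Assuming the framing mirrors OpenAI’s (which the compatibility claim suggests), each event arrives as a `data:` line containing a JSON chunk, and the stream ends with `data: [DONE]`. A minimal reader with Python requests, where the `/chat/completions` path is an assumption based on the OpenAI convention:

```python
import json
import requests

headers = {"Authorization": "Bearer YOUR_API_TOKEN"}
payload = {
    "model": "openai/gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,
}

# Path assumed from the OpenAI convention of POST <base_url>/chat/completions.
with requests.post(
    "https://api.edenai.run/v3/chat/completions",
    headers=headers,
    json=payload,
    stream=True,
) as resp:
    for line in resp.iter_lines():
        # Each SSE event looks like: data: {"choices":[{"delta":{"content":"Hi"}}]}
        if line.startswith(b"data: "):
            body = line[len(b"data: "):]
            if body == b"[DONE]":  # terminator, per the OpenAI convention
                break
            chunk = json.loads(body)
            delta = chunk["choices"][0]["delta"].get("content")
            if delta:
                print(delta, end="", flush=True)
```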
Working with Files
V3 introduces persistent file storage. Upload files once, then reference them by ID:
Step 1: Upload a File
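A sketch of the upload with multipart form data; the `/v3/files` path and the response field name are assumptions to verify against the file-management page linked below:

```python
import requests

headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Endpoint path is an assumption; check the File Management docs.
with open("invoice.pdf", "rb") as f:
    response = requests.post(
        "https://api.edenai.run/v3/files",
        headers=headers,
        files={"file": f},
    )

file_id = response.json()["id"]  # hypothetical field name
print(f"Uploaded file: {file_id}")
```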
Step 2: Use the File in Requests
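Reference the stored file by its ID instead of re-uploading it; the `file_id` parameter name is an assumption:

```python
import requests

headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

payload = {
    "model": "ocr/invoice_parser/microsoft",
    "file_id": file_id,  # ID from the upload step; parameter name assumed
}

response = requests.post(
    "https://api.edenai.run/v3/universal-ai",
    headers=headers,
    json=payload,
)
print(response.json())
```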
Understanding Model Strings
The model string is the key to V3’s unified architecture. The format is `feature/subfeature/provider/model`, where the final model component is optional.
Breaking It Down
| Component | Description | Example |
|---|---|---|
| feature | Category of AI capability | text, ocr, image, translation |
| subfeature | Specific functionality | ai_detection, moderation, ocr, generation |
| provider | AI provider | openai, google, amazon, anthropic |
| model | Specific model (optional) | gpt-4, claude-3-5-sonnet, gemini-pro |
Examples
Text Analysis:
- `text/ai_detection/openai/gpt-4` - Detect AI-generated text with GPT-4
- `text/moderation/openai` - Content moderation with OpenAI’s default model
- `text/embeddings/cohere/embed-english-v3.0` - Generate embeddings with Cohere
OCR and Document Parsing:
- `ocr/financial_parser/google` - Extract financial information with Google Document AI
- `ocr/identity_parser/amazon` - Parse ID documents with Amazon Textract
- `ocr/invoice_parser/microsoft` - Extract invoice data with Azure
Image:
- `image/generation/openai/dall-e-3` - Generate images with DALL-E 3
- `image/object_detection/google` - Detect objects with Google Vision
- `image/face_detection/amazon` - Detect faces with AWS Rekognition
LLM Chat:
- `openai/gpt-4` - Chat with GPT-4
- `anthropic/claude-3-5-sonnet-20241022` - Chat with Claude
- `google/gemini-pro` - Chat with Gemini
API Discovery
V3 includes built-in endpoints to explore available features programmatically:
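The comparison table below names the built-in `/v3/info` endpoints; here is a minimal sketch of querying the root of that tree (any deeper paths are assumptions):

```python
import requests

headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# /v3/info appears in the V2-vs-V3 table; use it to list features and schemas.
response = requests.get("https://api.edenai.run/v3/info", headers=headers)
print(response.json())
```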
Error Handling
V3 uses standard HTTP status codes with detailed error messages:
Common Status Codes
- 200 - Success
- 400 - Bad Request (invalid model string or input)
- 401 - Unauthorized (invalid API token)
- 402 - Payment Required (insufficient credits)
- 404 - Not Found (feature/provider not available)
- 422 - Validation Error (invalid request body)
- 429 - Rate Limit Exceeded
- 500 - Internal Server Error
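A minimal pattern for acting on these codes in client code; the shape of the error body is an assumption:

```python
import requests

response = requests.post(
    "https://api.edenai.run/v3/universal-ai",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    json={"model": "text/moderation/openai", "text": "Some text to check."},
)

if response.status_code == 200:
    print(response.json())
elif response.status_code == 429:
    print("Rate limited -- back off and retry.")
elif response.status_code == 402:
    print("Insufficient credits -- top up your account.")
else:
    # Detailed error message; the exact JSON shape is an assumption.
    print(response.status_code, response.text)
```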
V3 vs V2: Key Differences
| Aspect | V2 | V3 |
|---|---|---|
| Endpoints | Feature-specific (/v2/text/moderation) | Universal (/v3/universal-ai) + LLM |
| Provider Format | providers parameter | Model string (text/moderation/openai) |
| File Handling | Per-request uploads | Persistent storage with file IDs |
| LLM Streaming | Optional | Mandatory (always SSE) |
| API Discovery | Documentation only | Built-in /v3/info endpoints |
| OpenAI Compatibility | Custom format | Native OpenAI format |
Next Steps
Now that you understand V3 basics, explore specific features:
Universal AI
- Getting Started with Universal AI - Learn the universal endpoint
- Text Features - AI detection, moderation, embeddings
- OCR Features - Text extraction, document parsing
- Image Features - Generation, detection, analysis
OpenAI-Compatible LLM
- Chat Completions - Build conversational AI with streaming
- Streaming Responses - Handle Server-Sent Events
- File Attachments - Send images and documents to LLMs
File Management
- Upload Files - Persistent file storage
API Discovery
- Explore the API - Programmatic feature discovery
Need Help?
- Visit Eden AI Support for additional assistance