Frequently Asked Questions
Common questions about the Eden AI V3 API.
General
What is Eden AI V3?
Eden AI V3 is a unified AI API platform that provides:
- Universal AI Endpoint - Single endpoint for all non-LLM features (text, OCR, image, translation)
- OpenAI-Compatible LLM - Drop-in replacement for OpenAI’s chat completions API
- Multi-Provider Support - Access 50+ AI providers through one interface
- Persistent File Storage - Upload files once, use in multiple requests
Which endpoint should I use?
Choose based on your use case.
Use /v3/llm/chat/completions for:
- Conversational AI and chatbots
- Text generation and completion
- Vision/multimodal AI (analyzing images with LLMs)
- Tool/function calling
- Any use case requiring OpenAI-compatible format
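As a sketch of the OpenAI-compatible format, a chat request body might be built like this. The base URL and model name are assumptions for illustration; substitute real values from your dashboard.

```python
import json

# Hypothetical endpoint; the path follows the pattern named in this FAQ,
# the host is an assumption.
ENDPOINT = "https://api.edenai.run/v3/llm/chat/completions"

payload = {
    "model": "openai/gpt-4o",  # LLM model string: provider/model
    "messages": [
        {"role": "user", "content": "Summarize this FAQ in one sentence."}
    ],
    "stream": True,  # V3 always streams responses via SSE
}

body = json.dumps(payload)
print(ENDPOINT)
print(body)
```

You would POST `body` to `ENDPOINT` with your `Authorization` header and consume the SSE stream as described under "Why is streaming mandatory in V3?".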
Use /v3/universal-ai for:
- Text analysis (sentiment, moderation, AI detection)
- OCR and document parsing
- Image generation and analysis
- Text embeddings
- Translation
How does the model string format work?
V3 uses a model string to specify which feature and provider to use.
Universal AI format: feature/subfeature/provider[/model], for example:
- text/moderation/openai
- ocr/financial_parser/google
- image/generation/openai/dall-e-3
LLM format: provider/model, for example:
- openai/gpt-4
- anthropic/claude-3-5-sonnet-20241022
- google/gemini-pro
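A small helper illustrating how the Universal AI model string splits into its segments. This is illustrative glue code, not part of any Eden AI SDK:

```python
def parse_universal_model(model: str) -> dict:
    """Split a Universal AI model string: feature/subfeature/provider[/model]."""
    parts = model.split("/")
    if len(parts) not in (3, 4):
        raise ValueError(f"expected 3 or 4 segments, got {len(parts)}: {model!r}")
    feature, subfeature, provider = parts[:3]
    specific_model = parts[3] if len(parts) == 4 else None
    return {"feature": feature, "subfeature": subfeature,
            "provider": provider, "model": specific_model}

print(parse_universal_model("text/moderation/openai"))
print(parse_universal_model("image/generation/openai/dall-e-3"))
```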
Authentication & Access
How do I get an API key?
- Sign up at app.edenai.run
- Navigate to your dashboard
- Generate an API token under “API Keys”
- Use it in the Authorization header: Bearer YOUR_API_KEY
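For example, the header can be assembled like this (substitute your real token for the placeholder):

```python
API_KEY = "YOUR_API_KEY"  # generated in the dashboard under "API Keys"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
print(headers["Authorization"])
```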
Can I use my own provider API keys?
Yes! You can bypass Eden AI billing by providing your own provider API keys. This is useful for:
- Using existing provider credits
- Testing specific provider features
- Cost optimization
What are the rate limits?
Rate limits vary by:
- Your account tier
- Provider being used
- Specific feature
Rate limit information is returned in response headers:
- X-RateLimit-Limit - Total requests allowed
- X-RateLimit-Remaining - Requests remaining
- X-RateLimit-Reset - Time when the limit resets
When you exceed the limit, you'll receive a 429 Too Many Requests response.
How much does it cost?
Pricing is pay-as-you-go based on:
- Provider used
- Feature/model called
- Volume of data processed
Each response includes a cost field showing the charge in USD for that request.
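Assuming each JSON response carries a top-level cost field as described, you could tally spend across requests like this; everything in the sample responses besides cost is a placeholder:

```python
# Sample responses; only the `cost` field comes from this FAQ,
# the rest of the shape is illustrative.
responses = [
    {"cost": 0.0021, "status": "success"},
    {"cost": 0.0008, "status": "success"},
]

total_usd = sum(r.get("cost", 0.0) for r in responses)
print(f"total cost: ${total_usd:.4f}")
```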
File Handling
What file input methods are supported?
V3 supports three methods:
1. File Upload (recommended for reuse) - upload once via /v3/upload, then reference the file ID
2. Public URL - pass a URL pointing to the file
3. Base64 - embed the file data inline in the request
How long are uploaded files stored?
Files uploaded to /v3/upload are stored for 7 days by default. After expiration:
- Files are automatically deleted
- File IDs become invalid
- You’ll need to re-upload
An expires_at timestamp is included in upload responses.
What are the file size limits?
Limits vary by feature and provider:
| Feature Type | Typical Limit | Notes |
|---|---|---|
| OCR | 100 MB | Some providers support larger files |
| Image Analysis | 20 MB | JPEG, PNG, WebP, GIF |
| LLM Vision | 20 MB | Provider-dependent |
| Document Translation | 50 MB | PDF, DOCX, TXT |
If a file exceeds the limit, you'll receive a 413 Payload Too Large error with specific limit information.
Which file formats are supported?
Images:
- JPEG, PNG, WebP, GIF
- TIFF (OCR only)
Documents:
- DOCX, DOC
- TXT, RTF
Audio:
- MP3, WAV (provider-dependent)
LLM Features
Why is streaming mandatory in V3?
All LLM responses in V3 use Server-Sent Events (SSE) streaming to:
- Reduce perceived latency
- Provide real-time token generation
- Match OpenAI’s API behavior
- Enable better UX for chat applications
Even if stream: false is sent, responses are streamed. See Streaming Responses for the implementation guide.
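SSE chunks follow OpenAI's `data: {json}` framing, terminated by `data: [DONE]`. A minimal parser for that framing; the delta field layout mirrors OpenAI's chat-completion chunks, and its exact shape for Eden AI is an assumption:

```python
import json

def extract_text(sse_lines) -> str:
    """Collect delta text from OpenAI-style SSE chat-completion chunks."""
    out = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and comments
        data = line[len("data: "):]
        if data.strip() == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            out.append(delta["content"])
    return "".join(out)

# Simulated stream (chunk shapes are illustrative):
stream = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(extract_text(stream))  # → Hello
```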
How do I send images to LLMs?
V3 LLM supports multimodal inputs via message content arrays:
- Image URLs
- Base64 data URLs (data:image/jpeg;base64,...)
- Uploaded file UUIDs
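A user message combining text with two of these input styles might be built like this; the field names follow OpenAI's multimodal message shape, and their applicability to Eden AI is an assumption:

```python
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in these images?"},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/photo.jpg"}},       # image URL
        {"type": "image_url",
         "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}},  # base64 data URL
    ],
}
print(len(message["content"]))
```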
Which providers support vision/multimodal?
| Provider | Models | Image Support | File Support |
|---|---|---|---|
| OpenAI | gpt-4o, gpt-4-turbo | ✓ | ✓ |
| Anthropic | claude-3-opus, claude-3-5-sonnet | ✓ | ✓ |
| Google | gemini-1.5-pro, gemini-1.5-flash | ✓ | ✓ |
| Mistral | pixtral-12b | ✓ | - |
Does V3 support tool/function calling?
Yes! V3 supports OpenAI-compatible tool calling:- ✓ OpenAI (all GPT-4+ models)
- ✓ Anthropic (Claude 3+)
- ✓ Google (Gemini 1.5+)
- ✓ Mistral (Large models)
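An OpenAI-compatible tools array looks like this; the function definition itself is a made-up example:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "openai/gpt-4o",  # any provider above with tool support
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
}
print(payload["tools"][0]["function"]["name"])
```

The model responds with tool_calls you execute yourself, then feed back as tool-role messages, exactly as in OpenAI's flow.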
Universal AI
How do I discover available features?
Use the built-in API discovery endpoints, for example /v3/info.
Can I use fallback providers?
Not directly in V3's simplified model string format. However, you can implement fallback logic in your application.
How do I optimize costs?
Best practices:
1. Choose cost-effective providers:
   - Compare pricing in the dashboard
   - Check the cost field in responses
2. Cache results when possible:
   - Many features (embeddings, moderation) have deterministic outputs
   - Store results for identical inputs
3. Use appropriate models:
   - Don't use premium models for simple tasks
   - Match model capability to task complexity
4. Batch processing:
   - Process multiple items in fewer API calls when supported
5. Monitor usage:
   - Track costs via Monitor Usage
   - Set up alerts in the dashboard
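Point 2 (caching deterministic outputs) can be sketched with a plain dict keyed by model and input; `fetch` stands in for whatever function actually calls the API:

```python
import hashlib

def cached_call(model: str, text: str, fetch, cache: dict):
    """Return a cached result for (model, text), calling fetch only on a miss."""
    key = hashlib.sha256(f"{model}\x00{text}".encode()).hexdigest()
    if key not in cache:
        cache[key] = fetch(model, text)  # e.g. an embeddings request
    return cache[key]

# Demo with a stub fetch that counts would-be API calls:
calls = {"n": 0}
def fake_fetch(model, text):
    calls["n"] += 1
    return [0.1, 0.2, 0.3]  # pretend embedding

cache = {}
cached_call("text/embeddings/openai", "hello", fake_fetch, cache)
cached_call("text/embeddings/openai", "hello", fake_fetch, cache)
print(calls["n"])  # → 1 (second call served from cache)
```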
I’ve been an Eden AI user for some time. Is the previous version going to disappear?
Not yet. We'll continue supporting the previous version until the end of 2026. You can find everything at https://old-app.edenai.run, along with its documentation.
Troubleshooting
401 Unauthorized
Cause: Invalid or missing API token
Solutions:
- Verify the token is correct
- Check the Authorization header format: Bearer YOUR_API_KEY
- Ensure the token hasn't been revoked
- Generate a new token in the dashboard
402 Payment Required - Insufficient Credits
Cause: Account has insufficient credits
Solutions:
- Add credits in the dashboard
- Check current balance
- Review cost per request in responses
404 Not Found
Cause: Invalid endpoint or model string
Solutions:
- Verify the endpoint URL is correct
- Check that the model string matches the expected pattern
- Use /v3/info to discover available features
- Ensure the provider supports the requested feature
422 Validation Error
Cause: Invalid request body or parameters
Common issues:
- Missing required fields
- Invalid parameter types
- File format not supported
- Model string malformed
429 Too Many Requests
Cause: Rate limit exceeded
Solutions:
- Implement exponential backoff
- Check the X-RateLimit-Reset header
- Upgrade your account tier for higher limits
- Distribute requests over time
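Exponential backoff can be sketched like this; `send` stands in for your actual HTTP call, and the attempt count and delays are arbitrary choices:

```python
import time

def with_backoff(send, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry send() on 429 responses with exponentially growing delays."""
    for attempt in range(max_attempts):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("rate limited: retries exhausted")

# Demo with a stub that rate-limits twice, then succeeds:
attempts = {"n": 0}
def fake_send():
    attempts["n"] += 1
    return (429, None) if attempts["n"] <= 2 else (200, "ok")

status, body = with_backoff(fake_send, base_delay=0.001)
print(status, attempts["n"])  # → 200 3
```

In production you could also honor the X-RateLimit-Reset header instead of a fixed schedule.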
Invalid model string format
Cause: Model string doesn't match the expected pattern
For Universal AI, it must be feature/subfeature/provider[/model]
For LLM, it must be provider/model
Provider temporarily unavailable
Cause: Upstream provider experiencing issues
Solutions:
- Check the Eden AI Status Page
- Try alternative provider for same feature
- Implement retry logic with exponential backoff
- Check the error response for provider-specific error details