Both endpoints share the same base URL (`https://api.edenai.run/v3`) and authentication (`Authorization: Bearer YOUR_API_KEY`), but they serve different purposes.
## At a Glance
| | LLM Endpoint | Universal AI Endpoint |
|---|---|---|
| URL | `POST /v3/llm/chat/completions` | `POST /v3/universal-ai` |
| Model Format | `provider/model` | `feature/subfeature/provider[/model]` |
| Use Cases | Chat, text generation, vision, tool calling | OCR, text analysis, image processing, translation, audio |
| Response Format | OpenAI-compatible (`choices`, `usage`) | Unified (`status`, `cost`, `output`) |
| Streaming | Yes (SSE) | No |
| OpenAI SDK Compatible | Yes | No |
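The streaming noted in the table uses server-sent events. As a minimal sketch, assuming the stream carries OpenAI-style `data:` lines with OpenAI-compatible chunk fields (`choices`, `delta`, `content`) and a `[DONE]` sentinel, consistent with the OpenAI-compatible response format above, the text deltas could be reassembled like this:

```python
import json

def parse_sse_chunks(raw_stream: str) -> str:
    """Reassemble text deltas from OpenAI-style SSE lines ("data: {...}").

    Assumes OpenAI-compatible chunk fields (choices, delta, content) and
    the "[DONE]" end-of-stream sentinel; adjust if the actual stream differs.
    """
    deltas = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and other fields
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            deltas.append(delta)
    return "".join(deltas)
```

In practice an HTTP client with `stream=True` would yield these lines incrementally; the parsing logic stays the same.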
## LLM Endpoint
The LLM endpoint follows the OpenAI chat completions format, making it a drop-in replacement for any OpenAI-compatible integration. Best for:

- Conversational AI and chatbots
- Text generation and completion
- Vision and multimodal analysis (analyzing images with LLMs)
- Tool/function calling
- Any workflow that uses the OpenAI SDK
Model format: `provider/model`

Examples: `openai/gpt-4`, `anthropic/claude-sonnet-4-5`, `google/gemini-2.5-flash`
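Putting the URL, header, and model format together, a request could be assembled as follows. This is an illustrative sketch, not an official SDK helper: it only builds the request pieces (the body mirrors the OpenAI chat completions format the endpoint is compatible with) and leaves the actual HTTP call to whichever client you use.

```python
import json

API_BASE = "https://api.edenai.run/v3"

def build_chat_request(api_key: str, model: str, messages: list) -> dict:
    """Assemble the pieces of a POST /v3/llm/chat/completions request.

    URL and Authorization header come from the docs above; the JSON body
    follows the OpenAI chat completions format (model + messages).
    """
    return {
        "url": f"{API_BASE}/llm/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "messages": messages}),
    }
```

Usage: `build_chat_request("YOUR_API_KEY", "openai/gpt-4", [{"role": "user", "content": "Hello"}])`, then hand the resulting URL, headers, and body to any HTTP client (or point the OpenAI SDK at the endpoint instead, since the formats match).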
## Universal AI Endpoint
The Universal AI endpoint handles specialized AI tasks through a single URL. You specify the task, provider, and optionally the model in the model string. Best for:

- Text analysis (moderation, AI detection, NER, sentiment)
- OCR and document parsing (invoices, IDs, resumes)
- Image processing (generation, object detection, face detection)
- Translation
- Audio (speech-to-text, text-to-speech)
Model format: `feature/subfeature/provider[/model]`

Examples: `text/moderation/openai`, `ocr/financial_parser/google`, `image/generation/openai/dall-e-3`
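The `feature/subfeature/provider[/model]` string is straightforward to take apart. The helper below is a hypothetical illustration (not part of any Eden AI SDK) showing how the three required segments and the optional fourth map onto the examples above:

```python
def parse_universal_model(model: str) -> dict:
    """Split a feature/subfeature/provider[/model] string into its parts.

    Illustrative helper only: the first three segments are required,
    and the optional fourth is the provider-specific model name.
    """
    parts = model.split("/")
    if len(parts) not in (3, 4):
        raise ValueError("expected feature/subfeature/provider[/model]")
    feature, subfeature, provider = parts[:3]
    return {
        "feature": feature,
        "subfeature": subfeature,
        "provider": provider,
        "model": parts[3] if len(parts) == 4 else None,
    }
```

For example, `image/generation/openai/dall-e-3` pins the provider-specific model (`dall-e-3`), while `text/moderation/openai` leaves the model choice to the provider's default.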
## When to Use Which
### Use the LLM Endpoint when...
- You need conversational AI or text generation
- You want OpenAI SDK compatibility
- You need streaming responses
- You are doing vision analysis with an LLM
- You need tool/function calling
### Use Universal AI when...
- You need OCR or document parsing
- You need text moderation or analysis
- You want image generation or detection
- You need translation
- You need speech-to-text or text-to-speech