POST https://api.edenai.run/v2/llm/chat/
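The sketch below shows one way to call this endpoint with Python's `requests`. It assumes an OpenAI-style request body (a `model` string in `<provider>/<model>` form plus a `messages` list) and Bearer-token authentication; check the full request reference if the actual schema differs.

```python
import requests

# Minimal chat request sketch. The payload shape ("model" + "messages") and
# the Bearer auth header are assumptions based on common OpenAI-compatible
# chat APIs, not a verbatim copy of the official reference.
url = "https://api.edenai.run/v2/llm/chat/"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

payload = {
    "model": "openai/gpt-4o",  # any provider/model pair from the table below
    "messages": [
        {"role": "user", "content": "Give me a one-sentence summary of this document."}
    ],
}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
print(response.json())
```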
Available Providers
Provider | Model | Version | Price | Billing unit |
---|---|---|---|---|
amazon | eu.amazon.nova-lite-v1:0 | llmengine (v2) | 0.24 (per 1000000 token) | 1 token |
amazon | amazon.nova-lite-v1:0 | llmengine (v2) | 0.24 (per 1000000 token) | 1 token |
amazon | amazon.nova-micro-v1:0 | llmengine (v2) | 0.14 (per 1000000 token) | 1 token |
amazon | amazon.nova-pro-v1:0 | llmengine (v2) | 3.2 (per 1000000 token) | 1 token |
anthropic | claude-3-5-sonnet-latest | v1 | 15.0 (per 1000000 token) | 1 token |
anthropic | claude-3-5-haiku-latest | v1 | 4.0 (per 1000000 token) | 1 token |
anthropic | claude-3-opus-latest | v1 | 75.0 (per 1000000 token) | 1 token |
anthropic | claude-3-7-sonnet-latest | v1 | 15.0 (per 1000000 token) | 1 token |
cohere | command-r7b-12-2024 | llmengine (v2) | 0.15 (per 1000000 token) | 1 token |
cohere | command-r-plus-08-2024 | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
cohere | command-r-plus | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
cohere | command-r-08-2024 | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
cohere | command-r-03-2024 | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
cohere | command-r | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
cohere | command | llmengine (v2) | 2.0 (per 1000000 token) | 1 token |
cohere | command-light | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
deepseek | deepseek-chat | llmengine (v2) | 1.25 (per 1000000 token) | 1 token |
deepseek | deepseek-reasoner | llmengine (v2) | 7.0 (per 1000000 token) | 1 token |
meta | meta.llama3-1-405b-instruct-v1:0 | llmengine (v2) | 2.4 (per 1000000 token) | 1 token |
meta | meta.llama3-1-70b-instruct-v1:0 | llmengine (v2) | 0.72 (per 1000000 token) | 1 token |
meta | meta.llama3-1-8b-instruct-v1:0 | llmengine (v2) | 0.22 (per 1000000 token) | 1 token |
mistral | pixtral-large-latest | llmengine (v2) | 6.0 (per 1000000 token) | 1 token |
mistral | mistral-small-latest | llmengine (v2) | 0.3 (per 1000000 token) | 1 token |
mistral | codestral-latest | llmengine (v2) | 0.9 (per 1000000 token) | 1 token |
mistral | mistral-large-latest | llmengine (v2) | 6.0 (per 1000000 token) | 1 token |
openai | gpt-4.1-2025-04-14 | llmengine (v2) | 8.0 (per 1000000 token) | 1 token |
openai | gpt-4.1-mini-2025-04-14 | llmengine (v2) | 1.6 (per 1000000 token) | 1 token |
openai | gpt-4.1-nano-2025-04-14 | llmengine (v2) | 0.4 (per 1000000 token) | 1 token |
openai | o3-2025-04-16 | llmengine (v2) | 40.0 (per 1000000 token) | 1 token |
openai | o4-mini-2025-04-16 | llmengine (v2) | 4.4 (per 1000000 token) | 1 token |
openai | gpt-4 | llmengine (v2) | 60.0 (per 1000000 token) | 1 token |
openai | gpt-4o | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
openai | gpt-4o-mini | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
openai | o1-preview | llmengine (v2) | 60.0 (per 1000000 token) | 1 token |
openai | o1-mini | llmengine (v2) | 4.4 (per 1000000 token) | 1 token |
openai | gpt-4o-2024-05-13 | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
openai | gpt-4-turbo | llmengine (v2) | 30.0 (per 1000000 token) | 1 token |
openai | o1-2024-12-17 | llmengine (v2) | 60.0 (per 1000000 token) | 1 token |
openai | o1 | llmengine (v2) | 60.0 (per 1000000 token) | 1 token |
openai | o3-mini | llmengine (v2) | 4.4 (per 1000000 token) | 1 token |
openai | gpt-4.5-preview-2025-02-27 | llmengine (v2) | 150.0 (per 1000000 token) | 1 token |
openai | o1-mini-2024-09-12 | llmengine (v2) | 4.4 (per 1000000 token) | 1 token |
openai | o3-mini-2025-01-31 | llmengine (v2) | 4.4 (per 1000000 token) | 1 token |
openai | gpt-4o-2024-08-06 | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
openai | gpt-4o-mini-2024-07-18 | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
openai | gpt-3.5-turbo | llmengine (v2) | 1.5 (per 1000000 token) | 1 token |
openai | tts-1 | llmengine (v2) | 15.0 (per 1000000 char) | 1 char |
together_ai | Qwen/Qwen2.5-72B-Instruct-Turbo | llmengine (v2) | 1.2 (per 1000000 token) | 1 token |
together_ai | meta-llama/Llama-3.3-70B-Instruct-Turbo | llmengine (v2) | 0.88 (per 1000000 token) | 1 token |
together_ai | microsoft/WizardLM-2-8x22B | llmengine (v2) | 1.2 (per 1000000 token) | 1 token |
xai | grok-2-latest | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
xai | grok-2 | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
xai | grok-2-vision-1212 | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
google | gemini-2.5-pro-preview-03-25 | llmengine (v2) | 15.0 (per 1000000 token) | 1 token |
google | gemini-2.5-flash-preview-04-17 | llmengine (v2) | 3.5 (per 1000000 token) | 1 token |
google | gemini-2.0-flash-lite | llmengine (v2) | 0.3 (per 1000000 token) | 1 token |
google | gemini-1.5-flash | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
google | gemini-1.5-pro | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
google | gemini-1.5-flash-latest | llmengine (v2) | 0.6 (per 1000000 token) | 1 token |
google | gemini-1.5-pro-latest | llmengine (v2) | 10.0 (per 1000000 token) | 1 token |
google | gemini-1.5-flash-8b | llmengine (v2) | 0.3 (per 1000000 token) | 1 token |
google | gemini-1.5-flash-8b-latest | llmengine (v2) | 0.3 (per 1000000 token) | 1 token |
google | gemini-2.0-flash | llmengine (v2) | 0.4 (per 1000000 token) | 1 token |
google | gemini-2.5-pro-exp-03-25 | llmengine (v2) | 0.0 (per 1000000 token) | 1 token |
google | gemini-2.0-flash-lite-preview-02-05 | llmengine (v2) | 0.3 (per 1000000 token) | 1 token |
groq | llama-3.1-8b-instant | v1 | 0.05 (per 1000000 token) | 1 token |
groq | llama3-70b-8192 | v1 | 0.59 (per 1000000 token) | 1 token |
groq | llama3-8b-8192 | v1 | 0.05 (per 1000000 token) | 1 token |
groq | gemma2-9b-it | v1 | 0.07 (per 1000000 token) | 1 token |
groq | llama-3.3-70b-versatile | v1 | 0.59 (per 1000000 token) | 1 token |
microsoft | gpt-4o | Azure AI Foundry | 5.0 (per 1000000 token) | 1 token |
microsoft | o3-mini | Azure AI Foundry | 4.4 (per 1000000 token) | 1 token |
microsoft | o1-mini | Azure AI Foundry | 12.0 (per 1000000 token) | 1 token |
microsoft | gpt-4o-mini | Azure AI Foundry | 0.66 (per 1000000 token) | 1 token |
microsoft | gpt-4 | Azure AI Foundry | 60.0 (per 1000000 token) | 1 token |
microsoft | gpt-35-turbo-16k | Azure AI Foundry | 4.0 (per 1000000 token) | 1 token |
microsoft | gpt-35-turbo | Azure AI Foundry | 1.5 (per 1000000 token) | 1 token |
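Prices above are quoted per 1,000,000 billing units (tokens for the chat models, characters for tts-1). A quick back-of-the-envelope cost check, using only figures from the table:

```python
def estimate_cost(units_used: int, price_per_million: float) -> float:
    """Cost of a call, given the table's per-1,000,000-unit price."""
    return units_used * price_per_million / 1_000_000

# e.g. 2,500 tokens on openai/gpt-4o at 10.0 per 1,000,000 tokens:
print(estimate_cost(2_500, 10.0))   # 0.025
# 500 tokens on groq/llama-3.1-8b-instant at 0.05 per 1,000,000 tokens:
print(estimate_cost(500, 0.05))     # 2.5e-05
```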
Default Models
Provider | Default model |
---|---|
amazon | amazon.nova-pro-v1:0 |
anthropic | claude-3-7-sonnet-latest |
cohere | command-r |
deepseek | deepseek-chat |
meta | meta.llama3-1-70b-instruct-v1:0 |
mistral | mistral-large-latest |
openai | gpt-4o |
together_ai | Qwen/Qwen2.5-72B-Instruct-Turbo |
xai | grok-2-latest |
google | gemini-2.0-flash |
groq | llama-3.3-70b-versatile |
microsoft | gpt-4o |
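If you keep these defaults in client code, a small lookup makes it easy to fall back to a provider's default model when no explicit model is chosen. Whether the API applies these defaults server-side is not stated here, so this helper is only a client-side convenience sketch; the `<provider>/<model>` form follows the request sketch above.

```python
# Default models from the table above, keyed by provider.
DEFAULT_MODELS = {
    "amazon": "amazon.nova-pro-v1:0",
    "anthropic": "claude-3-7-sonnet-latest",
    "cohere": "command-r",
    "deepseek": "deepseek-chat",
    "meta": "meta.llama3-1-70b-instruct-v1:0",
    "mistral": "mistral-large-latest",
    "openai": "gpt-4o",
    "together_ai": "Qwen/Qwen2.5-72B-Instruct-Turbo",
    "xai": "grok-2-latest",
    "google": "gemini-2.0-flash",
    "groq": "llama-3.3-70b-versatile",
    "microsoft": "gpt-4o",
}

def resolve_model(provider: str, model: str | None = None) -> str:
    """Return a provider/model string, using the provider's default when none is given."""
    return f"{provider}/{model or DEFAULT_MODELS[provider]}"

print(resolve_model("anthropic"))              # anthropic/claude-3-7-sonnet-latest
print(resolve_model("openai", "gpt-4o-mini"))  # openai/gpt-4o-mini
```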