POST https://api.edenai.run/v2/llm/chat
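The request body schema is not reproduced on this page; the sketch below assumes the endpoint accepts an OpenAI-style chat payload (a `model` string in `<provider>/<model>` form plus a `messages` list) and an Eden AI API key sent as a Bearer token. Treat the field names as assumptions to check against the full API reference, not as confirmed parameters.

```python
import requests

API_KEY = "YOUR_EDEN_AI_API_KEY"  # assumed to be sent as a Bearer token

response = requests.post(
    "https://api.edenai.run/v2/llm/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        # Model identifier assumed to take the form "<provider>/<model>",
        # using one of the rows from the providers table below.
        "model": "openai/gpt-4o",
        "messages": [
            {"role": "user", "content": "Give me a one-line summary of this endpoint."},
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```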
Available Providers
Provider | Model | Version | Price (per 1M tokens) | Billing unit |
---|---|---|---|---|
amazon | amazon.nova-lite-v1:0 | llmengine (v2) | 0.24 | 1 token |
amazon | amazon.nova-micro-v1:0 | llmengine (v2) | 0.14 | 1 token |
amazon | amazon.nova-pro-v1:0 | llmengine (v2) | 3.2 | 1 token |
anthropic | claude-3-5-sonnet-latest | v1 | 15.0 | 1 token |
anthropic | claude-3-5-haiku-latest | v1 | 4.0 | 1 token |
anthropic | claude-3-opus-latest | v1 | 75.0 | 1 token |
anthropic | claude-3-7-sonnet-latest | v1 | 15.0 | 1 token |
cohere | command-r7b-12-2024 | llmengine (v2) | 0.15 | 1 token |
cohere | command-r-plus-08-2024 | llmengine (v2) | 10.0 | 1 token |
cohere | command-r-plus-04-2024 | llmengine (v2) | 10.0 | 1 token |
cohere | command-r-plus | llmengine (v2) | 10.0 | 1 token |
cohere | command-r-08-2024 | llmengine (v2) | 0.6 | 1 token |
cohere | command-r-03-2024 | llmengine (v2) | 0.6 | 1 token |
cohere | command-r | llmengine (v2) | 0.6 | 1 token |
cohere | command | llmengine (v2) | 2.0 | 1 token |
cohere | command-light | llmengine (v2) | 0.6 | 1 token |
deepseek | DeepSeek-V3 | llmengine (v2) | 1.25 | 1 token |
deepseek | DeepSeek-R1 | llmengine (v2) | 7.0 | 1 token |
meta | llama3-1-405b-instruct-v1:0 | llmengine (v2) | 2.4 | 1 token |
meta | llama3-1-70b-instruct-v1:0 | llmengine (v2) | 0.72 | 1 token |
meta | llama3-1-8b-instruct-v1:0 | llmengine (v2) | 0.22 | 1 token |
meta | llama3-2-3b-instruct-v1:0 | llmengine (v2) | 0.15 | 1 token |
meta | llama3-2-1b-instruct-v1:0 | llmengine (v2) | 0.1 | 1 token |
meta | llama3-2-11b-instruct-v1:0 | llmengine (v2) | 0.16 | 1 token |
meta | llama3-3-70b-instruct-v1:0 | llmengine (v2) | 0.72 | 1 token |
mistral | mistral-large-latest | llmengine (v2) | 6.0 | 1 token |
mistral | pixtral-large-latest | llmengine (v2) | 6.0 | 1 token |
mistral | mistral-small-latest | llmengine (v2) | 0.3 | 1 token |
mistral | mistral-saba-latest | llmengine (v2) | 0.6 | 1 token |
mistral | codestral-latest | llmengine (v2) | 0.9 | 1 token |
openai | gpt-4 | llmengine (v2) | 60.0 | 1 token |
openai | gpt-4o | llmengine (v2) | 10.0 | 1 token |
openai | gpt-4o-mini | llmengine (v2) | 0.6 | 1 token |
openai | o1-preview | llmengine (v2) | 60.0 | 1 token |
openai | o1-mini | llmengine (v2) | 4.4 | 1 token |
openai | gpt-4o-2024-05-13 | llmengine (v2) | 10.0 | 1 token |
openai | gpt-4-turbo | llmengine (v2) | 30.0 | 1 token |
openai | o1-2024-12-17 | llmengine (v2) | 60.0 | 1 token |
openai | o1 | llmengine (v2) | 60.0 | 1 token |
openai | o3-mini | llmengine (v2) | 4.4 | 1 token |
openai | gpt-4.5-preview-2025-02-27 | llmengine (v2) | 150.0 | 1 token |
openai | o1-mini-2024-09-12 | llmengine (v2) | 4.4 | 1 token |
openai | o3-mini-2025-01-31 | llmengine (v2) | 4.4 | 1 token |
openai | gpt-4o-2024-08-06 | llmengine (v2) | 10.0 | 1 token |
openai | gpt-4o-mini-2024-07-18 | llmengine (v2) | 0.6 | 1 token |
openai | gpt-3.5-turbo | llmengine (v2) | 1.5 | 1 token |
together_ai | Qwen/Qwen2.5-72B-Instruct-Turbo | llmengine (v2) | 1.2 | 1 token |
together_ai | meta-llama/Llama-3.3-70B-Instruct-Turbo | llmengine (v2) | 0.88 | 1 token |
together_ai | microsoft/WizardLM-2-8x22B | llmengine (v2) | 1.2 | 1 token |
xai | grok-2-latest | llmengine (v2) | 10.0 | 1 token |
xai | grok-2 | llmengine (v2) | 10.0 | 1 token |
xai | grok-2-vision-1212 | llmengine (v2) | 10.0 | 1 token |
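Prices above are quoted per 1,000,000 tokens and billed per individual token, so a rough cost estimate is simply tokens × price / 1,000,000. The table lists a single rate per model, so the sketch below ignores any input/output price split; the rates are copied from the rows above and the currency is not stated on this page.

```python
# Price per 1,000,000 tokens, copied from a few rows of the providers table.
PRICE_PER_1M_TOKENS = {
    "openai/gpt-4o": 10.0,
    "openai/gpt-4o-mini": 0.6,
    "anthropic/claude-3-5-haiku-latest": 4.0,
    "mistral/mistral-small-latest": 0.3,
}

def estimate_cost(model: str, tokens: int) -> float:
    """Approximate cost of `tokens` tokens on `model` (billing unit: 1 token)."""
    return tokens * PRICE_PER_1M_TOKENS[model] / 1_000_000

# 12,000 tokens on gpt-4o-mini: 12_000 * 0.6 / 1_000_000 = 0.0072
print(estimate_cost("openai/gpt-4o-mini", 12_000))
```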
Default Models
Provider | Default model |
---|---|
amazon | amazon.nova-pro-v1:0 |
anthropic | claude-3-7-sonnet-latest |
cohere | command-r |
deepseek | DeepSeek-V3 |
meta | llama3-2-11b-instruct-v1:0 |
openai | gpt-4o |
together_ai | Qwen/Qwen2.5-72B-Instruct-Turbo |
xai | grok-2-latest |
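The defaults above indicate which model each provider name resolves to. Assuming the endpoint lets you pass a bare provider name in place of a full `<provider>/<model>` identifier (an assumption based on this table, not confirmed here), the payload would take the shape below and be expected to resolve `openai` to `gpt-4o`.

```python
# Hypothetical payload relying on a provider default: the bare provider name
# is assumed (based on the Default Models table) to resolve to that
# provider's default model, e.g. "openai" -> gpt-4o.
payload = {
    "model": "openai",
    "messages": [{"role": "user", "content": "Hello!"}],
}
```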