OpenAI-compatible chat completions endpoint (v3).
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
The name of the LLM model to use.
The messages to send to the LLM model.
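A minimal sketch of calling the endpoint with the Bearer header, model, and messages described above, using only the standard library. The base URL and token are placeholders, not real values.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com/v3"  # hypothetical base URL
TOKEN = "YOUR_AUTH_TOKEN"                # your auth token

def build_request(model: str, messages: list) -> urllib.request.Request:
    """Build a POST request for the chat completions endpoint."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",  # Bearer <token> header
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("gpt-4o", [{"role": "user", "content": "Hello!"}])
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns a standard chat completion response body.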
List of model candidates for dynamic routing when using model='@edenai'. If not provided, defaults to all available models.
The number of completions to generate for each prompt. Defaults to 1. Must satisfy x >= 1.
The reasoning effort level for the LLM model. Allowed values: low, medium, high.
List of metadata associated with the chat request. Can be used to provide additional context or tracking information.
Penalty for repeated tokens in the output. Must satisfy -2 <= x <= 2.
Logit bias to influence token generation.
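A sketch of a payload combining the penalty and bias parameters above. The token ID in logit_bias is tokenizer-specific and purely illustrative.

```python
# Request payload using frequency_penalty and logit_bias.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Name a color."}],
    "frequency_penalty": 0.5,      # must lie in [-2, 2]
    "logit_bias": {"1234": -100},  # suppress one token (hypothetical token ID)
}
```

A bias of -100 effectively bans the token; values near 0 only nudge its likelihood.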
Whether to include log probabilities of tokens in the output. Defaults to False.
Number of top log probabilities to return with each token. Must satisfy 1 <= x <= 20.
The maximum number of tokens to generate in the chat completion. Must satisfy x >= 1.
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. Must satisfy x >= 1.
List of supported input/output modalities for the chat.
Field for storing prediction-related information.
Dictionary for audio-related parameters or metadata.
Penalty for new tokens based on their presence in the text so far. Must satisfy -2 <= x <= 2.
Specify the desired response format for the completion.
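A sketch of requesting structured output via response_format, assuming the endpoint supports the standard OpenAI "json_object" mode.

```python
# Ask for a JSON-only reply. When using json_object mode, the messages
# should also instruct the model to respond in JSON.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List two prime numbers."},
    ],
    "response_format": {"type": "json_object"},
}
```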
Seed for random number generation.
'auto': Automatically select the appropriate tier. 'default': Use the default service tier. Allowed values: auto, default.
List of stop sequences to end the generation.
Whether to stream the response in real-time. Defaults to False.
Options for streaming responses, such as chunk size or format.
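When stream=True, chunks typically arrive as server-sent events: lines of the form "data: {...}" terminated by "data: [DONE]". The parser below assumes that standard SSE framing and uses canned lines in place of a live stream.

```python
import json

def iter_deltas(lines):
    """Yield content deltas from raw SSE lines of a streamed completion."""
    for raw in lines:
        if not raw.startswith("data: "):
            continue  # skip keep-alives and blank lines
        data = raw[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Canned lines standing in for a live stream:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_deltas(sample)))  # → Hello
```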
Sampling temperature for controlling randomness in output. Must satisfy 0 <= x <= 2.
Nucleus sampling parameter for controlling diversity in output. Defaults to 1.0. Must satisfy 0 <= x <= 1.
List of tools that can be used by the model to assist in generating responses.
Specify how tools should be used. Can be 'auto', 'required', 'none', or an object like {'type': 'function', 'function': {'name': 'tool_name'}} to force a specific tool.
Whether to allow parallel tool calls in the completion.
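A sketch of declaring a tool and forcing its use via tool_choice, in the object form described above. The function name and schema are illustrative, not part of the API.

```python
# Declare one function tool and force the model to call it.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Force a call to get_weather rather than letting the model decide:
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
    "parallel_tool_calls": False,
}
```

With tool_choice set to "auto" instead, the model decides whether to call a tool or answer directly.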
User identifier for tracking or personalization purposes.
Function call parameters for invoking specific functions during the chat.
List of functions that can be called by the model to assist in generating responses.
Parameters related to the model's reasoning or thinking process.
Options for web search integration. Example: web_search_options={"search_context_size": "medium"}, where "search_context_size" may be "low", "medium", or "high".
Hint the model to be more or less expansive in its replies. Allowed values: "low", "medium", "high" (GPT-5 models).
Successful Response