Create a model response.
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Model identifier, e.g. 'openai/gpt-4o'
Text, image, or file inputs to the model
System/developer instructions prepended to input. Not carried over when using previous_response_id.
ID of a prior response to continue a multi-turn conversation. The provider manages conversation state server-side.
Whether to stream the response via server-sent events.
List of tools the model may call (function, web_search, file_search, etc.).
Controls which tool is called. 'auto', 'required', 'none', or a specific tool object.
Reasoning configuration, e.g. {'effort': 'low'|'medium'|'high'}.
How to handle context that exceeds the model's context window. One of 'auto' or 'disabled'.
Whether the provider should store the response server-side for later retrieval.
Up to 16 key-value pairs for tagging the response.
Stable end-user identifier for abuse detection.
Text output configuration, e.g. {'format': {'type': 'json_schema', ...}}.
Additional output data to include, e.g. 'file_search_call.results'.
Whether to run the model response in the background.
List of fallback model IDs to try if the primary model fails. Models are tried in order. Example: ['anthropic/claude-3-opus', 'openai/gpt-4o']
List of model candidates for dynamic routing when using model='@edenai'. Each entry should be 'provider/model', e.g. ['openai/gpt-4o', 'anthropic/claude-3-5-sonnet-20241022']. If not provided, defaults to all available models.
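Taken together, a request combining several of the parameters above might look like the following sketch. The endpoint URL is a placeholder, and field names are inferred from the parameter descriptions here rather than confirmed against the live API:

```python
import json

# Placeholder endpoint URL -- substitute the real "create a model response"
# endpoint for your deployment; it is not stated in this reference.
API_URL = "https://api.example.com/v1/responses"

# Hypothetical request payload; field names follow the parameter
# descriptions above.
payload = {
    "model": "openai/gpt-4o",                      # model identifier
    "input": "Summarize the attached report.",     # text input to the model
    "instructions": "You are a concise analyst.",  # system/developer instructions
    "stream": False,                               # no server-sent events
    "tool_choice": "auto",                         # let the model pick a tool
    "truncation": "auto",                          # handle context overflow
    "store": True,                                 # keep response for later retrieval
    "metadata": {"team": "research"},              # up to 16 key-value pairs
    "fallback_models": [                           # tried in order on failure
        "anthropic/claude-3-opus",
    ],
}

headers = {
    "Authorization": "Bearer <token>",  # replace <token> with your auth token
    "Content-Type": "application/json",
}

body = json.dumps(payload)
```

The serialized `body` and `headers` can then be sent with any HTTP client, e.g. `requests.post(API_URL, headers=headers, data=body)`. To continue a conversation, add `previous_response_id` to the payload and omit `instructions`, since instructions are not carried over.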
Successful Response
"response"