Submit an asynchronous Universal AI job for long-running AI features.
Model format: feature/subfeature/provider[/model]
Use this endpoint for features that require processing time (e.g., speech-to-text, OCR on large documents). The response returns a job ID that you can poll using GET /v3/universal-ai/async/{job_id}.
Request body is identical to the sync endpoint (POST /v3/universal-ai).
Example:
{
  "model": "audio/speech_to_text_async/google",
  "input": {"file": "YOUR_FILE_UUID_OR_URL", "language": "en"}
}
Response (202 Accepted):
{
  "public_id": "abc123-def456",
  "status": "processing",
  "cost": "0.000",
  "provider": "google",
  "feature": "audio",
  "subfeature": "speech_to_text_async",
  "output": null,
  "error": null,
  "model": null,
  "created_at": "2025-01-01T00:00:00Z"
}
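Since the 202 response only confirms that the job was accepted, clients typically poll the status endpoint until `status` leaves `processing`. The sketch below shows one way to structure that loop; `fetch_status` is a hypothetical callable standing in for your HTTP client (e.g. a `GET /v3/universal-ai/async/{job_id}` call with the Bearer header), so the flow can be exercised without network access.

```python
import time

TERMINAL_STATUSES = {"success", "fail"}

def poll_job(fetch_status, job_id, interval=2.0, max_attempts=60):
    """Poll an async Universal AI job until it reaches a terminal status.

    fetch_status(job_id) should return the parsed JSON body of
    GET /v3/universal-ai/async/{job_id}; plug in your own HTTP client
    (with your Bearer auth token) here.
    """
    for _ in range(max_attempts):
        job = fetch_status(job_id)
        if job["status"] in TERMINAL_STATUSES:
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still processing after {max_attempts} polls")

# Offline stand-in for real HTTP calls, mimicking the response shape above.
_fake_responses = iter([
    {"public_id": "abc123-def456", "status": "processing", "output": None},
    {"public_id": "abc123-def456", "status": "success", "output": {"text": "..."}},
])
result = poll_job(lambda jid: next(_fake_responses), "abc123-def456", interval=0.0)
```

A fixed interval is the simplest choice; for long-running jobs, exponential backoff between polls reduces request volume.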
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Universal AI request body.
Model format: feature/subfeature/provider[/model]
Examples:
- text/moderation/google
- ocr/ocr/amazon
- image/generation/google/imagen-3
The input dict contains feature-specific parameters that are validated at runtime based on the parsed feature/subfeature.
Model in format: feature/subfeature/provider[/model]
Examples:
- "text/moderation/google"
- "ocr/ocr/amazon"
- "image/generation/google/imagen-3"
Feature-specific input parameters. Required fields depend on the feature/subfeature specified in the model string. Examples:
{
  "text": "Content to moderate for harmful material"
}
{
  "dimensions": 512,
  "texts": ["text1", "text2"]
}
{
  "file_id": "abc123",
  "language": "en"
}
Provider-specific parameters
Include raw provider response in the output
Job Created
Response from the async universal-ai endpoint. Used for both job creation (202) and job status polling (GET).
Job ID for polling status
Job status: processing (still running), success, or fail
Allowed values: success, fail, processing
Cost in credits for this request
Pattern: ^(?!^[-+.]*$)[+-]?0*\d*\.?\d*$
Provider name that processed the request
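The cost pattern accepts an optionally signed decimal number while rejecting strings made up only of signs and dots. It can be checked as-is with Python's `re` module, which supports the leading negative lookahead:

```python
import re

# Cost pattern from the response schema: an optionally signed decimal,
# but not a string consisting solely of '+', '-', or '.' characters.
COST_PATTERN = re.compile(r"^(?!^[-+.]*$)[+-]?0*\d*\.?\d*$")

valid = COST_PATTERN.match("0.000") is not None
invalid = COST_PATTERN.match(".") is not None
```

Note that the pattern also admits bare integers such as `"12"` and leading zeros such as `"007.5"`, since `0*`, `\d*`, and `\.?` are all optional.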
Feature category (e.g., audio, ocr, image)
Specific subfeature (e.g., speech_to_text_async, ocr_async)
Job creation timestamp
Normalized output from the provider (null while processing)
Error details from the provider (only present when status is 'fail')
Raw response from the provider (if show_original_response=true)
Model name if specified in the request