Create Async Job

POST /v3/universal-ai/async
Example request:

curl --request POST \
  --url https://api.edenai.run/v3/universal-ai/async \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "input": {},
  "provider_params": {},
  "show_original_response": false
}
'
Example response (job created):

{
  "public_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "status": "success",
  "cost": "<string>",
  "provider": "<string>",
  "feature": "<string>",
  "subfeature": "<string>",
  "created_at": "2023-11-07T05:31:56Z",
  "output": "<unknown>",
  "error": {},
  "original_response": "<unknown>",
  "model": "<string>"
}
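The same request can be made from Python. This is a minimal sketch using the `requests` library conventions; the `build_job_request` helper and the `YOUR_TOKEN` placeholder are illustrative, not part of the API itself.

```python
API_BASE = "https://api.edenai.run/v3"

def build_job_request(token, model, input_params,
                      provider_params=None, show_original_response=False):
    """Build the URL, headers, and JSON body for a Create Async Job call."""
    url = f"{API_BASE}/universal-ai/async"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "input": input_params,
        "provider_params": provider_params or {},
        "show_original_response": show_original_response,
    }
    return url, headers, body

# Example: a text moderation job. Send it with:
#   requests.post(url, headers=headers, json=body)
url, headers, body = build_job_request(
    token="YOUR_TOKEN",
    model="text/moderation/google",
    input_params={"text": "Content to moderate for harmful material"},
)
```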

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

Universal AI request body.

Model format: feature/subfeature/provider[/model]

Examples:
- text/moderation/google
- ocr/ocr/amazon
- image/generation/google/imagen-3

The input dict contains feature-specific parameters that are validated at runtime based on the parsed feature/subfeature.

model
string
required

Model in format: feature/subfeature/provider[/model]

Examples:

"text/moderation/google"

"ocr/ocr/amazon"

"image/generation/google/imagen-3"

input
Input · object
required

Feature-specific input parameters. Required fields depend on the feature/subfeature specified in model. Examples:

  • text/moderation: {'text': 'content to moderate'}
  • text/embeddings: {'texts': ['text1', 'text2']}
  • ocr/ocr: {'file_id': 'abc123', 'language': 'en'}
  • image/generation: {'text': 'prompt', 'resolution': '1024x1024'}
  • translation/document_translation: {'file_id': 'abc123', 'target_language': 'fr'}
Examples:

{ "text": "Content to moderate for harmful material" }

{ "dimensions": 512, "texts": ["text1", "text2"] }

{ "file_id": "abc123", "language": "en" }
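To make the runtime validation concrete, here is a hypothetical client-side sketch of the idea: the required input keys depend on the feature/subfeature parsed from the model string. The key sets below come from the examples in this section only; the API's actual validator is not documented here.

```python
# Required input keys per (feature, subfeature), per the examples above.
REQUIRED_INPUT_KEYS = {
    ("text", "moderation"): {"text"},
    ("text", "embeddings"): {"texts"},
    ("ocr", "ocr"): {"file_id"},
    ("image", "generation"): {"text"},
    ("translation", "document_translation"): {"file_id", "target_language"},
}

def validate_input(model, input_params):
    """Check input_params against the keys required by model's feature/subfeature."""
    feature, subfeature = model.split("/")[:2]
    required = REQUIRED_INPUT_KEYS.get((feature, subfeature), set())
    missing = required - input_params.keys()
    if missing:
        raise ValueError(f"{feature}/{subfeature} input missing: {sorted(missing)}")
    return True
```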
provider_params
Provider Params · object

Provider-specific parameters

show_original_response
boolean | null
default:false

Include raw provider response in the output

Response

Job Created

Response from the async universal-ai endpoint. Used for both job creation (202) and job status polling (GET).

public_id
string<uuid>
required

Job ID for polling status

status
enum<string>
required

Job status: processing (still running), success, or fail

Available options:
success,
fail,
processing
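Since a new job starts in processing, clients typically poll until the status changes. Below is a minimal polling sketch; the helper takes a `fetch_status` callable so the transport is up to you. Note the commented GET URL is an assumption based on "Job ID for polling status" — the exact polling path is not documented in this section.

```python
import time

def poll_until_done(fetch_status, interval=2.0, timeout=60.0):
    """Call fetch_status() until the job leaves 'processing'.

    fetch_status should return the job dict shown above, i.e. an object
    with a 'status' key of 'processing', 'success', or 'fail'.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_status()
        if job["status"] != "processing":
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError("job still processing after timeout")
        time.sleep(interval)

# With requests, fetch_status might look like this (URL is an assumption):
# fetch_status = lambda: requests.get(
#     f"https://api.edenai.run/v3/universal-ai/async/{public_id}",
#     headers={"Authorization": f"Bearer {token}"},
# ).json()
```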
cost
string
required

Cost in credits for this request

Pattern: ^(?!^[-+.]*$)[+-]?0*\d*\.?\d*$
provider
string
required

Provider name that processed the request

feature
string
required

Feature category (e.g., audio, ocr, image)

subfeature
string
required

Specific subfeature (e.g., speech_to_text_async, ocr_async)

created_at
string<date-time>
required

Job creation timestamp

output
any | null

Normalized output from the provider (null while processing)

error
Error · object

Error details from the provider (only present when status is 'fail')

original_response
any | null

Raw response from the provider (if show_original_response=true)

model
string | null

Model name if specified in the request