Universal AI endpoint for all non-LLM AI features.
Model format: feature/subfeature/provider[/model]
Request body:
Example:
{
  "model": "text/moderation/google",
  "input": {"text": "Content to moderate"}
}
Response:
{
  "status": "success",
  "cost": "0.001",
  "provider": "google",
  "feature": "text",
  "subfeature": "moderation",
  "output": {...},
  "error": null
}
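The request shape above, including the Bearer authentication header, can be sketched with the standard library. The base URL here is a placeholder, not the real endpoint address, and actually sending the request requires a valid token and network access.

```python
import json
import urllib.request

# Hypothetical base URL; substitute your deployment's actual endpoint.
API_URL = "https://api.example.com/v2/universal-ai"

def build_request(token: str, model: str, input_params: dict) -> urllib.request.Request:
    """Build a POST request for the universal AI endpoint with Bearer auth."""
    payload = {"model": model, "input": input_params}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("my-token", "text/moderation/google",
                    {"text": "Content to moderate"})
# To send: urllib.request.urlopen(req) -- needs a real URL and auth token.
```

The response body would then be parsed with `json.loads` into the normalized structure shown above.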
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Universal AI request body.
Model format: feature/subfeature/provider[/model]
Examples:
- text/moderation/google
- ocr/ocr/amazon
- image/generation/google/imagen-3
The input dict contains feature-specific parameters that are
validated at runtime based on the parsed feature/subfeature.
Model in format: feature/subfeature/provider[/model]
"text/moderation/google"
"ocr/ocr/amazon"
"image/generation/google/imagen-3"
Feature-specific input parameters. Required fields depend on the feature/subfeature specified in provider. Examples:
{
  "text": "Content to moderate for harmful material"
}
{
  "dimensions": 512,
  "texts": ["text1", "text2"]
}
{
  "file_id": "abc123",
  "language": "en"
}
Provider-specific parameters
Include raw provider response in the output
Successful Response
Normalized response from universal-ai endpoint.
All responses have a consistent structure regardless of the feature/subfeature.
Whether the request succeeded or failed
One of: success, fail
Cost in credits for this request
Provider name that processed the request
Feature category (e.g., text, ocr, image)
Specific subfeature (e.g., ai_detection, sentiment)
Normalized output from the provider
Error message from the provider (only present when status is 'fail')
Raw response from the provider (if show_original_response=true)
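Because every response shares this structure, a caller can unwrap it uniformly: check status, raise with the provider error on failure, and return the normalized output otherwise. The `flagged` key in the sample output is illustrative, not a documented field.

```python
def unwrap(response: dict) -> dict:
    """Return the normalized output, or raise with the provider error on failure."""
    if response["status"] != "success":
        raise RuntimeError(f"{response['provider']} failed: {response['error']}")
    return response["output"]

ok = {"status": "success", "cost": "0.001", "provider": "google",
      "feature": "text", "subfeature": "moderation",
      "output": {"flagged": False},  # hypothetical normalized output
      "error": None}
unwrap(ok)  # → {'flagged': False}
```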