Getting Started with Universal AI

The Universal AI endpoint is the core of Eden AI V3, providing a single unified endpoint for all non-LLM AI features.

Overview

Instead of calling different endpoints for different features, V3’s Universal AI endpoint handles everything through model strings:
POST /v3/universal-ai
One endpoint for:
  • Text analysis (moderation, AI detection, embeddings, sentiment)
  • OCR (text extraction, invoice/ID parsing)
  • Image processing (generation, detection, analysis)
  • Translation (document translation)

Model String Format

The model string tells the endpoint which feature, subfeature, and provider to use, plus an optional model name:
feature/subfeature/provider[/model]
Examples:
  • text/ai_detection/winstonai
  • ocr/financial_parser/google
  • image/generation/openai/dall-e-3
  • translation/document_translation/deepl
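
The model string is plain text, so it can be assembled programmatically. A minimal Python sketch (the helper name is illustrative, not part of the API):
def model_string(feature, subfeature, provider, model=None):
    """Join the parts of a Universal AI model string; the trailing model
    name is optional and only needed when a provider exposes several
    models (e.g. dall-e-3)."""
    parts = [feature, subfeature, provider]
    if model:
        parts.append(model)
    return "/".join(parts)

print(model_string("image", "generation", "openai", "dall-e-3"))
# -> image/generation/openai/dall-e-3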

Basic Request
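
A minimal request sketch in Python. The Bearer authorization header and the https://api.edenai.run base URL are assumptions not covered in this guide; the /v3/universal-ai path, model string, and input field come from the sections above and below:
import requests

API_KEY = "YOUR_API_KEY"  # assumption: Bearer-token authentication

response = requests.post(
    "https://api.edenai.run/v3/universal-ai",  # assumption: base URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "text/ai_detection/winstonai",  # feature/subfeature/provider
        "input": {"text": "Text to analyze"},
    },
)
print(response.json())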

Response Format

All Universal AI responses follow the same structure:
{
  "status": "success",
  "cost": 0.0001,
  "provider": "openai",
  "feature": "text",
  "subfeature": "moderation",
  "output": {
    // Feature-specific output
  }
}
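
Because every response shares these top-level fields, results can be routed in one place before touching the feature-specific output. A small sketch assuming the parsed JSON body above (the error shape is not documented in this section):
def handle_response(result: dict):
    """Return the feature-specific output of a Universal AI response."""
    if result["status"] == "success":
        print(f'{result["provider"]} {result["feature"]}/{result["subfeature"]} cost: {result["cost"]}')
        return result["output"]  # feature-specific payload
    # error responses are not described in this section
    raise RuntimeError(f"Universal AI call failed: {result}")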

Input Formats

The input field varies based on the feature:

Text-Based Features

{
  "model": "text/ai_detection/winstonai",
  "input": {
    "text": "Text to analyze"
  }
}

File-Based Features (UUID)

{
  "model": "ocr/financial_parser/google",
  "input": {
    "file": "550e8400-e29b-41d4-a716-446655440000"
  }
}

File-Based Features (URL)

{
  "model": "image/object_detection/google",
  "input": {
    "file": "https://example.com/image.jpg"
  }
}
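
The three shapes differ only in the input key. A sketch of a small helper that chooses between them, where the rule "text features take text, everything else takes a file UUID or URL" is an inference from the examples above rather than a documented guarantee:
def build_payload(model: str, value: str) -> dict:
    """Build a Universal AI request body for a given model string."""
    feature = model.split("/")[0]
    if feature == "text":
        return {"model": model, "input": {"text": value}}
    # inferred: non-text features take a file, either an uploaded-file UUID or a URL
    return {"model": model, "input": {"file": value}}

build_payload("text/ai_detection/winstonai", "Text to analyze")
build_payload("image/object_detection/google", "https://example.com/image.jpg")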

Common Use Cases

Text Moderation
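
A sketch of a moderation request; the text/moderation/openai model string is inferred from the response example above, and the base URL and Bearer auth are the same assumptions as in Basic Request:
import requests

response = requests.post(
    "https://api.edenai.run/v3/universal-ai",          # assumed base URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
    json={
        "model": "text/moderation/openai",  # inferred from the response example
        "input": {"text": "Text to check for harmful content"},
    },
)
print(response.json()["output"])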

AI Content Detection
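
A sketch of an AI-detection request using the text/ai_detection/winstonai model string listed earlier; the base URL and auth are assumptions as above:
import requests

response = requests.post(
    "https://api.edenai.run/v3/universal-ai",          # assumed base URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
    json={
        "model": "text/ai_detection/winstonai",
        "input": {"text": "Text to analyze for AI generation"},
    },
)
print(response.json()["output"])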

OCR Text Extraction
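
A sketch of an OCR request against an uploaded-file UUID; the exact text-extraction subfeature is not named in this guide, so ocr/text_extraction/google below is a placeholder, and the base URL and auth are assumptions as above:
import requests

response = requests.post(
    "https://api.edenai.run/v3/universal-ai",          # assumed base URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # assumed auth scheme
    json={
        "model": "ocr/text_extraction/google",  # placeholder subfeature name, not confirmed here
        "input": {"file": "550e8400-e29b-41d4-a716-446655440000"},  # uploaded-file UUID
    },
)
print(response.json()["output"])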

Next Steps