LangChain

Integrate Eden AI with LangChain for building powerful LLM applications with access to 200+ models.

Overview

LangChain is a framework for developing applications powered by language models. Eden AI integrates seamlessly with LangChain’s ChatOpenAI class, giving you access to multiple providers through a single interface.

Installation

Install LangChain and required dependencies:
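For example, with pip (the split `langchain-openai` package provides the `ChatOpenAI` class used below; adjust package names if you pin an older LangChain version):

```shell
pip install -U langchain langchain-openai
```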

Quick Start (Python)

Use Eden AI with LangChain’s OpenAI integration:

Quick Start (TypeScript)
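A corresponding sketch with the `@langchain/openai` package, assuming the same OpenAI-compatible endpoint (`https://api.edenai.run/v2/llm` — confirm in your dashboard):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  apiKey: "YOUR_EDEN_AI_API_KEY",
  model: "openai/gpt-3.5-turbo",
  configuration: {
    // Assumed endpoint -- confirm in your Eden AI dashboard.
    baseURL: "https://api.edenai.run/v2/llm",
  },
});

const response = await llm.invoke("Hello! What can you do?");
console.log(response.content);
```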

Available Models

Access any model from Eden AI:

Streaming Responses

Handle streaming for real-time responses:

Prompt Templates

Use LangChain’s prompt templates:

Chains

Build complex workflows with chains:

RAG (Retrieval-Augmented Generation)

Build RAG applications with vector stores:

Agents

Create autonomous agents:

Conversational Memory

Add memory to maintain context:

Function Calling (Tools)

Use function calling for structured outputs:

Output Parsing

Parse structured outputs:

Multi-Provider Comparison

Compare responses from different providers:

Environment Variables

Store credentials securely:

Best Practices

1. Choose the Right Model

Select models based on your use case:
  • Complex reasoning: anthropic/claude-3-5-sonnet-20241022
  • Fast responses: openai/gpt-3.5-turbo
  • Cost-effective: anthropic/claude-3-haiku-20240307

2. Use Streaming

Enable streaming for better UX:
llm = ChatOpenAI(..., streaming=True)

3. Implement Error Handling

Wrap API calls in try-except blocks:
try:
    response = llm.invoke(messages)
except Exception as e:
    print(f"Error: {e}")

4. Cache Results

Use LangChain’s caching to avoid redundant API calls:
from langchain.cache import InMemoryCache
from langchain.globals import set_llm_cache

set_llm_cache(InMemoryCache())

Next Steps