Tutorial

Access all LLM models with ONE unified, OpenAI-compatible API

Tired of juggling different LLM APIs? This tutorial shows how to build a FastAPI backend using Eden AI’s unified endpoint—supporting text and image models from top providers like OpenAI and Google, all through a single, OpenAI-compatible API.


Developers and organizations face an increasingly complex challenge: how to integrate a growing variety of large language models (LLMs) without compromising speed, reliability, or user experience.

Each provider offers powerful capabilities, but also different APIs, request formats, and integration requirements.

Beyond that, AI is no longer limited to text; many providers now offer multimodal models that combine text, image, and other types of data - each often requiring its own distinct handling.

For developers, this means not just constant switching between providers, but also adapting workflows for different types of inputs and outputs, a process that drains time, increases complexity, and diverts focus away from building great products.

We believed there had to be a better way.
So we built it.

Unlock all LLM models with a single API. Fully compatible with OpenAI!

Today, we are thrilled to introduce our new unified LLM endpoint: a solution designed from the ground up to make working with the best AI models effortless, scalable, and future-proof.

At the heart of this new system is a simple but powerful idea: a single, OpenAI-compatible interface that provides direct, seamless access to all major LLM providers. This endpoint is designed not just for text generation but for full multimodal interaction, supporting both text and image processing through a single, consistent API call.

Developers can now effortlessly switch between models, or even between modalities, without any disruption to their existing workflows — all while using the familiar OpenAI API structure.

This isn’t just another aggregator or wrapper. This is a fully professional-grade, production-ready API that lets you plug into the future of AI with the same familiar tools and workflows you already use today. No drama, no rewrites, no fragmented workflows.

In this tutorial, you’ll see our new endpoint in action and learn how to build a powerful web API with FastAPI that integrates seamlessly with Eden AI, providing unified access to a range of AI services from text generation to image analysis, through a single, OpenAI-compatible multimodal chat endpoint.

For a detailed breakdown of this tutorial, be sure to check out our video on the Eden AI YouTube channel.

Whether you're a complete beginner or just getting started with APIs and AI integration, this guide will walk you through everything step by step, from setting up the environment to writing the code and understanding what each line does.

What is Eden AI and Why Should You Use It?

Eden AI is a unified API gateway for artificial intelligence services. Instead of integrating separately with providers like OpenAI, Cohere, or Stability AI, Eden AI acts as a bridge providing a single API format to interact with multiple providers and models.

This drastically simplifies the developer experience:

  • No need to learn each provider’s API format
  • Easy switching between models (like GPT-4o, Claude, or Mistral)
  • Centralized billing and management

Real-World Benefits

For developers and businesses, this means you can:

  • Swap models with minimal code changes
  • Try out different AI providers to compare cost and performance
  • Rapidly build and test AI-powered features
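
To sketch what "minimal code changes" means in practice: with an OpenAI-compatible payload, switching providers comes down to changing a single model string. The helper and model identifiers below are illustrative, not part of the tutorial code.

```python
def build_chat_payload(model: str, prompt: str) -> dict:
    # The same OpenAI-style payload shape works for every provider;
    # only the model identifier changes.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Swapping providers is a one-string change:
gpt_payload = build_chat_payload("openai/gpt-4o", "Hello!")
claude_payload = build_chat_payload("anthropic/claude-3-sonnet", "Hello!")
```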

Tools & Technologies Used

  • Python 3.10+: programming language
  • FastAPI: web API framework
  • Uvicorn: ASGI server to run FastAPI apps
  • Pydantic: data validation and typing
  • requests: makes HTTP requests to Eden AI
  • python-dotenv: loads the .env config file
  • Eden AI: provides the AI services
  • CORS middleware: cross-origin access for the frontend

💡 Installation Tip: Install the dependencies with:


pip install fastapi uvicorn pydantic python-dotenv requests

  

Step-by-Step Tutorial

Step 1: Project Setup

Create a new folder for your project and inside it, create:

  • main.py – the main FastAPI app

  • .env – to store your Eden AI API key

Inside .env, add:

EDEN_AI_API_KEY=your_eden_api_key_here

Never commit your API key to version control like GitHub!
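
To keep the key out of version control, add the file to your project's .gitignore:

```
.env
```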

Step 2: FastAPI App Initialization

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

FastAPI() initializes the app

CORSMiddleware allows access from other origins (like your frontend):

app = FastAPI(
    title="Eden AI API Integration",
    description="Simplified FastAPI application for Eden AI text and image analysis"
)

# Allow all origins for testing
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
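
allow_origins=["*"] is fine for local testing, but in production you would typically whitelist only the origins your frontend actually uses. The URLs below are placeholders:

```python
# Placeholder origins; replace with your real frontend URLs.
ALLOWED_ORIGINS = [
    "https://app.example.com",  # deployed frontend
    "http://localhost:3000",    # local dev server
]

# Then pass allow_origins=ALLOWED_ORIGINS to app.add_middleware(...)
# instead of allow_origins=["*"].
```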

Step 3: Load Environment Variables

from dotenv import load_dotenv
import os

load_dotenv()
EDEN_AI_API_KEY = os.getenv("EDEN_AI_API_KEY")
if not EDEN_AI_API_KEY:
    print("Warning: EDEN_AI_API_KEY not found in environment")

load_dotenv() reads your .env file into the environment

os.getenv fetches the key

If the key isn't found, a warning is printed at startup

Step 4: Define Request Models Using Pydantic

Pydantic models are a core feature of FastAPI. They define the structure, types, and defaults for incoming requests:


from pydantic import BaseModel
from typing import Optional

class TextRequest(BaseModel):
    text: str
    model: str = "openai/gpt-4o"
    temperature: Optional[float] = 0.7
    max_tokens: Optional[int] = None

  • text: user prompt
  • model: which LLM to use (default GPT-4o)
  • temperature: randomness (higher = more creative)
  • max_tokens: response length limit
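
To see the defaults and validation in action, here is a quick standalone snippet (it re-declares the same TextRequest model so it runs on its own):

```python
from pydantic import BaseModel, ValidationError
from typing import Optional

class TextRequest(BaseModel):
    text: str
    model: str = "openai/gpt-4o"
    temperature: Optional[float] = 0.7
    max_tokens: Optional[int] = None

# Only "text" is required; the defaults fill in the rest.
req = TextRequest(text="Hello")
print(req.model)  # openai/gpt-4o

try:
    TextRequest()  # "text" is missing, so validation fails
except ValidationError:
    print("missing required field: text")
```

FastAPI runs this same validation automatically on every incoming request and returns a 422 response when it fails.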

Similarly, for image requests:

class ImageUrlRequest(BaseModel):
    image_url: str
    prompt: str
    model: str = "openai/gpt-4o"
    temperature: Optional[float] = 0.7
    max_tokens: Optional[int] = None

Step 5: Eden AI API Headers

Before calling the Eden API, we define a utility function to generate headers:

def get_headers():
    return {"Authorization": f"Bearer {EDEN_AI_API_KEY}"}

This ensures every request includes the Authorization header, which identifies your account to Eden AI.

Step 6: Create the Text Generation Endpoint


from fastapi import HTTPException, Body
from fastapi.responses import JSONResponse
import requests

This is the route that handles user prompts and returns generated responses:


@app.post("/api/text", response_class=JSONResponse)
async def text_completion(request: TextRequest):

It accepts a TextRequest object and sends a POST request to Eden AI’s /v2/llm/chat endpoint with the required structure.

This format mimics a typical chat interaction where the user provides input, and the AI responds.


payload = {
    "model": request.model,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": request.text
                }
            ]
        }
    ]
}

Optionally add temperature and max_tokens:


if request.temperature is not None:
    payload["temperature"] = request.temperature
if request.max_tokens is not None:
    payload["max_tokens"] = request.max_tokens

Send the POST request:


response = requests.post("https://api.edenai.run/v2/llm/chat", json=payload, headers=get_headers(), timeout=30)

Error handling is done using FastAPI’s HTTPException, which cleanly returns status codes and error messages:


if response.status_code != 200:
    raise HTTPException(status_code=response.status_code, detail=f"Eden AI API Error: {response.text}")
return response.json()
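
The payload-building steps above can be condensed into a small helper (a hypothetical refactor, not part of the tutorial code) that is easy to unit-test without calling the API:

```python
def build_text_payload(text, model="openai/gpt-4o", temperature=None, max_tokens=None):
    # Mirrors the payload assembled inside the /api/text route.
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": text}]}
        ],
    }
    # Optional parameters are only included when set.
    if temperature is not None:
        payload["temperature"] = temperature
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload
```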

Step 7: Create the Image Analysis Endpoint

This works similarly to the text route but includes an image URL:


@app.post("/api/image", response_class=JSONResponse)
async def analyze_image(request: ImageUrlRequest):

The request payload is slightly different here, as it includes both text and image_url types in the content array:

Payload example:


payload = {
    "model": request.model,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": request.prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": request.image_url}
                }
            ]
        }
    ]
}

Everything else (headers, request, error handling) remains the same.

This lets you ask things like:

  • “What objects are in this picture?”
  • “Is this street safe to walk on at night?”

And get intelligent responses based on the image content.
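
As with the text route, the image payload logic can be sketched as a standalone helper (hypothetical, for illustration) to make the two-part content array explicit:

```python
def build_image_payload(prompt, image_url, model="openai/gpt-4o"):
    # Mirrors the payload assembled inside the /api/image route:
    # one text part plus one image_url part in the same user message.
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }
```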

Step 8: Add a Health Check Route

A simple ping route to check if the API is running:


@app.get("/health")
async def health_check():
    return {"status": "ok"}

Use this for automated monitoring or to verify deployments.

Step 9: Run Your App

To start your FastAPI app, use Uvicorn:

uvicorn main:app --reload

Then open your browser and visit:

Docs: http://localhost:8000/docs

Health check: http://localhost:8000/health

Example Use Cases

1. Text Generation

You can send prompts like:

  • "Write a 5-line poem about summer"
  • "Summarize this paragraph..."

And get AI-generated responses from GPT-4o.

2. Image Analysis

Use it to analyze image URLs with prompts like:

  • "Describe this scene"
  • "What objects are visible?"

Conclusion

You’ve now built a powerful FastAPI backend that talks to Eden AI, handles both text and image prompts, and is structured in a way that’s extensible and production-friendly.

It’s a clean, secure, and scalable solution — and most importantly, it demystifies how AI-powered APIs work under the hood.

This project is a fantastic starting point. Here are ideas to take it further:

  • Connect it to a frontend (React, Vue, or Svelte)
  • Add authentication with OAuth or API keys
  • Log and store user requests and responses
  • Allow switching between AI providers with a dropdown
  • Support image upload (not just URLs)
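
For the last idea, one possible approach (assuming the chat endpoint accepts base64 data URLs in the image_url field, as OpenAI-compatible APIs commonly do) is to convert an uploaded file into a data URL before sending it:

```python
import base64

def image_file_to_data_url(path: str, mime: str = "image/jpeg") -> str:
    # Read the file and embed it as a base64 data URL, which can be
    # sent in place of a regular image URL.
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```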

Resources

Ready to dive deeper? We've got everything you need to get started.

Whether you're migrating from another LLM provider or building your first AI-powered app, we've made it easy to explore, implement, and launch with confidence:

Full Documentation
Get a complete overview of how the unified LLM API works, including endpoints, parameters, and response formats.

Migration Guide
Already using OpenAI or another provider? This guide walks you through switching to Eden AI’s unified endpoint, without breaking your existing setup.

Code Example
Want to see it in action? Check out our full working FastAPI implementation with real code examples.

Start Your AI Journey Today

  • Access 100+ AI APIs in a single platform.
  • Compare and deploy AI models effortlessly.
  • Pay-as-you-go with no upfront fees.
Start building FREE


Try Eden AI for free.

You can directly start building now. If you have any questions, feel free to chat with us!
