
Tired of juggling different LLM APIs? This tutorial shows how to build a FastAPI backend using Eden AI’s unified endpoint—supporting text and image models from top providers like OpenAI and Google, all through a single, OpenAI-compatible API.
Developers and organizations face an increasingly complex challenge: how to integrate a growing variety of large language models (LLMs) without compromising speed, reliability, or user experience.
Each provider offers powerful capabilities, but also different APIs, request formats, and integration requirements.
Beyond that, AI is no longer limited to text; many providers now offer multimodal models that combine text, image, and other types of data, each often requiring its own distinct handling.
For developers, this means not just constant switching between providers, but also adapting workflows for different types of inputs and outputs, a process that drains time, increases complexity, and diverts focus away from building great products.
We believed there had to be a better way.
So we built it.
Today, we are thrilled to introduce our new unified LLM endpoint: a solution designed from the ground up to make working with the best AI models effortless, scalable, and future-proof.
At the heart of this new system is a simple but powerful idea: a single, OpenAI-compatible interface that provides direct, seamless access to all major LLM providers. This endpoint is designed not just for text generation but for full multimodal interaction, supporting both text and image processing through a single, consistent API call.
Developers can now effortlessly switch between models, or even between modalities, without any disruption to their existing workflows — all while using the familiar OpenAI API structure.
This isn’t just another aggregator or wrapper. This is a fully professional-grade, production-ready API that lets you plug into the future of AI with the same familiar tools and workflows you already use today. No drama, no rewrites, no fragmented workflows.
In this tutorial, you’ll see our new endpoint in action and learn how to build a powerful web API with FastAPI that integrates seamlessly with Eden AI, providing unified access to a range of AI services from text generation to image analysis, through a single, OpenAI-compatible multimodal chat endpoint.
For a detailed breakdown of this tutorial, be sure to check out our video on the Eden AI YouTube channel.
Whether you're a total beginner or just getting started with APIs and AI integration, this guide will walk you through everything step by step — from setting up the environment to writing the code and understanding what each line does.
Eden AI is a unified API gateway for artificial intelligence services. Instead of integrating separately with providers like OpenAI, Cohere, or Stability AI, Eden AI acts as a bridge providing a single API format to interact with multiple providers and models.
This drastically simplifies the developer experience: instead of maintaining a separate integration for every provider, you write against one request format and can compare, switch, or combine models at will.
💡 Installation Tip: Install the dependencies with:
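Assuming the libraries used throughout this tutorial (FastAPI itself, Uvicorn as the server, Requests for HTTP calls, and python-dotenv for loading the API key), the install command looks like:

```shell
pip install fastapi uvicorn requests python-dotenv
```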
Create a new folder for your project and, inside it, create two files: main.py for your application code and .env for your secret keys.
Inside .env, add:
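A single line holding your key is enough (the variable name is our convention here; just make sure your code reads the same one):

```
EDEN_AI_API_KEY=your_eden_ai_api_key_here
```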
Never commit your API key to version control; anyone who can see your GitHub repository can read it!
- FastAPI() initializes the app.
- CORSMiddleware allows access from other origins (like your frontend).
- dotenv loads your .env file.
- os.getenv fetches the key.
- If the key isn’t found, a warning is logged.
Pydantic models are a core feature of FastAPI. They define the structure, types, and defaults for incoming requests:
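A text request model might look like this sketch (the gpt-4o default follows the model used later in the tutorial; the temperature and max_tokens defaults are illustrative):

```python
from pydantic import BaseModel

class TextRequest(BaseModel):
    prompt: str               # the user's input text (required)
    model: str = "gpt-4o"     # any model supported by Eden AI
    temperature: float = 0.7  # illustrative default
    max_tokens: int = 1000    # illustrative default
```

FastAPI validates incoming JSON against this model automatically, so a request missing `prompt` is rejected with a clear 422 error before your code runs.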
Similarly, for image requests:
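An image request model only needs one extra field for the image location (again, field names and the default model are our assumptions):

```python
from pydantic import BaseModel

class ImageRequest(BaseModel):
    prompt: str               # question to ask about the image
    image_url: str            # publicly reachable URL of the image
    model: str = "gpt-4o"     # a multimodal-capable model
```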
Before calling the Eden API, we define a utility function to generate headers:
This ensures every request includes the Authorization header, which identifies your account to Eden AI.
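A minimal sketch of such a helper, assuming the key is read from the EDEN_AI_API_KEY environment variable and that Eden AI uses the common Bearer token scheme:

```python
import os

def get_headers() -> dict:
    # Attach the account key loaded from .env to every outgoing request
    return {"Authorization": f"Bearer {os.getenv('EDEN_AI_API_KEY')}"}
```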
This is the route that handles user prompts and returns generated responses:
It accepts a TextRequest object and sends a POST request to Eden AI’s /v2/llm/chat endpoint with the required structure.
This format mimics a typical chat interaction where the user provides input, and the AI responds.
Optionally add temperature and max_tokens:
Send the POST request:
Error handling is done using FastAPI’s HTTPException, which cleanly returns status codes and error messages:
This route works much like the text route but also accepts an image URL.
The request payload is slightly different here, as it includes both text and image_url types in the content array:
Payload example:
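A multimodal payload of this shape might look like the following sketch (model name, prompt text, and image URL are placeholders):

```python
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            # The content array mixes a text part and an image_url part
            "content": [
                {"type": "text", "text": "What is in this picture?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
}
```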
Everything else (headers, request, error handling) remains the same.
This lets you ask things like “What objects are in this photo?” and get intelligent responses based on the image content.
A simple ping route to check if the API is running:
Use this for automated monitoring or to verify deployments.
To start your FastAPI app, use Uvicorn:
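Assuming your file is named main.py and the app object is called app, the command is:

```shell
uvicorn main:app --reload
```

The --reload flag restarts the server whenever you save a file, which is handy during development.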
Then open your browser and visit:
- Docs: http://localhost:8000/docs
- Health check: http://localhost:8000/health
You can send prompts like “Explain how APIs work in one paragraph” and get AI-generated responses from GPT-4o.
Use it to analyze image URLs with prompts like “What’s happening in this image?”
You’ve now built a powerful FastAPI backend that talks to Eden AI, handles both text and image prompts, and is structured in a way that’s extensible and production-friendly.
It’s a clean, secure, and scalable solution — and most importantly, it demystifies how AI-powered APIs work under the hood.
This project is a fantastic starting point, and there are plenty of directions to take it further.
Ready to dive deeper? We've got everything you need to get started.
Whether you're migrating from another LLM provider or building your first AI-powered app, we've made it easy to explore, implement, and launch with confidence:
Full Documentation
Get a complete overview of how the unified LLM API works, including endpoints, parameters, and response formats.
Migration Guide
Already using OpenAI or another provider? This guide walks you through switching to Eden AI’s unified endpoint, without breaking your existing setup.
Code Example
Want to see it in action? Check out our full working FastAPI implementation with real code examples.
You can directly start building now. If you have any questions, feel free to chat with us!