The 2025 Guide to Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) has emerged as a pivotal approach in AI applications, combining the strengths of retrieval-based methods with generative capabilities. This article provides a detailed technical overview of RAG, covering its architecture, process flow, and the various types of RAG frameworks. By the end, you will have a solid understanding of RAG and its application in complex scenarios.

What is RAG?

RAG is a hybrid framework that integrates a retrieval mechanism with a generative model to improve the contextual relevance and factual accuracy of generated content. The retrieval mechanism fetches relevant external data, while the generative model uses this retrieved information to produce coherent, contextually accurate text (1). 

This approach addresses key challenges in large language models (LLMs):

  • Limited Contextual Knowledge: LLMs are trained on fixed datasets and cannot update knowledge dynamically.
  • Hallucination: Generative models often produce plausible-sounding but incorrect information.
  • Scalability: RAG allows systems to access vast external databases, effectively bypassing memory constraints.

How does Retrieval-Augmented Generation work?

RAG's architecture involves two primary components:

  1. Retriever: Fetches relevant documents from a knowledge base (e.g., vector databases, search engines, or local storage).
  2. Generator: Combines the retrieved documents with the input query to generate contextually enriched responses.

A common implementation of RAG involves three main systems:

  1. Embedding Model: Converts both the query and knowledge base documents into dense vector representations.
  2. Retriever: Uses similarity metrics (e.g., cosine similarity) to identify the most relevant documents, i.e., the chunks whose vectors are closest to the query.
  3. Generator: Typically a transformer-based language model (e.g., GPT) that produces responses by conditioning on both the query and the retrieved documents.

Step-by-Step RAG process

1. Query Encoding

The input query is transformed into a dense vector using a pre-trained embedding model (e.g., OpenAI's Ada, Sentence-BERT).
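
Concretely, query encoding takes only a few lines. The sketch below uses the sentence-transformers library, with all-MiniLM-L6-v2 as an illustrative model choice rather than a requirement:

```python
# Minimal query-encoding sketch; the model name is an illustrative choice.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

query = "How does retrieval-augmented generation reduce hallucinations?"
query_vector = encoder.encode(query)  # dense numpy array of shape (384,)
```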

2. Document Retrieval

  • The query vector is matched against vectors in a pre-built document index (e.g., stored in a vector database like Pinecone, Weaviate, or Qdrant).
  • Retrieval is typically performed using Approximate Nearest Neighbor (ANN) search for scalability and efficiency.
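
As an illustration, here is a minimal ANN retrieval sketch with FAISS; the random corpus vectors are placeholders for real document embeddings, and a hosted vector database would expose an equivalent top-k search:

```python
import faiss
import numpy as np

dim = 384  # must match the embedding model's output dimension

# Stand-in corpus embeddings; in practice these come from the embedding model.
doc_vectors = np.random.rand(10_000, dim).astype("float32")
faiss.normalize_L2(doc_vectors)  # normalize so inner product == cosine similarity

# HNSW is an ANN index: approximate, but fast at corpus scale.
index = faiss.IndexHNSWFlat(dim, 32, faiss.METRIC_INNER_PRODUCT)
index.add(doc_vectors)

query_vector = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query_vector)

scores, doc_ids = index.search(query_vector, 5)  # top-5 most similar chunks
```

An exact index (e.g., IndexFlatIP) gives the same results at small scale; ANN structures like HNSW trade a little recall for much lower latency on large corpora.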

3. Contextual Fusion

  • The retriever returns the top-k documents, which are appended to the query as additional context.
  • These documents may be processed (e.g., summarized, chunked) to ensure they fit within the input length limitations of the generator.

4. Response Generation

The generative model takes the enriched input (query + retrieved documents) and generates a response.
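
Putting fusion and generation together, a minimal sketch might look like the following; call_llm is a hypothetical placeholder for whichever completion API you use, and the document strings stand in for real retrieved chunks:

```python
def build_prompt(query: str, documents: list[str]) -> str:
    # Contextual fusion: append the top-k retrieved documents to the query.
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for any completion API (OpenAI, a local model, etc.)."""
    raise NotImplementedError

top_k_documents = ["<retrieved chunk 1>", "<retrieved chunk 2>"]
prompt = build_prompt("What is RAG?", top_k_documents)
# answer = call_llm(prompt)
```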

RAG Process Flow Diagram

Traditional RAG Limitations

RAG has since become a well-known and accessible technique, adopted by companies of every size across a wide range of use cases. That broad exposure has also revealed its weaknesses: RAG does not always fulfill its mission perfectly, and which limitations matter most depends on the use case:

Retrieval Quality

Relevance Issues: RAG heavily relies on the retrieval system to provide accurate and relevant documents. If the retrieved content does not align with the query's intent, the generated response will be flawed.

Knowledge Base Limitations: An incomplete or outdated knowledge base can result in critical information gaps, making it difficult for the RAG model to produce correct or useful outputs.

Lack of Context Understanding

Ambiguity in Queries: RAG models can struggle with ambiguous or poorly phrased queries, leading to irrelevant document retrieval.

Multi-Hop Reasoning: The inability to connect information across multiple retrieved documents limits the model's ability to provide coherent and comprehensive answers for complex tasks.

Lack of Accuracy & Hallucinations

Hallucinations: The generation model can still hallucinate or fabricate information, even when presented with accurate retrieved documents.

Misinterpretation: The language model may misinterpret or distort the content of the retrieved documents when generating responses.

High Latency

Traditional RAG models divide documents into small chunks, typically averaging around 100 words. This approach enables fine-grained searching but significantly increases the search space, requiring retrievers to sift through millions of units to find relevant information. 

Advanced RAG Techniques

To overcome these limitations, many advanced RAG techniques have been developed. Each addresses one or more of the limitations above, at the cost of some additional complexity in the RAG pipeline.

1. Long RAG

What is Long RAG?

Long RAG (Retrieval-Augmented Generation) is an enhanced version of the traditional RAG architecture designed to handle lengthy documents more effectively. Unlike conventional RAG models, which split documents into small chunks for retrieval, Long RAG processes longer retrieval units, such as sections or entire documents. This innovation improves retrieval efficiency, preserves context, and reduces computational costs.

What Traditional RAG Limitation Does It Solve?

Traditional RAG models face significant challenges due to their reliance on small text chunks (often around 100 words):

  • Loss of Context: Splitting documents into small chunks often fragments the narrative, making it harder for the model to understand and utilize the full context.
  • High Computational Overhead: Retrieving relevant information from millions of small chunks (e.g., up to 22 million for open-domain tasks like Wikipedia) increases latency and resource consumption.
  • Reduced Scalability: As datasets grow, the need to process numerous small chunks becomes impractical, particularly for real-time applications.

Long RAG solves these issues by working with larger retrieval units, reducing fragmentation, and improving efficiency.

How Does Long RAG Work?

  1. Document Preprocessing:

Instead of breaking documents into small chunks, Long RAG divides them into longer, coherent sections or even processes full documents directly. This preserves the narrative and context (2).

  2. Retriever Optimization:

Long RAG uses advanced retrievers designed to handle extended text spans effectively. These retrievers identify the most relevant sections or documents, reducing the number of units that need to be searched while maintaining accuracy.

  3. Generative Component:

The generation model is fine-tuned to process and synthesize information from longer retrieval units. This allows the system to produce detailed, coherent, and contextually accurate responses without losing critical nuances.
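
As a rough sketch of the difference, the following compares traditional ~100-word chunking with section-level retrieval units; the heading-based splitter assumes markdown-style documents and is only one possible way to form long units:

```python
import re

def to_small_chunks(document: str, words_per_chunk: int = 100) -> list[str]:
    # Traditional RAG: fixed-size ~100-word chunks, fragmenting the narrative.
    words = document.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]

def to_long_units(document: str) -> list[str]:
    # Long RAG: keep whole sections intact as retrieval units.
    # Assumes markdown-style headings delimit sections.
    sections = re.split(r"\n(?=#{1,3} )", document)
    return [s.strip() for s in sections if s.strip()]

doc = "# Intro\n" + "word " * 500 + "\n# Methods\n" + "word " * 500
print(len(to_small_chunks(doc)), "small chunks vs", len(to_long_units(doc)), "long units")
```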

Long RAG Workflow for Improved Context Handling

Advantages of Long RAG

Improved Contextual Understanding:

Processing longer text spans allows the model to retain and utilize the full context of a document, leading to more accurate and coherent responses.

Increased Efficiency:
By working with fewer, larger retrieval units, Long RAG reduces computational requirements and accelerates retrieval and generation.

Scalability:
Long RAG is better equipped to handle massive datasets, making it a robust choice for applications with large or complex knowledge bases.

Accuracy for Complex Domains:
The system is particularly effective for generating responses in domains that require nuanced understanding, such as legal, medical, or academic fields.

Reduced Latency:
The streamlined process enables faster response times, making Long RAG ideal for real-time use cases.

Use Cases for Long RAG

Research Assistance:
Summarizing or answering questions from academic papers, technical documents, or research reports.

Legal Document Analysis:
Extracting key information or generating summaries from lengthy legal texts, contracts, or case law.

Customer Support:
Providing detailed answers using information from large manuals, troubleshooting guides, or user documentation.

Content Generation:
Summarizing or deriving insights from books, articles, or extensive datasets for creative or analytical purposes.

Knowledge Management:
Efficiently retrieving and synthesizing information from enterprise knowledge bases, technical repositories, or archival materials.

2. Self-RAG

What is SELF-RAG?

SELF-RAG, or Self-Reflective Retrieval-Augmented Generation, is an advanced AI framework designed to improve the factual accuracy and reliability of generated content. Unlike traditional models, it incorporates a self-reflective mechanism that dynamically decides when and how to retrieve information, evaluates the relevance of data, and critiques its outputs to ensure high-quality, evidence-backed responses (3). 

What Traditional RAG Limitation Does It Solve?

SELF-RAG addresses several key limitations of traditional RAG systems:

  • Fixed and Blind Retrieval: Traditional RAG retrieves a fixed number of documents, often introducing irrelevant or conflicting data.
  • Lack of Critical Evaluation: RAG does not assess whether retrieved information is properly used or relevant to the generated response.
  • Inaccuracy: Outputs often lack sufficient evidence, leading to unreliable or misleading content.
  • Static Retrieval Process: Traditional models cannot decide adaptively when retrieval is unnecessary, wasting computational resources.

SELF-RAG overcomes these challenges by enabling the model to dynamically retrieve, evaluate, and refine responses, ensuring they are both accurate and contextually relevant.

How SELF-RAG Works (Detailed Steps)

  1. Adaptive Retrieval:

SELF-RAG determines, using reflection tokens, whether external information is needed for a given query. It selectively retrieves relevant documents only when necessary, avoiding unnecessary or irrelevant data.

  2. Selective Sourcing:

Retrieved documents are evaluated for relevance and evidence using specialized reflection tokens (e.g., ISREL for relevance, ISSUP for evidence support). Only the most reliable data informs the response generation.

  3. Reflection Tokens:

These unique markers guide the model's decision-making process. Tokens like Retrieve (when to fetch data), ISREL (relevance), and ISUSE (utility) enable the model to self-assess its performance.

  4. Critique Mechanism:

After generating responses, SELF-RAG critiques its outputs to check alignment with retrieved data and ensure factual accuracy. The model iteratively refines its responses based on critique scores, improving overall quality.

  5. Final Output Selection:

SELF-RAG ranks all possible responses and selects the most accurate and contextually appropriate one, backed by relevant citations.
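
The control flow can be sketched as below; the reflection tokens are modeled as plain scoring callables and the 0.5 threshold is an arbitrary assumption, whereas in the paper they are special tokens predicted by the fine-tuned model itself:

```python
# Schematic SELF-RAG loop. The reflection tokens (Retrieve, ISREL, ISSUP,
# ISUSE) are modeled here as plain Python callables with made-up thresholds;
# in the paper they are special tokens predicted by the fine-tuned model.

def self_rag(query, needs_retrieval, search, generate, isrel, issup, isuse):
    if not needs_retrieval(query):        # Retrieve token: skip retrieval entirely
        return generate(query, context=None)

    candidates = []
    for doc in search(query):
        if isrel(query, doc) < 0.5:       # ISREL: drop irrelevant passages
            continue
        answer = generate(query, context=doc)
        score = issup(answer, doc) + isuse(query, answer)  # ISSUP + ISUSE critique
        candidates.append((score, answer))

    # Final output selection: highest combined critique score wins.
    return max(candidates)[1] if candidates else generate(query, context=None)
```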

Self-RAG Architecture with Reflection Tokens

Advantages of SELF-RAG

Enhanced Accuracy:

Dynamically retrieves and integrates only verified and relevant information, minimizing the risk of factual errors.

Adaptive Retrieval:
Retrieves data only when needed, optimizing computational resources and improving response efficiency.

Self-Critique for Refinement:
Iterative self-reflection ensures outputs are continually refined to meet high standards of quality and relevance.

Transparency:
Provides citations for retrieved information, making responses verifiable and trustworthy.

Versatility:
Handles a wide range of tasks, from open-domain question-answering to complex reasoning and long-form content generation.

Use Cases of SELF-RAG

Open-Domain Question-Answering:
Answering questions with evidence-backed and accurate responses, outperforming traditional RAG models in tasks like TriviaQA.

Fact Verification:
Verifying claims and statements in domains like health, science, and news (e.g., PubHealth dataset).

Research and Academic Assistance:
Summarizing and generating insights from extensive, credible sources with proper citations.

Complex Reasoning Tasks:
Excelling in reasoning-heavy scenarios such as answering ARC-Challenge questions with high accuracy.

Professional Writing and Documentation:
Generating long-form content with precise citations, ensuring high factual accuracy for industries like academia or law.

3. Corrective RAG

What is Corrective RAG (CRAG)?

Corrective Retrieval-Augmented Generation (CRAG) is a framework for Retrieval-Augmented Generation (RAG) designed to improve robustness when dealing with inaccuracies in retrieved data. It introduces a lightweight retrieval evaluator to assess the quality of retrieved documents, enabling the system to adaptively respond to incorrect, ambiguous, or irrelevant information. By refining the retrieval process and dynamically incorporating large-scale web searches when necessary, CRAG ensures that the generated content is more accurate and reliable (4). 

What Traditional RAG Limitation Does It Solve?

CRAG addresses key shortcomings of traditional RAG systems:

  • Handling Inaccurate Retrievals: Traditional RAG lacks mechanisms to evaluate or correct errors in retrieved information, leading to unreliable outputs when the retrieval process fails.
  • Static Knowledge Bases: RAG often depends on static or limited corpora, which can result in incomplete or outdated information.
  • Information Overload: Conventional RAG retrieves documents indiscriminately, often including redundant or irrelevant details that reduce the clarity and accuracy of generated content.

CRAG improves RAG by introducing adaptive retrieval actions, refining document utilization, and integrating dynamic web searches for better context and reliability.

How CRAG Works

  1. Retrieval Evaluator:

CRAG uses a lightweight retrieval evaluator to analyze the quality and relevance of retrieved documents for a given query. This evaluator assigns a confidence score to each document, classifying results into categories like:

  • Correct: Relevant and accurate information.
  • Incorrect: Mismatched or erroneous data.
  • Ambiguous: Information that lacks clarity or requires additional context.

  2. Adaptive Knowledge Retrieval:

Correct data is used directly for response generation. Incorrect or ambiguous data triggers additional retrieval actions, often web searches, to augment the original dataset with more reliable or diverse information.

  3. Generation with Decompose-then-Recompose Algorithm:

Retrieved documents are broken down into smaller components to focus on key insights while filtering out irrelevant or redundant details. The filtered information is recombined into a cohesive and concise dataset, optimizing the quality of data input for generation.
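
A schematic version of this routing logic follows; the evaluate and web_search callables and the 0.8/0.2 confidence thresholds are illustrative assumptions, since CRAG trains a dedicated lightweight evaluator:

```python
# Schematic CRAG routing. `evaluate` scores a (query, text) pair in [0, 1] and
# stands in for the paper's trained lightweight evaluator; the thresholds and
# the `web_search` fallback are illustrative assumptions.

def crag_route(query, documents, evaluate, web_search):
    scores = [evaluate(query, d) for d in documents]

    if max(scores) >= 0.8:        # Correct: at least one confident retrieval
        knowledge = [d for d, s in zip(documents, scores) if s >= 0.8]
    elif max(scores) <= 0.2:      # Incorrect: discard and fall back to the web
        knowledge = web_search(query)
    else:                         # Ambiguous: combine both sources
        knowledge = documents + web_search(query)

    # Decompose-then-recompose: split into strips, keep only relevant ones.
    strips = [s for doc in knowledge for s in doc.split(". ")]
    return ". ".join(s for s in strips if evaluate(query, s) > 0.5)
```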

Corrective RAG (CRAG) Workflow with Decompose-then-Recompose Algorithm

Advantages of CRAG

Improved Accuracy:

By evaluating and correcting retrieved data, CRAG ensures more reliable and factually accurate outputs.

Dynamic Adaptability:

The integration of large-scale web searches allows CRAG to expand beyond static knowledge bases, providing up-to-date and diverse information.

Efficient Data Utilization:

The decompose-then-recompose algorithm reduces noise and focuses on critical insights, ensuring the generated responses are both concise and relevant.

Better Robustness:

CRAG mitigates the risk of generating incorrect knowledge by dynamically addressing errors in the retrieval process.

Use Cases for CRAG

Open-Domain Question Answering:

Delivering more accurate and contextually relevant answers by dynamically refining retrieval results.

Fact Verification:

Validating claims and filtering out misinformation, particularly useful in journalism, academic research, or public discourse.

Knowledge-Intensive Tasks:

Supporting applications like medical or legal document summarization, where accuracy and precision are critical.

Dynamic Research Assistance:

Incorporating up-to-date information through web searches, especially for topics that rely on evolving data.

Content Generation:

Creating high-quality, factually grounded content for long-form writing or professional documentation.

4. Golden-Retriever RAG

What is Golden-Retriever RAG?

Golden-Retriever is an advanced RAG framework tailored to navigate extensive industrial knowledge bases effectively. It incorporates into RAG a reflection-based question augmentation step before document retrieval, which involves identifying domain-specific jargon, clarifying their meanings based on context, and augmenting the question accordingly (5). This approach ensures that the RAG framework retrieves the most relevant documents by providing clear context and resolving ambiguities, significantly improving retrieval accuracy. 

What Traditional RAG Limitation Does It Solve?

The Golden-Retriever RAG method avoids several traditional RAG pitfalls:

  • Misinterpretation of Domain-Specific Jargon: Standard RAG frameworks may misinterpret or hallucinate meanings for specialized terms not present in their training data, leading to inaccurate document retrieval and responses.

  • Lack of Contextual Understanding: Without explicit context, RAG systems might retrieve irrelevant documents, reducing the effectiveness of the generated answers.

  • Static Knowledge Bases: Traditional RAG systems often rely on static or limited corpora, which can result in incomplete or outdated information.

How Golden-Retriever Works

  1. Reflection-Based Question Augmentation:

Jargon Identification: The system extracts and lists all jargon and abbreviations in the input question.

Context Determination: It determines the context against a predefined list to understand the specific domain or application.

Jargon Clarification: The system queries a jargon dictionary for extended definitions and descriptions to clarify meanings. A jargon dictionary contains structured, detailed information about domain-specific terms, abbreviations, and concepts; it can be built by the user, by the RAG system, or by a combination of both, depending on the domain and complexity of the application.

Question Augmentation: The original question is augmented with the clarified jargon definitions and context, providing clear context and resolving ambiguities.

  2. Document Retrieval:

Utilizes the augmented question to retrieve the most relevant documents from the knowledge base, ensuring that the retrieved information aligns accurately with the user's intent.

  3. Answer Generation:

The retrieved documents are then used to generate accurate and contextually relevant responses to the user's query.
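
A toy version of the augmentation step is sketched below; the jargon dictionary entries are hypothetical examples, and a real deployment would use a curated, domain-specific dictionary:

```python
# Toy version of Golden-Retriever's question augmentation. The dictionary
# entries below are hypothetical; real deployments curate the jargon
# dictionary per domain (by users, the system, or both).

JARGON_DICT = {
    "LBE": "Lead-Bismuth Eutectic, a liquid-metal coolant",   # hypothetical entry
    "DOE": "Design of Experiments, a statistical test plan",  # hypothetical entry
}

def augment_question(question: str) -> str:
    # Step 1: jargon identification — find known terms in the question.
    found = {t: d for t, d in JARGON_DICT.items() if t in question}
    if not found:
        return question
    # Steps 2-4: clarify each term and prepend the definitions as context.
    glossary = "\n".join(f"- {term}: {desc}" for term, desc in found.items())
    return f"Definitions:\n{glossary}\n\nQuestion: {question}"

print(augment_question("Which DOE settings were used in the LBE loop tests?"))
```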

Golden-Retriever RAG: Enhancing Domain-Specific Retrieval with Jargon Clarification

Advantages of Golden-Retriever

Enhanced Retrieval Accuracy: By clarifying ambiguous terms and providing explicit context, the system retrieves documents that are more relevant to the user's query.

Improved Response Generation: With access to precise documents, the generated answers are more accurate and informative.

Scalability: It efficiently handles vast industrial knowledge bases, making it suitable for large organizations with extensive documentation.

Use Cases for Golden-Retriever

Industrial Knowledge Management: Assisting engineers and new hires in navigating and querying extensive proprietary documents, such as training materials, design documents, and research outputs.

Technical Support: Providing accurate and contextually relevant answers to complex technical queries that involve domain-specific jargon.

Research and Development: Facilitating efficient information retrieval from large datasets, aiding in literature reviews and data analysis.

Healthcare: Interpreting medical terminologies and retrieving pertinent information for healthcare professionals.

5. Adaptive RAG

What is Adaptive RAG?

Adaptive RAG is an advanced framework that dynamically tailors its retrieval strategies based on the complexity of user queries. Unlike traditional RAG systems that apply a uniform retrieval approach to all queries, Adaptive RAG intelligently decides when and how to retrieve external information, optimizing both efficiency and accuracy (6). 

What Traditional RAG Limitation Does It Solve?

Conventional RAG models often treat all queries similarly, leading to inefficiencies:

  • Unnecessary Retrievals: Simple queries that the model can handle internally still trigger external data retrieval, causing needless computational overhead.

  • Inadequate Handling of Complex Queries: Complex, multi-step queries may not receive the thorough retrieval processes they require, resulting in incomplete or inaccurate responses.

How Adaptive RAG Works

Adaptive RAG addresses these issues through a structured process:

  1. Query Complexity Assessment: A specialized classifier evaluates the incoming query to determine its complexity level.

  2. Strategy Selection (Query Classification):

    • Straightforward Queries: Handled directly by the language model without external retrieval, ensuring quick responses.
    • Simple Queries: Engage a traditional single-step retrieval process.
    • Complex Queries: Engage a multi-step retrieval process, iteratively gathering and integrating information to construct a comprehensive answer.

  3. Dynamic Adjustment: The system adapts its retrieval strategy in real time, balancing the need for external information with computational efficiency (see the sketch below).
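
The routing described above can be sketched as follows; classify_complexity stands in for the trained query-complexity classifier, and the three-hop cap for complex queries is an illustrative assumption:

```python
# Sketch of Adaptive RAG routing. `classify_complexity` stands in for the
# trained classifier; the three-hop cap is an illustrative assumption.

def adaptive_rag(query, classify_complexity, llm, retrieve):
    label = classify_complexity(query)  # "straightforward" | "simple" | "complex"

    if label == "straightforward":
        return llm(query, context=None)              # answer from parametric memory
    if label == "simple":
        return llm(query, context=retrieve(query))   # single-step retrieval

    # Complex: iterative multi-step retrieval, feeding each partial answer back.
    context, answer = [], ""
    for _ in range(3):
        context += retrieve(f"{query} {answer}".strip())
        answer = llm(query, context=context)
    return answer
```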

Adaptive RAG: Dynamic Retrieval Strategies Based on Query Complexity

Advantages of Adaptive RAG

Better Efficiency: By avoiding unnecessary retrievals for straightforward queries, the system reduces latency and conserves resources.

Improved Accuracy: Tailoring retrieval strategies to query complexity ensures that complex questions receive the depth of information they require.

Resource Optimization: Adaptive RAG allocates computational resources more effectively, enhancing overall system performance.

Adaptive RAG Use Cases

Conversational AI: Delivers precise and timely responses in chatbots and virtual assistants by adjusting retrieval efforts based on query demands.

Customer Support: Provides accurate answers efficiently, improving user satisfaction by dynamically adapting to the complexity of customer inquiries.

Information Retrieval Systems: Balances speed and thoroughness in search engines and QA systems, offering users relevant information promptly.

6. Graph RAG

What is Graph RAG?

Graph RAG is a novel RAG framework that integrates graph-based representations of knowledge to enhance document retrieval and response generation. It constructs and utilizes knowledge graphs—structured networks of entities and their relationships—alongside traditional RAG methods, ensuring a more interconnected and contextually rich retrieval process. This approach is particularly effective in domains where the relationships between entities are as critical as the entities themselves (7). 

What Traditional RAG Limitation Does It Solve?

Graph RAG addresses several limitations inherent to traditional RAG systems:

  • Loss of Contextual Relationships: Standard RAG frameworks often treat documents as isolated units, overlooking intricate relationships between entities, leading to fragmented or incomplete responses.
  • Poor Handling of Complex Queries: Queries that require understanding the interplay between multiple entities or concepts are challenging for traditional systems, which lack a structured representation of these relationships.

How It Works

Graph RAG enhances the retrieval process by incorporating knowledge graphs into the RAG pipeline:

  1. Knowledge Graph Construction: A knowledge graph is created from the knowledge base, capturing entities (e.g., concepts, terms) and their relationships (e.g., dependencies, hierarchies).
  2. Query-to-Graph Mapping: Incoming queries are mapped onto the graph to identify relevant entities and relationships.
  3. Graph-Based Retrieval: The system traverses the graph to retrieve not only the entities explicitly mentioned in the query but also related entities, ensuring comprehensive coverage of the query’s intent.
  4. Enhanced Response Generation: The retrieved graph information is integrated with the RAG model to generate contextually rich and accurate responses, leveraging the interconnected nature of the data.
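
A toy illustration with networkx shows the idea; the mini knowledge graph and one-hop expansion are assumptions for demonstration, and production systems typically use a graph database such as Neo4j with richer traversal strategies:

```python
# Toy Graph RAG retrieval with networkx. The mini knowledge graph and the
# one-hop expansion policy are illustrative only.
import networkx as nx

kg = nx.Graph()
kg.add_edge("aspirin", "pain relief", relation="treats")
kg.add_edge("aspirin", "blood thinning", relation="causes")
kg.add_edge("blood thinning", "surgery risk", relation="increases")

def graph_retrieve(query_entities, graph, hops=1):
    # Expand each query entity to its graph neighborhood so that related
    # entities are retrieved, not just the ones named in the query.
    retrieved = set()
    for entity in query_entities:
        if entity in graph:
            retrieved |= set(nx.ego_graph(graph, entity, radius=hops).nodes)
    return retrieved

print(graph_retrieve({"aspirin"}, kg))
# -> {'aspirin', 'pain relief', 'blood thinning'}
```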

GraphRAG Workflow: Integrating Knowledge Graphs for Enhanced Retrieval (8)

Advantages of Graph RAG

Improved Contextual Understanding: By considering entity relationships, Graph RAG provides more coherent and context-aware responses.

Enhanced Retrieval Accuracy: The knowledge graph ensures that the system retrieves documents and information that are highly relevant to the query’s context.

Scalability: The graph structure enables efficient querying and retrieval, making it suitable for large and complex datasets.

Use Cases for Graph RAG

Scientific Research

Assists researchers in exploring relationships between scientific concepts, facilitating deeper insights and hypothesis generation.

Healthcare

Supports healthcare professionals by retrieving interconnected information about symptoms, diagnoses, and treatments.

Enterprise Knowledge Management

Enhances the retrieval of related documents, processes, and concepts for decision-making in large organizations.

Education

Helps students and educators navigate complex topics by presenting interconnected concepts and their relationships.

Which one do I need? 

Traditional RAG

  • Specific Advantages: Simple, well-established framework; effective for basic retrieval and generation tasks.
  • Best Use Cases: General-purpose question answering; use cases with static, well-structured knowledge bases.

Long RAG

  • Specific Advantages: Processes longer retrieval units (e.g., sections or documents); preserves context for complex tasks; reduces computational load by limiting the number of retrieval units.
  • Best Use Cases: Summarizing lengthy documents (e.g., research papers, legal texts); applications requiring deep contextual understanding.

Self-RAG

  • Specific Advantages: Critiques and refines its own outputs; dynamically retrieves additional information as needed; ensures factual accuracy through iterative improvement.
  • Best Use Cases: High-stakes applications (e.g., healthcare, legal, research); fact-checking and open-domain question answering.

Corrective RAG (CRAG)

  • Specific Advantages: Identifies and corrects inaccurate or ambiguous retrievals; uses large-scale web searches for broader context; filters out redundant or irrelevant information.
  • Best Use Cases: Scenarios where retrieved data may be unreliable (e.g., customer support FAQs); applications with dynamic knowledge bases.

Golden-Retriever RAG

  • Specific Advantages: Resolves domain-specific jargon and ambiguity; improves retrieval relevance for technical queries; integrates into existing workflows.
  • Best Use Cases: Industrial knowledge management; technical support for complex domains (e.g., engineering, healthcare).

Adaptive RAG

  • Specific Advantages: Dynamically adjusts retrieval strategies based on query complexity; reduces unnecessary computational overhead; optimizes resource allocation for diverse tasks.
  • Best Use Cases: Customer support systems with varied query complexity; conversational AI needing a mix of simple and complex answers.

Graph RAG

  • Specific Advantages: Captures relationships between entities via knowledge graphs; improves contextual understanding and retrieval accuracy; scales to large, interconnected datasets.
  • Best Use Cases: Scientific research, healthcare, enterprise knowledge management, and education, where relationships between concepts matter as much as the concepts themselves.

Conclusion

Retrieval-Augmented Generation (RAG) is set to remain a cornerstone of information retrieval and generation in 2025, offering a powerful fusion of advanced retrieval methods and sophisticated language models.

As organizations continue to face the challenge of managing expansive knowledge bases and responding to increasingly complex queries, RAG systems have adapted and evolved to meet these needs.

The various RAG techniques discussed—such as Traditional RAG, Long RAG, Self-RAG, Corrective RAG, Golden-Retriever RAG, Adaptive RAG, and GraphRAG—highlight the range of solutions available, each tailored to different complexities and specific requirements.

The choice of technique is crucial, depending on factors like domain-specific language or the integration of knowledge graphs for enhanced insights. As AI technology advances, RAG frameworks will remain instrumental in providing smart, scalable solutions that empower industries to harness information with greater precision and efficiency.

Sources

  1. Qdrant. (n.d.). How RAG works. Retrieved from https://qdrant.tech/articles_data/what-is-rag-in-ai/how-rag-works.jpg
  2. Pubby AI. (n.d.). Long context LLMs for information retrieval: The LongRAG approach. Retrieved from https://ai.gopubby.com/long-context-llms-for-information-retrieval-the-longrag-approach-470dc6e8f56e
  3. Asai, A., Wu, Z., Wang, Y., Sil, A., & Hajishirzi, H. (2023). Self-RAG: Learning to retrieve, generate, and critique through self-reflection. arXiv. Retrieved from https://arxiv.org/abs/2310.11511
  4. Yan, S.-Q., Gu, J.-C., Zhu, Y., & Ling, Z.-H. (2024). Corrective retrieval augmented generation (CRAG). arXiv. Retrieved from https://arxiv.org/pdf/2401.15884
  5. An, Z., Ding, X., Fu, Y., Chu, C., Li, Y., & Du, W. (2024). Golden-Retriever: High-Fidelity Agentic Retrieval Augmented Generation for Industrial Knowledge Base. University of California, Merced, Western Digital Corporation, & Lawrence Berkeley National Laboratory. Retrieved from https://arxiv.org/pdf/2408.00798.
  6. Jeong, S., Baek, J., Cho, S., Hwang, S. J., & Park, J. C. (2024). Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity. Korea Advanced Institute of Science and Technology (KAIST). Retrieved from https://arxiv.org/pdf/2403.14403.
  7. Eden AI. (n.d.). From RAG to GraphRAG: Enhancing Retrieval with Knowledge Graphs. Retrieved January 10, 2025, from https://www.edenai.co/post/from-rag-to-graphrag-enhancing-retrieval-with-knowledge-graphs
  8. Cookieh, T. (2023, September 1). How I created a Neo4j search engine with generative AI. Medium. https://medium.com/@thcookieh/how-i-created-a-neo4j-search-engine-with-generative-ai-98b43cf8ec1e
