App Overview

Welcome to the rag-template Overview page

A Clarifai app is a place for you to organize all of your content, including models, workflows, inputs, and more.


rag-template
clarifai

RAG Template

This RAG App Template offers a comprehensive guide for building RAG (Retrieval-Augmented Generation) applications swiftly and effectively using Clarifai. It enables users to quickly experiment with RAG using their own datasets without the need for extensive coding.

This RAG App Template comes with several ready-to-use workflows: RAG Agents that leverage different LLM models and are optimized through various prompt engineering techniques, plus an embedding workflow that indexes and stores inputs in the vector store using any embedding model for the RAG use case.

What is RAG?

RAG, or Retrieval-Augmented Generation, is an AI framework designed to enhance large language models (LLMs) by retrieving facts from an external knowledge base. This not only ensures access to accurate, current information but also gives users insight into the generative process of LLMs.

Benefits of using RAG

Implementing RAG in an LLM-based question answering system has three main benefits: 

  • Access to Current, Reliable Facts: Ensures that the LLM has access to the latest, most reliable information, enhancing user trust by allowing verification of sources.
  • Reduced Risk of Hallucination: By grounding an LLM on a set of external, verifiable facts, the model has fewer opportunities to pull information baked into its parameters. This reduces the chances that an LLM will ‘hallucinate’ incorrect or misleading information.
  • Decreased Training Requirements: RAG also reduces the need for users to continuously train the model on new data and update its parameters as circumstances evolve. In this way, RAG can lower the computational and financial costs of running LLM-powered chatbots.

Input Indexing and Vector Store

Vector Creation: Documents are first split into chunks, and each chunk is passed through an embedding model, a type of model that creates a vector representation of hundreds or thousands of numbers encapsulating the meaning of the information. The model assigns a unique vector to each chunk, somewhat like creating a unique index that a computer can understand. This is known as the indexing stage.
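
As an illustration, here is a minimal sketch of the indexing stage outside Clarifai, using the open-source BAAI/bge-base-en model (the same family as this template's base workflow) through the sentence-transformers library; the sample chunks are hypothetical:

from sentence_transformers import SentenceTransformer

# Hypothetical chunks produced by an earlier document-splitting step.
chunks = [
    "Clarifai is a full-stack AI platform.",
    "RAG grounds LLM answers in retrieved documents.",
]

# BAAI/bge-base-en maps each chunk to a 768-dimensional vector.
model = SentenceTransformer("BAAI/bge-base-en")
vectors = model.encode(chunks)
print(vectors.shape)  # (2, 768)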

Vector Store: All the indexed chunks need to be stored in a vector store, so that in the retrieval step the RAG system can use these vector embeddings to scour the vector database for the chunks most relevant to your question.
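
Continuing the sketch above, a toy in-memory vector store can be searched with cosine similarity; real systems use a dedicated vector database, as Clarifai does:

import numpy as np

# Retrieve the k chunks whose vectors are most similar to the query vector.
def retrieve(query, k=1):
    q = model.encode([query])[0]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(-sims)[:k]]

print(retrieve("How does RAG reduce hallucination?"))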

Input Indexing in Clarifai

In a Clarifai app, inputs are indexed using a base workflow. This Clarifai App Template uses a workflow with the baai-bge-base-en embedding model to index, create, and store inputs in the vector store.

You can replace the base workflow with one that uses any embedding model to index and store inputs in the vector store for the RAG use case.

Change Base workflow

Here are the simple steps to change the base workflow:

  1. Click on the "Settings" tab in the left side panel.
  2. Scroll down until you see the option to change the base workflow.
  3. From the dropdown menu, select one of the available workflows with a different embedding model.

Pre-built RAG Agents

The App template includes pre-built RAG agents, leveraging different LLM models and optimized through various prompt engineering techniques:

This RAG Agent uses the GPT-4 Turbo LLM model with a simple prompt for straightforward integration.

This RAG Agent uses the Claude-2.1 LLM model with CoT prompting for enhanced reasoning and performance.

This RAG Agent uses the GPT-4 Turbo LLM model with ReAct prompting, optimizing dynamic reasoning and action planning.

You will find more about each RAG Agent and how to use it effectively in its Notes.

RAG in 4 Lines of Code

A RAG system can be built in just 4 lines of code using Clarifai's Python SDK!

Export your PAT as an environment variable. Then, import and initialize the API Client.

Find your PAT in your security settings.

export CLARIFAI_PAT={your personal access token}

from clarifai.rag import RAG

# Set up a RAG agent; this creates a Clarifai app with a RAG workflow under your user ID.
rag_agent = RAG.setup(user_id=YOUR_USER_ID)

# Upload and index the documents in the folder.
rag_agent.upload(folder_path="~/docs")

# Ask a question; retrieval and generation happen behind the scenes.
rag_agent.chat(messages=[{"role": "human", "content": "What is Clarifai?"}])

For a detailed walkthrough, refer to this video.

RAG with LangChain and Clarifai

Clarifai seamlessly supports LLMs, embeddings, and vector stores within the LangChain ecosystem, making it an excellent choice for operationalizing LangChain implementations. Numerous example notebooks demonstrate the integration of LangChain with Clarifai's capabilities.

The LangChain framework allows you to implement RAG using Clarifai LLMs, embeddings, and vector stores. This lets LangChain users leverage the many LLM models Clarifai offers and use a Clarifai app as the vector store for document storage and indexing with various embedding models.
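
As a hedged sketch of how this might look (the user, app, and model IDs below are placeholders, and gpt-4-turbo is assumed to be hosted under Clarifai's openai/chat-completion app):

from langchain_community.llms import Clarifai
from langchain_community.vectorstores import Clarifai as ClarifaiVectorStore
from langchain.chains import RetrievalQA

PAT = "YOUR_CLARIFAI_PAT"  # personal access token

# LLM served from the Clarifai platform.
llm = Clarifai(pat=PAT, user_id="openai", app_id="chat-completion", model_id="gpt-4-turbo")

# A Clarifai app acts as the vector store; texts are indexed by its base workflow.
vectorstore = ClarifaiVectorStore.from_texts(
    texts=["Clarifai is a full-stack AI platform."],
    pat=PAT,
    user_id="YOUR_USER_ID",
    app_id="rag-template",
)

# Tie retrieval and generation together in a question-answering chain.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.invoke({"query": "What is Clarifai?"}))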

Optimize RAG performance

DSPy and Clarifai Integration 

DSPy is a framework for solving advanced tasks using language models and retrieval models. DSPy offers the flexibility to algorithmically optimize language model prompts and weights. 

So, instead of using a prompt template or crafting a detailed prompt for an LLM, you can simply define your task and the metrics you want to maximize, and prepare a few example inputs. DSPy will then optimize your language model weights and instructions for you. 
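
For illustration, a minimal DSPy RAG module might look like the sketch below; my_lm, my_retriever, and train_examples are placeholders for a configured Clarifai-backed language model, retriever, and a handful of dspy.Example items (see the resources below):

import dspy
from dspy.teleprompt import BootstrapFewShot

# Placeholders: a Clarifai-backed LM and retriever configured elsewhere.
dspy.settings.configure(lm=my_lm, rm=my_retriever)

class GenerateAnswer(dspy.Signature):
    """Answer the question using the retrieved context."""
    context = dspy.InputField(desc="relevant passages")
    question = dspy.InputField()
    answer = dspy.OutputField(desc="a short factual answer")

class SimpleRAG(dspy.Module):
    def __init__(self, k=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=k)
        self.generate = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        context = self.retrieve(question).passages
        return self.generate(context=context, question=question)

# The metric to maximize; DSPy tunes prompts and few-shot demos against it.
def answer_match(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

compiled_rag = BootstrapFewShot(metric=answer_match).compile(
    SimpleRAG(), trainset=train_examples
)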

Resources

  • YouTube Video: The following video explains what DSPy is and how you can build a simple RAG system with the Clarifai DSPy integration. Check out the video here.

  • Example Notebook: This notebook walks you through the integration of Clarifai into DSPy, enabling DSPy users to call LLM models from the Clarifai platform and to use a Clarifai app as the retriever for their vector search use cases.

Prompting Techniques

Chain of Thought (CoT)

Chain-of-thought prompting aids LLMs in answering complex questions by guiding them through intermediate reasoning steps.
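
As a small, hypothetical example, a CoT-style prompt template asks the model to lay out its reasoning before the final answer:

cot_prompt = """Use the context below to answer the question.
Think step by step: first list the relevant facts from the context,
then reason over them, and only then state the final answer.

Context:
{context}

Question: {question}

Reasoning:"""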

You can use a RAG Agent with CoT prompting in the Module by choosing one of the pre-built RAG Agents from the drop-down menu.

ReAct

ReAct is a prompting framework where LLMs are used to generate both reasoning traces and task-specific actions in a dynamic manner.
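
A hypothetical sketch of the ReAct format, where the model interleaves Thought, Action, and Observation steps (here the search action stands in for a retrieval call):

react_prompt = """Answer the question by interleaving Thought, Action, and Observation steps.

Question: {question}
Thought: I should look up documents relevant to the question.
Action: search[{question}]
Observation: {retrieved_chunks}
Thought: The retrieved passages contain the answer.
Final Answer:"""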

A RAG Agent with ReAct prompting can be used in the Module by selecting the ReAct pre-built RAG Agent from the drop-down menu.


  • Description
    The RAG App Template streamlines the creation of Retrieval-Augmented Generation (RAG) applications with Clarifai, enhancing LLMs with external knowledge for accurate, up-to-date information generation.
  • Base Workflow
    workflow-baai-bge-base-en
  • Last Updated
    Mar 18, 2024
  • Default Language
    en