Creating an Information Retrieval Chatbot Using AI Agents

Introduction

In this tutorial, we will walk through building a sophisticated information retrieval chatbot using AI agents: a bot that first plans its answer and then retrieves data from multiple sources.

Setting Up the Environment

Our plan is to build the chatbot with AI agents (LangChain) and a simple UI in Chainlit. We want the chatbot to respond to a query in two stages, planning and retrieval, with the agent having access to Wikipedia and web search.

Preparation & Dependencies

Let's start by creating a new project. Begin with creating a new directory:

mkdir chatbot_project
cd chatbot_project

Next, create a virtual environment and install the required dependencies:

python -m venv venv
source venv/bin/activate  # For Linux/Mac
venv\Scripts\activate  # For Windows
pip install chainlit langchain openai duckduckgo-search wikipedia

Now we can create our app.py file (any name works; we will pass it to chainlit run later):

touch app.py

The last step is to import our dependencies in app.py:

import os
import chainlit as cl
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder, PromptTemplate
from langchain.tools import DuckDuckGoSearchRun, WikipediaQueryRun
from langchain.utilities import WikipediaAPIWrapper

Disclaimer: It's recommended to define your environment variables (such as OPENAI_API_KEY) in a .env file rather than hardcoding them in your code.
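For example, here is a minimal sketch of loading the key from a .env file using the python-dotenv package (an extra dependency: pip install python-dotenv):

from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY from .env into os.environ
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY in your .env file"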

Coding

Now it's time to initialize our language model (LLM) and tools. In this tutorial, I will use GPT-4, but feel free to use a different model if you prefer. We will use DuckDuckGoSearchRun and WikipediaAPIWrapper (wrapped as a tool) as our sources.
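Here is a minimal sketch of that initialization (the temperature value is our choice; WikipediaAPIWrapper is a utility rather than a tool, so we wrap it in WikipediaQueryRun to expose it to the agent):

llm = ChatOpenAI(model_name="gpt-4", temperature=0)

# Web search via DuckDuckGo, plus article lookup via Wikipedia
search_tool = DuckDuckGoSearchRun()
wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
tools = [search_tool, wikipedia_tool]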

Preparing Prompt Templates

The next step is to prepare the PromptTemplates. We will create two: one for the planning process and one for generating the final response.

planning_prompt = PromptTemplate.from_template("Plan the response to the query: {query}")
response_prompt = PromptTemplate.from_template("Generate the final response based on: {plan}")

Initializing the Agent and Memory

Now it's time to initialize the agent. We use LangChain's initialize_agent helper with the OPENAI_FUNCTIONS agent type and attach memory so the agent retains previous messages; a MessagesPlaceholder exposes that memory to the model's prompt.

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    memory=memory,
    # Expose the stored history to the model on every turn
    agent_kwargs={"extra_prompt_messages": [MessagesPlaceholder(variable_name="chat_history")]},
)
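A note on the design: ConversationBufferMemory stores the full exchange under the chat_history key, and the MessagesPlaceholder injects those stored messages into the agent's prompt on every turn; without the placeholder, the memory would be written but never read by the model.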

UI Part

Next, we will create the UI using Chainlit. Chainlit calls the function decorated with @cl.on_message for every incoming user message. We implement it to alter the flow slightly: execute the planning step first, then generate the final response.

@cl.on_message
async def main(message: cl.Message):
    # Stage 1: ask the agent to plan its answer
    plan = agent.run(planning_prompt.format(query=message.content))
    # Stage 2: generate the final response from the plan
    response = agent.run(response_prompt.format(plan=plan))
    await cl.Message(content=response).send()

Results!

Now we can test our application. Start the application, greet it, and then pose a question:

chainlit run app.py
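Tip: adding the -w flag (chainlit run app.py -w) enables file watching, so the app reloads automatically whenever you save changes.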

Let's see what response we receive!

Bravo! Let's Dive Deeper

As you can see, the model initially skipped the planning step until prompted with a real question; it then organized its tasks and crafted a response exactly as intended.

Now go build your own AI agent applications, and don't miss our upcoming AI Agents Hackathon starting June 9th. Check out our AI tutorials to keep learning.
