Embracing the World of AI: A Beginner's Introduction to LLMs (Almost) Without Coding
The realm of AI is vast and filled with countless tools that can seem overwhelming to newcomers. With daily news of breakthroughs, potential risks, and innovative applications, opinions on AI are divided. To form an educated opinion, it's essential to dive into this fascinating world and learn about the intriguing concepts it offers.
Embarking on an Educational Journey
Long hours of self-study can be challenging, especially when distractions are abundant. To accelerate my learning, I participated in an AI hackathon - an event where knowledge-seekers, experts, and enthusiasts come together to learn, collaborate, and have fun.
Hackathons are unique experiences, often combining competition with a supportive atmosphere where participants eagerly help one another. Spending 48 hours surrounded by brilliant minds allowed me to grasp the basics and understand what I needed to create my first AI solution.
However, not everyone has the opportunity to attend a hackathon. That's why I've decided to share my learnings in this tutorial, hoping to kickstart your journey into the world of LangChain and LLMs.
LLM Explained for a 5-Year-Old
Imagine you have a magical friend who knows a lot of words and can understand what people say. This friend can help you with your homework, answer your questions, and even tell you stories. An LLM (Large Language Model) is like that magical friend, but it lives inside a computer. It knows many words and can understand what people type. It helps people find information, answer questions, and do many other things on the computer.
What is LangChain?
For those with technical experience, LangChain can be thought of as glue that enhances your existing apps with AI capabilities. It lets you stream large amounts of data from your apps into tools like chat interfaces, so you can interact with the data or automate certain tasks based on the information found in it. Imagine talking to your database and receiving responses: LangChain makes this possible, saving you time on writing database queries.
For non-technical individuals, picture data as a flowing river. Just as a power plant harnesses energy from the river, LangChain helps you unlock the potential of your data using various LLMs and third-party tools. These tools are like different types of power plants for diverse terrains and use cases, but without the time and effort required to build them. LangChain makes it easy to put these tools to work, as if they were delivered in a shoebox.
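To make the "glue" idea a bit more concrete, here is a minimal sketch of a LangChain chain that wraps an LLM with a reusable prompt. It assumes the classic LangChain Python API (the `langchain` package) and an `OPENAI_API_KEY` set in your environment; the prompt wording is just an illustration.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# The LLM is the "power plant"; the prompt template is how we point it at our problem.
llm = ChatOpenAI(temperature=0)

prompt = PromptTemplate(
    input_variables=["question"],
    template="You are a helpful assistant for an online shop. Answer briefly: {question}",
)

chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(question="What could 'customer lifetime value' mean for my store?"))
```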
Example Scenario: AI-Assisted E-commerce Queries
Imagine you're an e-commerce owner with a successful shop and a large group of loyal customers. You want to understand your customers better and use AI to help you. Currently, you might rely on your shop's admin area for analytics data, manually searching, sorting, and filtering vast amounts of table records. If you're not so lucky, you have to ask your programmers or data scientists to do it for you, which can sometimes take days.
Ideally, you'd like to simply ask your database and get the response instantly. Here's a user flow demonstrating how this could work using LangChain and LLM:
- Step 1: You type a question about your customers in plain language.
- Step 2: The LLM translates the question into a database query and runs it against your shop's data.
- Step 3: A short, human-readable answer is generated and displayed.
In this flow, you interacted with your data, and the data responded. By building a flow like this using tools like LangChain, you enable anyone, even without technical knowledge, to access valuable information in seconds by typing a few words into a prompt.
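Here is a rough sketch of what that flow could look like in code. It is not a production setup: it assumes the classic LangChain API, a hypothetical SQLite database `shop.db` holding your order data, and that `SQLDatabaseChain` is available in your LangChain version (in newer releases it lives in the `langchain_experimental` package).

```python
from langchain.chat_models import ChatOpenAI
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

# Step 1: the owner types a plain-language question.
question = "How many customers placed more than one order last month?"

# Step 2: the LLM turns the question into SQL and runs it against the shop database.
db = SQLDatabase.from_uri("sqlite:///shop.db")  # hypothetical database file
llm = ChatOpenAI(temperature=0)
chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

# Step 3: the chain returns a plain-language answer built from the query results.
print(chain.run(question))
```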
Scenario Implementation Plan
We want to create a bot that can receive queries, run them against our data stored in an SQL database, and return insights in a short and concise form. Let's discuss the idea and lessons learned during the hackathon.
Lessons Learned During the Hackathon
Our team's plan during the event was to build a conversational bot that would ask the user a fixed number of predefined questions, calculate a score based on the answers, and then serve the user some tips on how they could improve.
We initially approached the problem with a web developer mindset: create a list of questions with expected answers, assign weights to the answers, and give the LLM the full list of questions upfront. The plan was to feed the questions to the LLM, have it collect the answers from the user, and then run an algorithm to calculate a final score based on the weights of the user's answers. The user would then receive insights from a predefined guidebook, initially kept as a Notion page that we planned to move to a database later on.
However, this approach turned out to be overly complex and confusing. The solution was much simpler:
In our attempt to solve the problem, we tried to control the process from top to bottom, which was our biggest mistake. After talking to experts, we discovered two important things:
- Prompt engineering can work wonders.
- Agents can take a lot of heavy lifting from us.
By letting the AI drive and focusing on prompt engineering, we can create a more efficient and effective solution for our conversational bot.
To apply those findings, we dropped the idea of a table with predefined questions and answers entirely, and simply instructed the LLM through a prompt template on what we wanted to achieve.
Prompt Engineering Explained for a 5-Year-Old
Imagine you have a magical toy that can answer your questions and help you with many things. But to make the toy work, you need to ask it questions in a special way. Prompt engineering is like figuring out the best way to ask your magical toy questions so it can understand you and give you the best answers. It's like learning how to talk to your toy so you both can have a fun and helpful conversation.
Agent Explained for a 5-Year-Old
Imagine you have a helpful robot friend who can do many things for you, like finding your toys, answering your questions, or even helping you with your homework. This robot friend is called an "agent." An agent is like a helper that lives inside a computer or a device and can do tasks for you. It listens to what you say or type and then tries its best to help you with what you need.
Hackathon Final Prompt Solution
Instead of setting up databases, APIs, and writing algorithms to calculate the score, we came up with a simpler solution using prompt engineering:
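The exact prompt we used isn't reproduced here, but an illustrative template in the same spirit might look like the one below; the questions, scoring scale, and wording are hypothetical.

```python
from langchain.prompts import PromptTemplate

# An illustrative prompt that asks the LLM to run the whole interview and scoring itself,
# instead of us hard-coding questions, weights, and algorithms.
survey_prompt = PromptTemplate(
    input_variables=["topic"],
    template=(
        "You are a friendly coach. Ask the user 5 short questions, one at a time, "
        "about {topic}. After the last answer, rate the user on a scale of 0-100, "
        "briefly explain the score, and give 3 practical tips for improvement."
    ),
)
```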
Surprisingly, this solution was developed almost entirely by the only non-programming person on our team. Shout out to Iwo Szapar, our hackathon team's Senior Prompt Engineer!
It worked like magic.
Applying the Hackathon Findings to Our Example Scenario
Both the hackathon solution and the example scenario share some assumptions. First, the user produces a query. Second, we send that query to the LLM, making sure the LLM knows our guidebook and can compare the user's score with the reference data stored in the vector database to produce a meaningful response.
We solved it like this:
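The original hackathon code isn't shown verbatim here; below is a minimal sketch of the same idea. It assumes the classic LangChain agent API, an OpenAI key in your environment, and a `playbook_store` vector database (for example Chroma or FAISS) already filled with fragments of our "People Managers guide".

```python
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory


class PlaybookChat:
    """A conversational agent that answers questions using the People Managers guide."""

    def __init__(self, playbook_store):
        # The only tool: similarity search over the playbook stored in a vector database.
        tools = [
            Tool(
                name="query_playbook",
                func=lambda query: "\n".join(
                    doc.page_content
                    for doc in playbook_store.similarity_search(query, k=3)
                ),
                description="Searches the People Managers guide for relevant advice.",
            )
        ]
        memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
        self.agent = initialize_agent(
            tools,
            ChatOpenAI(temperature=0),
            agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
            memory=memory,
            verbose=True,
        )

    def start_conversation(self, user_score: int) -> str:
        # Kick off the chat: tell the agent the user's score and invite follow-up questions.
        return self.agent.run(
            f"The user scored {user_score}/100 in the survey. Greet them, comment on the "
            "score using the playbook, and encourage them to ask insightful questions."
        )
```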
Now, let me explain what happened here, as I just mentioned a new concept called "vector database" and then shared some code.
Vector Database Explained for a 5-Year-Old
Imagine you have a big box of toys, and you want to find a specific toy quickly. A vector database is like a magical map that helps you find the toy you're looking for in the big box. It knows where all the toys are and can show you the right toy based on what you tell it.
In the vector database, each toy gets a "vector" - a list of numbers that describe the toy and how it's similar or different from other toys. The vector database uses special math to find similarities between vectors. So if you tell it you're looking for a red toy car, the database would filter for toys with a high "red toy car" similarity score based on their vectors.
To build the vector database, you first "teach" it about each toy by giving it details like the toy's color, shape, size, function, and more. The database turns these details into numbers to create a vector for the toy. By comparing how vectors are similar and different, the database builds a "map" of how all the toys relate.
When you search the database, it looks at the vectors to find toys most similar to what you described. The more it learns about the toys and gets better at finding relationships between them, the more magical its capabilities seem. But really, it's all based on math!
In our code, we use a vector database to help the AI find the right information to answer your questions. By teaching the database details about knowledge fragments, it can find the fragments most relevant to a user's query. The database's "magic" comes from its ability to quantify similarities between the knowledge it contains.
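As a toy example of what "teaching" a vector database looks like in practice, here is a small sketch using OpenAI embeddings and the FAISS vector store from LangChain; the knowledge fragments are made up for illustration.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Each "toy" is a fragment of knowledge; the embedding model turns it into a vector.
fragments = [
    "One-on-one meetings work best when the employee sets the agenda.",
    "Feedback should be specific, timely, and focused on behaviour.",
    "New managers often struggle with delegating work they used to do themselves.",
]

store = FAISS.from_texts(fragments, OpenAIEmbeddings())

# Similarity search: find the fragments whose vectors are closest to the query's vector.
for doc in store.similarity_search("How do I give better feedback?", k=2):
    print(doc.page_content)
```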
Code Explanation
This code defines a conversational AI agent called PlaybookChat that can interact with users and provide information based on a playbook. The agent has a set of tools that it can use to assist users. In this case, there is one tool called query_playbook that allows the agent to search for information in a "People Managers guide".
When a conversation starts, the agent is initialized with the tools and other settings. It uses an underlying language model called "ChatOpenAI" to generate responses. The agent also has a memory component to keep track of the conversation history.
To start a conversation, the start_conversation method is called with a user score as input. The agent then runs and responds to the user's queries based on the playbook information. The response includes the user's score and encourages them to ask insightful questions.
Overall, this code sets up a conversational AI agent that can provide information and engage in conversation with users based on the playbook's content.
Final Solution for Our Example
We have a working solution already, so let's repurpose it a little bit.
Prompt template:
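The template below is an illustration of what a repurposed prompt for the e-commerce scenario might look like; the exact wording is up to you.

```python
from langchain.prompts import PromptTemplate

# A repurposed prompt: instead of a People Managers guide, the context now comes
# from the shop's own data.
ecommerce_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "You are an analytics assistant for an e-commerce shop. Using only the data "
        "below, answer the owner's question in a short and concise way.\n\n"
        "Data:\n{context}\n\nQuestion: {question}\nAnswer:"
    ),
)
```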
Do you need a place to experiment a little bit more with prompt engineering? OpenAI provides you with an awesome playground to do just that! Feel free to copy & paste the code, and start experimenting over there.
Final Chain Solution for Our Example
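Again, this is a sketch rather than a drop-in implementation. It assumes the prompt template above, the classic LangChain RetrievalQA chain, and a hypothetical `shop_store` vector database already populated with your shop's data (setting that up is the homework mentioned below).

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI


def build_shop_chain(shop_store):
    """Wire the prompt, the LLM, and the vector store together into one chain."""
    return RetrievalQA.from_chain_type(
        llm=ChatOpenAI(temperature=0),
        chain_type="stuff",  # stuff the retrieved fragments straight into the prompt
        retriever=shop_store.as_retriever(),
        chain_type_kwargs={"prompt": ecommerce_prompt},
    )
```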
Example usage:
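A possible way to call it, again assuming the hypothetical `shop_store` vector database from above:

```python
shop_chain = build_shop_chain(shop_store)
print(shop_chain.run("Which product category had the most returning customers last quarter?"))
```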
- Homework: The remaining task for you is to set up a database and populate it with relevant data. I won't provide detailed instructions in this article, but our comprehensive guide on similarity search and vector databases, published as one of our other AI tutorials, walks you through it step by step.
Summary
Embarking on an AI journey may seem challenging, but the true power of AI lies in its ability to support you every step of the way. There is just one key requirement: "Let the AI take the wheel," while you guide it in the right direction.
To summarize everything I've discussed, it is crucial to understand key concepts such as prompt engineering, chains, agents, and vector databases before you begin. With a solid grasp of these concepts, you will be empowered to achieve remarkable feats!
Wishing you the best of luck on your journey of creating AI apps, and thank you for your valuable time.