How to Use an Agent Optimized for Conversation in LangChain?

LangChain provides the dependencies for building chat models that serve as an interface for conversations with humans. The framework also lets developers configure and use agents that fetch answers from the internet along with their source. Agents are used to get optimized performance from chat models, as they produce authentic, well-grounded outputs.

Quick Outline

This post will demonstrate the following:

How to Use a Conversation-Optimized Agent in LangChain

Method 1: Using LangChain Expression Language

Method 2: Using ChatModel

Conclusion

How to Use a Conversation-Optimized Agent in LangChain?

Agents are used to get optimal performance in multiple areas such as conversation, tool use, and more; here we are using the conversation-optimized agent. A conversation-optimized agent is a dialogue system designed to answer queries efficiently and accurately. To learn the process of using the agent optimized for conversation in LangChain, simply follow this guide:

Step 1: Installing Frameworks

Firstly, install the LangChain framework to get the libraries and dependencies for building and using the conversational agent:

pip install langchain

Use the following command to install the “google-search-results” module, which the conversation-optimized agent will use to search the internet through SerpAPI:

pip install google-search-results

Next, install the OpenAI module, which is needed to set up the language model behind the agent:

pip install openai

Set up the environment with the OpenAI and SerpAPI credentials (API keys) using the os and getpass libraries:

import os

import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

os.environ["SERPAPI_API_KEY"] = getpass.getpass("Serpapi API Key:")
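As an optional sanity check (a minimal sketch), confirm that both keys were actually stored in the environment before moving on:

#quick check that both API keys are set
assert os.environ.get("OPENAI_API_KEY") and os.environ.get("SERPAPI_API_KEY"), "An API key is missing"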

Step 2: Importing Libraries

After getting all the required modules and frameworks, import the necessary libraries for using the conversational agent:

from langchain.agents import Tool

from langchain.memory import ConversationBufferMemory

from langchain.llms import OpenAI

from langchain.utilities import SerpAPIWrapper

from langchain.agents import initialize_agent

from langchain.agents import AgentType

Assign the SerpAPIWrapper() object to the search variable, then configure the tool the agent can use with its name, function, and description:

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    ),
]

Build the LLM using the OpenAI() method and assign it to the llm variable:

llm = OpenAI(temperature=0)
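As a side note, the initialize_agent() and AgentType imports above can also build a conversation-optimized agent in a single call. The following is a minimal sketch of that classic approach, reusing the tools and llm defined above:

#memory for the classic agent so it can track the conversation
memory = ConversationBufferMemory(memory_key="chat_history")

#one-call setup with the conversational ReAct agent type
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)

agent_chain.run(input="hi, i am bob")

The two methods below build the same kind of agent step by step, which gives more control over the prompt and the output parsing.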

Method 1: Using LangChain Expression Language

This method explains the process of building the conversation-optimized agent using the LangChain Expression Language (LCEL):

Step 1: Installing LangChainHub

Get started with this method by installing the “langchainhub” module, which provides access to prebuilt agent prompts:

pip install langchainhub

Step 2: Configure Prompt Template Using Libraries

Import the required libraries for building agents and configuring tools for these agents using the following code:

from langchain.tools.render import render_text_description

from langchain.agents.output_parsers import ReActSingleInputOutputParser

from langchain.agents.format_scratchpad import format_log_to_str

from langchain import hub

Pull the ReAct chat prompt from the LangChain Hub to use as the basis for the conversational agent:

prompt = hub.pull("hwchase17/react-chat")

Use the prompt.partial() method to fill the prompt with the descriptions and names of the tools available to the agent:

prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
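To verify the configuration (an optional sketch, assuming the pulled prompt is a standard PromptTemplate), print the variables the prompt still expects at run time; they should be the input, the chat history, and the agent scratchpad:

#shows which placeholders remain to be filled by the agent pipeline
print(prompt.input_variables)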

Step 3: Configure Conversational Agent

Now set up the agent itself. First, bind a stop sequence to the LLM so that generation halts as soon as the model writes "\nObservation", leaving the actual observation to be supplied by the tool:

llm_with_stop = llm.bind(stop=["\nObservation"])

With the stop sequence bound, configure the agent pipeline that maps the user input, the scratchpad of intermediate steps, and the chat history into the prompt:

agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_str(x['intermediate_steps']),
    "chat_history": lambda x: x["chat_history"]
} | prompt | llm_with_stop | ReActSingleInputOutputParser()

Now, import the AgentExecutor class, which is used to call the agent and run its functionalities:

from langchain.agents import AgentExecutor

Add memory to the agent so it can store previous messages and understand the context of the conversation:

memory = ConversationBufferMemory(memory_key="chat_history")

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)

Step 4: Test the Agent

After configuring all the components of the agent, simply call the agent_executor using the invoke() function with the input as its argument:

agent_executor.invoke({"input": "hi, i am bob"})['output']

Ask the kind of query a user would normally type into Google search, such as the movies showing on a particular date:

agent_executor.invoke({"input": "what are some movies showing 10/10/2023"})['output']

The agent fetches the list of movies showing on 10/10/2023 from the internet and displays it in the output.
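Since the executor was built with ConversationBufferMemory, you can also confirm that earlier turns are remembered by asking about them:

agent_executor.invoke({"input": "whats my name?"})['output']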

Method 2: Using ChatModel

The second method for using the agent optimized for conversation in LangChain relies on a chat model, as explained below:

Step 1: Building Chat Model

Start this method by importing the required libraries from LangChain (it also reuses the tools list and the render_text_description() import from Method 1):

from langchain.chat_models import ChatOpenAI

from langchain import hub

Pull the JSON-based ReAct chat prompt from the LangChain Hub and configure the chat model using the ChatOpenAI() method with its arguments:

prompt = hub.pull("hwchase17/react-chat-json")

#configuring language model with the library imported from LangChain

chat_model = ChatOpenAI(temperature=0, model='gpt-4')

Fill the prompt with the tool descriptions and tool names used by the agent, just as in Method 1:

prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)

Step 2: Configure Conversational Agent

Once the chat model is ready, bind the "\nObservation" stop sequence to it and store the result in the “chat_model_with_stop” variable:

chat_model_with_stop = chat_model.bind(stop=["\nObservation"])

Import the output parser that extracts the agent’s action from a JSON blob, along with the format_log_to_messages() helper from LangChain’s agents package:

from langchain.agents.output_parsers import JSONAgentOutputParser

from langchain.agents.format_scratchpad import format_log_to_messages

Step 3: Setting Prompt Template

Now, define the template the agent uses to feed each tool’s response back to the chat model during the conversation:

TEMPLATE_TOOL_RESPONSE = """TOOL RESPONSE:
---------------------
{observation}

USER'S INPUT
--------------------

Give me an output responding to the input provided by the user.
If the information was gathered from a tool, you should mention it. Remember to respond with a markdown code snippet containing a single action, and simply respond to the user with a JSON snippet no matter what!"""

#Configuring agent to invoke its tasks/actions performed using the tools
agent = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_log_to_messages(x['intermediate_steps'], template_tool_response=TEMPLATE_TOOL_RESPONSE),
    "chat_history": lambda x: x["chat_history"],
} | prompt | chat_model_with_stop | JSONAgentOutputParser()

Define the memory variable by calling ConversationBufferMemory() to store the previous messages, and then configure the AgentExecutor:

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
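Because return_messages=True is set, the stored history is kept as message objects rather than a single string; as an optional sketch, you can inspect it after a few exchanges:

#prints the HumanMessage/AIMessage objects accumulated so far
print(memory.chat_memory.messages)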

Step 4: Test the Agent

The last step in this method is to test the agent by initiating the chat using the input string in the invoke() method:

agent_executor.invoke({"input": "hi, i am bob"})['output']

Now, test the memory by asking a question about information provided earlier in the conversation:

agent_executor.invoke({"input": "whats my name?"})['output']

The agent can extract data from the internet as well; ask a search-style query and the agent’s actions will be produced as JSON snippets while the final answer appears in the output:

agent_executor.invoke({"input": "what are some movies showing 10/10/2023"})['output']

That’s all about using the agent optimized for conversation in LangChain.

Conclusion

To use an agent optimized for conversation in LangChain, install the required modules and import the libraries needed to build the LLM. There are multiple ways to build a conversation-optimized agent in LangChain, such as using LCEL or a chat model. This guide has elaborated on the process of building and testing the agent using both methods offered by LangChain.
