How to Create and Work With Human Input LLMs Using LangChain?

LangChain is a framework for building applications powered by Large Language Models (LLMs) that answer queries written in human languages. The framework can also be used to build fake LLMs and human-input LLMs that mock the model call for the prompts a human might ask a chatbot. A human-input LLM displays each prompt to a human, collects the human's typed answer, and then returns it as if the chatbot had generated the response.

This guide will explain creating and working with human-input LLMs using LangChain.

How to Create and Work With Human Input LLMs Using LangChain?

To create and work with human input LLMs using LangChain, simply follow this step-by-step guide:

Step 1: Install Modules

To start creating and working with human input LLMs, install LangChain to access its resources:

pip install langchain

Install the OpenAI module to get the functions required for building the LLM in LangChain:

pip install openai

This example uses Wikipedia to get answers, so its Python module is required as well:

pip install wikipedia

Provide the OpenAI API key and store it in the environment to connect the IDE to the OpenAI platform:

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
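
As an optional sanity check, you can confirm that the key is now visible in the environment before moving on:

# Illustrative check: later OpenAI calls read the key from this variable
assert 'OPENAI_API_KEY' in os.environ, 'OpenAI API key was not set'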

Step 2: Import Libraries

After installing all the modules, import the necessary libraries from the LangChain framework:

from langchain.agents import initialize_agent
from langchain.agents import load_tools
from langchain.agents import AgentType
from langchain.llms.human import HumanInputLLM

Step 3: Build Human Input LLM

Now, the user can build the human input LLM by loading the Wikipedia tool and assigning a “HumanInputLLM()” instance to the “llm” variable. Its “prompt_func” argument configures how each prompt is displayed; here, the prompt is printed between START and END marker lines:

tools = load_tools(["wikipedia"])
llm = HumanInputLLM(
    prompt_func=lambda prompt: print(
        f"\n===START====\n{prompt}\n=====END======"
    )
)
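
To see how “prompt_func” behaves, the LLM can be called directly before wiring it into an agent. This is a minimal sketch; in the LangChain versions that ship HumanInputLLM, input collection typically ends when an empty line is submitted:

# Illustrative only: the prompt appears between the markers, then the
# text typed on stdin is returned as the "model" completion
response = llm("Hello, who are you?")
print(response)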

After that, simply call the initialize_agent() method with the tools, the LLM, and the agent type as its parameters:

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
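
Because the human plays the model's role here, the agent can be exercised end to end without a real model call; the OpenAI key only matters if a real LLM is swapped in later. As a quick smoke test (illustrative), the loaded tools can be listed before running the agent:

# Illustrative: confirm the Wikipedia tool was loaded for the agent
print([tool.name for tool in tools])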

Step 4: Test the HumanInputLLM

Once the HumanInputLLM is created, simply use it to run the agent with the prompt inside its parentheses:

agent.run("What is 'Bocchi the Rock!'?")

Human Input

After executing the above command, the LLM prints the prompt between the START and END markers and waits for the human to type the response. The first line of the response names the tool to use as the source:

Action: Wikipedia

After that, the action input supplies the query for the tool, here the title of the series along with a short description:

Action Input: Bocchi the Rock!, Japanese four-panel manga series written and illustrated by Aki Hamaji

The agent then runs the Wikipedia tool with that input, and the resulting observation names the page that contains the information, so the answer can be validated against the source:

Observation: Page: Bocchi the Rock!

The observation also includes a summary of the source page, which explains the answer in a paragraph:

Summary: Bocchi the Rock! (ぼっち・ざ・ろっく!, Bocchi Za Rokku!) is a Japanese four-panel manga series written and illustrated by Aki Hamaji. It has been serialized since 2017 and collected in five volumes as of 2022.

The summary can span multiple lines, and this explanation gives more details about the query:

An anime television series adaptation produced by CloverWorks aired from October to December 2022. The series was praised for its writing and its depiction of social anxiety, with the anime's visual creativity receiving acclaim.
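
After reading the observation, the human completes the loop by typing the final lines of the ReAct format. The exact wording is up to the human; an illustrative example could look like this:

Thought: I now know the final answer.
Final Answer: Bocchi the Rock! is a Japanese four-panel manga series written and illustrated by Aki Hamaji, which was adapted into an anime television series by CloverWorks in 2022.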

Output

The text the human provides after the final answer marker is returned as the agent's response to the query. Because it was validated against the source page, the response should be authentic.
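
Since agent.run() returns the final answer as a plain string, the result can also be captured for further use (a minimal sketch):

# agent.run() returns the text supplied as the final answer
result = agent.run("What is 'Bocchi the Rock!'?")
print(result)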

That is all about creating and working with human input LLMs using LangChain.

Conclusion

To create and work with human input LLMs using LangChain, first install the necessary modules such as LangChain, OpenAI, and Wikipedia. After that, import the libraries to build the human input LLM, and then run the agent with a prompt to get the answer from the LLM. The LLM asks the user for a response in the specified format and processes it to generate the relevant reply. This blog illustrated the process of creating and working with human input LLMs in LangChain.

About the author

Talha Mahmood

As a technical author, I am eager to learn about writing and technology. I have a degree in computer science, which gives me a deep understanding of technical concepts and the ability to communicate them effectively to a variety of audiences.