LangChain

How to Use the LangChain LLMChain Function in Python

LangChain provides a plethora of modules for building language model applications. Simple applications can be built from a single module, while more complex ones combine several. The most fundamental LangChain operation is calling an LLM on a given input.

Chains are not limited to a single LLM call; they are sequences of calls, either to an LLM or to another utility. LangChain provides end-to-end chains for widely used applications, along with a standard chain API and numerous tool integrations.

This flexibility to link multiple elements into a single entity is useful when we want to design a chain that accepts user input, formats it with a PromptTemplate, and then delivers the generated prompt to an LLM.

This article helps you grasp the use of the LangChain LLMChain function in Python.

Example: How to Use the LLMChain Function in LangChain

We talked about what chains are. Now, we will see a practical demonstration of these chains in a Python script. In this example, we use the most basic LangChain chain, LLMChain. It contains a PromptTemplate and an LLM and chains them together to generate an output.

To start implementing the concept, we have to install some required libraries that are not included in the Python standard library: LangChain and OpenAI. We install the LangChain library because we need its LLMChain module as well as the PromptTemplate. The OpenAI library lets us use OpenAI's models, such as GPT-3, to predict the outputs.

To install the LangChain library, run the following command on the terminal:

$ pip install langchain

Install the OpenAI library with the following command:

$ pip install openai

Once the installations are complete, we can start working on the main project.

from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "sk-YOUR-API-KEY"

The main project begins by importing the necessary modules. First, we import PromptTemplate from the "langchain.prompts" module. Then, we import OpenAI from the "langchain.llms" module. Next, we import "os" to set the environment variable.

Initially, we set the OpenAI API key as an environment variable. An environment variable consists of a name and a value and is set in our operating system. The "os.environ" object maps the environment variables, so we call "os.environ" and set the name OPENAI_API_KEY with the API key as its value. The API key is unique to each user, so when you practice this code script, use your own secret API key.
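
Hard-coding the key works for a quick test, but a safer pattern is to export the key in your shell and only read it in the script. Here is a minimal sketch of that approach, assuming the key was already exported under the name OPENAI_API_KEY:

import os

# Assumes the key was exported in the shell beforehand, e.g.:
# $ export OPENAI_API_KEY="sk-..."
api_key = os.environ.get("OPENAI_API_KEY")  # returns None if the variable is not set

if api_key is None:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")

This way, the secret never appears in the source code.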

llm = OpenAI(temperature=0.9)

prompt = PromptTemplate(
    input_variables=["products"],
    template="What would a brand be named that sells {products}?",
)

Now that the key is set as an environment variable, we initialize a wrapper for the OpenAI GPT models and set the temperature. The temperature is a parameter that determines how unpredictable the response will be: the higher the temperature value, the more random the responses. We set the temperature value to 0.9 here, so we get fairly random results.
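
To make the effect of this parameter concrete, here is a short illustrative sketch (the values and variable names are examples, not requirements):

# temperature=0 makes the model nearly deterministic: the same prompt
# produces almost the same completion every time.
predictable_llm = OpenAI(temperature=0)

# temperature=0.9 produces much more varied completions from run to run.
creative_llm = OpenAI(temperature=0.9)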

Then, we initialize the PromptTemplate class. When we use the LLM, we generate a prompt from the input that is taken from the user and then pass it to the LLM rather than hard-coding the input (a prompt is an input that we take from the user and on which the defined AI model should create a response). So, we initialize the PromptTemplate. Then, within its parentheses, we define the input_variables list as ["products"], and the template text is "What would a brand be named that sells {products}?" The user input tells what the brand sells. The template then formats the prompt based on this information.
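
Before chaining anything, we can preview what the template produces. PromptTemplate provides a format() method for this; the value "art supplies" below is just for illustration:

formatted = prompt.format(products="art supplies")
print(formatted)
# What would a brand be named that sells art supplies?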

from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=prompt)

Now that our PromptTemplate is formatted, the next step is to make an LLMChain. First, we import the LLMChain module from the "langchain.chains" library. Then, we create a chain by calling the LLMChain() function which takes the user input, formats the prompt with it, and sends the formatted prompt to the LLM. So, it connects the PromptTemplate and the LLM.
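
Conceptually, the chain simply glues these two steps together. The following rough sketch does the same work by hand; it is a simplification for illustration, not the actual LLMChain source:

# Step 1: format the template with the user input.
manual_prompt = prompt.format(products="Art supplies")

# Step 2: call the LLM wrapper directly on the formatted string.
manual_output = llm(manual_prompt)
print(manual_output)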

print(chain.run("Art supplies"))

To execute the chain, we call the chain.run() method and provide the user input as the parameter, which is "Art supplies". Then, we pass this method's result to the Python print() function to display the predicted outcome on the Python console.
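
If you want to generate names for several product types at once, this version of LLMChain also offers an apply() method that runs the chain over a list of input dictionaries. A brief sketch with illustrative values:

results = chain.apply([
    {"products": "Art supplies"},
    {"products": "garden tools"},
])
print(results)  # a list of dictionaries, one response per input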

The AI model reads the prompt and makes a response based on it.

Since we asked it to name a brand that sells art supplies, the AI model responds with a suggested brand name.

This example shows the LLMChain when a single input variable is provided. Chains also work with multiple variables. For that, we simply have to create a dictionary of variables to input them altogether. Let's see how this works:

from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "sk-YOUR-API-KEY"

llm = OpenAI(temperature=0.9)

prompt = PromptTemplate(
    input_variables=["Brand", "Product"],
    template="What would be the name of {Brand} that sells {Product}?",
)

from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run({
    'Brand': "Art supplies",
    'Product': "colors"
}))

The code goes the same as in the previous example, except that we have to pass two variables in the prompt template class. So, we create a list of input_variables; the square brackets denote a Python list. Here, we have two variables, "Brand" and "Product", which are separated by a comma. Now, the template text that we provide is "What would be the name of {Brand} that sells {Product}?" Thus, the AI model predicts a name that focuses on these two input variables.
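
As in the single-variable case, we can preview the formatted prompt with format(); the values below are just for illustration:

print(prompt.format(Brand="Art supplies", Product="colors"))
# What would be the name of Art supplies that sells colors?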

Then, we create an LLMChain which formats the user input with the prompt and sends the result to the LLM. To run this chain, we use the chain.run() method and pass the dictionary of variables with the user input: "Brand" as "Art supplies" and "Product" as "colors". Then, we pass this method's result to the Python print() function to display the obtained response.
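
As a side note, this version of chain.run() also accepts the same inputs as keyword arguments, which can read a little cleaner; a sketch equivalent to the dictionary call above:

print(chain.run(Brand="Art supplies", Product="colors"))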

The model then returns a predicted brand name that reflects both inputs.

Conclusion

Chains are the building blocks of LangChain. This article goes through the concept of using the LLMChain in LangChain. We introduced LLMChain and showed the need to employ it in a Python project. Then, we carried out a practical illustration which demonstrates the implementation of the LLMChain by connecting the PromptTemplate and the LLM. You can create these chains with a single input variable as well as with multiple user-provided variables.

About the author

Omar Farooq

Hello readers, I am Omar and I have been writing technical articles for the last decade. You can check out my writing pieces.