This article demonstrates the usage of the “asyncio” library in LangChain.
How to Use the “asyncio” Library to Build Chains in LangChain?
LangChain's async API lets chains run their model calls concurrently through the asyncio library. To use it, simply follow this guide:
Install Prerequisites
First, install the LangChain module, which provides the chain and prompt classes used below:
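A pip command like the following installs it (the plain langchain package, as in LangChain's 0.x releases; exact version pinning is up to you):

```shell
pip install langchain
```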
The OpenAI module is also required, since the chain below uses the OpenAI LLM wrapper:
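It can be installed the same way (package name openai on PyPI):

```shell
pip install openai
```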
Once the required modules are installed successfully, connect to OpenAI using its API key:
import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
Using Async API for Chain
The example below builds an LLMChain from a PromptTemplate and an OpenAI LLM, then calls it five times serially and five times concurrently to compare the timings:
import asyncio
import time

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

def generate_serially():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What do you call the company that sells the {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    for _ in range(5):
        resp = chain.run(product="Bread")
        print(resp)

async def async_generate(chain):
    resp = await chain.arun(product="Bread")
    print(resp)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What do you call the company that sells the {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    # Schedule five calls as tasks and await them all concurrently
    tasks = [async_generate(chain) for _ in range(5)]
    await asyncio.gather(*tasks)

# Time the concurrent calls (top-level await works in a notebook;
# in a plain script, use asyncio.run(generate_concurrently()) instead)
s = time.perf_counter()
await generate_concurrently()
elapsed = time.perf_counter() - s
print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m")

# Time the serial calls for comparison
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print("\033[1m" + f"Serial executed in {elapsed:0.2f} seconds." + "\033[0m")
The above code asks the model the same question (which company sells bread, i.e. a bakery) ten times, five serially and five concurrently, and prints the time each approach takes:
Output
The output shows the model's answers (names for a bakery) along with the timings; the concurrent calls finish noticeably faster than the serial ones because the waiting time for the API responses overlaps.
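The speed-up comes entirely from asyncio.gather overlapping the waiting time of the five calls. This can be illustrated without an API key by a minimal sketch that swaps the chain call for a sleep-based stand-in (fake_llm_call is a hypothetical helper, not part of LangChain):

```python
import asyncio
import time

async def fake_llm_call(product: str) -> str:
    # Stand-in for chain.arun(): each "call" waits one second
    await asyncio.sleep(1)
    return f"Answer for {product}"

async def main():
    s = time.perf_counter()
    # Five concurrent one-second calls overlap, so the total
    # elapsed time is close to one second rather than five
    results = await asyncio.gather(
        *(fake_llm_call("Bread") for _ in range(5))
    )
    elapsed = time.perf_counter() - s
    print(f"{len(results)} responses in {elapsed:0.2f} seconds")

asyncio.run(main())
```

Run serially, the same five sleeps would take about five seconds, which mirrors the gap between the concurrent and serial timings printed above.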
That is all about using the “asyncio” library to build chains in LangChain.
Conclusion
To use the asyncio library in LangChain, install the LangChain and OpenAI modules before moving on to building chains. The async API is helpful whenever a chain must be called many times, since asyncio.gather runs the calls concurrently and reduces the total latency compared to serial execution. This guide has explained the process of using the asyncio library to build chains with the LangChain framework.