LangChain

How to Track Token Usage of OpenAI LLM Using LangChain?

LangChain is used to build LLM applications that interact with humans in natural language, and tokens are the units LLMs use to convert text into meaningful data. Tokens are therefore a very important aspect of building LLM applications with LangChain, and tracking their usage is equally helpful: token usage is monitored to manage cost and optimize the performance of Natural Language Processing (NLP) pipelines.

This guide will explain the process of tracking the token usage of OpenAI LLM using LangChain.

How to Track Token Usage of OpenAI LLM Using LangChain?

To track the usage of tokens of OpenAI LLM in LangChain, follow this simple guide for the complete process:

Install Modules

Install the LangChain framework to build AI applications with OpenAI LLMs:

pip install langchain

Install the OpenAI module to use OpenAI LLMs and build bots that interact with humans:

pip install openai

Install the tiktoken module to count and split text into tokens:

pip install tiktoken

After installing the required modules, set up OpenAI by providing the OpenAI API key using the following code:

import os
import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

Import Libraries

Import the OpenAI and get_openai_callback libraries to use OpenAI LLMs and track their token usage in the application:

from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

Import SerpAPIWrapper, a utility for fetching real-time Google search results:

from langchain.utilities import SerpAPIWrapper

After that, configure the OpenAI() function and store it in the “llm” variable:

llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

Track Token Usage

Once the “OpenAI LLM” is configured, use the “get_openai_callback()” context manager. It records the details of the tokens consumed by each call to the “llm()” method:

with get_openai_callback() as cb:
    result = llm("I want a poem about rain")
    print(cb)

Printing the cb variable displays the details of the tokens used for the prompt sent via the llm() function:

Call the get_openai_callback() method as cb again, run multiple prompts, and print the total number of tokens:

with get_openai_callback() as cb:
    result = llm("I want a poem about rain")
    result2 = llm("I want a poem about rain")
    print(cb.total_tokens)

Executing the above code prints the token count accumulated across both prompts, as the following screenshot displays with 281 tokens:
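The key behavior here is that the callback accumulates usage across every LLM call inside the with block. To show the accumulation pattern without calling the API, here is a simplified, hypothetical sketch; TokenTracker, track_tokens, and fake_llm are illustrative names, not LangChain APIs:

```python
# Hypothetical sketch of how a callback like get_openai_callback()
# can accumulate token counts across multiple LLM calls.
from contextlib import contextmanager

class TokenTracker:
    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    @property
    def total_tokens(self):
        return self.prompt_tokens + self.completion_tokens

    def record(self, prompt_tokens, completion_tokens):
        # Each LLM call reports its usage; the tracker sums it up
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens

@contextmanager
def track_tokens():
    yield TokenTracker()

def fake_llm(prompt, tracker):
    # Stand-in for llm(): pretend each word is one prompt token
    # and every "completion" is ten tokens long
    tracker.record(prompt_tokens=len(prompt.split()), completion_tokens=10)
    return "..."

with track_tokens() as cb:
    fake_llm("I want a poem about rain", cb)
    fake_llm("I want a poem about rain", cb)
    print(cb.total_tokens)  # both calls are counted: 2 * (6 + 10) = 32
```

Because the tracker lives for the whole with block, every call inside it contributes to the same totals, which is why the real example above reports one combined figure for both prompts.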

Track Tokens with Multiple Steps

Import load_tools, AgentType, and initialize_agent from LangChain's agents module along with the OpenAI library, then configure the LLM application using the OpenAI() method:

from langchain.agents import load_tools
from langchain.agents import AgentType
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# "serpapi" answers the search part of the prompt and requires the
# SERPAPI_API_KEY environment variable to be set
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

After configuring the LLM using OpenAI, run a prompt through the agent and print the details of the tokens used by the prompt:

with get_openai_callback() as cb:
    response = agent.run(
        "Who is Olivia Wilde's boyfriend? What is his current age raised to 0.23?"
    )
    # Printing the total number of tokens used
    print(f"Total Tokens: {cb.total_tokens}")
    # Printing the tokens used by the prompt
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    # Printing the number of tokens used to complete the prompt
    print(f"Completion Tokens: {cb.completion_tokens}")
    # Printing the total cost of the tokens used
    print(f"Total Cost (USD): ${cb.total_cost}")
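The cb.total_cost field is derived from the prompt and completion token counts multiplied by the model's per-token prices. The arithmetic can be sketched as follows; the per-1K-token prices and token counts below are assumed example values, not current OpenAI pricing:

```python
# Hypothetical cost calculation mirroring what cb.total_cost reports.
# Prices are example values only; check OpenAI's pricing page for real rates.
PROMPT_PRICE_PER_1K = 0.02      # USD per 1,000 prompt tokens (assumed)
COMPLETION_PRICE_PER_1K = 0.02  # USD per 1,000 completion tokens (assumed)

prompt_tokens = 120      # example count, as reported by cb.prompt_tokens
completion_tokens = 80   # example count, as reported by cb.completion_tokens

total_cost = round(
    (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K
    + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K,
    6,
)
print(f"Total Cost (USD): ${total_cost}")  # $0.004 for 200 tokens at $0.02/1K
```

This is why tracking prompt and completion tokens separately matters: many models price the two sides differently, so the split drives the final cost.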

Output

The following screenshot displays the output of the above code, including the details of the tokens used and their cost:

That’s all about tracking the token usage of OpenAI LLM using LangChain.

Conclusion

To track token usage for OpenAI LLM applications using LangChain, simply install the LangChain, OpenAI, and tiktoken modules. Set up the OpenAI API key before importing the OpenAI and get_openai_callback libraries from LangChain. After that, use the get_openai_callback() function to get the token usage of the prompts run by the LLM application, and track token usage across multi-step operations as well. This guide has explained the process of tracking token usage for OpenAI LLM using LangChain.

About the author

Talha Mahmood

As a technical author, I am eager to learn about writing and technology. I have a degree in computer science which gives me a deep understanding of technical concepts and the ability to communicate them to a variety of audiences effectively.