
How to Use Conversation Knowledge Graph in LangChain?

LangChain is a framework for building language model applications that can imitate the conversational format humans use when interacting with each other. The user can ask questions as strings or text in natural language, and the model will extract or generate information in response. These models can have memory attached to them so they can store previous messages and use them as the context of the conversation.

This guide will illustrate the process of using the conversation knowledge graph in LangChain.

How to Use Conversation Knowledge Graph in LangChain?

The ConversationKGMemory class builds a knowledge graph from the conversation and uses it to recall the context of the interaction. To learn the process of using the conversation knowledge graph in LangChain, simply go through the listed steps:

Step 1: Install Modules

First, get started with the process of using the conversation knowledge graph by installing the LangChain module:

pip install langchain

Next, install the OpenAI module using the pip command to get its libraries for building Large Language Models:

pip install openai

Now, set up the environment using the OpenAI API key, which can be generated from the OpenAI account:

import os

import getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

Step 2: Using Memory With LLMs

Once the modules are installed, start using the memory with an LLM by importing the required libraries from the LangChain module:

from langchain.memory import ConversationKGMemory

from langchain.llms import OpenAI

Build the LLM using the OpenAI() method and configure the memory using the ConversationKGMemory() method. After that, save a few exchanges, each input with its respective response, so the memory has conversation data to build the knowledge graph from:

llm = OpenAI(temperature=0)

memory = ConversationKGMemory(llm=llm)

memory.save_context({"input": "say hi to john"}, {"output": "john! Who"})

memory.save_context({"input": "he is a friend"}, {"output": "sure"})

Test the memory by calling the load_memory_variables() method with a query related to the above data:

memory.load_memory_variables({"input": "who is john"})
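
If the knowledge graph has picked up the facts saved above, the call returns the relevant information as a single history string. The exact wording depends on the model, but the output should look roughly like this:

#Approximate output (model-dependent):
#{'history': 'On john: john is a friend.'}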

Configure the memory again using the ConversationKGMemory() method, this time with the return_messages argument to get the history of the input as message objects:

memory = ConversationKGMemory(llm=llm, return_messages=True)

memory.save_context({"input": "say hi to john"}, {"output": "hi john! Who is john?"})

memory.save_context({"input": "he is a friend"}, {"output": "sure"})

Test the memory again by providing the input argument with a query as its value:

memory.load_memory_variables({"input": "who is john"})
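
With return_messages=True, the same facts come back wrapped in message objects rather than a plain string, which is the format chat models expect. The output should resemble the following (exact content is model-dependent):

#Approximate output (model-dependent):
#{'history': [SystemMessage(content='On john: john is a friend.')]}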

Now, test the memory by asking about something that was never saved to it. The get_current_entities() method extracts the entities mentioned in the query, even though the graph holds no answer for it yet:

memory.get_current_entities("what's the favorite color of john")
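
The method only extracts the entity names mentioned in the text, so it should return something along these lines (the extraction itself is performed by the model):

#Approximate output (model-dependent):
#['john']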

Use the get_knowledge_triplets() method on a statement that answers the previously asked query; it extracts (subject, predicate, object) triplets from the text:

memory.get_knowledge_triplets("his favorite color is red")
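
In the classic langchain package, the triplets come back as KnowledgeTriple named tuples. Given the statement above, the result should look roughly like this (again model-dependent):

#Approximate output (model-dependent):
#[KnowledgeTriple(subject='john', predicate='favorite color', object_='red')]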

Step 3: Using Memory in Chain

The next step uses the conversation memory with chains. Build the LLM using the OpenAI() method, then configure a prompt template that defines the structure of the conversation; this text is sent to the model every time it generates an output:

from langchain.prompts.prompt import PromptTemplate
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)

template = """This is the template for the interaction among human and machine
The system is an AI model that can talk or extract information about multiple aspects
If it does not understand the question or have the answer, it simply says so
The system extract data stored in the "Specific" section and does not hallucinate

Specific:

{history}

Conversation:
Human: {input}
AI:"""

#Configure the template or structure for providing prompts and getting responses from the AI system
prompt = PromptTemplate(input_variables=["history", "input"], template=template)
conversation_with_kg = ConversationChain(
    llm=llm, verbose=True, prompt=prompt, memory=ConversationKGMemory(llm=llm)
)

Once the chain is created, simply call the conversation_with_kg chain using the predict() method with the query asked by the user:

conversation_with_kg.predict(input="Hi, what's up?")

Now, add information to the conversation memory by giving it as the input argument for the method:

conversation_with_kg.predict(
    input="My name is James and I'm helping Will. He's an engineer"
)
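
Under the hood, these facts are stored as triplets in the memory's knowledge graph. To inspect them directly, read the memory's kg attribute, which exposes a NetworkX-based entity graph (the kg attribute and the get_triples() helper assume the classic langchain ConversationKGMemory implementation):

#Print the triplets currently stored in the knowledge graph
#(assumes the classic langchain ConversationKGMemory implementation)
print(conversation_with_kg.memory.kg.get_triples())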

Now it is time to test the model by asking queries that extract information from the stored data:

conversation_with_kg.predict(input="Who is Will")

That is all about using the conversation knowledge graph in LangChain.

Conclusion

To use the conversation knowledge graph in LangChain, install the required modules to import the libraries for using the ConversationKGMemory() method. After that, attach the memory to a chain built on the LLM so the model can extract information from the facts saved in memory during the conversation. This guide has elaborated on the process of using the conversation knowledge graph in LangChain.
