
How to Use Retry Parser in LangChain?

LangChain is a framework for building applications around Large Language Models (LLMs) that interact with humans in natural language and extract information from data. An output parser is a component that takes the raw text produced by a model and converts it into a structured, usable format. LangChain lets the user import parser libraries to build LLM applications for answering questions.

This guide will demonstrate the process of using the Retry parser in LangChain.

How to Use Retry Parser in LangChain?

The parser can raise an error when the model's response does not match the format the prompt asked for. The Retry parser handles problems that cannot be fixed by looking at the output alone: it re-sends the original prompt together with the bad output so the model can try again. The following guide explains the process in detail:

Setup Prerequisites

Install the LangChain module to get started with the process:

pip install langchain

Another required module is OpenAI, which can also be installed using the “pip install” command:

pip install openai


Use the “os” library to set environment variables and the “getpass” library to enter the OpenAI API key without echoing it to the screen:

import os
import getpass

# Prompt for the API key and store it where the OpenAI client can read it
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

Import Libraries

After installing LangChain and connecting to OpenAI, import the libraries required for this process, such as prompt templates and output parsers:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import (
  ChatPromptTemplate,
  PromptTemplate,
  HumanMessagePromptTemplate,
)
from langchain.llms import OpenAI
from langchain.output_parsers import (
  PydanticOutputParser,
  OutputFixingParser,
  RetryOutputParser,
)
from pydantic import BaseModel, Field, validator
from typing import List

Specify the Prompt Template

Build the response template and define an Action class that inherits from BaseModel, describing the fields the model's answer must contain:

# Prompt template that tells the model how to respond to the query
template = """According to the query, give an Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""

# Schema describing the fields the model's answer must contain
class Action(BaseModel):
  action: str = Field(description="action to take")
  action_input: str = Field(description="input to the action")

parser = PydanticOutputParser(pydantic_object=Action)

Set the prompt variable with the PromptTemplate() method, passing the parser's format instructions as a partial variable:

prompt = PromptTemplate(
  template="Reply the query\n{format_instructions}\n{query}\n",
  input_variables=["query"],
  partial_variables={"format_instructions": parser.get_format_instructions()},
)

Format the query with the format_prompt() method and store the result in the prompt_value variable:

prompt_value = prompt.format_prompt(query="Number of seas in the world?")

Define a bad response that contains only the action field and is missing the required action_input field, to reproduce the error:

bad_response = '{"action": "search"}'


After that, call the parse() method on the bad_response variable:

parser.parse(bad_response)

Executing the above command raises an error because the response does not follow the required format: the action_input field is missing.

Using Retry Parser

After getting the error, import the RetryWithErrorOutputParser class from the LangChain output parsers:

from langchain.output_parsers import RetryWithErrorOutputParser

Build the retry_parser from the existing parser and an OpenAI LLM, with temperature set to 0 for deterministic output:

retry_parser = RetryWithErrorOutputParser.from_llm(
  parser=parser, llm=OpenAI(temperature=0)
)

Now call parse_with_prompt() with both the bad response and the original prompt, so the model has the full context when it retries:

retry_parser.parse_with_prompt(bad_response, prompt_value)

The retry parser uses the original prompt to fill in the missing action_input field, returning a complete Action object such as:

Action(action='search', action_input='who is leo di caprios gf?')


That is all about using the Retry parser in LangChain.

Conclusion

To use the Retry parser in LangChain, install the LangChain and OpenAI modules, then connect to the OpenAI environment using its API key so an LLM can interpret the prompts. Next, import the required libraries, define a Pydantic model and prompt template, and parse the model's response. If the response is incomplete, the Retry parser re-sends the original prompt along with the bad output so the model can produce a properly formatted answer. This blog has illustrated the process of using the Retry parser in LangChain.

About the author

Talha Mahmood

As a technical author, I am eager to learn about writing and technology. I have a degree in computer science which gives me a deep understanding of technical concepts and the ability to communicate them to a variety of audiences effectively.