Langchain Basic - Agents Walkthrough
This article demonstrates how to leverage LangChain and OpenAI’s models to build an automated system that monitors API health and triggers notifications. It is intended for experimentation and for understanding the basics of LangChain and agents; for real production monitoring it may be a force-fit solution.
LangChain
LangChain is a robust framework for building applications powered by large language models (LLMs). By integrating tools and crafting intelligent agents, developers can automate complex workflows. The emphasis here is on using LangChain agents to monitor an API’s health and send email alerts when the service is down.
What Are Agents in LangChain?
In LangChain, agents act as orchestrators that leverage large language models (LLMs) to make decisions and invoke specific tools. These tools represent discrete functions or tasks, such as checking API health or sending notifications. Agents dynamically decide which tools to use, in what order, and how to process their outputs.
Code Walkthrough
Let’s explore how agents are used in this solution to monitor an API and notify stakeholders when an issue arises.
1. Defining the Problem
We want to answer a query: “Is the API up and running?” If the API is down, the system should automatically send an email notification. The agent will:
- Use a tool to check the API’s status.
- Based on the result, decide whether to invoke the email notification tool.
2. Setting Up the Agent’s LLM
The agent relies on a large language model (LLM) for decision-making:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o", temperature=0)
The ChatOpenAI instance is configured with temperature=0 for precise, deterministic behavior.
3. Creating Tools
Tools are individual functionalities that the agent can invoke. Here, we define two: checkservice_availability, which monitors API health, and mailservice, which sends an alert email.
import requests
from langchain_core.tools import tool

@tool
def checkservice_availability():
    """Check the availability of the API health endpoint."""
    # This URL intentionally does not exist, so the call returns 404.
    response = requests.get("http://api.open-notify.org/this-api-doesnt-exist")
    print(response.status_code)
    return response.status_code
@tool
def mailservice():
    """Send an email notification once the service is down."""
    print('Mail function invoked')
    return 'Mailed to rangesh@gmail.com'
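The health-check logic can be exercised without hitting a live endpoint by stubbing out the HTTP call. Below is a minimal sketch under that assumption; the FakeResponse class, fetch stand-in, and is_service_up helper are illustrative names, not part of the solution above.

```python
# Sketch: decide up/down from a status code, using a stubbed HTTP response
# instead of a real requests.get() call.
class FakeResponse:
    def __init__(self, status_code):
        self.status_code = status_code

def fetch(url):
    # Stand-in for requests.get(); returns a canned response per URL.
    canned = {"http://api.open-notify.org/this-api-doesnt-exist": 404}
    return FakeResponse(canned.get(url, 200))

def is_service_up(url):
    """True when the endpoint answers with HTTP 200."""
    return fetch(url).status_code == 200

print(is_service_up("http://api.open-notify.org/this-api-doesnt-exist"))  # False
print(is_service_up("http://example.com/health"))  # True
```

Separating the “is the status code healthy?” decision from the HTTP call keeps the tool easy to test in isolation.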
4. Configuring the Agent
The agent is where the real magic happens. It ties the LLM and tools together, enabling dynamic decision-making.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
tools = [mailservice, checkservice_availability]
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful DevOps assistant. Do the needful in case service is down, i.e., utilize tools/agents to make decisions."),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
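Under the hood, AgentExecutor runs a loop: ask the LLM what to do next, invoke the chosen tool, append the result to the scratchpad, and repeat until the LLM produces a final answer. A plain-Python sketch of that loop follows; fake_llm stands in for gpt-4o, and its scripted replies and the tool return values are assumptions for illustration.

```python
# Minimal sketch of the tool-calling loop that AgentExecutor runs.
# `fake_llm` is a scripted stand-in for the real model.

def checkservice_availability():
    """Stubbed health check; a real version would call requests.get(...)."""
    return 404  # simulate the endpoint being down

def mailservice():
    """Stubbed notifier."""
    return "Mailed to rangesh@gmail.com"

TOOLS = {"checkservice_availability": checkservice_availability,
         "mailservice": mailservice}

def fake_llm(scratchpad):
    # Scripted decisions: probe first, mail if the probe failed, then answer.
    if not scratchpad:
        return ("tool", "checkservice_availability")
    if scratchpad[-1] == ("checkservice_availability", 404):
        return ("tool", "mailservice")
    return ("final", "Service was down; an alert email has been sent.")

def run_agent(query):
    scratchpad = []  # plays the role of {agent_scratchpad} in the prompt
    while True:
        kind, value = fake_llm(scratchpad)
        if kind == "final":
            return value
        result = TOOLS[value]()            # invoke the chosen tool
        scratchpad.append((value, result))  # feed the result back to the LLM

print(run_agent("Is the API up and running?"))
```

With real credentials, the equivalent call on the configured agent is agent_executor.invoke({"input": "Is the API up and running?"}).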
- System Role: Defines the agent as a DevOps assistant capable of executing tasks.
- Tools Integration: The agent is configured with the checkservice_availability and mailservice tools.
- Dynamic Queries: The {input} placeholder in the prompt allows the agent to handle various inputs.
How the Agent Works
- Query Understanding: The agent interprets the query to determine the user’s intent.
- Tool Invocation: It uses the checkservice_availability tool to assess the API status.
- Decision Making: Based on the status code:
  - If the API is down, it triggers the mailservice tool.
  - If the API is up, it informs the user.
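The status-code branch above can be sketched in plain Python; the return strings mirror the tool outputs, though at run time the LLM itself makes this choice.

```python
# The agent's decision step, reduced to a plain conditional
# (illustrative only; the real branch is chosen by the LLM).
def decide(status_code):
    if status_code == 200:
        return "API is up and running"
    return "Mailed to rangesh@gmail.com"  # what mailservice reports

print(decide(404))  # service down -> mail is sent
print(decide(200))  # service up -> user is informed
```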
The agent autonomously manages this sequence, ensuring smooth and intelligent task execution.
This example illustrates how agents in LangChain transform simple tasks into intelligent workflows. By autonomously making decisions and invoking tools, agents enhance automation, reduce human intervention, and deliver scalable solutions.