In our earlier tutorial, we built an AI agent capable of answering queries by browsing the web, and added persistence to maintain state. However, in many scenarios, you may want to put a human in the loop to monitor and approve the agent's actions. This can be easily achieved with LangGraph. Let's explore how this works.
Setting Up the Agent
We'll continue from where we left off in the last lesson. First, set up the environment variables, make the required imports, and configure the checkpointer.
pip install langgraph==0.2.53 langgraph-checkpoint==2.0.6 langgraph-sdk==0.1.36 langchain-groq langchain-community langgraph-checkpoint-sqlite==2.0.1
import os
os.environ['TAVILY_API_KEY'] = ""
os.environ['GROQ_API_KEY'] = ""
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage, AIMessage
from langchain_groq import ChatGroq
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.checkpoint.sqlite import SqliteSaver
import sqlite3
sqlite_conn = sqlite3.connect("checkpoints.sqlite", check_same_thread=False)
memory = SqliteSaver(sqlite_conn)
# Initialize the search tool
tool = TavilySearchResults(max_results=2)
Defining the Agent
class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile(checkpointer=checkpointer)
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def exists_action(self, state: AgentState):
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}
Setting Up the Agent State
We now configure the agent state with a slight modification. Previously, the messages list was annotated with operator.add, so new messages were always appended to the existing array. For human-in-the-loop interactions, we sometimes also want to replace an existing message that has the same ID, rather than append a duplicate.
from uuid import uuid4

def reduce_messages(left: list[AnyMessage], right: list[AnyMessage]) -> list[AnyMessage]:
    # Assign IDs to messages that don't have them
    for message in right:
        if not message.id:
            message.id = str(uuid4())
    # Merge the new messages with the existing ones
    merged = left.copy()
    for message in right:
        for i, existing in enumerate(merged):
            # Replace any existing message with the same ID
            if existing.id == message.id:
                merged[i] = message
                break
        else:
            merged.append(message)
    return merged

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], reduce_messages]
Including a Human within the Loop
We introduce one further modification when compiling the graph. The interrupt_before=["action"] parameter adds an interrupt before the action node is called, ensuring manual approval before any tool is executed.
class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        # Everything else stays the same as before
        self.graph = graph.compile(checkpointer=checkpointer, interrupt_before=["action"])
        # Everything else stays unchanged
Running the Agent
Now, we'll initialize the agent with the same prompt, model, and checkpointer as before. When we call the agent, we pass in the thread configuration with a thread ID.
prompt = """You are a smart research assistant. Use the search engine to look up information.
You are allowed to make multiple calls (either together or in sequence).
Only look up information when you are sure of what you want.
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""
model = ChatGroq(model="Llama-3.3-70b-Specdec")
abot = Agent(model, [tool], system=prompt, checkpointer=memory)
messages = [HumanMessage(content="Whats the weather in SF?")]
thread = {"configurable": {"thread_id": "1"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)
Responses are streamed back, and the process stops after the AI message, which indicates a tool call. The interrupt_before parameter prevents immediate execution of that call. We can also get the current state of the graph for this thread, inspect what it contains, and see which node is next to be called ('action' here).
abot.graph.get_state(thread)
abot.graph.get_state(thread).next
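To see the shape of what the state snapshot exposes, here is a toy illustration using a faked snapshot object rather than a live graph (the field names mirror LangGraph's StateSnapshot, but the values here are invented for demonstration):

```python
from collections import namedtuple

# Faked stand-in for LangGraph's StateSnapshot: `values` holds the channel
# contents, and `next` is a tuple of node names still waiting to run.
StateSnapshot = namedtuple("StateSnapshot", ["values", "next"])

# While the graph is interrupted before the "action" node, the snapshot
# would look roughly like this:
snapshot = StateSnapshot(
    values={"messages": ["<AIMessage carrying tool_calls>"]},
    next=("action",),
)

# An approval loop can key off this field: a non-empty tuple means a node
# is pending; once the graph finishes, the tuple is empty and falsy.
print(bool(snapshot.next))  # True
```

This is why the approval loop shown later can simply test the snapshot's pending-nodes field as its continuation condition.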
To continue, we call stream again with the same thread configuration, passing None as the input. This streams back the results, including the tool message and the final AI message. Since no interrupt was added between the action node and the LLM node, execution continues seamlessly.
for event in abot.graph.stream(None, thread):
    for v in event.values():
        print(v)
Interactive Human Approval
We can implement a simple loop that prompts the user for approval before continuing execution. A new thread ID is used for a fresh run. If the user chooses not to proceed, the agent stops.
messages = [HumanMessage("What's the weather in LA?")]
thread = {"configurable": {"thread_id": "2"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)
while abot.graph.get_state(thread).next:
    print("\n", abot.graph.get_state(thread), "\n")
    _input = input("Proceed? (y/n): ")
    if _input.lower() != "y":
        print("Aborting")
        break
    for event in abot.graph.stream(None, thread):
        for v in event.values():
            print(v)
Great! Now you know how to involve a human in the loop. Try experimenting with different interruptions and see how the agent behaves.
References: DeepLearning.ai (https://learn.deeplearning.ai/courses/ai-agents-in-langgraph/lesson/6/human-in-the-loop)

Vineet Kumar is a consulting intern at MarktechPost. He is currently pursuing his BS from the Indian Institute of Technology (IIT), Kanpur. He is a Machine Learning enthusiast. He is passionate about research and the latest advancements in Deep Learning, Computer Vision, and related fields.