
Agents in LangGraph

Agents are autonomous decision-makers that observe their environment, take actions, and work toward goals. LangGraph provides graph-based building blocks for constructing agents that handle complex tasks through reasoning and tool use.

🤖 What are AI Agents?

AI agents are systems that can:

  • Perceive - Observe and understand their environment
  • Reason - Make decisions based on available information
  • Act - Use tools and APIs to affect their environment
  • Learn - Adapt based on feedback and experience
  • Persist - Maintain state and memory across interactions

🏗️ Agent Architecture

Core Components

  • State - Agent's current understanding and memory
  • Tools - Capabilities the agent can use
  • Reasoning - Decision-making logic
  • Planning - Strategy for achieving goals
  • Execution - Carrying out planned actions

Agent Types

  1. ReAct Agents - Interleaved reasoning and acting cycles
  2. Tool-Using Agents - Agents with external tool access
  3. Multi-Agent Systems - Collaborative agent teams
  4. Human-in-the-Loop - Agents with human supervision

🧠 Single Agent Systems

Basic Reactive Agent

Create a simple agent that reasons and acts in cycles.

from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from typing import TypedDict, Annotated, List
import operator
 
llm = ChatOpenAI(model="gpt-3.5-turbo")
 
class AgentState(TypedDict):
    messages: Annotated[List[str], operator.add]
    current_task: str
    observations: List[str]
    actions_taken: List[str]
    completed: bool
    reasoning: str
 
def reasoning_node(state: AgentState):
    """Agent reasoning about current state."""
    observations = state.get("observations", [])
    current_task = state.get("current_task", "")
 
    observations_text = "\n".join(observations[-5:])  # Last 5 observations
 
    prompt = f"""
    You are an AI agent. Analyze the current situation and decide what to do next.
 
    Current Task: {current_task}
    Recent Observations:
    {observations_text}
 
    What should you do next? Provide your reasoning.
    """
 
    response = llm.invoke(prompt)
 
    return {
        "reasoning": response.content,
        "messages": [f"Reasoning: {response.content}"]
    }
 
def action_node(state: AgentState):
    """Agent takes action based on reasoning."""
    reasoning = state["reasoning"]
    current_task = state["current_task"]
 
    prompt = f"""
    Based on this reasoning, what action should you take?
 
    Reasoning: {reasoning}
    Task: {current_task}
 
    Choose one action:
    1. Use web search tool
    2. Use calculator tool
    3. Use database tool
    4. Complete task
    """
 
    response = llm.invoke(prompt)
 
    action = response.content.strip()
    return {
        "actions_taken": state.get("actions_taken", []) + [action],
        "messages": [f"Action: {action}"]
    }
 
def observation_node(state: AgentState):
    """Agent observes results of actions."""
    last_action = state["actions_taken"][-1] if state["actions_taken"] else ""
 
    # Simulate tool execution
    if "web search" in last_action.lower():
        observation = "Web search found relevant information about the topic."
    elif "calculator" in last_action.lower():
        observation = "Calculation performed successfully: result = 42"
    elif "database" in last_action.lower():
        observation = "Database query returned 15 matching records."
    elif "complete" in last_action.lower():
        observation = "Task completed successfully."
        return {
            "observations": state.get("observations", []) + [observation],
            "completed": True,
            "messages": [f"Observation: {observation}"]
        }
    else:
        observation = "Action executed. Ready for next step."
 
    return {
        "observations": state.get("observations", []) + [observation],
        "messages": [f"Observation: {observation}"]
    }
 
def should_continue(state: AgentState):
    """Check if agent should continue working."""
    if state.get("completed", False):
        return "end"
    elif len(state.get("actions_taken", [])) >= 5:  # Max actions limit
        return "end"
    else:
        return "continue"
 
# Build reactive agent workflow
workflow = StateGraph(AgentState)
 
workflow.add_node("reasoning", reasoning_node)
workflow.add_node("action", action_node)
workflow.add_node("observation", observation_node)
 
workflow.set_entry_point("reasoning")
workflow.add_edge("reasoning", "action")
workflow.add_edge("action", "observation")
 
workflow.add_conditional_edges(
    "observation",
    should_continue,
    {
        "continue": "reasoning",  # Loop back
        "end": END
    }
)
 
# Compile and run
reactive_agent = workflow.compile()
 
result = reactive_agent.invoke({
    "current_task": "Research the benefits of renewable energy",
    "observations": [],
    "actions_taken": [],
    "completed": False
})
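
To watch the loop in action, you can stream each node's update as the graph runs instead of waiting for the final state; a minimal sketch using the compiled graph's streaming API:

for step in reactive_agent.stream({
    "current_task": "Research the benefits of renewable energy",
    "observations": [],
    "actions_taken": [],
    "completed": False
}, stream_mode="updates"):
    # Each step maps the node that just ran to the state update it returned
    for node_name, update in step.items():
        print(f"[{node_name}] {update}")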

🛠️ Tool-Using Agents

Agents with External Tools

Create agents that can interact with external systems and APIs.

from langchain.tools import Tool
from langchain_community.tools import DuckDuckGoSearchRun
 
class ToolAgentState(TypedDict):
    query: str
    available_tools: List[dict]
    tool_results: List[dict]
    final_answer: str
    reasoning_chain: List[str]
 
# Define tools
search_tool = DuckDuckGoSearchRun()
 
def weather_tool(location: str) -> str:
    """Get weather information for a location."""
    # Simulate weather API call
    return f"Weather in {location}: 72°F, Partly cloudy"
 
def calculator_tool(expression: str) -> str:
    """Calculate mathematical expressions."""
    try:
        # NOTE: eval is unsafe on untrusted input; this demo restricts the
        # character set. Use a real expression parser in production.
        if not set(expression) <= set("0123456789+-*/(). "):
            return "Invalid expression"
        result = eval(expression)
        return f"Calculation result: {result}"
    except Exception:
        return "Invalid expression"
 
def news_tool(topic: str) -> str:
    """Get recent news about a topic."""
    # Simulate news search
    return f"Recent news about {topic}: Major developments reported today"
 
# Create tool definitions
tools = [
    Tool(
        name="search",
        func=search_tool.run,
        description="Search for information on the internet"
    ),
    Tool(
        name="weather",
        func=weather_tool,
        description="Get weather information for a location"
    ),
    Tool(
        name="calculator",
        func=calculator_tool,
        description="Calculate mathematical expressions"
    ),
    Tool(
        name="news",
        func=news_tool,
        description="Get recent news about a topic"
    )
]
 
def tool_selection_node(state: ToolAgentState):
    """Select appropriate tool based on query."""
    query = state["query"]
    tools_desc = "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
 
    prompt = f"""
    Analyze this query and select the best tool to use:
 
    Query: {query}
 
    Available tools:
    {tools_desc}
 
    Which tool should be used and why?
    """
 
    response = llm.invoke(prompt)
 
    return {
        "reasoning_chain": state.get("reasoning_chain", []) + [f"Tool selection: {response.content}"]
    }
 
def tool_execution_node(state: ToolAgentState):
    """Execute the selected tool."""
    query = state["query"]
    reasoning = state["reasoning_chain"][-1] if state["reasoning_chain"] else ""
 
    # Extract tool name from reasoning (simplified)
    selected_tool = None
    for tool in tools:
        if tool.name in reasoning.lower():
            selected_tool = tool
            break
 
    if not selected_tool:
        # Default to the search Tool wrapper; the raw DuckDuckGoSearchRun has a
        # different .name and would not match the branches below
        selected_tool = tools[0]
 
    try:
        # Execute tool
        if selected_tool.name == "search":
            result = selected_tool.run(query)
        elif selected_tool.name == "weather":
            # Extract location from query
            location = "New York"  # Simplified extraction
            result = selected_tool.func(location)
        elif selected_tool.name == "calculator":
            # Extract expression from query
            expression = "2 + 2"  # Simplified extraction
            result = selected_tool.func(expression)
        elif selected_tool.name == "news":
            # Extract topic from query
            topic = "technology"  # Simplified extraction
            result = selected_tool.func(topic)
        else:
            result = "Tool execution failed"
 
        tool_result = {
            "tool": selected_tool.name,
            "input": query,
            "output": result,
            "success": True
        }
 
    except Exception as e:
        tool_result = {
            "tool": selected_tool.name,
            "input": query,
            "output": str(e),
            "success": False
        }
 
    return {
        "tool_results": state.get("tool_results", []) + [tool_result],
        "reasoning_chain": state.get("reasoning_chain", []) + [f"Tool execution: {tool_result}"]
    }
 
def synthesis_node(state: ToolAgentState):
    """Synthesize tool results into final answer."""
    query = state["query"]
    tool_results = state["tool_results"]
 
    results_text = "\n".join([
        f"Tool: {result['tool']}\nOutput: {result['output']}\n"
        for result in tool_results
    ])
 
    prompt = f"""
    Based on the tool results, provide a comprehensive answer to the original query.
 
    Original Query: {query}
 
    Tool Results:
    {results_text}
 
    Provide a clear, helpful answer:
    """
 
    response = llm.invoke(prompt)
 
    return {
        "final_answer": response.content,
        "reasoning_chain": state.get("reasoning_chain", []) + [f"Final synthesis: {response.content}"]
    }
 
# Build tool-using agent workflow
workflow = StateGraph(ToolAgentState)
 
workflow.add_node("select_tool", tool_selection_node)
workflow.add_node("execute_tool", tool_execution_node)
workflow.add_node("synthesize", synthesis_node)
 
workflow.set_entry_point("select_tool")
workflow.add_edge("select_tool", "execute_tool")
workflow.add_edge("execute_tool", "synthesize")
workflow.add_edge("synthesize", END)
 
tool_agent = workflow.compile()
 
# Usage
result = tool_agent.invoke({
    "query": "What's the weather in San Francisco today?",
    "available_tools": [{"name": tool.name, "description": tool.description} for tool in tools],
    "tool_results": [],
    "final_answer": "",
    "reasoning_chain": []
})
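
For straightforward tool routing like this, recent LangGraph releases also ship a prebuilt ReAct-style agent that handles tool selection and execution for you; a minimal sketch (exact import paths and signatures can vary across versions):

from langgraph.prebuilt import create_react_agent

# The model picks tools based on each Tool's description
prebuilt_agent = create_react_agent(llm, tools)

result = prebuilt_agent.invoke({
    "messages": [("user", "What's the weather in San Francisco today?")]
})
print(result["messages"][-1].content)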

👥 Multi-Agent Systems

Collaborative Agent Teams

Create systems where multiple specialized agents work together.

class MultiAgentState(TypedDict):
    task: str
    current_agent: str
    agent_results: List[dict]
    final_synthesis: str
    workflow_log: List[str]
 
def research_agent(state: MultiAgentState):
    """Specialized research agent."""
    task = state["task"]
 
    prompt = f"""
    As a research specialist, gather comprehensive information about: {task}
    Provide key facts, statistics, and context.
    """
 
    response = llm.invoke(prompt)
 
    result = {
        "agent": "researcher",
        "output": response.content,
        "confidence": 0.9
    }
 
    return {
        "agent_results": state.get("agent_results", []) + [result],
        "current_agent": "researcher",
        "workflow_log": state.get("workflow_log", []) + ["Research agent completed"]
    }
 
def analysis_agent(state: MultiAgentState):
    """Specialized analysis agent."""
    task = state["task"]
    research_data = state["agent_results"][-1]["output"] if state["agent_results"] else ""
 
    prompt = f"""
    As an analysis specialist, analyze this research and provide insights:
 
    Task: {task}
    Research Data: {research_data}
 
    Focus on trends, implications, and patterns.
    """
 
    response = llm.invoke(prompt)
 
    result = {
        "agent": "analyst",
        "output": response.content,
        "confidence": 0.85
    }
 
    return {
        "agent_results": state.get("agent_results", []) + [result],
        "current_agent": "analyst",
        "workflow_log": state.get("workflow_log", []) + ["Analysis agent completed"]
    }
 
def writing_agent(state: MultiAgentState):
    """Specialized writing agent."""
    task = state["task"]
    all_data = "\n".join([result["output"] for result in state["agent_results"]])
 
    prompt = f"""
    As a communication specialist, create a well-structured response based on this information:
 
    Original Task: {task}
    Research and Analysis: {all_data}
 
    Create a clear, engaging response that addresses the original question.
    """
 
    response = llm.invoke(prompt)
 
    result = {
        "agent": "writer",
        "output": response.content,
        "confidence": 0.8
    }
 
    return {
        "agent_results": state.get("agent_results", []) + [result],
        "current_agent": "writer",
        "final_synthesis": response.content,
        "workflow_log": state.get("workflow_log", []) + ["Writing agent completed"]
    }
 
def coordinate_agents(state: MultiAgentState):
    """Coordinate which agent should work next."""
    completed_agents = [result["agent"] for result in state.get("agent_results", [])]
 
    agent_sequence = ["researcher", "analyst", "writer"]
 
    for agent in agent_sequence:
        if agent not in completed_agents:
            return {"current_agent": agent}
 
    return {"current_agent": "complete"}
 
def agent_dispatcher(state: MultiAgentState):
    """Dispatch to appropriate agent."""
    current_agent = state["current_agent"]
 
    if current_agent == "researcher":
        return research_agent(state)
    elif current_agent == "analyst":
        return analysis_agent(state)
    elif current_agent == "writer":
        return writing_agent(state)
    else:
        return {"current_agent": "complete"}
 
# Build multi-agent workflow
workflow = StateGraph(MultiAgentState)
 
workflow.add_node("coordinate", coordinate_agents)
workflow.add_node("dispatch", agent_dispatcher)
 
workflow.set_entry_point("coordinate")
workflow.add_edge("coordinate", "dispatch")
 
def should_continue(state: MultiAgentState):
    """Check if multi-agent workflow should continue."""
    if state.get("current_agent") == "complete":
        return "end"
    else:
        return "continue"
 
workflow.add_conditional_edges(
    "dispatch",
    should_continue,
    {
        "continue": "coordinate",
        "end": END
    }
)
 
multi_agent = workflow.compile()
 
# Usage
result = multi_agent.invoke({
    "task": "Explain the impact of remote work on company culture",
    "current_agent": "",
    "agent_results": [],
    "final_synthesis": "",
    "workflow_log": []
})

Competitive Agent Systems

Multiple agents competing to provide the best solution.

class CompetitiveState(TypedDict):
    problem: str
    # Reducer merges the parallel solution nodes' concurrent writes to this key
    agent_solutions: Annotated[List[dict], operator.add]
    best_solution: dict
    evaluation_criteria: dict
 
def solution_agent(state: CompetitiveState, agent_id: str, approach: str):
    """Agent that generates a solution with specific approach."""
    problem = state["problem"]
 
    prompt = f"""
    As Agent {agent_id}, solve this problem using {approach} approach:
 
    Problem: {problem}
 
    Provide your solution and confidence level (0-1).
    """
 
    response = llm.invoke(prompt)
 
    return {
        "agent_id": agent_id,
        "approach": approach,
        "solution": response.content,
        "confidence": 0.8  # Would be parsed from response
    }
 
def creative_solution_node(state: CompetitiveState):
    """Creative approach solution."""
    solution = solution_agent(state, "Creative", "innovative and outside-the-box thinking")
    # Return only the new item; the operator.add reducer appends it safely
    # even though the three solution nodes run in parallel
    return {"agent_solutions": [solution]}

def analytical_solution_node(state: CompetitiveState):
    """Analytical approach solution."""
    solution = solution_agent(state, "Analytical", "data-driven and methodical approach")
    return {"agent_solutions": [solution]}

def practical_solution_node(state: CompetitiveState):
    """Practical approach solution."""
    solution = solution_agent(state, "Practical", "realistic and implementable approach")
    return {"agent_solutions": [solution]}
 
def evaluator_node(state: CompetitiveState):
    """Evaluate and select the best solution."""
    solutions = state["agent_solutions"]
    problem = state["problem"]
 
    solutions_text = "\n".join([
        f"Agent {sol['agent_id']} ({sol['approach']}): {sol['solution']} (Confidence: {sol['confidence']})"
        for sol in solutions
    ])
 
    prompt = f"""
    Evaluate these solutions to the problem and select the best one:
 
    Problem: {problem}
 
    Solutions:
    {solutions_text}
 
    Criteria: Feasibility, Creativity, Effectiveness, Cost-efficiency
 
    Which solution is best and why?
    """
 
    response = llm.invoke(prompt)
 
    # Select best solution (simplified)
    best_solution = max(solutions, key=lambda x: x["confidence"])
 
    return {
        "best_solution": best_solution,
        "evaluation_criteria": {"method": "comparative_analysis", "winner": best_solution["agent_id"]}
    }
 
# Build competitive agent workflow
workflow = StateGraph(CompetitiveState)
 
workflow.add_node("creative_solution", creative_solution_node)
workflow.add_node("analytical_solution", analytical_solution_node)
workflow.add_node("practical_solution", practical_solution_node)
workflow.add_node("evaluate", evaluator_node)
 
# Parallel execution of solution agents
from langgraph.graph import START
 
workflow.add_edge(START, "creative_solution")
workflow.add_edge(START, "analytical_solution")
workflow.add_edge(START, "practical_solution")
 
# Convergence to evaluation
workflow.add_edge("creative_solution", "evaluate")
workflow.add_edge("analytical_solution", "evaluate")
workflow.add_edge("practical_solution", "evaluate")
workflow.add_edge("evaluate", END)
 
competitive_agent = workflow.compile()
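
The three solution nodes run in parallel and their outputs are merged by the operator.add reducer on agent_solutions; a hypothetical invocation (the problem string is just an example):

result = competitive_agent.invoke({
    "problem": "Reduce customer churn for a subscription service",
    "agent_solutions": [],
    "best_solution": {},
    "evaluation_criteria": {}
})
print(f"Winner: {result['best_solution']['agent_id']}")
print(result["best_solution"]["solution"])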

👤 Human-in-the-Loop Agents

Supervised Agent Workflows

Agents that require human approval at key decision points.

class HumanLoopState(TypedDict):
    task: str
    agent_plan: str
    human_feedback: str
    execution_result: str
    approval_required: bool
    approved: bool
 
def planning_node(state: HumanLoopState):
    """Agent creates a plan for human review."""
    task = state["task"]
 
    prompt = f"""
    Create a detailed plan to accomplish this task: {task}
 
    Include:
    1. Specific steps
    2. Required resources
    3. Potential risks
    4. Timeline
    """
 
    response = llm.invoke(prompt)
 
    return {
        "agent_plan": response.content,
        "approval_required": True,
        "approved": False
    }
 
def human_approval_node(state: HumanLoopState):
    """Wait for human approval."""
    plan = state["agent_plan"]
 
    print(f"\n=== Agent Plan ===\n{plan}\n==================")
    print("Do you approve this plan? (yes/no)")
    print("Provide feedback or modifications:")
 
    # In a real system, this would be a web interface or other input method
    approval = input("Approval (yes/no): ").lower() == "yes"
    feedback = input("Feedback: ")
 
    return {
        "approved": approval,
        "human_feedback": feedback,
        "approval_required": False
    }
 
def execution_node(state: HumanLoopState):
    """Execute the approved plan."""
    if state.get("approved", False):
        task = state["task"]
        plan = state["agent_plan"]
        feedback = state.get("human_feedback", "")
 
        prompt = f"""
        Execute this approved plan:
 
        Task: {task}
        Plan: {plan}
        Human Feedback: {feedback}
 
        Provide the execution results.
        """
 
        response = llm.invoke(prompt)
        return {"execution_result": response.content}
    else:
        return {"execution_result": "Plan not approved. Task cancelled."}
 
def revision_node(state: HumanLoopState):
    """Revise plan based on human feedback."""
    original_plan = state["agent_plan"]
    feedback = state["human_feedback"]
    task = state["task"]
 
    prompt = f"""
    Revise this plan based on human feedback:
 
    Task: {task}
    Original Plan: {original_plan}
    Human Feedback: {feedback}
 
    Provide a revised plan addressing the feedback.
    """
 
    response = llm.invoke(prompt)
 
    return {
        "agent_plan": response.content,
        "approval_required": True,
        "approved": False
    }
 
def check_approval(state: HumanLoopState):
    """Check if plan is approved or needs revision."""
    if state.get("approved", False):
        return "execute"
    elif state.get("approval_required", False):
        # Only reachable while approval is still pending, e.g. if an
        # asynchronous approval UI has not responded yet
        return "approve"
    else:
        return "revise"
 
# Build human-in-the-loop workflow
workflow = StateGraph(HumanLoopState)
 
workflow.add_node("plan", planning_node)
workflow.add_node("approve", human_approval_node)
workflow.add_node("execute", execution_node)
workflow.add_node("revise", revision_node)
 
workflow.set_entry_point("plan")
workflow.add_edge("plan", "approve")
 
workflow.add_conditional_edges(
    "approve",
    check_approval,
    {
        "execute": "execute",
        "approve": "approve",  # Wait for approval
        "revise": "revise"
    }
)
 
workflow.add_edge("revise", "approve")  # After revision, seek approval again
workflow.add_edge("execute", END)
 
human_loop_agent = workflow.compile()
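
The input() calls above block the graph and only work in a terminal. In recent LangGraph versions, a more idiomatic pattern is to compile with a checkpointer and interrupt before the approval node, then inject the human's decision and resume; a sketch, assuming the in-memory checkpointer (the thread id and task string are hypothetical):

from langgraph.checkpoint.memory import MemorySaver

# Pause before the human approval step instead of blocking on input()
interruptible_agent = workflow.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["approve"],
)

config = {"configurable": {"thread_id": "plan-001"}}
interruptible_agent.invoke({"task": "Organize a product launch"}, config)

# ...collect approval through your UI, then record it as if "approve" ran
interruptible_agent.update_state(
    config,
    {"approved": True, "human_feedback": "Looks good", "approval_required": False},
    as_node="approve",
)
interruptible_agent.invoke(None, config)  # resume from the checkpoint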

🎯 Advanced Agent Patterns

Hierarchical Agent Systems

Agents that manage other agents.

class HierarchicalState(TypedDict):
    objective: str
    subtasks: List[str]
    agent_assignments: List[dict]
    completed_subtasks: List[dict]
    final_result: str
 
def manager_agent(state: HierarchicalState):
    """Manager agent that breaks down tasks and assigns work."""
    objective = state["objective"]
 
    prompt = f"""
    As a project manager, break down this objective into specific subtasks:
 
    Objective: {objective}
 
    For each subtask, specify:
    1. Description
    2. Required skills (research, analysis, writing, etc.)
    3. Priority (high/medium/low)
    """
 
    response = llm.invoke(prompt)
 
    # Parse subtasks (simplified)
    subtasks = [
        "Research the topic",
        "Analyze findings",
        "Create summary report"
    ]
 
    assignments = [
        {"subtask": subtasks[0], "agent": "researcher", "status": "pending"},
        {"subtask": subtasks[1], "agent": "analyst", "status": "pending"},
        {"subtask": subtasks[2], "agent": "writer", "status": "pending"}
    ]
 
    return {
        "subtasks": subtasks,
        "agent_assignments": assignments
    }
 
def worker_agent(state: HierarchicalState):
    """Worker agent that completes assigned tasks."""
    assignments = state["agent_assignments"]
    objective = state["objective"]
 
    # Find pending assignments
    pending = [a for a in assignments if a["status"] == "pending"]
    if not pending:
        return {"agent_assignments": assignments}
 
    # Work on first pending task
    current_assignment = pending[0]
 
    prompt = f"""
    Complete this subtask as part of the overall objective:
 
    Objective: {objective}
    Subtask: {current_assignment['subtask']}
    Agent Role: {current_assignment['agent']}
 
    Provide the completed work.
    """
 
    response = llm.invoke(prompt)
 
    # Update assignment status
    for assignment in assignments:
        if assignment["subtask"] == current_assignment["subtask"]:
            assignment["status"] = "completed"
            assignment["result"] = response.content
            break
 
    return {
        "agent_assignments": assignments,
        "completed_subtasks": state.get("completed_subtasks", []) + [{
            "subtask": current_assignment["subtask"],
            "agent": current_assignment["agent"],
            "result": response.content
        }]
    }
 
def synthesizer_node(state: HierarchicalState):
    """Synthesize completed work into final result."""
    completed_tasks = state["completed_subtasks"]
    objective = state["objective"]
 
    work_summary = "\n".join([
        f"{task['agent']}: {task['result'][:200]}..."
        for task in completed_tasks
    ])
 
    prompt = f"""
    Synthesize this completed work into a comprehensive final result:
 
    Original Objective: {objective}
 
    Completed Work:
    {work_summary}
 
    Provide a polished final output that achieves the objective.
    """
 
    response = llm.invoke(prompt)
 
    return {"final_result": response.content}
 
def check_completion(state: HierarchicalState):
    """Check if all subtasks are completed."""
    assignments = state["agent_assignments"]
    pending = [a for a in assignments if a["status"] == "pending"]
 
    if pending:
        return "continue_work"
    else:
        return "synthesize"
 
# Build hierarchical workflow
workflow = StateGraph(HierarchicalState)
 
workflow.add_node("manage", manager_agent)
workflow.add_node("work", worker_agent)
workflow.add_node("synthesize", synthesizer_node)
 
workflow.set_entry_point("manage")
workflow.add_edge("manage", "work")
 
workflow.add_conditional_edges(
    "work",
    check_completion,
    {
        "continue_work": "work",
        "synthesize": "synthesize"
    }
)
 
workflow.add_edge("synthesize", END)
 
hierarchical_agent = workflow.compile()
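
A hypothetical invocation; the objective string is only an example:

result = hierarchical_agent.invoke({
    "objective": "Produce a briefing on electric vehicle adoption",
    "subtasks": [],
    "agent_assignments": [],
    "completed_subtasks": [],
    "final_result": ""
})
print(result["final_result"])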

🎛️ Agent Configuration and Deployment

Agent Configuration

class AgentConfig(TypedDict):
    model_name: str
    temperature: float
    max_iterations: int
    tools_available: List[str]
    personality: str
    expertise_domains: List[str]
    error_handling: str
 
def create_configured_agent(config: AgentConfig):
    """Create an agent with specific configuration."""
    llm = ChatOpenAI(
        model=config["model_name"],
        temperature=config["temperature"]
    )
 
    # Customize agent behavior based on configuration
    personality_prompt = f"""
    You are an AI agent with the following characteristics:
    Personality: {config['personality']}
    Expertise: {', '.join(config['expertise_domains'])}
    Error Handling: {config['error_handling']}
 
    Incorporate these traits into your responses and decision-making.
    """
 
    return {
        "llm": llm,
        "config": config,
        "personality_prompt": personality_prompt
    }
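
A hypothetical configuration; the values below are illustrative, not recommendations:

config: AgentConfig = {
    "model_name": "gpt-4o-mini",  # any chat model you have access to
    "temperature": 0.2,           # lower temperature for more predictable agents
    "max_iterations": 5,
    "tools_available": ["search", "calculator"],
    "personality": "concise and cautious",
    "expertise_domains": ["energy", "finance"],
    "error_handling": "retry once, then escalate to a human",
}

agent_bundle = create_configured_agent(config)
print(agent_bundle["personality_prompt"])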

🎯 Best Practices

1. Agent Design

  • Clear purpose - Define specific agent responsibilities
  • Appropriate complexity - Match complexity to task requirements
  • Tool selection - Provide relevant and reliable tools
  • Error resilience - Handle failures gracefully (see the wrapper sketch below)
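
One lightweight way to get error resilience is to wrap each tool so a failure becomes a message the agent can reason about rather than an exception that halts the graph; a minimal sketch (resilient is a hypothetical helper, not a LangGraph API):

def resilient(func, fallback="Tool unavailable; continue without it."):
    """Wrap a tool function so failures return a message instead of raising."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            return f"{fallback} (error: {e})"
    return wrapper

safe_weather = resilient(weather_tool)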

2. Performance Optimization

  • Efficient reasoning - Minimize unnecessary deliberation
  • Tool caching - Cache expensive tool results (see the caching sketch after this list)
  • Parallel execution - Use parallel workflows when possible
  • Resource management - Monitor and limit resource usage
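
For tool caching, functools.lru_cache covers the common case of repeated identical calls within a single process; a minimal sketch reusing the weather_tool defined earlier:

from functools import lru_cache

@lru_cache(maxsize=128)
def cached_weather(location: str) -> str:
    """Repeated lookups for the same location hit the in-memory cache."""
    return weather_tool(location)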

3. Safety and Reliability

  • Input validation - Validate all inputs and tool results
  • Fallback mechanisms - Provide backup options
  • Human oversight - Include human-in-the-loop for critical decisions
  • Logging and monitoring - Track agent behavior and performance

4. Testing and Validation

def test_agent_behavior():
    """Test agent behavior with various inputs."""
    test_cases = [
        {"task": "simple calculation", "expected_behavior": "use calculator tool"},
        {"task": "current events", "expected_behavior": "use search tool"},
        {"task": "creative writing", "expected_behavior": "generate original content"}
    ]
 
    for case in test_cases:
        # "agent" stands in for any compiled agent graph from this page
        result = agent.invoke({"task": case["task"]})
        # Validate expected behavior (string matching is coarse but simple)
        assert case["expected_behavior"] in str(result)
 
def test_multi_agent_coordination():
    """Test multi-agent system coordination."""
    result = multi_agent.invoke({"task": "complex task"})
 
    # Verify all agents contributed
    agent_outputs = [r["agent"] for r in result["agent_results"]]
    expected_agents = ["researcher", "analyst", "writer"]
    assert all(agent in agent_outputs for agent in expected_agents)

Master agent systems to create autonomous, intelligent AI applications. Combine agents with workflows and state management for complete AI solutions.