
Agent Design Patterns

Building Intelligent AI Systems

A comprehensive visual guide to building AI agents. Learn the core design patterns for creating autonomous systems that reason, plan, and take action.

15 min read · Intermediate · Visual Guide

What Are AI Agents?

AI agents are autonomous systems that can perceive their environment, make decisions, and take actions to achieve goals. Unlike simple LLM completions that generate a single response, agents can:

Break down complex tasks

Decompose large problems into manageable steps and execute them sequentially or in parallel.

Use external tools

Call APIs, query databases, run code, and interact with external systems dynamically.

Iterate and improve

Reflect on outputs, identify errors, and refine responses through multiple iterations.

Reason about actions

Think through decisions step-by-step before taking action, explaining their reasoning.

But building effective agents requires more than just prompting an LLM to "be autonomous." You need structured patterns that guide how agents perceive, reason, plan, and act.

In this guide, we'll explore the four core design patterns that power modern AI agents:

1. ReAct (Reasoning + Acting)

Interleave reasoning traces with actions for grounded decision-making

2. Reflection & Self-Critique

Evaluate and improve outputs through iterative refinement

3. Planning Agents

Break down complex tasks into executable plans before acting

4. Tool-Using Agents

Extend capabilities by calling external tools and APIs dynamically

ReAct: Reasoning + Acting

The ReAct pattern combines reasoning traces with task-specific actions in an interleaved manner. Instead of jumping straight to action, the agent explicitly reasons about what to do next, why, and what it expects to learn.

Thought

Reason about the current situation and decide next action

Action

Execute the selected action or tool call

Observation

Receive feedback and results from the environment

Loop continues until task complete

How ReAct Works

  1. Thought: Agent reasons about the current state and decides what action to take
  2. Action: Agent executes a specific action (e.g., search, calculate, call API)
  3. Observation: Agent receives feedback from the environment
  4. Repeat: Agent continues the thought-action-observation loop until the task is complete
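
The loop above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the `llm.next_step` method and the `tools` registry are hypothetical stand-ins for a real model client and tool set.

```python
# Minimal ReAct driver. `llm.next_step` and `tools` are hypothetical
# stand-ins for a real model client and a dict of callable tools.
def react_loop(task, llm, tools, max_steps=10):
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Thought: ask the model for its reasoning and chosen action
        step = llm.next_step(history)  # e.g. {"thought": ..., "action": ..., "args": ...}
        history.append(f"Thought: {step['thought']}")
        if step["action"] == "finish":  # model signals the task is done
            return step["args"]["answer"]
        # Action: execute the chosen tool with the extracted arguments
        history.append(f"Action: {step['action']}({step['args']})")
        # Observation: feed the tool's result back into the context
        observation = tools[step["action"]](**step["args"])
        history.append(f"Observation: {observation}")
    raise RuntimeError("Max steps reached without an answer")
```

Note the `max_steps` cap: without it, a confused model can loop on thought-action-observation indefinitely, which is one of the limitations listed below.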

Example: Multi-Step Question Answering

Thought:

I need to find when React was first released to answer this question.

Action:

search("React framework initial release date")

Observation:

React was released by Facebook in May 2013.

Thought:

Now I need to find the current year to calculate how long ago that was.

Action:

get_current_date()

Observation:

The current year is 2026.

Thought:

I have enough information now to answer the question.

React was released in May 2013, approximately 13 years ago.

When to Use

  • Multi-step reasoning tasks
  • Tasks requiring external information
  • Debugging and troubleshooting
  • Exploratory problem-solving

Limitations

  • Can be verbose and slow
  • May get stuck in reasoning loops
  • Requires careful prompt engineering
  • Higher token costs due to traces

Reflection & Self-Critique

The Reflection pattern enables agents to evaluate their own outputs, identify errors or areas for improvement, and iteratively refine their responses. This pattern is inspired by how humans revise and improve their work through self-review.

Generate

Create initial output or solution

Critique

Evaluate quality, identify issues and improvements

Refine

Improve output based on critique feedback

Iterate until quality threshold met

How Reflection Works

  1. Generate: Agent produces an initial output or solution
  2. Critique: Agent evaluates the output against criteria (correctness, completeness, clarity)
  3. Refine: Agent improves the output based on identified issues
  4. Iterate: Repeat the critique-refine loop until the quality threshold is met or max iterations are reached
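
As a rough sketch, the generate-critique-refine loop looks like this. The `generate`, `critique`, and `refine` functions are hypothetical wrappers around separate LLM calls:

```python
# Generate-critique-refine loop. `generate`, `critique`, and `refine`
# are hypothetical wrappers around LLM calls.
def reflect(task, generate, critique, refine, max_iters=3):
    draft = generate(task)
    for _ in range(max_iters):
        feedback = critique(task, draft)  # e.g. {"ok": bool, "issues": [...]}
        if feedback["ok"]:                # quality threshold met, stop early
            break
        draft = refine(task, draft, feedback["issues"])
    return draft
```

The `max_iters` bound matters: each pass is a full LLM round-trip, so the cost and latency limitations below grow linearly with the iteration count.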

Example: Code Generation with Self-Correction

Initial Generation:

def calculate_average(numbers):
    return sum(numbers) / len(numbers)

Self-Critique:

This function will raise a ZeroDivisionError if the list is empty. It should handle edge cases and validate input.

Refined Output:

def calculate_average(numbers):
    """Calculate the average of a list of numbers."""
    if not numbers:
        raise ValueError("Cannot calculate average of empty list")
    if not all(isinstance(n, (int, float)) for n in numbers):
        raise TypeError("All elements must be numbers")
    return sum(numbers) / len(numbers)

When to Use

  • Code generation and refactoring
  • Content creation (writing, editing)
  • Complex problem-solving
  • Tasks requiring high-quality output

Limitations

  • Significantly higher latency
  • Multiple LLM calls = higher costs
  • May misjudge outputs that were already correct
  • Needs clear evaluation criteria

Planning Agents

Planning agents decompose complex tasks into structured plans before taking action. Rather than acting immediately, they first create a roadmap of steps needed to achieve the goal, then execute each step systematically.

Planning Phase

1. Understand Goal

Analyze requirements and constraints

2. Decompose Task

Break into sequential sub-tasks

3. Create Plan

Generate structured execution plan

Execution Phase

4. Execute Steps

Work through plan systematically

5. Monitor Progress

Track completion and identify issues

6. Adapt Plan

Adjust strategy if obstacles arise

How Planning Agents Work

  1. Understand: Agent analyzes the high-level goal and constraints
  2. Decompose: Agent breaks the goal into a sequence of sub-tasks or milestones
  3. Execute: Agent works through each step in the plan, adapting as needed
  4. Monitor: Agent tracks progress and adjusts the plan if obstacles arise
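
A plan-then-execute skeleton can be sketched as follows. The `make_plan`, `run_step`, and `replan` functions are hypothetical LLM-backed helpers; the point is the control flow, not the prompts:

```python
# Plan-then-execute sketch. `make_plan`, `run_step`, and `replan` are
# hypothetical LLM-backed functions; only the control flow is shown.
def plan_and_execute(goal, make_plan, run_step, replan, max_replans=2):
    plan = make_plan(goal)                      # list of step descriptions
    results, replans = [], 0
    while plan:
        step = plan.pop(0)
        outcome = run_step(step, results)       # {"ok": bool, "output": ...}
        if outcome["ok"]:
            results.append(outcome["output"])
        elif replans < max_replans:
            replans += 1
            plan = replan(goal, step, results)  # rebuild the remaining plan
        else:
            raise RuntimeError(f"Step failed: {step}")
    return results
```

The `max_replans` bound keeps the Monitor/Adapt phase from replanning forever when a step is genuinely impossible.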

Example: Building a Web Scraper

Goal:

Build a web scraper to extract product prices from an e-commerce site

Generated Plan:

  1. Inspect the target website structure and identify product price selectors
  2. Set up a Python environment with requests and BeautifulSoup
  3. Write code to fetch HTML content and parse product data
  4. Handle pagination to scrape multiple pages
  5. Add error handling for network issues and invalid HTML
  6. Export data to CSV format
  7. Test with sample URLs and validate output

The agent then executes each step sequentially, adjusting the plan if issues arise (e.g., if the site uses JavaScript rendering, add Selenium to the plan).

When to Use

  • Complex, multi-step projects
  • Tasks requiring coordination
  • Long-running workflows
  • Resource allocation problems

Limitations

  • Plans may become outdated
  • Requires good task decomposition
  • Overhead of planning phase
  • Needs flexibility to adapt

Tool-Using Agents

Tool-using agents extend their capabilities by calling external tools, APIs, and functions dynamically. They decide which tools to use based on the task at hand, transforming LLMs from text generators into action-taking systems.

AI Agent

Decides which tool to use

Web Search

Calculator

Database

Send Email

Agent dynamically selects and calls tools

How Tool-Using Agents Work

  1. Tool Registry: Agent has access to a set of available tools with descriptions
  2. Tool Selection: Agent decides which tool(s) to use based on the user query
  3. Parameter Extraction: Agent extracts required parameters from context
  4. Execution: System executes the tool call and returns results to the agent
  5. Response: Agent synthesizes tool outputs into a natural language response
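
The registry-and-dispatch half of this flow can be sketched in a few lines. The tool functions are toy placeholders, and `model_choice` stands in for whatever structured output (e.g. a function-calling response) your model returns:

```python
# Minimal tool registry and dispatcher. The tools are toy placeholders;
# `model_choice` stands in for the model's structured tool-call output.
TOOLS = {
    "get_weather": lambda location: f"18°C in {location}",
    "calculate": lambda expression: str(eval(expression)),  # sandbox eval in production!
}

def dispatch(model_choice):
    name, args = model_choice["name"], model_choice["args"]
    if name not in TOOLS:
        return f"Unknown tool: {name}"   # surface the error back to the model
    try:
        return TOOLS[name](**args)
    except Exception as exc:             # tool failures become observations, not crashes
        return f"Tool error: {exc}"
```

Returning errors as strings rather than raising lets the agent see what went wrong and pick a different tool or argument on the next turn.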

Example Tool Registry

search_web(query: str)

Search the web for current information

get_weather(location: str)

Get current weather for a location

calculate(expression: str)

Evaluate mathematical expressions

query_database(sql: str)

Execute SQL queries on the database

Example Tool Usage Flow

User Query:

"What's the weather in Paris and how do I say 'hello' in French?"

Tool Call 1:

get_weather(location="Paris, France")

Result 1:

{temperature: "18°C", condition: "Partly cloudy"}

Tool Call 2:

search_web(query="how to say hello in French")

Result 2:

"Bonjour" is the standard French greeting for "hello"

The weather in Paris is currently 18°C and partly cloudy. In French, you say "Bonjour" for hello!

When to Use

  • Tasks requiring real-time data
  • Integration with external systems
  • Computations beyond LLM capabilities
  • Actions in physical/digital environments

Limitations

  • Requires well-documented tools
  • Tool selection can be unreliable
  • Needs proper error handling
  • Security considerations for tool access

Choosing the Right Pattern

Each pattern solves different problems. Here's a quick comparison to help you choose:

Pattern    | Best For                                    | Complexity | Cost
-----------|---------------------------------------------|------------|------------
ReAct      | Multi-step reasoning, exploratory tasks     | Medium     | Medium-High
Reflection | High-quality outputs, iterative refinement  | High       | High
Planning   | Complex projects, multi-step coordination   | High       | Medium
Tool Use   | Real-time data, external integrations       | Low-Medium | Low-Medium

Combining Patterns

The most powerful agents often combine multiple patterns:

ReAct + Tool Use

Agent reasons about which tool to use, executes it, observes results, and continues

Planning + Reflection

Create a plan, execute steps, reflect on outcomes, and adjust the plan as needed

Planning + ReAct + Tool Use

Decompose task into plan, use ReAct loop for each step, call tools as needed

Production Considerations

Building agents for production requires more than just implementing patterns. Here are key considerations:

Safety & Security

  • Validate tool inputs to prevent injection attacks
  • Implement rate limiting on tool calls
  • Sandbox tool execution environments
  • Add human-in-the-loop for sensitive actions
  • Log all agent decisions for auditability

Reliability & Monitoring

  • Set max iterations to prevent infinite loops
  • Implement fallback strategies for failures
  • Track success rates and failure modes
  • Monitor token usage and costs per task
  • Add timeouts for long-running operations
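
Two of these guards, the iteration cap and the wall-clock timeout, combine naturally into one loop wrapper. A minimal sketch, where `step_fn` is a hypothetical function running one agent step and returning `None` until it has an answer:

```python
import time

# Reliability guard: cap both iterations and wall-clock time.
# `step_fn` is a hypothetical function that runs one agent step and
# returns None until it has a final answer.
def run_with_budget(step_fn, max_iters=10, max_seconds=30.0):
    deadline = time.monotonic() + max_seconds
    for i in range(max_iters):
        if time.monotonic() > deadline:
            raise TimeoutError(f"budget exhausted after {i} iterations")
        result = step_fn(i)        # one thought-action-observation step
        if result is not None:     # a non-None result means "done"
            return result
    raise RuntimeError(f"no answer after {max_iters} iterations")
```

Both failure paths raise distinct exceptions, so monitoring can track "ran out of steps" and "ran out of time" as separate failure modes.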

Performance Optimization

  • Cache tool results when appropriate
  • Use smaller models for simpler sub-tasks
  • Parallelize independent tool calls
  • Implement streaming for long responses
  • Optimize prompts to reduce token usage
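
For the caching point, Python's standard library often suffices. A sketch with a stand-in backend (the counter exists only to show the cache working; only cache tools whose results are stable over a session, not live data like stock prices):

```python
from functools import lru_cache

CALLS = {"count": 0}  # counter only exists to demonstrate the cache

def _expensive_search(query):
    # stand-in for a slow network or API call
    CALLS["count"] += 1
    return f"results for {query}"

@lru_cache(maxsize=256)
def search_web(query: str) -> str:
    # repeated identical queries hit the cache, not the backend
    return _expensive_search(query)
```

For time-sensitive tools, pair this with an expiry (e.g. include a coarse timestamp in the cache key) rather than caching indefinitely.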

Testing & Evaluation

  • • Build test suites with diverse scenarios
  • • Evaluate output quality systematically
  • • Test edge cases and failure modes
  • • Use evaluation frameworks (e.g., LangSmith)
  • • A/B test different prompts and patterns

Key Takeaway

Start simple with tool-using agents or basic ReAct loops. Add complexity (planning, reflection) only when needed. Monitor your agents closely in production, and always have fallbacks for when they fail. The best agent architecture is the simplest one that solves your problem reliably.

Ready to build your own AI agent?

Let's discuss how these patterns can solve your specific use case. I can help you design, implement, and deploy production-ready AI agents.

Get in Touch