Agentic

ReAct Pattern

A prompting pattern where the model alternates between reasoning ("Thought"), acting ("Action" = tool call), and observing ("Observation" = tool result), looping until it produces a final answer.

First published April 14, 2026

ReAct = Reason + Act. Introduced by Yao et al. (2022), it's the foundational pattern for tool-using agents. The model explicitly writes out its reasoning before each action, which makes behavior debuggable and lets you catch plan errors before they execute.

Modern frameworks (LangChain agents, LlamaIndex, OpenAI Assistants) wrap ReAct in structured function calling, so you rarely see raw "Thought:" traces anymore, but the underlying shape is still this loop.
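A minimal sketch of that loop in Python: prompt the model, parse its Action, run the matching tool, append the Observation, and repeat. Here `llm` is a placeholder for any text-in/text-out model call, and the tool registry is illustrative, not a real framework API.

```python
import re

def react_loop(llm, tools, task, max_steps=8):
    """Minimal ReAct driver.

    `llm` is any callable that takes the transcript so far and returns
    the model's next step; `tools` maps action names to Python callables.
    """
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # model emits Thought + Action, or Final Answer
        transcript += step + "\n"

        answer = re.search(r"Final Answer:\s*(.+)", step)
        if answer:
            return answer.group(1).strip()

        action = re.search(r"Action:\s*(\w+)\((.*)\)", step)
        if action:
            name, arg = action.group(1), action.group(2)
            result = tools[name](arg)  # execute the tool call
            transcript += f"Observation: {result}\n"

    raise RuntimeError("no final answer within step budget")
```

The parse-then-execute split is what makes the pattern auditable: every Thought and Observation lands in the transcript, so a failed run can be replayed step by step.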

Example Prompt

Task: Find out when the user's last order shipped.

Thought: I need the user's order history.
Action: get_orders(user_id=42)
Observation: [{"id": 555, "status": "shipped", "shipped_at": "2026-04-08"}, ...]
Thought: The most recent shipped order was on 2026-04-08.
Final Answer: Your last order shipped on April 8, 2026.
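The final Thought in the trace, filter to shipped orders, take the latest, is a step you could also verify deterministically once the Observation is in hand. A sketch, assuming the observation is JSON; the sibling orders below are hypothetical stand-ins for the elided entries:

```python
import json

# Hypothetical observation payload; only the first entry comes from the trace.
observation = """[
  {"id": 555, "status": "shipped", "shipped_at": "2026-04-08"},
  {"id": 553, "status": "shipped", "shipped_at": "2026-03-30"},
  {"id": 560, "status": "pending", "shipped_at": null}
]"""

orders = json.loads(observation)
shipped = [o for o in orders if o["status"] == "shipped"]
latest = max(shipped, key=lambda o: o["shipped_at"])  # ISO dates sort lexically
print(latest["shipped_at"])  # → 2026-04-08
```

Cross-checking the model's stated reasoning against a computation like this is one way to catch a confabulated Observation before it reaches the user.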

When to use it

  • You need the agent's reasoning traceable for debugging or auditing
  • Complex tasks where jumping straight to action would be premature
  • Teaching smaller models to use tools reliably

When NOT to use it

  • Frontier reasoning models that already think internally (it duplicates work)
  • Simple 1-tool-call tasks where the loop is overhead
  • Latency-sensitive paths