
Building a Reasoning AI Agent with LlamaIndex: ReAct and Function Agent in Python

A workshop for mid-level developers who want to build a reasoning agent in Python with LlamaIndex step by step, compare ReActAgent and FunctionAgent, learn tool design, debugging, and guardrails, and finish with a complete listing of a working agent.

8 hours · 6 modules · Certificate

A practical workshop course showing how to design, implement, and run a reasoning AI agent from scratch using the LlamaIndex library. Participants go through the full process: choosing the agent architecture, defining tools in Python, designing system instructions, handling memory and state, and finally testing, diagnosing errors, and assembling a complete end-to-end solution.

The course deliberately compares the two approaches supported by the current LlamaIndex documentation: FunctionAgent, preferred for models with native function/tool calling, and ReActAgent, useful when you want an explicit reason-act cycle or are working with models without native function calling. The workshop follows current patterns from the LlamaIndex documentation, where agents are built from the workflow package, tools can be ordinary Python functions or QueryEngineTool instances, and the broader architectural context is provided by event-driven Workflows.

Beyond the code itself, the course explains why specific design decisions are correct, which errors occur most often, and how to avoid them in production practice.
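
To give a sense of the starting point, here is a minimal sketch of the workflow-based FunctionAgent pattern the course builds on, following the introductory agent examples in the LlamaIndex documentation. The model name, tool, and prompt are placeholders, and exact imports can vary between library versions.

    import asyncio

    from llama_index.core.agent.workflow import FunctionAgent
    from llama_index.llms.openai import OpenAI


    def multiply(a: float, b: float) -> float:
        """Multiply two numbers and return the product."""
        return a * b


    # A plain Python function becomes a tool; its signature and docstring
    # are what the LLM sees when deciding whether and how to call it.
    agent = FunctionAgent(
        tools=[multiply],
        llm=OpenAI(model="gpt-4o-mini"),  # placeholder model name
        system_prompt="You are a helpful assistant. Use tools for any arithmetic.",
    )


    async def main() -> None:
        response = await agent.run("What is 1234 * 4567?")
        print(str(response))


    if __name__ == "__main__":
        asyncio.run(main())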

What you will learn

  • You will explain the differences between ReActAgent and FunctionAgent and choose the right type of agent for the capabilities of the LLM model.
  • You will build a working LlamaIndex agent in Python from your own function tools with properly described signatures and docstrings (see the tool sketch after this list).
  • You will design the system instruction, tool descriptions, and the input/output contract so that the agent more often chooses the right actions.
  • You will add conversation state and execution context using workflow mechanisms and learn how to control the flow of multi-step reasoning.
  • You will diagnose the most common errors: poor tool selection, hallucinated arguments, agent loops, overly broad prompts, and unclear function descriptions.
  • You will compare complete before/after artifacts: poorly designed and well-designed tools, prompts, and the agent’s decision flows.
  • You will integrate function-based tools with QueryEngineTool-based tools so the agent can use both application logic and the knowledge layer.
  • You will perform manual and scenario-based tests of the agent, add event logging, and prepare a quality checklist before deployment.
  • By the end of the course, you will assemble a complete listing of the entire agent, along with an explanation of each code section and the reasoning behind the architecture.
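
As mentioned in the list above, a central exercise is writing function tools whose signatures and docstrings actually help the agent choose and call them correctly. Here is a hedged sketch of what such a tool can look like, using FunctionTool.from_defaults from llama_index.core.tools; the function name, fields, and descriptions are illustrative placeholders, not part of the course material.

    from llama_index.core.tools import FunctionTool


    def get_order_status(order_id: str, include_history: bool = False) -> dict:
        """Return the current status of an order.

        Args:
            order_id: Public order identifier, e.g. "ORD-2024-0113".
            include_history: If True, also include previous status changes.

        Returns:
            A dict with keys "order_id", "status", and optionally "history".
        """
        # Illustrative stand-in for a real lookup (database, API, etc.).
        status = {"order_id": order_id, "status": "shipped"}
        if include_history:
            status["history"] = ["created", "paid", "shipped"]
        return status


    # Wrapping the function makes the tool name and description explicit
    # instead of relying on whatever the library would infer by default.
    order_status_tool = FunctionTool.from_defaults(
        fn=get_order_status,
        name="get_order_status",
        description=(
            "Look up the current status of a single order by its order_id. "
            "Use this whenever the user asks where an order is."
        ),
    )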

Prerequisites

Intermediate-level knowledge of Python, basics of working with LLM APIs, the ability to run projects in virtualenv or uv, and basic knowledge of JSON and Python type hints. Experience with prompt engineering and basic RAG knowledge will be helpful, but the course also guides you step by step through the architectural decisions. Participants should have a Python 3.10+ environment configured and an API key for their chosen model provider.

Course syllabus

  • Course use case: a developer agent that chooses a tool, calls an API, and justifies the result
  • The current LlamaIndex stack for agents: tools, Context, workflow, and instrumentation
  • ReActAgent vs FunctionAgent in practice: decision table, model limitations, and the cost of wrong choices
  • Working artifact: ReAct vs FunctionAgent selection matrix for 6 task types
  • Quiz: recognizing the appropriate agent architecture based on system requirements
  • From a plain Python function to FunctionTool: signature, docstring, types, and input contract
  • Weak vs strong tool description: a comparison of full before/after tool definitions
  • Designing JSON arguments without pitfalls: enums, default values, optional fields, and validation
  • The return_direct flag, response format, and when a tool should return raw data instead of narration
  • Workshop: preparing a tool quality checklist and a specification template for the repository
  • Quiz: identifying errors in tool definitions that cause bad tool calling
  • Project setup: virtualenv or uv, directory structure, dependencies, and API key configuration
  • Implementing a shared toolset: calculation, API data retrieval, and a simple local lookup
  • Building a ReActAgent: control prompt, Thought/Action steps, and final answer control
  • Building a FunctionAgent: native tool calling, Context, and state handling between calls
  • The same task, two agents: a full worked example from input to output with answer critique
  • Quiz: which part of the implementation is responsible for tool selection, state, and response format
  • How to stream events and log agent execution: AgentInput, ToolCall, ToolCallResult, and the final output (see the logging sketch after this syllabus)
  • Typical ReActAgent mistakes: reasoning loops, wrong tool selection, and overriding the system prompt
  • Common FunctionAgent mistakes: mismatched schema, wrong arguments, and brittle data serialization
  • Debug notebook: 5 end-to-end failures and how to fix them step by step with before/after logs
  • Team artifact: a scorecard for evaluating agent responses, tool calls, and the quality of justifications
  • Quiz: diagnosing the source of an error based on the execution log
  • Agent responsibility boundaries: what the model should decide, and what should be hard-coded in the application
  • Memory and state in LlamaIndex: when to use Context, when to isolate sessions and reset history
  • Guardrails for tools and responses: validation, retry, timeouts, fallbacks, and safe error messages
  • Cost and latency: how to reduce the number of model and tool calls without losing quality
  • Architecture comparison: single agent, agent with workflow, and a lightweight agent on top of RAG
  • Quiz: choosing hardening mechanisms for specific production scenarios
  • Project brief: a Python agent for handling operational queries over APIs and local tools
  • Architecture choice and implementation plan: Mermaid flow diagram, tool list, and success criteria
  • MVP implementation: the full journey from the first prompt to a working agent response
  • Scenario and regression tests: case set, expected results, and evaluation table
  • Hardening review: checklist before production, technical debt, and a plan for the next iterations
  • Final quiz: selecting architectural improvements based on symptoms from the mini-project
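
For the module on streaming and logging agent execution (referenced in the syllabus above), here is a minimal sketch of how the workflow-based agents expose intermediate events. The event classes are the ones named in the syllabus and live in the LlamaIndex agent workflow package; attribute names may differ slightly between versions, and the agent argument is assumed to be a FunctionAgent or ReActAgent built as shown earlier.

    from llama_index.core.agent.workflow import (
        AgentInput,
        AgentOutput,
        ToolCall,
        ToolCallResult,
    )


    async def run_with_logging(agent, user_msg: str) -> str:
        # run() returns a handler whose events can be streamed before the
        # final response is awaited.
        handler = agent.run(user_msg)

        async for event in handler.stream_events():
            if isinstance(event, AgentInput):
                print(f"[input] {len(event.input)} message(s) sent to the LLM")
            elif isinstance(event, ToolCall):
                print(f"[tool call] {event.tool_name}({event.tool_kwargs})")
            elif isinstance(event, ToolCallResult):
                print(f"[tool result] {event.tool_name} -> {event.tool_output}")
            elif isinstance(event, AgentOutput):
                print("[output] agent produced a response")

        response = await handler
        return str(response)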

FAQ

What will I learn in this course?

You will learn from scratch how to build a reasoning AI agent in Python using LlamaIndex: from choosing the architecture, through defining tools and system prompts, to handling memory, state, debugging, and running a full end-to-end solution.

Why does the course compare ReActAgent and FunctionAgent?

Because both approaches matter in practice. The current LlamaIndex documentation indicates that FunctionAgent should be preferred for models supporting native function calling, while ReActAgent remains very useful where greater model universality and an explicit action flow matter. This lets the course explain not only how to build an agent, but also when to choose each approach.
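
In code, the two approaches end up looking almost identical, which is part of why the course can compare them side by side. A hedged sketch, assuming both classes are imported from the LlamaIndex agent workflow package; the tool and model name are placeholders.

    from llama_index.core.agent.workflow import FunctionAgent, ReActAgent
    from llama_index.llms.openai import OpenAI


    def add(a: float, b: float) -> float:
        """Add two numbers and return the sum."""
        return a + b


    llm = OpenAI(model="gpt-4o-mini")  # placeholder model name

    # Preferred when the model supports native function/tool calling.
    function_agent = FunctionAgent(tools=[add], llm=llm)

    # Explicit Thought/Action/Observation loop driven by the prompt; works
    # even with models that have no native tool-calling support.
    react_agent = ReActAgent(tools=[add], llm=llm)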

Is learning to build AI agents worth it right now?

Yes. The market is rapidly moving toward agentic systems: Gartner has predicted that by the end of 2026, 40% of enterprise applications will include specialized AI agents, up from less than 5% in 2025. This makes practical skills in building agents with tools, memory, and flow control increasingly valuable.

Who is this course for?

For Python developers, AI/ML engineers, people building LLM applications, and practitioners who want to move from simple chatbots to agents that perform tasks using tools, reasoning logic, and state control.

Do I need prior experience with LlamaIndex or AI agents?

No. The course guides you step by step through the most important elements of the ecosystem needed to build an agent. However, basic Python knowledge and a general understanding of how language models work will be useful.

How is this course different from other AI courses?

This is a workshop-style course focused on implementation rather than theory. Instead of a general overview, you get the design process, concrete architectural decisions, work with Python tools, the agent's memory and state, and methods for diagnosing errors in a real solution.

Building a Reasoning AI Agent with LlamaIndex: ReAct and Function Agent in Python
38 USD (regular price 75 USD)
Try the free preview
  • 8 hours
  • Intermediate
  • Certificate on completion
  • Access immediately after purchase
