
Building a Reasoning AI Agent with LlamaIndex: ReAct and Function Agent in Python

A workshop for mid-level developers who want to build a reasoning agent in Python with LlamaIndex step by step: compare ReActAgent and FunctionAgent, learn tool design, debugging, and guardrails, and finish with a complete listing of a working agent.

8 hours · 6 modules · Certificate

A practical, workshop-style course showing how to design, implement, and run a reasoning AI agent from scratch using the LlamaIndex library. You will go through the full process: choosing the agent architecture, defining tools in Python, designing system instructions, handling memory and state, and finally testing, diagnosing errors, and assembling a complete end-to-end solution.

The course deliberately compares two approaches supported by the current LlamaIndex documentation: FunctionAgent, preferred for models with native function/tool calling, and ReActAgent, useful when you want an explicit reason-act cycle or are working with a model without native function calling. The workshop follows current patterns from the LlamaIndex documentation, in which agents are built from the workflow package, tools can be plain Python functions or a QueryEngineTool, and the broader architectural context is event-driven Workflows. Beyond the code itself, the course explains why specific design decisions are correct, which errors occur most often, and how to avoid them in production practice.
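To make the comparison concrete, here is a minimal sketch of the FunctionAgent pattern from the current `llama_index.core.agent.workflow` API. The OpenAI provider and model name are assumptions (any LLM class works), and the agent only actually runs when an API key is present:

```python
import asyncio
import os

def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b

async def main() -> None:
    # Imported lazily so the sketch can be read and run without llama-index installed.
    from llama_index.core.agent.workflow import FunctionAgent
    from llama_index.llms.openai import OpenAI  # assumed provider

    agent = FunctionAgent(
        tools=[multiply],  # a plain Python function is wrapped into a tool automatically
        llm=OpenAI(model="gpt-4o-mini"),  # model name is an assumption
        system_prompt="You are a careful assistant. Use tools for any arithmetic.",
    )
    response = await agent.run("What is 12.5 * 8?")
    print(response)

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    asyncio.run(main())
```

Note that the tool is just a typed Python function with a docstring; LlamaIndex derives the tool schema from the signature, which is exactly why the course spends a full module on tool design.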

What you will learn

  • You will explain the differences between ReActAgent and FunctionAgent and choose the right agent type for the LLM’s capabilities.
  • You will build a working LlamaIndex agent in Python based on your own tool functions and properly described signatures and docstrings.
  • You will design the system instruction, tool descriptions, and input/output contract so that the agent more often chooses the correct actions.
  • You will add conversation state and execution context using workflow mechanisms and learn how to control the flow of multi-step reasoning.
  • You will diagnose the most common errors: poor tool selection, hallucinated arguments, agent loops, overly broad prompts, and unclear function descriptions.
  • You will compare complete before/after artifacts: poorly designed and well-designed tools, prompts, and agent decision flows.
  • You will integrate function-based tools with QueryEngineTool-based tools so the agent can use both application logic and the knowledge layer.
  • You will perform manual and scenario-based tests of the agent, add event logging, and prepare a quality checklist before deployment.
  • By the end of the course, you will assemble a complete listing of the entire agent, along with an explanation of each code section and the reasoning behind the architecture.
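The "properly described signatures and docstrings" outcome above can be illustrated with a weak-vs-good contrast. The pricing example below is illustrative (not taken from the course), but it shows the kind of contract that automatic schema derivation can actually use:

```python
# Weak: vague name, no types, no docstring -- the model must guess
# what x and y mean and in which units the result comes back.
def calc(x, y):
    return x * (1 + y)

# Better: precise name, typed signature, and a docstring that states
# units, argument meaning, valid ranges, and the return contract.
def gross_price(net_price: float, vat_rate: float) -> float:
    """Return the gross price given a net price and a VAT rate.

    Args:
        net_price: Net price in EUR, must be >= 0.
        vat_rate: VAT rate as a fraction, e.g. 0.23 for 23%.
    """
    if net_price < 0 or not 0 <= vat_rate < 1:
        raise ValueError("net_price must be >= 0 and vat_rate in [0, 1)")
    return round(net_price * (1 + vat_rate), 2)
```

The validation and the explicit `ValueError` also matter: a clear exception message is returned to the model, which can then correct its arguments instead of looping on a silent failure.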

Prerequisites

Intermediate-level knowledge of Python, basics of working with LLM APIs, ability to run projects in virtualenv or uv, basic knowledge of JSON and function typing. Experience with prompt engineering and basic RAG knowledge will be helpful, but the course also guides you step by step through architectural decisions. The participant should have a configured Python 3.10+ environment and an API key for the chosen model provider.

Course syllabus

  • What the agent API in LlamaIndex looks like today: `llama_index.core.agent.workflow`, without guessing at the architecture
  • ReActAgent vs FunctionAgent on one task: comparing the reasoning -> tool -> answer flow
  • Why FunctionAgent is usually the first choice for function-calling models, and ReActAgent becomes plan B
  • Workshop environment: Python 3.10+, installing `llama-index`, LLM provider, and a minimal project bootstrap
  • Quiz: selecting the type of agent for the model, tools, and flow control
  • The first working tool from a Python function: types, docstring, and returning data without chaos
  • `FunctionTool` and automatic schema derivation: what LlamaIndex takes from the signature, and what it won’t guess
  • Weak vs good tool description: full examples of names, arguments, and docstrings that change an agent’s choices
  • The most common mistakes in tools: hidden side effects, implicit required fields, overly broad responsibility scope
  • Designing a mini toolset for an operational agent: calculation, validation, search, and fallback
  • Quiz: recognizing poorly designed tools and improving them
  • Minimal `FunctionAgent` with two tools: first end-to-end run in Python
  • System prompt that steers the agent instead of being a wish list: rules, anti-patterns, and fixes
  • How to enforce correct tool usage: decision instructions for when to answer yourself and when to call a tool
  • Function call error handling: argument validation, exceptions, and return messages for the model
  • Streaming and observing an agent’s execution: what to log so you can see the model’s decisions, not just the final answer
  • Comparative before/after workshop: the same agent before refactoring and after refactoring the prompt and tools
  • Quiz: is this FunctionAgent ready for users?
  • Minimal `ReActAgent` with the same tools: what changes compared to FunctionAgent
  • Reading Reasoning/Action/Observation traces: how to diagnose a faulty chain of thought step by step
  • Loops, overthinking, and wrong follow-up actions: guardrails for ReActAgent
  • When ReActAgent can be better than FunctionAgent: models without function calling and scenarios with explicit planning
  • Comparative exercise: the same task solved by FunctionAgent and ReActAgent with an analysis of answer quality
  • Quiz: choosing an agent strategy based on logs and project requirements
  • Agent session and `Context`: how to store conversation state without manually stitching the history together
  • Adding `QueryEngineTool` to an agent: when a tool should calculate and when it should ask the index
  • Combining functional tools and knowledge tools in one agent without decision conflicts
  • The most common mistakes with state and memory: context leakage, too much history, implicit dependencies between steps
  • Introduction to Workflows and event-driven orchestration: why you should know this API even for a simple agent
  • Quiz: selecting the right tool based on data, status, and the type of user query
  • Final project: an agent for solving multi-step tasks using tools and a knowledge source
  • End-to-end implementation: file structure, model configuration, tool definitions, and agent initialization
  • Scenario tests for an agent: full user inputs, expected tool actions, and pass criteria
  • Production checklist: observability, timeouts, validation, fallbacks, and limiting call cost
  • Full listing of the entire agent with line-by-line commentary: why each section of the code looks exactly like this
  • Final quiz: diagnosing errors in a finished agent and planning further development
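As a hedged sketch of the session-state pattern from the syllabus: a ReActAgent built from the same kind of plain-function tool, with a `Context` object carrying conversation state across `run` calls so you don't stitch the history together manually. The provider and model name are assumptions, and the agent only executes when an API key is set:

```python
import asyncio
import os

def add(a: float, b: float) -> float:
    """Add two numbers and return the sum."""
    return a + b

async def main() -> None:
    # Lazy imports keep the sketch readable without llama-index installed.
    from llama_index.core.agent.workflow import ReActAgent
    from llama_index.core.workflow import Context
    from llama_index.llms.openai import OpenAI  # assumed provider

    agent = ReActAgent(tools=[add], llm=OpenAI(model="gpt-4o-mini"))
    ctx = Context(agent)  # conversation state shared across runs of this agent

    # The second turn can refer to the first because ctx carries the history.
    print(await agent.run("Add 2 and 3.", ctx=ctx))
    print(await agent.run("Now add 10 to that result.", ctx=ctx))

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    asyncio.run(main())
```

Swapping `ReActAgent` for `FunctionAgent` in this snippet is the comparative exercise the syllabus describes: same tools, same context, different reasoning loop.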

FAQ

What will I learn in this course?

You will learn from scratch how to build a reasoning AI agent in Python using LlamaIndex: from choosing the architecture, through defining tools and system prompts, to handling memory, state, debugging, and running a complete end-to-end solution.

Why does the course cover both ReActAgent and FunctionAgent?

Because both approaches matter in practice. The current LlamaIndex documentation indicates that FunctionAgent should be preferred for models supporting native function calling, while ReActAgent remains very useful where model universality and an explicit action flow matter. As a result, the course teaches not only how to build an agent, but also when to choose each approach.

Is it worth learning to build AI agents now?

Yes. The market is rapidly moving toward agentic systems: Gartner predicted that by the end of 2026, 40% of enterprise applications will include specialized AI agents, up from less than 5% in 2025. This makes practical skills in building agents with tools, memory, and flow control increasingly valuable.

Who is this course for?

For Python developers, AI/ML engineers, people building LLM applications, and practitioners who want to move from simple chatbots to agents that perform tasks using tools, reasoning logic, and state control.

Do I need prior experience with LlamaIndex or AI agents?

No. The course guides you step by step through the most important ecosystem elements needed to build an agent. However, basic Python knowledge and a general understanding of how language models work will be helpful.

How does this course differ from other AI courses?

This is a workshop-style course focused on implementation rather than theory. Instead of a general overview, you get a design process, concrete architectural decisions, hands-on work with Python tools, agent memory and state, and methods for diagnosing errors in a real solution.

Building a Reasoning AI Agent with LlamaIndex: ReAct and Function Agent in Python
67 EUR
45 EUR
  • 8 hours
  • Intermediate
  • Certificate on completion
  • Access immediately after purchase
