MCP in Practice: Building Servers and Integrations for AI Agents in Python
A workshop for developers who want to move from using AI in the IDE to building their own MCP servers, tools, and integrations for agents. The course takes you from a local server in Python, through a second example in TypeScript, to integration with the Responses API, Claude Code, and a public remote MCP server.
An intensive build-along workshop: the participant starts from an empty repository and step by step builds a working MCP ecosystem for AI agents. Along the way, they clarify the differences between MCP, function calling, and classic API integrations; implement a client-server architecture with tools, resources, and prompts; run a local MCP server in Python; create a second server in TypeScript; test and debug everything in MCP Inspector; add security mechanisms, an approval flow, and permission limiting; expose a remote MCP server over HTTP; and finally assemble a mini-project: an agent that combines a code repository, documentation, and an external API in one workflow.
The course deliberately mixes short architectural briefings, quality checklists, comparisons of good and bad tool definitions, implementation tasks, debugging sessions, and artifact reviews, so that learning does not come down to clicking through a UI. The scope and exercises track the current official MCP ecosystem: the official SDKs for Python and TypeScript, MCP Inspector, the current local and remote transports (stdio and Streamable HTTP), the OpenAI Docs MCP server and documentation, guidance on building MCP integrations for the OpenAI API, and the current capabilities of Claude Code and remote MCP connectors.
What you will learn
- You will explain in practical terms when MCP has an advantage over standard function calling and classic API integration, and when it is an unnecessary layer.
- You will design an MCP architecture covering host, client, server, tools, resources, prompts, and the choice of local or remote transport.
- You will build a local MCP server in Python from scratch with sensibly described tools, resources, and prompts.
- You will prepare a second, parallel example of an MCP server in TypeScript and compare the idioms of both SDKs.
- You will connect the MCP server to a workflow with the OpenAI Responses API and to tools such as Claude Code and desktop clients.
- You will test and debug servers through MCP Inspector, logs, and failure scenarios instead of guessing why the agent is not using a tool.
- You will implement secure tool exposure: auth, approval flow, scope limitations, and data leak risk minimization.
- You will expose a remote MCP server through a public HTTP endpoint and prepare it for use by external clients.
- You will assemble a final mini-project: an agent that combines a repository, documentation, and an external API in one workflow.
- You will learn to evaluate MCP quality not by whether it “works,” but by whether the agent chooses the right tools, returns useful results, and can be maintained in production.
Prerequisites
Proficiency in Python at the level of everyday development work, basic knowledge of TypeScript/Node.js, experience with REST/HTTP and JSON, ability to work with the terminal, Git, and a virtual environment. Nice to have: basics of FastAPI/Express, Docker, and any AI client such as Claude Code, Cursor, VS Code Agent, or your own script using the API.
Course syllabus
- MCP vs function calling vs manually wired API: three architectures for one agent task
- Case review: why an agent with only function calling loses tool and permission context
- Decision checklist: when MCP simplifies integration and when it is overengineering
- Anatomy of the MCP ecosystem: host, client, server, tools, resources, prompts, transports
- Decision quiz: choose the right integration pattern for 8 product scenarios
- Bootstrap repo: project structure, env, dependencies, and dev scripts for the MCP workshop
- The first server in Python: a minimal FastMCP over stdio that actually responds
- Not just a tool: adding a resource and a prompt so the agent has more than actions
- Full artifact for comparison: weak vs good tool definition and parameter description
- First run with a local client: what you should see before moving on
- Startup error quiz: identify the problem from the symptom, log, and client behavior
- Design framework for MCP tools: name, input contract, result, errors, side effects
- Contract workshop: rebuilding 3 tools from “demo” to “agent-friendly”
- Resources as a context layer: when URI and read access are better than another tool
- Prompts as ready-made procedures: how to package workflows without hard-coding logic into the model
- Critique of artifacts: comparing before/after agent responses on the same tasks
- Quality quiz: indicate which tool description increases the model's chance of making the correct choice
- MCP Inspector from the inside: what questions you ask the server before launching the AI client
- Local session debugging: enumerating tools/resources/prompts and verifying the capability handshake
- Working with errors: schema mismatch, tool exception, wrong content type, timeout
- Too-large responses and too many tokens: how to trim payloads, paginate, and return sensible results
- Logs, traces, and a reproduction checklist: how to report an MCP bug so someone else can fix it
- Diagnostic quiz: match the symptom to the most likely cause
- Why the second example in TypeScript should not be a 1:1 copy of Python
- Building a server in TypeScript: project setup, SDK, transport, and the first tool
- Integrative example: a documentation/project server with resource templates and result filtering
- Python vs TypeScript comparison: decorator ergonomics, types, validation, and code structure
- Code review of two implementations: which decisions to transfer between repositories, and which not to
- Comparative quiz: choose the better stack for a local helper, a team server, and a public endpoint
- Integration map: local MCP client, Claude Code, desktop clients, and remote API connectors
- OpenAI and MCP in practice: how to plug documentation and your own tools into a workflow with Responses API
- Claude Code + local server: configuration, tool selection, and approval expectations
- Desktop client workflow: launching the same server in an end-user configuration
- End-to-end scenario: the agent reads a resource, calls a tool, and composes an answer with citable context
- Integration quiz: indicate which limitations come from the client and which from the server
- Threat model for MCP: what can go wrong when reading files, repos, secrets, and performing mutating actions
- Approval flow in practice: which operations require human confirmation and how to enforce it
- Auth for a remote server: tokens, bearer flow, and separating user identity from the server
- Least privilege by design: limiting the scope of repos, endpoints, parameters, and response formats
- Anti-pattern review: tools with hidden side effects, secrets in the payload, trust in unofficial proxies
- Safety quiz: assess the risk and choose the minimal safe tool contract
- stdio vs Streamable HTTP: transport decision before deployment and its architectural consequences
- Deploying a remote MCP server: public endpoint, environment configuration, and post-deploy smoke test
- Production checklist: health checks, limits, observability, versioning, and rollback
- Mini-project Part 1: an agent for working with a repo, documentation, and an external API — plan and contracts
- Mini-project Part 2: implementing the flow and testing success and failure scenarios
- Mini-project Part 3: critique of results, refactoring contracts, and improving the quality of the agent's responses
- Final quiz: architectural and debugging decisions based on a single production case study
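A recurring artifact in the syllabus is the weak-vs-good tool definition. A plain-Python sketch of what that comparison looks like, with a toy scoring heuristic loosely inspired by the design-framework checklist above (name, input contract, constraints); the `search_issues` tool and the scoring rules are made-up illustrations, not from any specific server:

```python
# Two versions of the same tool contract, as JSON-Schema-style dicts
# following MCP tool conventions (name, description, inputSchema).

weak_tool = {
    "name": "search",                      # vague: search what?
    "description": "Searches stuff.",      # gives the model nothing to choose on
    "inputSchema": {"type": "object", "properties": {"q": {"type": "string"}}},
}

good_tool = {
    "name": "search_issues",
    "description": (
        "Search open issues in the project tracker by free-text query. "
        "Returns at most `limit` matches, newest first. Read-only."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search terms."},
            "limit": {
                "type": "integer",
                "description": "Max results to return (1-50).",
                "minimum": 1, "maximum": 50, "default": 10,
            },
        },
        "required": ["query"],
    },
}

def contract_score(tool: dict) -> int:
    """Toy heuristic: reward a specific name, a discriminating description,
    documented parameters, and explicit required fields."""
    score = 0
    if "_" in tool["name"]:
        score += 1  # verb_noun names beat single vague words
    if len(tool["description"]) > 60:
        score += 1  # enough text for the model to discriminate between tools
    props = tool["inputSchema"].get("properties", {})
    if props and all("description" in p for p in props.values()):
        score += 1  # every parameter explained
    if tool["inputSchema"].get("required"):
        score += 1  # explicit required fields
    return score
```

The point of the exercise is not the score itself but the habit: every field in the contract is text the model reads when deciding whether and how to call the tool.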
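The "too-large responses" session in the debugging block comes down to a simple pattern: cap the payload and hand the client a cursor. A stdlib-only sketch; the page size and the integer-cursor format are arbitrary choices for illustration, not anything mandated by MCP:

```python
import json

PAGE_SIZE = 20  # arbitrary cap; tune per tool and per model context budget

def paginate(items: list[dict], cursor: int = 0) -> str:
    """Return one page of results as JSON, plus a next_cursor if more remain.

    Keeps tool output bounded instead of dumping thousands of rows of tokens
    into the model's context.
    """
    page = items[cursor : cursor + PAGE_SIZE]
    next_cursor = cursor + PAGE_SIZE if cursor + PAGE_SIZE < len(items) else None
    return json.dumps({
        "items": page,
        "next_cursor": next_cursor,  # client passes this back to fetch more
    })
```

Documenting the cursor parameter in the tool's input schema is what lets the agent, rather than the server author, decide when fetching another page is worth the tokens.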
FAQ
What exactly will I learn in this course?
You will learn step by step how to build a production-ready MCP ecosystem, from an empty repository to a working integration for AI agents. You will sort out the differences between MCP, function calling, and classic API integrations; implement a client-server architecture with tools, resources, and prompts; run a local MCP server in Python; create a second server in TypeScript; and test and debug everything in MCP Inspector. The course is an intensive build-along workshop, so instead of theory alone you create real components that can be developed further in a company project.
Why is it worth learning MCP now?
Because the AI agent market is maturing quickly, and interoperability is becoming crucial. OpenAI positions the Responses API as the direction for building agents and tools, while Anthropic is developing the Model Context Protocol as an open standard for connecting agents to tools and context. This means that skills in MCP, tool integrations, and safe agent orchestration are becoming increasingly practical and in demand, especially in teams building their own AI workflows. The course helps you enter this area not through vague slogans but through implementing a working architecture.
Is the course taught through theory or practice?
Definitely through practice. This is a build-along workshop course: the participant starts from an empty repository and builds a complete solution together with the instructor. Every element, from MCP architecture through creating a server in Python and TypeScript to testing, debugging, security, and permission limiting, is implemented live. As a result, after the course you have not only an understanding of the concepts but also code, a project structure, and implementation patterns that you can use in your own AI agents.
Who is this course for?
For Python developers, backend developers, AI engineers, automation builders, and anyone building AI agents who wants to go beyond a simple function-calling demo. It will also work well for product and technical teams that need to understand when a standard function call is enough and when it is better to design a full MCP ecosystem with access control, an approval flow, and a clear integration model.
Do I need prior experience with MCP or AI agents?
No. The course builds up the fundamentals from the very beginning and shows the practical differences between MCP, function calling, and a classic API. If you know the basics of Python and understand HTTP/API integrations, you will get into the material smoothly. At the same time, the pace and scope are concrete enough that more advanced participants will also gain ready-to-use implementation patterns and a better approach to agent design.
What technologies and languages does the course use?
The main language is Python, in which you run a local MCP server and build practical integrations for AI agents. In addition, you create a second server in TypeScript, so you understand how to design a multi-language environment and transfer patterns between stacks. The course also covers MCP Inspector for testing and debugging, as well as security, permission control, and approval flows.
Does the course cover security and safe tool exposure?
Yes, and that is one of its strongest points. Beyond simply launching a server, you learn to add security mechanisms, an approval flow, and permission limiting. This is especially important today, when agent tools can perform actions in external systems and platforms such as OpenAI are shipping increasingly advanced tools and agent orchestration loops. The course therefore shows not only how to connect something, but how to do it responsibly and in a way that is ready for further development.
How is MCP different from plain function calling?
Function calling lets the model invoke specific functions, but MCP introduces a broader, more structured model of collaboration between the agent and tools, resources, and prompts. In practice, this means better integration scalability, a clearer client-server architecture, and greater predictability as the agent environment grows. This course does not stop at a single tool call; it shows how to design an entire integration ecosystem, test it in MCP Inspector, and develop it in real-world use cases.
- 12 hours
- Intermediate
- Certificate on completion
- Access immediately after purchase