Day 16: Anthropic’s MCP as a Data Agent — MCP Illustrated — PydanticAI
100 days of Agentic AI: From foundations to Autonomous workflows
We’ll use PydanticAI to build a BankBot:
- Define our control rules as Pydantic models.
- Spin up a PydanticAI agent that calls an LLM (e.g., Anthropic’s Claude).
- Run each LLM draft through our Pydantic control model.
- Return either the safe answer or a polite fallback.
1. Why Use PydanticAI?
- MCP Under the Hood: PydanticAI implements the Model Context Protocol (MCP), so under the covers it can connect to any MCP-compliant LLM server or tool set.
- Data Schema & Validation: We can use Pydantic models both to represent an LLM draft and to enforce our safety-check rules.
- Simple Pythonic Workflow: PydanticAI feels a lot like using FastAPI & Pydantic — but for AI agents. You get type safety, automatic conversion, and structured error handling “for free.”
In other words, instead of manually wiring an LLM call + regex checks, we’ll:
- Declare “what a safe vs. unsafe draft looks like” via Pydantic models.
- Let PydanticAI fetch a draft from the LLM.
- Validate that draft against our safety-check model.
- Return the final answer or a fallback.
2. Installation & Setup
- Make sure you have Python 3.10+ installed, since PydanticAI’s MCP support requires it.
- Install PydanticAI (slim version with MCP support):
pip install "pydantic-ai-slim[mcp]"
- (Optional) If you want to run a local MCP server that executes Python code, you can pull in mcp-run-python. But for this example, we’ll assume you already have an MCP-compliant LLM endpoint (e.g., Anthropic’s Claude) accessible via environment variables — no local subprocess needed.
3. Defining Our Control Rules with Pydantic
We want to block any LLM draft that:
- Mentions insider trading or nonpublic data.
- Predicts an exact future stock price (e.g., “XYZ will be $100 next quarter”).
We’ll encode those checks as a Pydantic model with a custom validator. If the draft fails any check, we raise a validation error. Otherwise, it passes.
# controls.py
import re

from pydantic import BaseModel, ValidationError, model_validator  # ValidationError re-exported for callers


class DraftSafetyCheck(BaseModel):
    draft_text: str

    @model_validator(mode="after")
    def check_for_insider_and_future_prices(self) -> "DraftSafetyCheck":
        text = self.draft_text
        # Rule A: Block any mention of "insider trading" (case-insensitive)
        if re.search(r"\binsider trading\b", text, re.IGNORECASE):
            raise ValueError("Mention of insider trading is disallowed.")
        # Rule B: Block references to "nonpublic" data
        if re.search(r"\bnonpublic\b", text, re.IGNORECASE):
            raise ValueError("References to nonpublic data are disallowed.")
        # Rule C: Block any phrase like "$<number> next quarter"
        if re.search(r"\$\d+(?:\.\d+)?\s+next quarter", text, re.IGNORECASE):
            raise ValueError("Predicting future stock prices is disallowed.")
        return self
- We subclass BaseModel and declare a single field: draft_text: str.
- In a @model_validator(mode="after") hook (the Pydantic v2 replacement for the deprecated @root_validator), we run three regex checks. If any triggers, we raise a ValueError, which Pydantic wraps in a ValidationError.
Later, we’ll attempt to instantiate DraftSafetyCheck(draft_text=some_llm_reply). If it succeeds, the draft was “safe.” If it throws, we know exactly which rule was violated.
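To see both outcomes without any LLM involved, here is a minimal standalone sketch — cut down to a single rule (the nonpublic-data check), with the model name MiniSafetyCheck chosen just for illustration:

```python
import re

from pydantic import BaseModel, ValidationError, model_validator


class MiniSafetyCheck(BaseModel):
    """Cut-down version of DraftSafetyCheck with a single rule."""
    draft_text: str

    @model_validator(mode="after")
    def block_nonpublic(self) -> "MiniSafetyCheck":
        if re.search(r"\bnonpublic\b", self.draft_text, re.IGNORECASE):
            raise ValueError("References to nonpublic data are disallowed.")
        return self


# A clean draft validates and comes back as a model instance
MiniSafetyCheck(draft_text="Use a DCF built on public projections.")

# An unsafe draft raises ValidationError, which we can inspect
try:
    MiniSafetyCheck(draft_text="Forecast from their nonpublic revenue.")
    blocked = False
    reason = ""
except ValidationError as exc:
    blocked = True
    reason = exc.errors()[0]["msg"]

print(blocked)  # True
```

The `errors()` list tells us exactly which rule fired and why, which is what makes the fallback logic in section 5 possible.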
4. Writing a PydanticAI Agent to Call the LLM
PydanticAI gives us an Agent class that can talk to any MCP endpoint. In practice, you might have:
- MCPServerHTTP if your LLM — or a proxy in front of it — speaks HTTP/SSE.
- MCPServerStdio if you’re running a local MCP subprocess (like mcp-run-python).
For simplicity, let’s assume your Anthropic Claude endpoint is already registered as an MCP server under the name “anthropic_mcp”. We’ll show how to connect using MCPServerHTTP.
# bankbot_agent.py
import os

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

# 1. Read your CLAUDE_API_KEY from the environment (already configured)
#    and the HTTP URL of your MCP-compliant Claude proxy.
os.environ.setdefault("CLAUDE_API_KEY", "your-api-key-here")

# Suppose your MCP HTTP server is at:
#   https://mcp-server.example.com/anthropic
MCP_HTTP_URL = "https://mcp-server.example.com/anthropic"

# 2. Create an MCP client that speaks HTTP/SSE to the Claude MCP server
mcp_http = MCPServerHTTP(
    url=MCP_HTTP_URL,
    # If your server needs authentication headers, supply them here.
    headers={"Authorization": f"Bearer {os.getenv('CLAUDE_API_KEY')}"},
)

# 3. Instantiate a PydanticAI Agent that uses that MCP server
bankbot = Agent(
    model="claude-3-5-haiku-latest",  # This name is illustrative
    system_prompt="You are BankBot, an investment-banking assistant. Answer clearly and accurately.",
    mcp_servers=[mcp_http],  # hook our MCP server into the Agent
)

# Now `bankbot` can run queries via MCP:
#   async with bankbot.run_mcp_servers():
#       result = await bankbot.run("Your question here")
#       print(result.output)
A few notes:
- We passed mcp_servers=[mcp_http] into the Agent constructor. That means whenever we call bankbot.run(…) inside the run_mcp_servers() context, PydanticAI routes MCP traffic over HTTP (SSE).
- The Agent’s model field — “claude-3-5-haiku-latest” — should match whatever model name your MCP server expects.
- The system_prompt is what the LLM “remembers” as background context for every call.
5. Putting It All Together: The Safety-Checked BankBot
Now we want a function that:
- Accepts a user question (e.g., “How can I value a Series B startup?”).
- Asks BankBot (the Agent) for a draft.
- Runs that draft through our DraftSafetyCheck.
- If it passes, return it.
- If it fails, return a polite fallback.
Create a new file — say mcp_bankbot.py — and glue everything together:
# mcp_bankbot.py
import asyncio
import os

from pydantic import ValidationError
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

from controls import DraftSafetyCheck

# 1. Configuration (same as before)
os.environ.setdefault("CLAUDE_API_KEY", "your-api-key-here")
MCP_HTTP_URL = "https://mcp-server.example.com/anthropic"

mcp_http = MCPServerHTTP(
    url=MCP_HTTP_URL,
    headers={"Authorization": f"Bearer {os.getenv('CLAUDE_API_KEY')}"},
)

bankbot = Agent(
    model="claude-3-5-haiku-latest",
    system_prompt="You are BankBot, an investment-banking assistant. Answer clearly and accurately.",
    mcp_servers=[mcp_http],
)

# 2. The main wrapper: ask BankBot, then enforce safety
async def ask_bankbot_safe(user_question: str) -> str:
    # 2a. Ask the LLM (via MCP). The run result carries the draft text.
    async with bankbot.run_mcp_servers():
        result = await bankbot.run(user_question)
    draft_reply = result.output

    # 2b. Validate against our Pydantic control model
    try:
        # If this succeeds, the draft was "safe"
        DraftSafetyCheck(draft_text=draft_reply)
        return draft_reply
    except ValidationError as e:
        # If validation fails, we can log `e` or inspect `e.errors()`,
        # then return a polite fallback
        return (
            "I'm sorry, I can't answer that right now. "
            "Please consult a licensed advisor for more details."
        )

# 3. Example usage
async def main():
    question = "How should I value a Series B startup? Any tips?"
    final_answer = await ask_bankbot_safe(question)
    print("User asked:", question)
    print("\nBankBot's final response:\n", final_answer)

if __name__ == "__main__":
    asyncio.run(main())
Explanation, Step by Step
1. result = await bankbot.run(user_question)
- Under the hood, PydanticAI sends the user question + system prompt to the model, with the MCP server attached over HTTP/SSE.
- The MCP server invokes the LLM (e.g., Anthropic’s Claude).
- result.output gives us back a raw string — BankBot’s “first draft.”
2. DraftSafetyCheck(draft_text=draft_reply)
- We try to construct a DraftSafetyCheck model.
- If all three rules pass, Pydantic returns a valid model and we simply return draft_reply.
- If any rule fails (insider trading, nonpublic data, or a future stock price), Pydantic raises a ValidationError.
3. On ValidationError
- We catch it, then return a fallback: “I’m sorry, I can’t answer that right now. Please consult a licensed advisor for more details.”
- In a real system, you might log e.errors() for auditing (e.g., “which rule did it trip, and at which timestamp?”).
6. Sample Run (What We Expect)
Let’s say the LLM’s first draft (for demonstration purposes) is:
“A common way is to do a discounted cash flow. If you know nonpublic revenue numbers from their Series A, use those to forecast. Many bankers also buy at $5 and sell at $12 next quarter.”
Because that draft:
- Mentions “nonpublic revenue” → fails Rule B.
- Contains “$12 next quarter” → fails Rule C.
When we run ask_bankbot_safe(…), the Pydantic validator will catch those, and our final output will be:
I’m sorry, I can’t answer that right now. Please consult a licensed advisor for more details.
If instead the LLM’s draft had been:
“A straightforward approach is discounted cash flow using public projections, combined with a multiple of comparable public-company valuations. Always cross-check your assumptions against recent VC term sheets and comparables.”
— then none of our regex rules fire, and we simply return that clean draft back to the user.
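You can verify both outcomes without any LLM in the loop by running the three regexes directly over the sample drafts (the rule names below are just labels for this sketch):

```python
import re

bad_draft = (
    "A common way is to do a discounted cash flow. If you know nonpublic "
    "revenue numbers from their Series A, use those to forecast. Many bankers "
    "also buy at $5 and sell at $12 next quarter."
)
good_draft = (
    "A straightforward approach is discounted cash flow using public projections, "
    "combined with a multiple of comparable public-company valuations."
)

# The same three patterns used in DraftSafetyCheck
RULES = {
    "insider trading": r"\binsider trading\b",
    "nonpublic data": r"\bnonpublic\b",
    "price prediction": r"\$\d+(?:\.\d+)?\s+next quarter",
}


def violations(text: str) -> list[str]:
    """Return the names of every safety rule the text trips."""
    return [
        name for name, pattern in RULES.items()
        if re.search(pattern, text, re.IGNORECASE)
    ]


print(violations(bad_draft))   # ['nonpublic data', 'price prediction']
print(violations(good_draft))  # []
```

Note that “public projections” does not trip the nonpublic rule — the `\b` word boundaries make the match exact, not substring-based.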
7. How to Customize or Extend
- More Control Rules
If your compliance team says “also block any mention of ‘tax avoidance’ or ‘insider tips,’” just add another regex in DraftSafetyCheck.
if re.search(r"\btax avoidance\b", text, re.IGNORECASE):
    raise ValueError("Advice on tax avoidance is disallowed.")
- Redact Instead of Block Entirely
You could attempt to strip out the offending phrase and re-validate:
safe_text = re.sub(r"nonpublic revenue.*?[\.\!]", "[REDACTED]", draft)
DraftSafetyCheck(draft_text=safe_text)
return safe_text
- That way, you preserve all “compliant” content and only remove the illicit snippet.
- Loopback to LLM for “Clean Rewrite”
- If validation fails, you can re-prompt the LLM with a new instruction:
if not safe:
    result = await bankbot.run(
        "Rewrite the last response without mentioning insider trading or nonpublic data."
    )
    revised = result.output
    # Then re-run DraftSafetyCheck on `revised` before returning.
- Logging & Auditing
- Whenever a draft trips the safety check, record:
import logging
logging.warning("Draft failed safety: %s", e.errors())
You’ll build an audit trail for compliance officers to review why and when BankBot’s draft was blocked.
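Putting the rewrite-loop and fallback ideas together, here is a self-contained sketch. The fake_llm coroutine is a stand-in for bankbot.run() so the flow can be tested offline; fake_llm, safe_answer, and the retry count are all illustrative names, not PydanticAI APIs:

```python
import asyncio
import re

from pydantic import BaseModel, ValidationError, model_validator


class DraftSafetyCheck(BaseModel):
    draft_text: str

    @model_validator(mode="after")
    def no_nonpublic(self) -> "DraftSafetyCheck":
        if re.search(r"\bnonpublic\b", self.draft_text, re.IGNORECASE):
            raise ValueError("References to nonpublic data are disallowed.")
        return self


async def fake_llm(prompt: str) -> str:
    """Stand-in for bankbot.run(): returns a clean draft only on a rewrite request."""
    if "Rewrite" in prompt:
        return "Use a DCF based on public projections."
    return "Base your forecast on their nonpublic revenue."


async def safe_answer(question: str, max_retries: int = 1) -> str:
    draft = await fake_llm(question)
    for _ in range(max_retries + 1):
        try:
            DraftSafetyCheck(draft_text=draft)  # raises if a rule fires
            return draft
        except ValidationError:
            # Loop back to the LLM for a clean rewrite
            draft = await fake_llm("Rewrite without mentioning nonpublic data: " + draft)
    # All retries exhausted: fall back politely
    return "I'm sorry, I can't answer that right now."


answer = asyncio.run(safe_answer("How do I value a Series B startup?"))
print(answer)  # Use a DCF based on public projections.
```

The first draft trips the rule, the rewrite passes, and the fallback string is only reached if every retry fails.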
8. Final Thoughts
By combining PydanticAI (our MCP-enabled agent framework) with a small Pydantic model for safety checks, we get:
- A single source of truth for “what counts as disallowed content.”
- Automatic type checking and graceful error handling (via ValidationError).
- A clear, Pythonic workflow that feels nearly identical to writing a FastAPI endpoint — but for AI output instead of HTTP requests.
In an investment-banking context, where the line between “helpful advice” and “illegal insider tips” can be razor-thin, this pattern:
- Puts you in control of exactly what your AI assistant can or cannot say.
- Keeps your compliance team happy — because every blocked draft is traceable.
- Lets you build on top of existing LLM infrastructure (Anthropic, OpenAI, etc.) without reengineering the model itself.
Give it a try: copy the code above, set your own CLAUDE_API_KEY + MCP server URL, and watch how easily you can enforce domain-specific guardrails with just a few lines of Pydantic logic. Happy (and safe) AI-driven banking!
Pydantic Settings Enhancements
- Load from .env files or custom sources (YAML, JSON).
- Use hierarchical configuration, secrets management, or dynamic defaults.
Error Handling & Logging
- Log ValidationError details in a centralized logger so you can spot common user mistakes.
- Add middleware in your web framework to catch and format Pydantic errors uniformly.
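A small formatter over ValidationError.errors() is enough to give every endpoint the same error shape — the Query model and format_errors helper below are made up for illustration:

```python
from pydantic import BaseModel, ValidationError


class Query(BaseModel):
    ticker: str
    amount: float


def format_errors(exc: ValidationError) -> list[dict]:
    """Flatten Pydantic's error list into {field, msg} pairs for API responses."""
    return [
        {"field": ".".join(str(part) for part in err["loc"]), "msg": err["msg"]}
        for err in exc.errors()
    ]


try:
    Query(ticker="XYZ", amount="not-a-number")
except ValidationError as exc:
    payload = format_errors(exc)

print(payload)  # e.g. [{'field': 'amount', 'msg': 'Input should be a valid number, ...'}]
```

Hooked into your framework's exception handler, this gives clients one consistent error format no matter which model rejected the input.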
Performance Tips
- If you’re instantiating thousands of models per second, consider model_construct for data you have already validated elsewhere (it skips validation entirely), or reuse a single TypeAdapter instance instead of rebuilding it per call, to minimize overhead.
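As a sketch of the trusted-data path, Pydantic v2’s model_construct builds an instance without running any validators (the Tick model is illustrative):

```python
from pydantic import BaseModel


class Tick(BaseModel):
    symbol: str
    price: float


# Normal path: full validation on every field
validated = Tick(symbol="XYZ", price=101.5)

# Trusted path: model_construct skips validation entirely, so it is
# faster but assumes the data is already known-good
trusted = Tick.model_construct(symbol="XYZ", price=101.5)

print(trusted.price)  # 101.5
```

The trade-off is explicit: use model_construct only at boundaries where the data has already passed validation, never on raw user input.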
By structuring your application around Pydantic models at every boundary, you keep your codebase:
- Modular: Models defined in one place, controllers in another, and Pydantic doing the heavy lifting.
- Maintainable: Validation rules live next to field definitions, so you never lose track.
- Robust: Malformed data fails fast at the boundary, preventing downstream errors.
- Scalable: As your data schemas evolve, you only update Pydantic models, and your controllers adapt automatically.
Happy coding with Pydantic — your Python projects just got a whole lot more predictable and type-safe!