Python

A decorator, 3 imports,
and a runtime you didn't ask for.

ZeroMCP vs FastMCP — side by side.

Dependencies:  0 (ZeroMCP) vs. 37 (Official SDK)
Throughput:    3.08K req/s (ZeroMCP) vs. 623 req/s (Official SDK)
Memory:        27 MB (ZeroMCP) vs. 87 MB (Official SDK)

This is a hello world

A decorator, a server class, and a run call. The official SDK wraps everything in ceremony before your tool does anything.

FastMCP 6 lines
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("test")

@mcp.tool()
def hello(name: str) -> str:
    return f"Hello, {name}!"

mcp.run()

This is the whole server

No decorator. No server class. No run call.
Drop it in a folder and run zeromcp serve.

ZeroMCP 7 lines
# tools/hello.py
tool = {
    "description": "Say hello",
    "input": {"name": "string"},
}

async def execute(args, ctx):
    return f"Hello, {args['name']}!"
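The file-based convention above implies a loader that imports every module in the tools/ folder and registers its `tool` dict and `execute` coroutine. This is a minimal sketch of how such discovery could work — the loader, registry shape, and demo setup here are illustrative assumptions, not zeromcp's actual implementation:

```python
import asyncio
import importlib.util
import tempfile
from pathlib import Path

def load_tools(tools_dir: Path) -> dict:
    """Import every .py file in tools_dir; register modules exposing tool + execute."""
    registry = {}
    for path in sorted(tools_dir.glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # A file counts as a tool if it defines both names from the convention.
        if hasattr(module, "tool") and hasattr(module, "execute"):
            registry[path.stem] = (module.tool, module.execute)
    return registry

# Demo: recreate the hello tool from above in a temporary tools/ folder.
tools_dir = Path(tempfile.mkdtemp()) / "tools"
tools_dir.mkdir()
(tools_dir / "hello.py").write_text(
    'tool = {"description": "Say hello", "input": {"name": "string"}}\n'
    "async def execute(args, ctx):\n"
    "    return f\"Hello, {args['name']}!\"\n"
)

registry = load_tools(tools_dir)
tool_meta, execute = registry["hello"]
result = asyncio.run(execute({"name": "world"}, None))
print(result)  # Hello, world!
```

Dropping a new file and re-running discovery is the whole registration step — no decorator, no central server module to edit.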

HTTP Performance — Head to Head

Mixed workload across all 7 MCP method types. 5-minute sustained load in Docker. Starlette for ZeroMCP, stdio proxy for the official SDK.

                       req/s    CPU      Memory   Ratio
ZeroMCP (Starlette)    3.08K    0.28%    27 MB    4.9x
Official SDK           623      0.04%    87 MB    1x
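For context on "all 7 MCP method types": the MCP specification defines JSON-RPC 2.0 requests such as initialize, tools/list, tools/call, resources/list, resources/read, prompts/list, and prompts/get. A sketch of the request bodies a mixed-workload benchmark could rotate through — the exact mix, ids, and params are assumptions, not the actual harness:

```python
import itertools
import json

# Seven MCP request methods (names per the MCP spec); params are minimal examples.
METHODS = [
    ("initialize", {"protocolVersion": "2024-11-05", "capabilities": {},
                    "clientInfo": {"name": "bench", "version": "0"}}),
    ("tools/list", {}),
    ("tools/call", {"name": "hello", "arguments": {"name": "world"}}),
    ("resources/list", {}),
    ("resources/read", {"uri": "file:///example.txt"}),
    ("prompts/list", {}),
    ("prompts/get", {"name": "example"}),
]

def payloads(n: int):
    """Yield n JSON-RPC 2.0 request bodies, round-robin over the method mix."""
    ids = itertools.count(1)
    for method, params in itertools.islice(itertools.cycle(METHODS), n):
        yield json.dumps({"jsonrpc": "2.0", "id": next(ids),
                          "method": method, "params": params})

batch = list(payloads(7))
print(json.loads(batch[0])["method"])  # initialize
```

Each body would be POSTed to the server's HTTP endpoint; a sustained 5-minute run of this mix is what the req/s figures above measure.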

The tradeoff

Choose FastMCP

If you want the decorator-based API and tight integration with Pydantic validation. It's the most Pythonic MCP experience.

  • Type hints — Pydantic validation derived from function signatures
  • Spec parity — tracks every spec change immediately
  • Enterprise support — maintained by the MCP specification team at Anthropic
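The type-hints bullet is the heart of FastMCP's ergonomics: the decorator reads the function signature and derives an input schema, so validation comes for free. FastMCP does this via Pydantic; as a stdlib-only sketch of the same idea (the mapping table is a deliberate simplification):

```python
import inspect
from typing import get_type_hints

# Simplified mapping from Python annotations to JSON Schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def input_schema(fn) -> dict:
    """Derive a minimal JSON-Schema-style input schema from a function's type hints."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = inspect.signature(fn).parameters
    return {
        "type": "object",
        "properties": {name: {"type": PY_TO_JSON[tp]} for name, tp in hints.items()},
        # Parameters without defaults are required.
        "required": [n for n in hints if params[n].default is inspect.Parameter.empty],
    }

def hello(name: str, shout: bool = False) -> str:
    return f"Hello, {name}!".upper() if shout else f"Hello, {name}!"

schema = input_schema(hello)
print(schema["required"])  # ['name']
```

This is why the decorator-based style wins on ergonomics when your tools have rich, typed signatures: the schema stays in lockstep with the code.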
Choose ZeroMCP

If you want zero dependencies and production HTTP performance. Drop a .py file, serve on Starlette or FastAPI.

  • 0 dependencies
  • File-based tools — drop a .py file, it's live
  • Starlette + FastAPI + Flask
  • Built-in sandbox with enforced permissions
  • 3.08K req/s on Starlette
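The sandbox bullet implies that a tool declares what it is allowed to do and the runtime refuses calls that exceed the grant. ZeroMCP's actual enforcement mechanism isn't shown here; as a hedged illustration, the `permissions` field and the `sandboxed` gate below are hypothetical:

```python
import asyncio

# Hypothetical: a tool declares needed capabilities in its tool dict.
tool = {
    "description": "Say hello",
    "input": {"name": "string"},
    "permissions": {"stdout"},  # assumed field, not zeromcp's real schema
}

async def execute(args, ctx):
    return f"Hello, {args['name']}!"

def sandboxed(tool, execute, granted):
    """Gate execute behind the permissions the runtime actually granted."""
    required = set(tool.get("permissions", ()))
    async def guarded(args, ctx):
        missing = required - set(granted)
        if missing:
            raise PermissionError(f"denied: {sorted(missing)}")
        return await execute(args, ctx)
    return guarded

ok = asyncio.run(sandboxed(tool, execute, {"stdout"})({"name": "world"}, None))
print(ok)  # Hello, world!
```

Calling the guarded tool with an empty grant set raises PermissionError instead of running the tool — enforcement happens before user code executes.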

Drop a .py file. It's an MCP tool.