# SecureMCP
FastMCP-compatible API for building MCP servers with MACAW security. Decorator-based tool registration with automatic policy enforcement, cryptographic signing, and audit logging.
## Quick Start
```python
from macaw_adapters.mcp import SecureMCP, Context

mcp = SecureMCP("calculator")

@mcp.tool(description="Add two numbers")
def add(a: float, b: float) -> float:
    return a + b

@mcp.tool(description="Subtract two numbers")
def subtract(a: float, b: float) -> float:
    return a - b

@mcp.resource("calc://history")
def get_history(ctx: Context) -> list:
    return ctx.get("calc_history") or []

if __name__ == "__main__":
    mcp.run()
```

Every tool call now has:

- Policy enforcement
- Cryptographic signing
- Audit logging

## API Overview
| Export | Type | Description |
|---|---|---|
| `SecureMCP` | Class | Primary FastMCP-compatible server (recommended) |
| `Context` | Class | Request context passed to tool handlers |
| `Server` | Class | Legacy server API (backwards compatibility) |
| `Client` | Class | MCP client with discovery and tool invocation |
## SecureMCP Constructor
```python
class SecureMCP:
    def __init__(
        self,
        name: str,                   # Server name (used in agent_id)
        version: str = "1.0.0",      # Server version
        intent_policy: dict = None,  # MAPL policy declaration
        roots: list[str] = None,     # Filesystem paths this server can access
        **kwargs                     # Additional MACAWClient options
    )
```

## MCP Roots
The `roots` parameter declares which filesystem directories this server can access, following the MCP Roots specification. These are automatically translated to MAPL resource declarations for policy enforcement.
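Conceptually, each root becomes a wildcard file-resource pattern. A minimal sketch of that translation, assuming a naming scheme modeled on the policy examples in this document (`roots_to_resources` and the exact `resource:file://` prefix are illustrative, not the real MACAW internals):

```python
# Sketch: translating MCP roots into MAPL-style resource patterns.
# The helper name and the "resource:file://" prefix are assumptions
# modeled on the policy examples in this document, not the real API.

def roots_to_resources(roots: list[str]) -> list[str]:
    """Map each filesystem root to a wildcard resource declaration."""
    return [f"resource:file://{root.strip('/')}/*" for root in roots]

print(roots_to_resources(["/srv/reports", "/tmp/scratch"]))
# ['resource:file://srv/reports/*', 'resource:file://tmp/scratch/*']
```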
## Decorators
```python
@mcp.tool(
    name="search",             # Tool name (defaults to function name)
    description="Search docs", # Tool description
    prompts=["query"]          # Parameters to treat as prompts
)
def search(query: str, limit: int = 10) -> list:
    # query will be wrapped as AuthenticatedPrompt
    return do_search(query, limit)
```

```python
@mcp.resource(
    "config://settings",       # URI pattern
    description="App settings"
)
def get_settings(ctx: Context) -> dict:
    return {"theme": "dark", "language": "en"}
```

```python
@mcp.prompt(
    name="greeting",
    description="Generate a greeting"
)
def greeting(name: str) -> str:
    return f"Hello, {name}! How can I help?"
```

## Context Object
Tool handlers can receive a `ctx` parameter for accessing request context:
| Method | Description |
|---|---|
| `ctx.get(key)` | Get value from context vault |
| `ctx.set(key, value)` | Store value in context vault |
| `await ctx.sample(...)` | Request LLM completion from client (MCP Sampling) |
| `await ctx.elicit(...)` | Request user input from client (MCP Elicitation) |
| `await ctx.report_progress()` | Report execution progress (0.0-1.0) |
| `await ctx.read_resource()` | Read another resource (cross-resource access) |
| `ctx.audit(action, ...)` | Create cryptographically signed audit entry |
| `ctx.get_roots()` | Get list of filesystem roots server can access |
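The vault behaves like a per-request key-value store. A minimal in-memory sketch of the `get`/`set` semantics (`VaultSketch` is a hypothetical stand-in; the real `Context` is supplied by the adapter):

```python
# Minimal sketch of the context-vault get/set semantics.
# VaultSketch is a hypothetical stand-in for the real Context,
# which macaw_adapters.mcp provides; it only illustrates the
# key-value behavior assumed by get_history in the Quick Start.

class VaultSketch:
    def __init__(self):
        self._store = {}

    def get(self, key):
        """Return the stored value, or None when the key is absent."""
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

ctx = VaultSketch()
assert ctx.get("calc_history") is None    # missing keys read as None
ctx.set("calc_history", [("add", 2, 3)])
assert ctx.get("calc_history") == [("add", 2, 3)]
```

This is why `ctx.get("calc_history") or []` in the Quick Start safely falls back to an empty list on the first call.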
```python
@mcp.tool(description="Summarize a document")
async def summarize(document: str, ctx: Context) -> str:
    # Request LLM completion from the client
    summary = await ctx.sample(
        prompt=f"Summarize this text: {document}",
        system_prompt="You are a helpful summarization assistant.",
        max_tokens=500
    )
    return summary
```

## Client Usage
```python
from macaw_adapters.mcp import Client

# Create client (auto-registers with MACAW)
client = Client(name="my-agent", version="1.0.0")

# Discover available servers
servers = await client.list_servers()
print(f"Found servers: {[s['name'] for s in servers]}")

# Set default server for subsequent calls
client.set_default_server("local:user/app:securemcp-calculator:abc123")

# List tools from a specific server
tools = await client.list_tools("calculator")

# Call a tool
result = await client.call_tool(
    tool_name="add",
    arguments={"a": 2, "b": 3}
)

# Access resources and prompts
resources = await client.list_resources()
content = await client.get_resource("calc://history")
prompts = await client.list_prompts()
greeting = await client.get_prompt("greeting", {"name": "Alice"})
```

## MCP Sampling Handler
Enable server tools to use your LLM by setting a sampling handler:
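The handler contract is an async callable that receives the prompt plus sampling parameters and returns the completion text. A stub illustrating just that contract (no real model; `echo_handler` is a hypothetical name, and only the parameter shape mirrors the example in this section):

```python
import asyncio

# Stub sampling handler illustrating the expected contract:
# async, accepts prompt/system_prompt/max_tokens/temperature
# (plus extra keyword arguments), and returns the completion text.
async def echo_handler(prompt, system_prompt=None, max_tokens=256,
                       temperature=0.7, **kwargs):
    # A real handler would call an LLM here; this one just echoes.
    return f"[stubbed completion for: {prompt[:40]}]"

result = asyncio.run(echo_handler("Summarize this text: hello world"))
print(result)  # [stubbed completion for: Summarize this text: hello world]
```

A stub like this is also handy in tests, where you want deterministic output instead of a live model call.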
```python
from macaw_adapters.mcp import Client
from macaw_adapters.openai import SecureOpenAI

client = Client(name="my-agent")
openai = SecureOpenAI(app_name="my-agent")

# Handler called when server tools use ctx.sample()
async def llm_handler(prompt, system_prompt, max_tokens, temperature, **kwargs):
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt or "You are helpful."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=max_tokens,
        temperature=temperature
    )
    return response.choices[0].message.content

client.set_sampling_handler(llm_handler)
```

## MCP Feature Coverage
### Implemented

- Tool definitions and execution
- Resource access and templates
- Prompt templates
- Server→client LLM requests
- Server→user input requests
- Filesystem root configuration
- Progress notifications
- MCP logging integration

### Planned

- Long-running task management
- Request cancellation
## Policy Integration
MCP tools are registered as MAPL resources for policy targeting:
```jsonc
{
  "policy_id": "user:alice",
  "resources": [
    "tool:calculator/*",          // All calculator operations
    "tool:database/query",        // Specific operation
    "resource:file://reports/*"   // Resource access
  ],
  "denied_resources": [
    "tool:shell/*",               // Block shell access
    "resource:file://secrets/*"   // Block secret files
  ],
  "constraints": {
    "parameters": {
      "tool:database/query": {
        "limit": {"type": "integer", "max": 1000}
      }
    }
  }
}
```
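The wildcard semantics can be illustrated with a small matcher. This is a sketch only, assuming glob-style patterns and deny-overrides-allow (`is_allowed` is a hypothetical helper, not the real MAPL evaluator inside MACAW):

```python
from fnmatch import fnmatch

# Sketch of wildcard resource matching with deny-overrides-allow.
# is_allowed is illustrative only; the real MAPL evaluator is part
# of MACAW and may differ in pattern syntax and precedence details.

def is_allowed(resource: str, allowed: list[str], denied: list[str]) -> bool:
    if any(fnmatch(resource, pat) for pat in denied):
        return False  # an explicit deny always wins
    return any(fnmatch(resource, pat) for pat in allowed)

allowed = ["tool:calculator/*", "tool:database/query"]
denied = ["tool:shell/*"]

assert is_allowed("tool:calculator/add", allowed, denied)       # wildcard grant
assert is_allowed("tool:database/query", allowed, denied)       # exact grant
assert not is_allowed("tool:database/drop", allowed, denied)    # never granted
assert not is_allowed("tool:shell/exec", allowed + ["tool:shell/*"], denied)  # deny wins
```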