Tool Hooks
Run custom logic before and after every tool call. Validate parameters, enforce access control, modify results, and log tool usage.
What are Tool Hooks?
Interceptors that run before and after each tool call within an agent
Tool hooks are Python functions that wrap every tool call an agent makes. While middleware runs once before and after the entire agent execution, tool hooks run around each individual tool call. Use them for access control, parameter validation, result transformation, and logging.
Create a file in hooks/ with the same name as your agent. For example, hooks/order-manager.py applies to the agent whose YAML has name: order-manager. No configuration needed — same convention as middleware.
Execution Flow
Hooks run inside the agent's tool-calling loop. Each time the LLM decides to call a tool, the hooks fire around that call.
If the agent calls multiple tools in a single run, hooks fire for each one independently.
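Conceptually, the dispatch works like this. The sketch below is illustrative only, not the actual connic runtime (which also handles async hooks, AbortTool, and tracing); the `get_order` tool and the hook bodies are made-up examples:

```python
from types import SimpleNamespace

def call_tool_with_hooks(tool, tool_name, params, context, hooks):
    """Run before -> tool -> after, threading the (possibly modified) values."""
    before = getattr(hooks, "before", None)
    after = getattr(hooks, "after", None)
    if before:
        params = before(tool_name, params, context)   # hook may rewrite params
    result = tool(**params)
    if after:
        result = after(tool_name, params, result, context)  # hook may rewrite result
    return result

# Demo hooks: uppercase an ID on the way in, tag the result on the way out
hooks = SimpleNamespace(
    before=lambda name, params, ctx: {**params, "order_id": params["order_id"].upper()},
    after=lambda name, params, result, ctx: {**result, "audited": True},
)

def get_order(order_id):
    return {"id": order_id, "status": "shipped"}

print(call_tool_with_hooks(get_order, "get_order", {"order_id": "abc-1"}, {}, hooks))
# -> {'id': 'ABC-1', 'status': 'shipped', 'audited': True}
```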
Middleware vs Tool Hooks: Middleware wraps the entire agent run (request in, response out). Tool hooks wrap each individual tool call inside that run. Both can coexist on the same agent.
Basic Tool Hook
"""Tool hooks for the order-manager agent."""
from typing import Any
from connic import AbortTool
async def before(tool_name: str, params: dict[str, Any], context: dict[str, Any]) -> dict[str, Any]:
"""
Called before every tool call.
Args:
tool_name: Name of the tool about to run (e.g. "get_order")
params: Dict of parameters the LLM chose for the tool
context: Shared run context dict (see Context docs)
Returns:
Modified params dict (or original unchanged)
"""
# Block deletions for non-admin users
if tool_name == "delete_order" and not context.get("is_admin"):
raise AbortTool({"error": "Permission denied: only admins can delete orders"})
return params
async def after(tool_name: str, params: dict[str, Any], result: Any, context: dict[str, Any]) -> Any:
"""
Called after every tool call.
Args:
tool_name: Name of the tool that just ran
params: The parameters the tool was called with
result: The tool's return value
context: Shared run context dict (see Context docs)
Returns:
Modified result (or original unchanged)
"""
print(f"[hook] {tool_name}({params}) -> {result}")
return resultBoth hooks are optional. You can define just before(), just after(), or both. The context parameter is also optional — omit it if you don't need run metadata.
Function Signatures
before()
| Parameter | Type | Description |
|---|---|---|
| tool_name | str | Name of the tool about to execute |
| params | dict | Parameters the LLM chose for the tool call |
| context | dict (optional) | Shared run context — same dict available in middleware and tools |
Returns: the params dict (modified or original). The returned dict is passed to the tool.
after()
| Parameter | Type | Description |
|---|---|---|
| tool_name | str | Name of the tool that just ran |
| params | dict | Parameters the tool was called with |
| result | Any | The tool's return value |
| context | dict (optional) | Shared run context |
Returns: the result to pass back to the LLM. Return None to keep the original result unchanged.
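For example, an after() hook that only post-processes one tool's output can return None for every other tool, which keeps the original result per the rule above. The `search_orders` tool name comes from the examples in this page; trimming to ten items is an illustrative choice:

```python
from typing import Any


async def after(tool_name: str, params: dict[str, Any], result: Any, context: dict[str, Any]) -> Any:
    # Only post-process search results; returning None keeps the original result
    if tool_name == "search_orders" and isinstance(result, list):
        return result[:10]  # trim long result lists before they reach the LLM
    return None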
Skipping a Tool with AbortTool
Raise AbortTool from before() to skip the tool entirely and return a custom result to the LLM. The tool never executes, and the trace marks it as an error.
"""Block specific tools based on runtime conditions."""
from typing import Any
from connic import AbortTool
async def before(tool_name: str, params: dict[str, Any], context: dict[str, Any]) -> dict[str, Any]:
# Block destructive tools outside business hours
from datetime import datetime
hour = datetime.now().hour
destructive = {"delete_order", "db_delete", "cancel_subscription"}
if tool_name in destructive and not (9 <= hour < 17):
raise AbortTool({
"error": f"Tool '{tool_name}' is only available during business hours (9am-5pm)"
})
return paramsAbortTool vs StopProcessing: AbortTool skips only the current tool call — the agent continues and can call other tools or respond. StopProcessing aborts the entire agent run immediately.
Common Use Cases
Validating and Normalising Parameters
Clean up or enforce defaults on tool parameters before execution.
"""Normalise and validate tool parameters."""
from typing import Any
async def before(tool_name: str, params: dict[str, Any]) -> dict[str, Any]:
# Normalise order IDs to uppercase
if "order_id" in params:
params["order_id"] = params["order_id"].upper()
# Enforce default limit on search tools
if tool_name == "search_orders" and "limit" not in params:
params["limit"] = 10
return paramsRedacting Sensitive Data from Results
Strip PII or sensitive fields from tool results before they reach the LLM.
"""Redact sensitive data from tool results."""
from typing import Any
import re
async def after(tool_name: str, params: dict[str, Any], result: Any, context: dict[str, Any]) -> Any:
# Redact email addresses from results
if isinstance(result, str):
return re.sub(r'[\w.-]+@[\w.-]+\.\w+', '[REDACTED]', result)
if isinstance(result, dict) and "email" in result:
result["email"] = "[REDACTED]"
return resultAborting the Run with StopProcessing
Use StopProcessing when a tool call reveals the entire run should stop.
"""Use StopProcessing to abort the entire run from a hook."""
from typing import Any
from connic import StopProcessing
async def before(tool_name: str, params: dict[str, Any], context: dict[str, Any]) -> dict[str, Any]:
# If a critical tool fails validation, stop the entire run
if tool_name == "charge_customer":
amount = params.get("amount", 0)
if amount > 10000:
raise StopProcessing("Transaction blocked: amount exceeds safety limit")
return paramsLogging Tool Calls
"""Log all tool calls to an external service."""
from typing import Any
import httpx
async def after(tool_name: str, params: dict[str, Any], result: Any, context: dict[str, Any]) -> Any:
try:
async with httpx.AsyncClient() as client:
await client.post("https://logs.internal/tool-calls", json={
"run_id": context.get("run_id"),
"agent": context.get("agent_name"),
"tool": tool_name,
"params": params,
})
except Exception:
pass # Don't fail the tool call if logging fails
return resultOptional Context Parameter
The context parameter is auto-detected from your function signature — include it when you need run metadata, omit it for simpler hooks. This follows the same convention as custom tools.
"""The context parameter is optional."""
from typing import Any
# Without context - simpler signature
async def before(tool_name: str, params: dict[str, Any]) -> dict[str, Any]:
return params
# With context - access run metadata and middleware values
async def after(tool_name: str, params: dict[str, Any], result: Any, context: dict[str, Any]) -> Any:
run_id = context.get("run_id")
return resultProject Structure
```
my-agent-project/
├── agents/
│   ├── assistant.yaml
│   └── order-manager.yaml
├── hooks/
│   ├── assistant.py        # Applied to 'assistant' agent
│   └── order-manager.py    # Applied to 'order-manager' agent
├── middleware/
│   └── assistant.py        # Middleware and hooks can coexist
└── tools/
    └── ...
```

Sync and Async Support
"""Sync hooks also work."""
from typing import Any
def before(tool_name: str, params: dict[str, Any]) -> dict[str, Any]:
"""Sync functions are automatically handled."""
if "query" in params:
params["query"] = params["query"].strip()
return params
def after(tool_name: str, params: dict[str, Any], result: Any) -> Any:
"""Both sync and async are supported."""
return resultUse async functions for I/O operations (API calls, database queries). Sync functions are fine for simple validations and transformations.
- Keep hooks fast: Hooks run on every tool call — avoid slow I/O in the hot path
- Always return params/result: Forgetting to return from before() passes None, which clears all parameters
- Use AbortTool for access control: It skips the tool cleanly without crashing the agent
- Don't swallow errors in after(): If logging fails, catch the exception so the tool result still reaches the LLM
- Use context for shared state: Set values in middleware and read them in hooks for cross-cutting concerns
If a hook raises an unhandled exception (not AbortTool or StopProcessing), the tool call fails and the error is reported back to the LLM, which may retry or respond with an error message. The trace records the exception.
Tool hooks apply to all tools defined in tools/, predefined tools, and API spec tools. They do not apply to tools served by external MCP servers since those are managed by the MCP protocol directly.