Context
Share data between middleware, prompts, and tools with a single shared dictionary that lives for the entire agent run.
What is Context?
A shared, mutable dictionary that flows through your entire agent run
The context is a Python dictionary that is created at the start of every agent run. It is pre-populated with system metadata and can be read and written by middleware and tools. Values stored in context can also be referenced in your system prompt using {var} template syntax. After the run completes, the full context is persisted to the database.
How Context Flows
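A run touches the context in a fixed order: the before() middleware hook seeds it with custom values, the system prompt is rendered with any {var} placeholders filled from it, tools read and write it as the model calls them, the after() hook sees the final state (including values set by tools plus token_usage and duration_ms), and the finished dictionary is persisted to the run log.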
The Context Dictionary
# The context dict is shared across the entire run
context = {
    # System metadata (pre-populated automatically)
    "run_id": "uuid-string",
    "agent_name": "assistant",
    "connector_id": "uuid-string",
    "timestamp": "2025-01-15T10:30:00Z",

    # Your custom values (set in middleware or tools)
    "user_name": "Peter",
    "user_id": 123,

    # Added automatically after the agent completes (available in the after() hook)
    "token_usage": {
        "prompt_tokens": 150,
        "candidates_tokens": 200,
        "total_tokens": 350
    },
    "duration_ms": 1234.5
}

System fields (run_id, agent_name, connector_id, timestamp) are set automatically. You can add any custom key-value pairs you need.
Context in Middleware
Both the before() and after() hooks receive the context as their second parameter. Use before() to set values and after() to read the final state including any values set by tools during the run.
# middleware/assistant.py
from typing import Any, Dict


async def before(content: Dict[str, Any], context: Dict[str, Any]) -> Dict[str, Any]:
    """Set values on context that are available everywhere."""
    # These values can be referenced in the system prompt as {user_name}
    context["user_name"] = "Peter"
    context["user_id"] = 123

    # Fetch data and store it for tools to use later
    context["subscription_tier"] = "enterprise"
    return content


async def after(response: str, context: Dict[str, Any]) -> str:
    """Read values from context, including those set by tools."""
    # Access system metadata
    run_id = context.get("run_id")
    duration = context.get("duration_ms")

    # Access values set by tools during the run
    api_calls_made = context.get("api_calls_made", 0)
    return response

Context in Tools
To access context in a tool, add an optional context parameter to your function signature. Connic automatically injects it at runtime. The parameter is hidden from the LLM, so it will not appear in the tool schema and the model will never try to fill it.
# tools/crm.py
from typing import Any, Dict


async def lookup_customer(email: str, context: Dict[str, Any]) -> dict:
    """Look up a customer by email in the CRM.

    Args:
        email: The customer's email address

    Returns:
        Customer details from the CRM
    """
    # Read values set by middleware
    user_id = context.get("user_id")
    tier = context.get("subscription_tier", "free")

    # ... perform the lookup ...
    customer = {"email": email, "name": "Jane Doe", "plan": tier}

    # Write values back to context for other tools or the after() hook
    context["api_calls_made"] = context.get("api_calls_made", 0) + 1
    context["last_customer_lookup"] = email
    return customer

The context parameter is completely optional. Tools without it work exactly as before. Just add the parameter when you need access to shared run data.
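For comparison, a tool that does not need shared run data simply omits the parameter. The weather tool below is a made-up example to show the shape:

# tools/weather.py (hypothetical tool with no context parameter)
async def get_weather(city: str) -> dict:
    """Return a canned weather report for a city.

    Args:
        city: The city to report on
    """
    return {"city": city, "forecast": "sunny", "high_c": 21}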
Context in Prompts
Any value in context can be referenced in your agent's system_prompt using {variable_name} syntax. Variables are substituted at runtime before the prompt is sent to the model.
# agents/assistant.yaml
name: assistant
model: gemini/gemini-2.5-flash
description: "Support assistant with user context"
system_prompt: |
  You are a support assistant for {user_name} (ID: {user_id}).
  Their subscription tier is {subscription_tier}.
  Always address the user by name and tailor your responses
  to their subscription level.
tools:
  - crm.lookup_customer

# System prompt template
system_prompt: |
  Hello {user_name}, your account ID is {user_id}.

# If context = {"user_name": "Peter", "user_id": 123}
# The agent sees:
#   "Hello Peter, your account ID is 123."
# Unmatched placeholders are left as-is:
#   {unknown_var} stays as {unknown_var} in the prompt

Safe substitution: If a placeholder like {unknown_var} has no matching context value, it is left as-is in the prompt. No errors are raised.
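If you want to preview how a prompt will render, the same behavior is easy to reproduce in plain Python. The render_prompt helper below is a minimal sketch assuming simple {name} placeholders; it is illustrative only, not Connic's internal implementation.

import re

def render_prompt(template: str, context: dict) -> str:
    """Substitute {name} placeholders, leaving unknown ones untouched."""
    def replace(match: re.Match) -> str:
        key = match.group(1)
        # Unknown keys keep their original {placeholder} text
        return str(context[key]) if key in context else match.group(0)
    return re.sub(r"\{(\w+)\}", replace, template)

print(render_prompt(
    "Hello {user_name}, your account ID is {user_id}. {unknown_var}",
    {"user_name": "Peter", "user_id": 123},
))
# -> Hello Peter, your account ID is 123. {unknown_var}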
Full Example: End-to-End Flow
Here is a complete example showing how context flows from middleware through prompts and tools, and back to the after() hook.
# 1. middleware/assistant.py - Set context values
async def before(content, context):
    context["user_name"] = "Peter"
    context["user_id"] = 123
    context["locale"] = "en-US"
    return content

# 2. agents/assistant.yaml - Reference in prompts
# system_prompt: "Assist {user_name} (locale: {locale})"

# 3. tools/billing.py - Read and write context
async def get_invoice(invoice_id: str, context: dict) -> dict:
    user_id = context.get("user_id")        # Read from middleware
    context["last_invoice"] = invoice_id    # Write for after() hook
    return {"invoice_id": invoice_id, "user_id": user_id}

# 4. middleware/assistant.py - Access everything in after()
async def after(response, context):
    # context now has: run_id, agent_name, user_name, user_id,
    # locale, last_invoice, token_usage, duration_ms
    return response

Persistence
After the run completes, the full context dictionary (system metadata + your custom values) is saved to the run log. You can view it in the dashboard run details or retrieve it via the API.
Best Practices
- Set context early: Populate values in before() so they are available in prompts and tools
- Use descriptive keys: Keys like user_id and subscription_tier are easier to reference than x or data1
- Keep values serializable: Context is stored as JSON, so stick to strings, numbers, booleans, lists, and dicts (see the sketch after this list)
- Avoid key collisions: Don't overwrite system keys like run_id, agent_name, or token_usage
- Tools are optional: Only add the context parameter to tools that actually need it
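A quick illustration of the "keep values serializable" point; the keys below are arbitrary examples, not required names:

from datetime import datetime, timezone

context = {}  # stand-in for the run context

# JSON-safe values: strings, numbers, booleans, lists, dicts
context["report_generated_at"] = datetime.now(timezone.utc).isoformat()
context["retry_count"] = 3
context["feature_flags"] = {"beta_ui": True}

# Avoid raw objects (datetime instances, DB connections, custom classes) -
# they cannot be persisted as JSON when the run completes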