April focused on giving you more control over agents in production. Pause runs for human approval before destructive tool calls, reach private services from any custom code through the Bridge, hook into every tool call, plug in any OpenAI-compatible model, and watch logs from your own code stream into the dashboard live.
Human-in-the-Loop Approvals
Some tool calls should never run unattended. With the new approvals system, you can pause an agent before any tool you mark as sensitive and require a human to approve or reject the call from the dashboard.
approval:
  tools:
    - order_tools.cancel_order # always
    - order_tools.process_refund: param.amount > 50 # conditional
  timeout: 600
  message: "This order action requires manager approval."
  on_rejection: continue # let the agent adapt instead of failing

- Conditional gates: Approve only when an expression matches, e.g. refund amount over a threshold, or a non-admin caller
- Timeouts: If no decision arrives in time, the run fails. With on_rejection: continue, the rejection is fed back to the agent so it can adapt instead
- on_rejection: Choose whether a rejection ends the run (default) or feeds back to the agent so it can pick another path
- Approvals page: A dedicated dashboard view lists every pending request with the tool name and parameters, plus a direct link to the agent and approve/reject actions
For the full walkthrough, read Human-in-the-Loop Approvals for AI Agents.
Bridge: Now for Custom Tools and Private Services
The Connic Bridge used to be just for connectors. It now extends to every piece of code you write: custom tools, middlewares, tool hooks, custom guardrails, and custom LLM providers. Reach a Postgres database, an internal API, or a self-hosted inference endpoint that lives only inside your private network, without changing your client library.
Address a private service with a magic hostname, and the runtime tunnels the connection through the named bridge for you:
import psycopg
from connic import bridge_host

async def lookup_order(args, context):
    # postgres-primary lives in a private VPC, only reachable via the bridge
    dsn = f"postgresql://app:secret@{bridge_host('abc123', 'postgres-primary')}:5432/orders"
    async with await psycopg.AsyncConnection.connect(dsn) as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT * FROM orders WHERE id = %s", (args["order_id"],))
            return await cur.fetchone()

Projects can also run multiple named bridges side by side, one per environment or private network, and individual connectors and LLM providers can pick which bridge they route through. For background, read Connic Bridge: Agents in Your Private Network.
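As a rough sketch of that per-environment routing, the same bridge_host helper can select a bridge by environment. The staging bridge name, the environment variable, and the mapping below are illustrative assumptions, not a prescribed pattern:

import os
from connic import bridge_host

# Hypothetical bridge names: one named bridge per private network
BRIDGES = {"staging": "stg789", "prod": "abc123"}

def orders_dsn() -> str:
    env = os.environ.get("APP_ENV", "staging")  # assumed env switch
    # Same logical service, routed through a different bridge per environment
    host = bridge_host(BRIDGES[env], "postgres-primary")
    return f"postgresql://app:secret@{host}:5432/orders"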
Tool Hooks
Run your own code before and after every tool call with tool hooks. Validate or rewrite parameters, enforce access control, log usage, or transform results, all without touching the tools themselves.
from connic import AbortTool

async def before(tool_name, params, context):
    # Block deletions for non-admin callers
    if tool_name == "delete_order" and not context.get("is_admin"):
        raise AbortTool({"error": "Permission denied"})
    # Normalise IDs so the LLM can be sloppy
    if "order_id" in params:
        params["order_id"] = params["order_id"].upper()
    return params

async def after(tool_name, params, result, context):
    print(f"[hook] {tool_name}({params}) -> {result}")
    return result

Drop a Python file in the hooks/ folder named after your agent (e.g. hooks/order-manager.py) and the SDK picks it up. Raise AbortTool from before to skip the call entirely and feed a structured response back to the model.
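As one more sketch of the result-transform use case, an after hook can redact fields before the model sees a result. The internal_notes field below is a made-up example; the hook signature matches the one above:

# hooks/order-manager.py
async def after(tool_name, params, result, context):
    # Hypothetical example: strip an internal-only field from dict results
    if isinstance(result, dict):
        result.pop("internal_notes", None)
    return result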
Discoverable Tools
Loading dozens of rarely-used tools into every prompt burns tokens and pushes the model towards wrong choices. With discoverable tools, you list the tools an agent should always have alongside a much larger pool that gets indexed for search instead. The agent finds them at runtime with a natural-language query.
tools:
  - order_tools.lookup_order # always loaded
  - order_tools.process_refund
discoverable_tools:
  - reporting.* # 40 reporting helpers, indexed not preloaded
  - support_tools.*
mcp_servers:
  - name: github
    discoverable: true # MCP server tools indexed for search

Discoverable tools use the same syntax as regular tools (wildcards, conditional expressions, everything) and work for both your own functions and tools exposed by an MCP server.
AI Dashboard Builder
Describe the dashboard you want and Connic builds it. Open a dashboard in edit mode and use the AI bar at the bottom to add or change widgets in plain English: “Show P95 duration for the order-manager agent over the last 7 days” or “group runs by customer_tier as a bar chart.”
Every prompt creates an undo step, so you can iterate quickly and roll back any change with Cmd+Z without losing the rest of the layout.
Custom OpenAI-Compatible LLM Providers
Bring your own model. Any OpenAI-compatible endpoint (vLLM, Ollama, a LiteLLM proxy, an internal inference gateway) can now be added as a custom LLM provider. Open Project Settings > LLM Provider, click Add Custom Provider, give it a unique prefix and a base URL, and use it from any agent like a built-in:
# Custom provider configured with prefix "ollama"
model: ollama/llama3

# Custom provider configured with prefix "vllm"
model: vllm/mistral-7b

If the endpoint lives inside a private network, pair it with the Bridge and pick a bridge in the provider's Route via Bridge dropdown. Every request from every agent gets tunneled for you.
Live Logs From Your Own Code
Custom code is no longer a black box. Anything you print or log from a tool, middleware, hook, or guardrail now streams into the project's Logs tab in real time, tagged by source so you can tell at a glance which component emitted what.
- stdout, stderr, and stdlib logging: All three are captured, with no special SDK calls needed (see the sketch after this list)
- Tracebacks on crashes: Unhandled exceptions in your code are logged with the full traceback so you can debug without redeploying
- Filtering: Slice the Logs tab by agent, log level, or source
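A minimal sketch of what gets captured, assuming a simple order-lookup tool (the logger name, messages, and return value are illustrative):

import logging

logger = logging.getLogger("order_tools")  # any stdlib logger works

async def lookup_order(args, context):
    print(f"fetching order {args['order_id']}")         # stdout -> Logs tab
    logger.info("cache miss for %s", args["order_id"])  # stdlib logging -> Logs tab, with level for filtering
    return {"order_id": args["order_id"], "status": "shipped"}  # placeholder result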
More Improvements
- Project overview dashboard: Every project gets an auto-provisioned Overview dashboard embedded right on the project home page, with a curated set of health metrics, charts, and recent runs
- Live streaming run traces: A live indicator shows when a run is still in flight, and trace steps stream in incrementally as the agent works. Plus a new graceful termination flow for stopping long runs cleanly
- Flexible run filtering: Filter run tables and dashboard widgets with expressions like context.customer == 'acme' or output.sentiment == 'positive', and group bar charts by any run_context field
- Dashboard reordering & navigation: Reorder dashboards from project settings, see them prominently in the project sidebar, and create new ones from a dedicated drawer
- Wildcard tool patterns: Reference whole modules in one line with patterns like calculator.* for local tools or api:my_api.users_* for API spec tools
- Git monorepo support: Set a Repository root directory in project settings so Connic builds from the subdirectory where your SDK files live, not the repo root