Your AI agent just deleted 2,000 customer records. The prompt was clear, the tool call was valid, the agent did exactly what it was designed to do. But the input was wrong. A malformed API request, a hallucinated parameter, a user who typed the wrong ID. By the time anyone noticed, the damage was done.
This is the reality of running AI agents in production. They are fast, autonomous, and increasingly capable. But autonomy without oversight is a liability. The more powerful your agents become, the more critical it is to have a human in the loop for actions that cannot be undone.
Connic Approvals solve this. When an agent reaches a sensitive tool call, execution pauses. A human reviews the action and its parameters, approves or rejects it, and the agent resumes automatically. No data is lost. No context is forgotten. The agent picks up exactly where it left off.
Why Agents Need Human-in-the-Loop
AI agents are not traditional software. They make decisions based on probabilistic reasoning. The same input can lead to different tool calls across runs. A perfectly written prompt does not guarantee a perfectly executed action every single time. And unlike a REST API that returns a 400 error, an agent with access to a delete endpoint will use it if it believes that is the right thing to do.
| Operation Type | Examples | Why It Needs Oversight |
|---|---|---|
| Destructive | Deleting records, canceling subscriptions, revoking access | Irreversible. A single bad tool call can affect thousands of users. |
| Financial | Processing refunds, transferring funds, modifying billing | When money moves, you want a human to verify the amount and recipient first. |
| External | Sending emails, posting to third-party APIs, triggering webhooks | Once a message leaves your system, you cannot recall it. |
| Compliance | Actions under EU AI Act, GDPR, or industry-specific rules | Regulations demand that a qualified human review AI-driven decisions. |
Connic Approvals let you draw the line exactly where you need it. Low-risk tool calls execute instantly. High-risk ones pause and wait for a human decision. Your agents stay fast where speed matters and safe where safety matters.
How It Works
The approval flow is built into the agent execution pipeline. When a gated tool is called, the runner pauses execution, creates an approval request, notifies your team, and waits. The full conversation history and agent state are preserved. Once a decision is made, the agent resumes exactly where it stopped.
If approved, the tool executes with the exact parameters that were reviewed. The agent continues its workflow as if nothing happened. Every subsequent tool call proceeds normally unless it hits another gated tool.
If rejected, the default behavior terminates the run with a clear error message that includes the rejection reason. Nothing executes. Alternatively, set on_rejection: continue and the agent receives the rejection as a tool response and can adapt — inform the user, try a different approach, or skip the action entirely. Either way, the reviewer can provide context on why the action was denied, which is logged for audit purposes.
If nobody responds, a configurable timeout ensures the run does not hang indefinitely. After the timeout expires, the run fails safely with a clear timeout error — or, with on_rejection: continue, the agent resumes and treats the timeout as a rejection. The default timeout is 1 hour, but you can set it to whatever makes sense for your workflow.
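The pause/decide/resume flow described above can be modeled in a few lines. This is a toy sketch to make the control flow concrete: all names are hypothetical, and the real Connic runner persists run state server-side rather than blocking in memory.

```python
def run_tool(tool, params, gated_tools, ask_human, on_rejection="terminate"):
    """Toy model of gated execution: gated tools pause for a human
    decision before executing; everything else runs immediately.
    Illustrative only, not Connic's actual API."""
    if tool in gated_tools:
        decision = ask_human(tool, params)  # blocks until approve/reject/timeout
        if decision != "approved":
            if on_rejection == "continue":
                # The agent receives the rejection as a tool response and adapts
                return {"tool": tool, "status": "rejected"}
            raise RuntimeError(f"Run terminated: {tool} was {decision}")
    return {"tool": tool, "status": "executed"}

# A reviewer approving a gated refund lets the run resume with the
# exact parameters that were reviewed:
result = run_tool("orders.process_refund", {"amount": 250},
                  gated_tools={"orders.process_refund"},
                  ask_human=lambda t, p: "approved")
```

The key property, mirrored here, is that the decision happens strictly before execution: a rejected tool call never runs.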
Configuration
Approvals are declared in your agent's YAML configuration. You define which tools require approval, an optional timeout, and a message that gives reviewers context about why this action needs human oversight.
```yaml
name: order-processor
model: anthropic/claude-sonnet-4-5-20250514
description: "Processes customer orders and handles refunds"
system_prompt: |
  You handle incoming customer orders...
tools:
  - orders.process
  - orders.delete_order
  - orders.process_refund
  - inventory.check
approval:
  tools:
    - orders.delete_order
    - orders.process_refund
  timeout: 600
  message: "This order action requires manager approval before execution."
```

In this example, orders.process and inventory.check execute instantly. But when the agent calls delete_order or process_refund, the run pauses and waits up to 10 minutes for a human decision.
By default, rejected approvals terminate the run. If you want the agent to adapt instead, set on_rejection: continue — the tool call returns a rejection message and the agent can try an alternative approach or inform the user. See the rejection behavior docs for details.
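Continuing the example above, the rejection behavior can be declared alongside the other approval settings. A sketch (the exact placement of on_rejection within the approval block is assumed here; see the rejection behavior docs for the authoritative format):

```yaml
approval:
  tools:
    - orders.process_refund
  timeout: 600
  on_rejection: continue   # rejection is returned to the agent as a tool response
  message: "Refunds require manager approval."
```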
Conditional Approvals
Not every refund needs approval. A $5 refund is routine. A $5,000 refund needs a second pair of eyes. Conditional approvals let you define expressions that determine when a tool call requires review, based on the actual parameters being passed or the context of the request.
```yaml
approval:
  tools:
    # Always require approval for deletions
    - orders.delete_order
    # Only require approval for refunds over $50 from non-admin users
    - orders.process_refund: param.amount > 50 and not context.is_admin
    # Only require approval for bulk operations
    - inventory.bulk_update: param.count > 100
  timeout: 600
  message: "High-value action detected. Please review before proceeding."
```

Conditions have access to two data sources: param.* for the tool's parameters and context.* for values from your middleware. This means you can gate approvals on amounts, user roles, resource types, or any business logic you need. If the condition evaluates to false, the tool executes immediately. If it evaluates to true, approval is required.
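Conceptually, a condition like param.amount > 50 and not context.is_admin is evaluated against the merged parameter and context data at call time. A minimal sketch of those semantics (Connic's actual expression engine is its own; this just mimics the behavior with restricted Python evaluation):

```python
def requires_approval(condition, params, context):
    """Evaluate a gating condition against tool parameters and request
    context. Illustrative only: mimics the param.* / context.* semantics
    described above using a restricted eval."""
    class Ns:
        def __init__(self, data):
            self.__dict__.update(data)

    scope = {"param": Ns(params), "context": Ns(context)}
    return bool(eval(condition, {"__builtins__": {}}, scope))

# A $200 refund from a non-admin pauses for review;
# a $20 refund executes immediately.
cond = "param.amount > 50 and not context.is_admin"
requires_approval(cond, {"amount": 200}, {"is_admin": False})  # True
requires_approval(cond, {"amount": 20}, {"is_admin": False})   # False
```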
Notifications That Reach the Right People
An approval request is only useful if someone sees it. Connic notifies your team through three channels the moment an approval is needed:
Dashboard
Email
Webhooks
Notification routing is configurable per agent and per team member. Your order processing team reviews refund approvals. Your infrastructure team reviews deployment approvals. Each group only sees what is relevant to them.
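The per-team routing above amounts to matching pending approvals to the groups responsible for them. A sketch of that idea (the prefix-based matching and all names here are hypothetical, not Connic's configuration format):

```python
def route_approval(tool_name, routes):
    """Return the teams to notify for a pending approval by matching
    tool-name prefixes to teams. Illustrative of per-team routing only."""
    return [team for prefix, team in routes.items()
            if tool_name.startswith(prefix)]

routes = {
    "orders.": "order-processing-team",  # refund approvals
    "deploy.": "infrastructure-team",    # deployment approvals
}
route_approval("orders.process_refund", routes)  # ['order-processing-team']
```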
Full Audit Trail
Every approval decision is recorded with complete context. Who reviewed it, when they decided, what they approved or rejected, and why. This is not just logging — it is a compliance-grade audit trail.
EU AI Act: Why This Matters Now
If you deploy AI systems in the European Union, human oversight is no longer optional. The EU AI Act is the world's first comprehensive AI regulation, and its requirements for high-risk AI systems become enforceable on August 2, 2026. Non-compliance carries fines of up to €15 million or 3% of global annual turnover.
Article 14 of the EU AI Act mandates that high-risk AI systems must be designed so that humans can effectively oversee them during operation. This is not a vague aspiration. The regulation spells out specific capabilities that must be available to human overseers:
Ability to Intervene and Override
Humans must be able to intervene in the AI system's operation, override its decisions, or stop it entirely. Connic Approvals do exactly this: execution pauses before the critical action, and a human decides whether it proceeds, gets rejected, or gets modified.
Understand What the System Is Doing
Overseers must be able to understand the AI system's capabilities, limitations, and current behavior. The approval request surfaces the exact tool being called, its parameters, and the agent's reasoning, giving reviewers full visibility into what the AI intends to do and why.
Decide Not to Use the System
The Act requires that humans can decide, in any particular situation, not to use the AI system or to disregard its output. Rejecting an approval does precisely this: the action is stopped and the rejection reason is recorded. Depending on the configured on_rejection mode, the run either terminates cleanly or continues with the agent adapting to the decision.
Complete Audit Trail
The regulation requires transparency and logging of AI system behavior. Every approval decision is captured with who reviewed it, what was decided, when, and why. Trace spans provide a complete timeline of the agent's execution, including the human intervention points.
Beyond Approvals: A Complete Compliance Stack
Human oversight is one piece of the compliance puzzle. The EU AI Act also demands risk mitigation, data governance, transparency, and ongoing monitoring. Connic gives you the building blocks for all of these:
Guardrails
Observability
Judges
A/B Testing
Together, these features form a defense-in-depth approach: guardrails prevent harmful inputs and outputs, approvals gate high-risk actions behind human review, judges evaluate output quality, observability provides the transparency layer, and A/B testing validates changes safely. That is not just good engineering. It is what regulators expect.
Real-World Scenarios
Here are approval configurations we see teams deploying in production:
"Refunds Over $100 Need a Manager"
A SaaS company runs an AI agent that handles customer refund requests. Small refunds process automatically. Anything over $100 pauses for manager review. The condition param.amount > 100 handles the logic. Average approval time: under 3 minutes during business hours.
"All Deletions Require Approval, No Exceptions"
An enterprise data management agent has access to create, update, and delete operations. Creates and updates run freely. Every delete operation, regardless of scope or context, requires human approval. The unconditional gate ensures zero deletions happen without a human saying yes.
"External API Calls Go Through the On-Call Engineer"
An infrastructure agent can restart services, scale deployments, and post to incident channels. Webhook notifications route approval requests to the on-call engineer's Slack channel. The engineer reviews the action, checks the parameters, and approves from Slack or the Connic dashboard.
"Regulated Industry: Every Decision Gets Reviewed"
A healthcare company uses an AI agent for patient intake processing. Every action that modifies patient records requires approval from an authorized staff member. Combined with PII guardrails and full trace logging, the system provides the audit trail their compliance team needs for regulatory reviews.
Getting Started
Adding approvals to an existing agent takes minutes:
1. Identify the tools in your agent that perform destructive, financial, or irreversible actions
2. Add an approval block to your agent YAML listing those tools, with optional conditions
3. Deploy your agent. The next time a gated tool is called, the run will pause and create an approval request
4. Open the Approvals page in your dashboard to review, approve, or reject pending requests
5. Configure notification routing so the right team members are alerted via email or webhooks
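Taken together, the steps above reduce to a block like this (tool names are placeholders carried over from the earlier example):

```yaml
approval:
  tools:
    - orders.delete_order                        # always gated
    - orders.process_refund: param.amount > 100  # gated above $100
  timeout: 600
  message: "Manager approval required before this action executes."
```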
Start with your most dangerous tool. The one that makes you nervous every time your agent calls it. Gate that one, see how the approval flow feels, then expand to other sensitive operations as needed.
For the complete configuration reference, check the Approvals documentation. New to Connic? Start with the quickstart guide to deploy your first agent, then come back here to add human oversight where it matters most.