Connic Documentation

Build, deploy, and govern AI agents on production infrastructure. Connic handles deployment, observability, evaluation, human approvals, and private networking, so you can focus on building.

How Connic Works

Connic is a platform for building and deploying AI agents. Here's how the pieces fit together:

Your Code: Agent configuration plus Python tools, living in the Git repository connected to your project.

Connic Composer SDK: Validates and packages your agents.

Connic Platform: Deploys, runs, and monitors your agents. Deployments handle automated builds, Execution provides scalable processing, and Observability records runs and traces.

Connectors: The bridge between agents and the outside world. Inbound connectors trigger agents, outbound connectors deliver results, and sync connectors handle request-response interactions.

Use Connic with your AI tools

Skills, plugins, and docs chat

The Connic team maintains an open SKILL.md-format skill that teaches AI coding agents how to write idiomatic Connic projects — the full project layout, every YAML key, all eleven connectors, the real CLI flags, and the team's recommended best practices for guardrails, tests, and tool wrapping. It works with Claude Code, Cursor, Codex, GitHub Copilot, Windsurf, Gemini, and any other agent that supports the SKILL.md standard.

Install for any supported agent:

terminal
npx skills add connic-org/connic-skill

Or, inside Claude Code, install the plugin from the marketplace:

claude code
/plugin marketplace add connic-org/connic-skill
/plugin install connic@connic

View the skill on GitHub for the layout, per-agent install paths, and the evaluation suite used to keep it in sync with the SDK.

Need answers, not code? Chat with the docs using the floating button in the bottom right, or pull them into your IDE with the Context7 MCP server.

Key Concepts

Projects: A project is a collection of agents, connectors, and deployments. Each project connects to a Git repository where your agent code lives.

Agents: Defined in YAML files. Each agent has a model, system prompt, temperature, and optional tools. Agents process inputs and generate outputs.
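As a rough illustration, an agent definition might look like the sketch below. The field names are placeholders chosen to match the description above (model, system prompt, temperature, tools); the actual schema is defined by the Connic SDK and documented in the skill.

```yaml
# Hypothetical agent definition; key names are illustrative, not the real schema.
name: support-triage
model: gpt-4o            # placeholder model identifier
temperature: 0.2
system_prompt: |
  You classify incoming support tickets and route them to the right queue.
tools:
  - lookup_order         # Python functions the agent may call
```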

Tools: Python functions that agents can call. Use them to search the web, query databases, call APIs, or perform any custom logic.
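For instance, a tool is just a Python function. How it gets registered with an agent is SDK-specific, but the function itself might look like this (the name, signature, and stubbed data source are illustrative):

```python
def lookup_order(order_id: str) -> dict:
    """Return an order's status so the agent can answer shipping questions.

    Hypothetical example of custom tool logic; a real tool would query
    your database or call an external API here.
    """
    # Placeholder lookup table standing in for a real data source.
    orders = {"A-1001": {"status": "shipped", "eta_days": 2}}
    return orders.get(order_id, {"status": "unknown"})
```

The agent calls the function with arguments it chooses and receives the return value as the tool result.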

Connectors: Link your agents to external systems. Use inbound connectors to trigger agents, outbound connectors to deliver results, or sync connectors for request-response patterns.

Deployments: Versioned releases of your agents. Push to your Git branch to trigger a new deployment. Roll back anytime.

Runs & Traces: Every agent execution is recorded as a run. View inputs, outputs, token usage, and detailed traces for debugging.

Knowledge & Database: Give agents long-term memory with the knowledge base and persistent state through the built-in database. No migrations or external hosting.

Guardrails & Approvals: Wrap agents with guardrails for input validation, output filtering, and PII protection. Gate sensitive tool calls with human approvals.
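As a generic sketch of what an input guardrail does, the function below masks email addresses before text reaches the model. This is plain Python illustrating the pattern, not the Connic guardrail API:

```python
import re

# Simple pattern for demonstration; production PII detection is broader.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_emails(text: str) -> str:
    """Replace email addresses with a placeholder before model input."""
    return EMAIL.sub("[REDACTED EMAIL]", text)
```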

Judges & A/B Tests: Score every run with LLM judges against your own rubrics, and compare agent variants with A/B tests on cost, latency, and quality.

REST API: Access your project programmatically using API keys. Trigger agents, query runs, manage your knowledge base, and pull cost data from external applications or CI/CD pipelines.
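To sketch the shape of such a call, the helper below assembles a "trigger agent" request for an HTTP client. The URL path, auth header, and payload fields are assumptions for illustration; the real contract is defined by the Connic REST API reference:

```python
def build_trigger_request(api_key: str, agent: str, payload: dict) -> dict:
    """Assemble a hypothetical trigger-agent request as kwargs for an HTTP client.

    Everything here (host, path, Bearer auth, body shape) is illustrative.
    """
    return {
        "method": "POST",
        "url": f"https://api.example.com/v1/agents/{agent}/runs",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": payload,
    }
```

The resulting dict can be passed to a client such as `requests.request(**req)`.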

Quick Start

Create your first agent in seconds

terminal
# Install the SDK
pip install connic-composer-sdk

# Create a new project
connic init my-agents
cd my-agents

# Push to your connected repo to deploy
git push origin <branch>