Connic Documentation
Build, deploy, and govern AI agents on production infrastructure. Connic handles deployment, observability, evaluation, human approvals, and private networking, so you can focus on building.
Quick Start
Get up and running in 5 minutes
SDK Overview
Learn the Connic Composer SDK
Connectors
Integrate with external systems
How Connic Works
Connic is a platform for building and deploying AI agents. Here's how the pieces fit together.
Your Code
Agent Configuration + Python Tools
Connic Composer SDK
Validates & packages agents
Connic Platform
Deploys, runs, and monitors your agents
Deployments
Automated builds
Execution
Scalable processing
Observability
Runs & traces
Connectors
Bridge between agents and the outside world
Inbound
Trigger agents
Outbound
Deliver results
Sync
Request-response
Skills, plugins, and docs chat
The Connic team maintains an open SKILL.md-format skill that teaches AI coding agents how to write idiomatic Connic projects — the full project layout, every YAML key, all eleven connectors, the real CLI flags, and the team's recommended best practices for guardrails, tests, and tool wrapping. It works with Claude Code, Cursor, Codex, GitHub Copilot, Windsurf, Gemini, and any other agent that supports the SKILL.md standard.
Install for any supported agent:
npx skills add connic-org/connic-skill
Or, inside Claude Code, install the plugin from the marketplace:
/plugin marketplace add connic-org/connic-skill
/plugin install connic@connic
View the skill on GitHub for the layout, per-agent install paths, and the evaluation suite used to keep it in sync with the SDK.
Need answers, not code? Chat with the docs using the floating button in the bottom right, or pull them into your IDE with the Context7 MCP server.
Key Concepts
Projects: A project is a collection of agents, connectors, and deployments. Each project connects to a Git repository where your agent code lives.
Agents: Defined in YAML files. Each agent has a model, system prompt, temperature, and optional tools. Agents process inputs and generate outputs.
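As a rough illustration, an agent definition might look like the sketch below. The key names and layout here are assumptions for illustration only; the Agent Configuration guide documents the real schema.

```yaml
# Hypothetical agent definition -- key names are illustrative,
# not the documented Connic schema.
name: support-triage
model: gpt-4o          # which LLM backs the agent
temperature: 0.2       # lower values give more deterministic output
system_prompt: |
  You are a support triage assistant. Classify each incoming
  ticket and suggest a next step.
tools:
  - search_tickets     # optional Python tools the agent may call
```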
Tools: Python functions that agents can call. Use them to search the web, query databases, call APIs, or perform any custom logic.
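In practice a tool can be as simple as a typed Python function with a docstring. The sketch below uses an in-memory lookup in place of a real database call; how the function is registered with an agent is covered in the Write Custom Tools guide, so treat this shape as an assumption.

```python
def get_order_status(order_id: str) -> dict:
    """Look up the shipping status for an order.

    The in-memory dict stands in for a real database or API call.
    How Connic discovers and registers this function is documented
    in the Write Custom Tools guide.
    """
    # Hypothetical data standing in for a real query
    orders = {"A-1001": "shipped", "A-1002": "processing"}
    status = orders.get(order_id, "unknown")
    return {"order_id": order_id, "status": status}
```

Keeping tools as plain, typed functions with docstrings makes them easy to unit test outside the platform.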
Connectors: Link your agents to external systems. Use inbound connectors to trigger agents, outbound connectors to deliver results, or sync connectors for request-response patterns.
Deployments: Versioned releases of your agents. Push to your Git branch to trigger a new deployment. Roll back anytime.
Runs & Traces: Every agent execution is recorded as a run. View inputs, outputs, token usage, and detailed traces for debugging.
Knowledge & Database: Give agents long-term memory with the knowledge base and persistent state through the built-in database. No migrations or external hosting.
Guardrails & Approvals: Wrap agents with guardrails for input validation, output filtering, and PII protection. Gate sensitive tool calls with human approvals.
Judges & A/B Tests: Score every run with LLM judges against your own rubrics, and compare agent variants with A/B tests on cost, latency, and quality.
REST API: Access your project programmatically using API keys. Trigger agents, query runs, manage your knowledge base, and pull cost data from external applications or CI/CD pipelines.
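A minimal sketch of calling such an API from Python with an API key. The base URL, endpoint path, and auth header below are assumptions for illustration; consult the REST API reference for the real endpoint and authentication scheme.

```python
import json
import urllib.request

API_KEY = "ck_example_key"              # hypothetical key format
BASE_URL = "https://api.connic.example"  # placeholder host

def build_trigger_request(agent: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) an authenticated POST that triggers an agent.

    The path and Bearer-token header are illustrative, not the
    documented Connic API.
    """
    body = json.dumps({"input": payload}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/agents/{agent}/runs",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_trigger_request("summarizer", {"text": "Hello"})
# urllib.request.urlopen(req) would send it; here we only build it.
```

The same request shape works from CI/CD: swap the hardcoded key for a secret pulled from the environment.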
Create your first agent in seconds
Quick Start
Get up and running in 5 minutes with your first agent
Migrate from LangChain
Bring an existing LangChain or LangGraph app into Connic
Migrate from ADK
Bring an existing Google ADK project into Connic
Agent Configuration
Learn how to configure LLM, Sequential, and Tool agents
Write Custom Tools
Create Python functions your agents can call
Dev Server
Iterate on agents with connic dev and hot-reload