Connic

Build AI Agents
with YAML and Python

Define agents in YAML, write tools as Python functions, push to Git. No framework abstractions, no infrastructure to run. If you ship backend code today, you can ship a production agent today.

Read the SDK docs
agents/invoice-processor.yaml
# Define the whole agent in one YAML file
name: invoice-processor
model: gemini/gemini-2.5-pro
temperature: 0.3
system_prompt: |
  You are an expert accountant.
  Extract every field from the invoice
  and verify the totals add up.
tools:
  - documents.parse
  - documents.extract_entities
  - database.store_invoice
Anatomy of an agent

One file. Every part of the agent.

Configuration, prompts, tools, schemas, and safety in a single declarative spec. Diffable in PR review, version-pinnable in CI.

agents/invoice-processor.yaml
version: "1.0"
name: invoice-processor
model: gemini/gemini-2.5-pro
description: "Extracts data from invoices and stores them"
temperature: 0.3

system_prompt: |
  Extract every field from the invoice and
  verify the totals add up.

tools:
  - invoices.parse_pdf
  - invoices.extract_fields
  - invoices.store

output_schema: invoice-data  # references schemas/invoice-data.json

guardrails:
  input:
    - type: pii
      mode: redact
model

Pick any provider. Swap by changing one line. Learn more
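For example, moving the invoice agent from Gemini to Anthropic is a one-line edit (model IDs taken from elsewhere on this page):

```yaml
# before
model: gemini/gemini-2.5-pro

# after: same agent, different provider
model: anthropic/claude-opus-4-7
```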

tools

Reference your own Python functions in tools/, or use the built-in predefined tools. Learn more
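As a sketch of the first option, a custom tool can be a plain Python function in tools/. Assuming Connic derives the tool's schema from the function signature and docstring (an assumption; check the SDK docs for the exact convention), it might look like:

```python
# tools/invoices.py -- illustrative sketch, not the documented API.
def parse_pdf(file_url: str) -> dict:
    """Download an invoice PDF and return its raw text and page count."""
    # A real implementation would fetch and parse the PDF; this stub
    # only shows the shape of a tool function the agent can call.
    return {"text": f"contents of {file_url}", "pages": 1}
```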

output_schema

Point to a JSON Schema file in schemas/ to force structured JSON output. Learn more
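The referenced schemas/invoice-data.json might look like the following (the field names here are illustrative, not prescribed by the SDK):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "invoice-data",
  "type": "object",
  "properties": {
    "invoice_number": { "type": "string" },
    "total": { "type": "number" },
    "line_items": {
      "type": "array",
      "items": { "type": "object" }
    }
  },
  "required": ["invoice_number", "total"]
}
```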

guardrails

Inline PII redaction, prompt-injection defense, custom checks. Learn more

Less code, less drift

What the framework writes for you

Same agent, two definitions. The YAML version runs with the same guarantees you'd hand-roll in Python, without the boilerplate that rots over time.

Connic, 11 lines
agents/support-triage.yaml
name: support-triage
model: anthropic/claude-opus-4-7
description: "Triage incoming customer requests"

system_prompt: |
  Triage the customer's request and
  route to the right team.

tools:
  - query_knowledge
  - tickets.create
  - tickets.notify_team
LangChain, 30+ lines, then deploy it yourself
agents/support_triage.py
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_anthropic import ChatAnthropic
from langchain.prompts import ChatPromptTemplate
from tools import query_knowledge, ticket_create, notify_team

llm = ChatAnthropic(model="claude-opus-4-7")

tools = [query_knowledge, ticket_create, notify_team]

prompt = ChatPromptTemplate.from_messages([
    ("system", """Triage the customer's request and
        route to the right team."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,
    handle_parsing_errors=True,
)

# ...plus deployment, retries, observability,
# secrets, env config, telemetry, you name it.
What's in the box

Production primitives, declarative

The SDK ships with the things you'd otherwise rebuild from scratch.

Variables

Per-environment env vars injected at runtime. Secrets are masked in the dashboard and in logs. See docs
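As a sketch, the declaration might look like this (the key names are assumptions, not the documented schema):

```yaml
variables:
  API_BASE_URL: https://api.staging.example.com
secrets:
  - DATABASE_PASSWORD    # masked in the dashboard and in logs
```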

Hooks

Python functions that wrap each tool call. Validate or rewrite params, redact results, or skip a tool with AbortTool. See docs
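A sketch of the idea, with AbortTool stubbed in locally so the example is self-contained (the real hook signature and import path may differ from what's shown):

```python
# hooks/validate_params.py -- illustrative sketch of a tool-call hook.
class AbortTool(Exception):
    """Stand-in for the SDK's AbortTool; raising it skips the tool call."""

def before_tool_call(tool_name: str, params: dict) -> dict:
    """Validate or rewrite params before the tool runs."""
    if tool_name == "tickets.create" and not params.get("customer_id"):
        # Skip the tool entirely rather than create an orphaned ticket.
        raise AbortTool("refusing to create a ticket without a customer_id")
    if "subject" in params:
        params["subject"] = params["subject"].strip()  # rewrite a param
    return params
```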

Middleware

Python before() and after() hooks around the whole run. Attach documents, enrich context, transform responses. See docs
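The shape of that is roughly the following, where the dict-based request/response and the enrichment key are assumptions for illustration:

```python
# middleware/enrich.py -- illustrative sketch; real signatures may differ.
def before(request: dict) -> dict:
    """Runs before the agent: attach extra context to the run."""
    # Hypothetical enrichment key; a real middleware might attach documents.
    request.setdefault("context", {})["customer_tier"] = "enterprise"
    return request

def after(response: dict) -> dict:
    """Runs after the agent: transform the response before returning it."""
    response["answer"] = response.get("answer", "").strip()
    return response
```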

Guardrails

PII redaction, prompt-injection detection, moderation, topic restriction, regex, and custom Python checks, all declared in YAML. See docs

Output schemas

Reference a JSON Schema file in schemas/ to force the LLM into a typed JSON shape. See docs

MCP

Connect any MCP server over HTTP/SSE. Filter to specific tools or mark the whole server as discoverable. See docs
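A sketch of what that declaration might look like in the agent YAML (the key names and URLs are assumptions, not the documented schema):

```yaml
mcp_servers:
  - url: https://mcp.example.com/sse      # hypothetical MCP endpoint
    tools:
      - search_docs                       # filter to specific tools
  - url: https://tools.example.com/mcp
    discoverable: true                    # or expose the whole server
```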

How Connic compares

Connic vs. building on LangChain, CrewAI, or your own framework

| Feature | Connic | LangChain | CrewAI | DIY |
| --- | --- | --- | --- | --- |
| Config format | YAML | Python code | Python code | Custom |
| Deployment included | Included | Not included | Not included | Not included |
| Built-in observability | Included | Not included | Partial | Not included |
| Knowledge base (RAG) | Included | Not included | Not included | Not included |
| Connectors (Kafka, S3, etc.) | Included | Not included | Not included | Not included |
| A/B testing | Included | Not included | Not included | Not included |
| Human-in-the-loop approvals | Included | Not included | Not included | Not included |
| Learning curve | Low | Medium | Medium | High |
| Migration tooling | Included | Not included | Not included | Not included |

Ready for the team, not just the prototype

What changes when more than one engineer touches the agent

Migrate from LangChain or ADK

connic migrate converts LangChain, LangGraph, and Google ADK projects to Connic format. Keep your prompts and tools, drop the framework boilerplate.

Review agents in pull requests

Agents live in your repo as YAML and Python. Map a Git branch to each environment so pushes auto-deploy to staging or production.

Pin model versions, audit diffs

Models are pinned by ID in the YAML (e.g. anthropic/claude-opus-4-7). Tools and middleware are Python files in your repo, so every change is a Git commit with a reviewable diff.

Frequently Asked Questions

Why define agents in YAML instead of code?

YAML keeps the agent's identity in one diffable file: model, prompts, tools, schemas, guardrails. Reviewers see exactly what changed in a PR without reading framework glue. You still write Python for tools and middleware; the YAML is just the spec.

Can I still write arbitrary Python?

Yes. Tools, middleware, and hooks are all Python. Anything you'd write in a framework, you can write here. The YAML just declares what to compose.

How are changes versioned and audited?

Model IDs are pinned explicitly in the YAML (e.g. anthropic/claude-opus-4-7, gemini/gemini-2.5-pro). Tools, middleware, hooks, and guardrails are Python files in the same repo, so every change ships as a Git commit with a reviewable diff and is rollback-able with git revert.

Which model providers are supported?

OpenAI, Azure OpenAI, Anthropic, Google Gemini, OpenRouter, AWS Bedrock, Google Vertex AI, and any OpenAI-compatible endpoint as a custom provider. Switch by changing the provider prefix in the model field.

How do I migrate an existing project?

Run connic migrate inside your existing project. It scans LangChain, LangGraph, and Google ADK code and generates a Connic project: agents as YAML, tools as Python in tools/. Review MIGRATION_REPORT.md, then run connic lint and clean up the parts that need manual attention.

Where does my code live?

Agents are defined in YAML files under agents/. The Python you write lives in tools/, middleware/, hooks/, and guardrails/. That's where business logic, model orchestration extensions, and custom checks go. The YAML is the spec; Python is the implementation.
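Putting those directories together, a minimal project layout might look like this (file names taken from examples on this page):

```
my-project/
├── agents/
│   └── invoice-processor.yaml
├── tools/
│   └── invoices.py
├── middleware/
├── hooks/
├── guardrails/
└── schemas/
    └── invoice-data.json
```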