Build AI Agents with YAML and Python
Define agents in YAML, write tools as Python functions, push to Git. No framework abstractions, no infrastructure to run. If you ship backend code today, you can ship a production agent today.
Read the SDK docs

```yaml
# Define the whole agent in one YAML file
name: invoice-processor
model: gemini/gemini-2.5-pro
temperature: 0.3
system_prompt: |
  You are an expert accountant.
  Extract every field from the invoice
  and verify the totals add up.
tools:
  - documents.parse
  - documents.extract_entities
  - database.store_invoice
```

One file. Every part of the agent.
Configuration, prompts, tools, schemas, and safety in a single declarative spec. Diffable in PR review, version-pinnable in CI.
version: "1.0"
name: invoice-processor
model: gemini/gemini-2.5-pro
description: "Extracts data from invoices and stores them"
temperature: 0.3
system_prompt: |
Extract every field from the invoice and
verify the totals add up.
tools:
- invoices.parse_pdf
- invoices.extract_fields
- invoices.store
output_schema: invoice-data # references schemas/invoice-data.json
guardrails:
input:
- type: pii
mode: redactPick any provider. Swap by changing one line. Learn more
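For example, moving the invoice processor above from Gemini to Claude is a one-line edit (both model IDs appear elsewhere on this page):

```yaml
model: anthropic/claude-opus-4-7  # was: gemini/gemini-2.5-pro
```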
Reference your own Python functions in tools/, or use the built-in predefined tools. Learn more
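A custom tool is an ordinary Python function. The exact registration details live in the SDK docs; here is a minimal sketch, assuming tools are plain functions in tools/ whose module and function names match the YAML entries (the file layout and PDF library are assumptions, not SDK requirements):

```python
# tools/invoices.py -- hypothetical layout; the YAML above references invoices.parse_pdf
from pypdf import PdfReader  # assumption: any PDF library would do here


def parse_pdf(file_path: str) -> dict:
    """Parse an invoice PDF and return its raw text and page count."""
    reader = PdfReader(file_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return {"text": text, "pages": len(reader.pages)}
```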
Point to a JSON Schema file in schemas/ to force structured JSON output. Learn more
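For the invoice agent above, schemas/invoice-data.json might look like this (field names are illustrative, not taken from the SDK):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "invoice_number": { "type": "string" },
    "total": { "type": "number" },
    "line_items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "description": { "type": "string" },
          "amount": { "type": "number" }
        },
        "required": ["description", "amount"]
      }
    }
  },
  "required": ["invoice_number", "total"]
}
```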
Inline PII redaction, prompt-injection defense, custom checks. Learn more
What the framework writes for you
Same agent, two definitions. The YAML version runs with the same guarantees you'd hand-roll in Python, without the boilerplate that rots over time.
```yaml
name: support-triage
model: anthropic/claude-opus-4-7
description: "Triage incoming customer requests"
system_prompt: |
  Triage the customer's request and
  route to the right team.
tools:
  - query_knowledge
  - tickets.create
  - tickets.notify_team
```

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_anthropic import ChatAnthropic
from langchain.prompts import ChatPromptTemplate

from tools import query_knowledge, ticket_create, notify_team

llm = ChatAnthropic(model="claude-opus-4-7")
tools = [query_knowledge, ticket_create, notify_team]

prompt = ChatPromptTemplate.from_messages([
    ("system", """Triage the customer's request and
route to the right team."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,
    handle_parsing_errors=True,
)

# ...plus deployment, retries, observability,
# secrets, env config, telemetry, you name it.
```

Production primitives, declarative
The SDK ships with the things you'd otherwise rebuild from scratch.
Per-environment env vars injected at runtime. Secrets are masked in the dashboard and in logs. See docs
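Assuming injected values surface as ordinary process environment variables, tool code reads them the standard way. A sketch, with the secret name and database client chosen for illustration:

```python
import json
import os

import psycopg  # assumption: any DB client works; the point is os.environ


def store(record: dict) -> str:
    """Persist an invoice record using the per-environment database secret."""
    # DATABASE_URL is a hypothetical secret name; per the docs, its value is
    # injected at runtime and masked in the dashboard and in logs.
    with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
        row = conn.execute(
            "INSERT INTO invoices (data) VALUES (%s::jsonb) RETURNING id",
            (json.dumps(record),),
        ).fetchone()
        return str(row[0])
```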
Python functions that wrap each tool call. Validate or rewrite params, redact results, or skip a tool with AbortTool. See docs
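The real wrapper signature is in the docs; this is a sketch of the idea only, with the function name, argument shapes, and import path invented for illustration (AbortTool is the SDK's escape hatch named above):

```python
from connic import AbortTool  # assumption: illustrative import path


def before_tool_call(tool_name: str, params: dict) -> dict:
    """Hypothetical middleware shape: validate and rewrite params before the tool runs."""
    if tool_name == "invoices.store" and params.get("total", 0) < 0:
        # Skip the tool entirely rather than store a negative total
        raise AbortTool("refusing to store a negative invoice total")
    # Rewrite params: normalize currency codes before the tool sees them
    if "currency" in params:
        params["currency"] = params["currency"].upper()
    return params
```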
Python before() and after() hooks around the whole run. Attach documents, enrich context, transform responses. See docs
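A sketch of the idea: the before()/after() names come from the docs above, but the argument and attribute shapes here are assumptions.

```python
def before(request):
    # Enrich context before the run: attach the caller's plan tier
    plans = {"u_123": "enterprise"}  # stand-in for a real lookup
    request.context["plan"] = plans.get(request.user_id, "free")
    return request


def after(response):
    # Transform the response after the run: append a support signature
    response.text += "\n\n- Acme Support"
    return response
```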
PII redaction, prompt-injection detection, moderation, topic restriction, regex, and custom Python checks, all declared in YAML. See docs
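A fuller guardrail block might combine several of those checks. Only the pii entry below is taken from the example above; the other type names and keys are guesses at the shape, not the documented syntax:

```yaml
guardrails:
  input:
    - type: pii
      mode: redact
    - type: prompt_injection   # hypothetical key spelling
    - type: regex              # hypothetical: block raw card numbers outright
      pattern: "\\b\\d{16}\\b"
  output:
    - type: moderation         # hypothetical placement on the output side
```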
Reference a JSON Schema file in schemas/ to force the LLM into a typed JSON shape. See docs
Connect any MCP server over HTTP/SSE. Filter to specific tools or mark the whole server as discoverable. See docs
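A hypothetical sketch of what that declaration could look like; the key names and URLs below are invented, so check the docs for the real syntax:

```yaml
mcp_servers:
  - name: github
    url: https://mcp.example.com/github   # any HTTP/SSE MCP endpoint
    tools: [search_issues, create_issue]  # filter to specific tools...
  - name: internal-kb
    url: https://mcp.example.com/kb
    discoverable: true                    # ...or expose the whole server
```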
How Connic compares
Composer SDK vs. building on LangChain, CrewAI, or your own framework
| Feature | Connic | LangChain | CrewAI | DIY |
|---|---|---|---|---|
| Config format | YAML | Python code | Python code | Custom |
| Deployment | Included | Not included | Not included | Not included |
| Built-in observability | Included | Not included | Partial | Not included |
| Knowledge base (RAG) | Included | Not included | Not included | Not included |
| Connectors (Kafka, S3, etc.) | Included | Not included | Not included | Not included |
| A/B testing | Included | Not included | Not included | Not included |
| Human-in-the-loop approvals | Included | Not included | Not included | Not included |
| Learning curve | Low | Medium | Medium | High |
| Migration tooling | Included | Not included | Not included | Not included |
Ready for the team, not just the prototype
What changes when more than one engineer touches the agent
connic migrate converts LangChain, LangGraph, and Google ADK projects to Connic format. Keep your prompts and tools, drop the framework boilerplate.
Agents live in your repo as YAML and Python. Map a Git branch to each environment so pushes auto-deploy to staging or production.
Models are pinned by ID in the YAML (e.g. anthropic/claude-opus-4-7). Tools and middleware are Python files in your repo, so every change is a Git commit with a reviewable diff.