
Agent Configuration

A complete reference for configuring agents using YAML files. Learn about all available properties and best practices.

The type field determines how your agent executes. Connic supports three agent types:

  • LLM: For tasks requiring AI reasoning, conversation, or complex decision-making
  • Sequential: For multi-step workflows where agents need to work in sequence
  • Tool: For deterministic operations that don't need AI reasoning

LLM Agent

Standard AI agents powered by language models

LLM agents use AI models to process requests. They can reason about inputs, use tools, and generate natural language responses. This is the default type. Use when you need conversational AI, reasoning, or intelligent tool selection.

yaml
version: "1.0"

name: assistant
type: llm  # Default type, can be omitted
model: gemini/gemini-2.5-pro  # Provider prefix required
description: "A helpful general-purpose assistant"
system_prompt: "You are a helpful assistant. Be concise and accurate."

Required: model and system_prompt

Sequential Agent

Chain multiple agents together in a pipeline

Sequential agents execute a chain of other agents in order. Each agent in the chain receives the previous agent's output as its input. Use for multi-step workflows or data pipelines.

yaml
version: "1.0"

name: document-pipeline
type: sequential
description: "Processes documents through extraction and validation"

# Agents execute in order, each receiving the previous agent's output
agents:
  - assistant         # First: extracts key information
  - invoice-processor # Then: validates and processes the data

Required: agents list. Each agent in the list must be defined in its own YAML file.

Tool Agent

Execute tools directly without AI reasoning

Tool agents execute a single tool directly with the incoming payload. No AI model is involved, which makes them faster and more deterministic. Use for calculations, data transforms, or API calls.

yaml
version: "1.0"

name: tax-calculator
type: tool
description: "Calculates tax directly using the calculator tool"

# Executes this tool directly with the incoming payload
tool_name: calculator.calculate_tax

Required: tool_name. The payload is passed to the tool as keyword arguments.

Type Comparison

Feature    | LLM             | Sequential       | Tool
AI Model   | Yes             | Depends on chain | No
Uses Tools | Multiple tools  | Via sub-agents   | Single tool
Speed      | Model-dependent | Sum of chain     | Fastest
Cost       | Per-token       | Sum of chain     | No model cost

Configuration Fields

version (string, required)
  Configuration schema version. Currently only '1.0' is supported.

name (string, required)
  Unique identifier for the agent. Use lowercase letters, numbers, and hyphens.

type (string, optional; default: llm)
  Agent type: 'llm' (default), 'sequential', or 'tool'.

description (string, required)
  Human-readable description of what the agent does.

model (string, optional)
  The AI model to use. Required for LLM agents.

system_prompt (string, optional)
  Instructions for LLM agents. Use YAML's pipe (|) for multi-line prompts.

tools (list, optional; default: [])
  List of tools for LLM agents. Each entry is either a string (always available) or a mapping with a condition expression. See Conditional Tools. Max 100 per agent.

agents (string[], optional; default: [])
  List of agent names to execute in sequence. Required for sequential agents.

tool_name (string, optional)
  Tool to execute directly. Required for tool agents.

max_concurrent_runs (integer, optional; default: 1)
  Maximum simultaneous runs allowed. Capped by your subscription plan.

temperature (number, optional; default: 1)
  Controls randomness in LLM output. Lower = more deterministic.

reasoning (boolean, optional; default: true)
  Include the model's reasoning in run traces. When enabled, the model's internal thinking process is captured and displayed separately in run details. Supported by models with reasoning capabilities (e.g. Gemini 2.5, Claude).

reasoning_budget (integer, optional)
  Maximum number of tokens the model may use for reasoning. Use 0 to disable reasoning or -1 to let the model decide automatically. Only applies when reasoning is enabled.

retry_options (object, optional)
  Configuration for automatic retries. Enables two retry mechanisms: (1) agent-level retries with exponential backoff for transient failures (network, API limits), and (2) tool-level retries where the model reflects on errors and tries different approaches.

  retry_options.attempts (integer, optional; default: 3)
    Maximum retry attempts for both agent- and tool-level retries. Max 10.

  retry_options.max_delay (integer, optional; default: 30)
    Maximum seconds between agent-level retries (exponential backoff: 1s, 2s, 4s... capped at max_delay). Max 300s.

timeout (integer, optional)
  Maximum execution time in seconds. Minimum is 5 seconds. The actual timeout is always capped by your subscription's limit.

max_iterations (integer, optional; default: 100)
  Maximum number of agent loop iterations per run. Each iteration is one LLM call (e.g. agent output, tool call, next output). Prevents infinite loops and excessive resource consumption. LLM agents only.

output_schema (string, optional)
  Name of a JSON Schema file from the schemas/ directory. Forces structured JSON output. LLM agents only. See the Output Schema docs.

mcp_servers (object[], optional)
  List of MCP servers to connect to for external tools. Max 50 per agent. See the MCP Integration docs.

  mcp_servers[].name (string, required)
    Identifier for the MCP server.

  mcp_servers[].url (string, required)
    URL of the MCP server endpoint.

  mcp_servers[].tools (string[], optional)
    Filter to specific tools (omit to use all available).

  mcp_servers[].headers (object, optional)
    HTTP headers for authentication (supports ${VAR} syntax).

concurrency (object, optional)
  Key-based concurrency control. Ensures only one run per unique key value is active at a time. Not supported on sequential agents. See Concurrency Rules.

  concurrency.key (string, required)
    Dot-notation path to extract the concurrency key from the trigger payload (e.g. 'process_id', 'data.customer_id').

  concurrency.on_conflict (string, optional; default: queue)
    Behavior when a run with the same key is already active: 'queue' waits for the active run to finish, 'drop' cancels the new run immediately.
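
To make the nested mcp_servers entries above concrete, here is a minimal sketch; the server name, URL, tool name, and MCP_API_TOKEN variable are hypothetical placeholders:

yaml
# Hypothetical MCP server entry; name, URL, and token variable are placeholders
mcp_servers:
  - name: docs-search
    url: https://mcp.example.com/mcp
    tools:
      - search_docs              # filter to one tool; omit to use all available
    headers:
      Authorization: "Bearer ${MCP_API_TOKEN}"  # headers support ${VAR} syntax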

Required Fields by Type

All agents: version, name, description

LLM: + model, system_prompt · Sequential: + agents · Tool: + tool_name

Writing System Prompts

yaml
system_prompt: |
  This is a multi-line system prompt.
  
  You can write multiple paragraphs here.
  The pipe character (|) preserves newlines.
  
  Use this for complex instructions.

Best practices:

  • Be specific about the agent's role and responsibilities
  • Include examples in the prompt
  • Specify output formats for structured responses
  • Mention the tools available to the agent

Referencing Tools

yaml
# Reference tools by module.function_name
tools:
  - search.web_search       # tools/search.py -> web_search()
  - calculator.add          # tools/calculator.py -> add()
  - email.send_notification # tools/email.py -> send_notification()

Tools are Python functions in your tools/ directory. See the Write Tools guide.
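
For context, a tool is just a Python function in that directory. A minimal illustrative sketch (the function body is trivial by design; the SDK may impose additional signature conventions):

python
# tools/calculator.py (illustrative sketch)
def add(a: float, b: float) -> float:
    """Add two numbers. Referenced from agent YAML as calculator.add."""
    return a + b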

Conditional Tools

Tools can be made conditionally available per request. Instead of a plain string, use a mapping where the key is the tool reference and the value is a condition expression. If the condition evaluates to false, the tool is completely removed from the agent for that request.

yaml
# Conditional tools: only available when the expression evaluates to true.
# Uses Python expression syntax (and, or, not, ==, !=, >, <, >=, <=).
tools:
  - calculator.add                                              # always available
  - calculator.multiply: context.multiply_allowed               # available when middleware sets context.multiply_allowed
  - web_search: input.search_enabled                            # available when input JSON has search_enabled=true
  - admin.dangerous_tool: input.role == 'admin' or context.admin  # equality check with or
  - premium.tool: input.tier == 'pro' and context.feature_on    # and with equality
  - optional.tool: not context.disabled                         # negation

Data Sources

  • context.<key> - Values from the middleware context dict. Set in your before middleware. Supports nested paths like context.user.role.
  • input.<key> - Values from the connector's JSON payload. Only works when the incoming payload is valid JSON. Supports nested paths like input.metadata.tier. If the payload is not valid JSON, all input.* checks evaluate to false.

Operators

Conditions use Python expression syntax: and, or, not for logic, ==, !=, >, <, >=, <= for comparisons, and parentheses for grouping. String literals use single quotes.
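
Parentheses group exactly as in Python. For example (the tool reference and fields here are hypothetical):

yaml
# Hypothetical conditional tool using parentheses for grouping
tools:
  - reports.export: (input.tier == 'pro' or input.tier == 'team') and not context.disabled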

Setting Context in Middleware

python
# middleware/assistant.py
async def before(content: dict, context: dict) -> dict:
    # Set context values that tool conditions can check
    context["multiply_allowed"] = True
    context["admin"] = content["parts"][0]["text"].startswith("/admin")
    return content

Validation at Deploy Time

Condition expressions are validated when your agent is loaded, so invalid syntax fails the deployment rather than surfacing at runtime. A bare accessor like context.foo is a truthy check: it passes if the value is set and not empty/zero/false.

Concurrency Rules

Concurrency rules let you ensure only one run per unique key value is active at a time. This is useful when an agent processes events for specific entities (e.g., a support process, a customer, an order) and you need to prevent parallel processing of events belonging to the same entity.

Queue Mode (default)

When a second event arrives for the same key while a run is active, it waits in a queue until the first run finishes, then processes automatically.

yaml
version: "1.0"

name: support-processor
type: llm
model: gemini/gemini-2.5-pro
description: "Processes support tickets one at a time per process"
system_prompt: "You are a support agent. Process the incoming ticket."

# Only one run per process_id at a time. Additional runs wait in queue.
concurrency:
  key: "process_id"
  on_conflict: queue

Drop Mode

When a second event arrives for the same key while a run is active, the new run is cancelled immediately. Use this when only the latest state matters and processing stale events would be wasteful.

yaml
version: "1.0"

name: notification-handler
type: tool
description: "Handles notifications, skipping duplicates"
tool_name: notifications.handle

# Drop duplicate runs for the same customer while one is active.
concurrency:
  key: "data.customer_id"
  on_conflict: drop

Key Extraction

The key field uses dot-notation to extract a value from the trigger payload. For example, if your Kafka message payload is {"data": {"customer_id": "abc"}}, use data.customer_id as the key. If the key path is not found in the payload, concurrency enforcement is skipped for that run.
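
Conceptually, the extraction behaves like the following sketch (an illustration of the dot-notation semantics, not Connic's actual implementation):

python
# Illustrative sketch of dot-notation key extraction
def extract_key(payload: dict, path: str):
    """Walk a dot-notation path such as 'data.customer_id' through the payload."""
    value = payload
    for part in path.split("."):
        if not isinstance(value, dict) or part not in value:
            return None  # path not found: concurrency enforcement is skipped
        value = value[part]
    return value

# extract_key({"data": {"customer_id": "abc"}}, "data.customer_id") -> "abc"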

Interaction with max_concurrent_runs

Both constraints apply simultaneously. For example, with max_concurrent_runs: 5 and concurrency.key: "process_id", up to 5 different process IDs can run in parallel, but only 1 run per process ID at a time.
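
In YAML, that combination looks like this:

yaml
# Up to 5 runs in parallel overall, but at most 1 active run per process_id
max_concurrent_runs: 5
concurrency:
  key: "process_id"
  on_conflict: queue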

Limitations

Concurrency rules are not supported on sequential agents. The key is extracted from the raw trigger payload, so it applies to all trigger types (Kafka, HTTP, Cron, API, trigger_agent) uniformly.

LLM Agent Examples

Simple LLM Agent

A minimal LLM agent configuration:

yaml
version: "1.0"

name: assistant
type: llm  # Default type, can be omitted
model: gemini/gemini-2.5-pro  # Provider prefix required
description: "A helpful general-purpose assistant"
system_prompt: "You are a helpful assistant. Be concise and accurate."

Full-Featured LLM Agent

An invoice processing agent with multiple tools and detailed instructions:

yaml
version: "1.0"

name: invoice-processor
type: llm
model: openai/gpt-5.2  # Or: gemini/gemini-2.5-pro, ...
description: "Extracts data from invoices and validates totals"
system_prompt: |
  You are an expert accountant specializing in invoice processing.
  
  Your responsibilities:
  1. Extract all relevant fields from invoices (vendor, date, line items, totals)
  2. Use the calculator tool to verify mathematical accuracy
  3. Flag any discrepancies between line items and totals
  4. Format extracted data in a structured JSON format
  
  Always double-check calculations before confirming totals are correct.

max_concurrent_runs: 10
max_iterations: 50        # Stop after 50 iterations to limit cost
temperature: 0.7
reasoning: true           # Capture model reasoning in traces
reasoning_budget: 8192    # Max tokens for reasoning
retry_options:
  attempts: 5
  max_delay: 60
tools:
  - calculator.add
  - calculator.multiply
  - pdf.extract_text
  - validation.check_totals

Sequential Agent Example

Document Processing Pipeline

A sequential agent that chains multiple agents together:

yaml
version: "1.0"

name: document-pipeline
type: sequential
description: "Processes documents through extraction and validation"

# Agents execute in order, each receiving the previous agent's output
agents:
  - assistant         # First: extracts key information
  - invoice-processor # Then: validates and processes the data

Customer Inquiry Pipeline

A three-step pipeline for validation, database lookup, and response formatting:

yaml
# agents/customer-inquiry.yaml
version: "1.0"

name: customer-inquiry
type: sequential
description: "Validate input → fetch account → format response"

# Each step receives the previous agent's output
agents:
  - validate-inquiry
  - fetch-account
  - format-response

Define each step as its own agent:

yaml
# agents/validate-inquiry.yaml
version: "1.0"

name: validate-inquiry
type: tool
description: "Validate and normalize customer inquiry payload"
tool_name: validation.validate_inquiry

yaml
# agents/fetch-account.yaml
version: "1.0"

name: fetch-account
type: tool
description: "Lookup account details from Postgres"
tool_name: postgres.fetch_user_account

yaml
# agents/format-response.yaml
version: "1.0"

name: format-response
type: llm
model: gemini/gemini-2.5-pro
description: "Format a helpful customer response"
system_prompt: |
  You receive validated inquiry data plus account details.
  Respond concisely and include next steps when relevant.

Middleware linking: create middleware/validate-inquiry.py to attach hooks to agents/validate-inquiry.yaml. No YAML config required.
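
For instance, a minimal before hook for the validation step might look like this (a sketch reusing the hook signature shown earlier; the timestamp logic is illustrative):

python
# middleware/validate-inquiry.py (illustrative sketch)
import time

async def before(content: dict, context: dict) -> dict:
    # Stamp the context so later pipeline steps or tool conditions can read it
    context["received_at"] = time.time()
    return content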

Tool Agent Example

Direct Tool Execution

A tool agent that executes a calculator function directly:

yaml
version: "1.0"

name: tax-calculator
type: tool
description: "Calculates tax directly using the calculator tool"

# Executes this tool directly with the incoming payload
tool_name: calculator.calculate_tax

Model Format

Models must include a provider prefix: provider/model-name. Connic supports multiple LLM providers; configure your API keys in Project Settings, then use the provider prefix in your agent configuration.

Provider         | Prefix      | Configuration
OpenAI           | openai/     | API Key only
Azure OpenAI     | azure/      | API Key + Base URL + API Version
Anthropic        | anthropic/  | API Key only
Google Gemini    | gemini/     | API Key only
OpenRouter       | openrouter/ | API Key only
AWS Bedrock      | bedrock/    | Access Key ID + Secret Access Key + Region
Google Vertex AI | vertex_ai/  | GCP Project ID + Location + Service Account JSON

Example Usage

yaml
# Using OpenAI
model: openai/gpt-5.2

# Using Anthropic
model: anthropic/claude-sonnet-4-5-20250929

# Using Google Gemini
model: gemini/gemini-2.5-pro

# Using Azure OpenAI (use your deployment name)
model: azure/my-gpt5-deployment

# Using OpenRouter (provider/model format)
model: openrouter/anthropic/claude-sonnet-4.5

# Using AWS Bedrock
model: bedrock/us.anthropic.claude-sonnet-4-5-20250929-v1:0

# Using Google Vertex AI
model: vertex_ai/gemini-2.5-pro

Iterate Faster with Local Testing

Run connic test to exercise your agent configurations locally with hot-reload. Changes are reflected in 2-5 seconds without pushing to git.