Connic Composer SDK

Predefined Tools

Ready-to-use tools provided by Connic. Import these directly into your agents without writing any code.

Available Predefined Tools
| Tool Name | Description | Docs |
| --- | --- | --- |
| query_knowledge | Search the knowledge base for relevant information | Learn more |
| store_knowledge | Queue new information for knowledge base indexing | Learn more |
| delete_knowledge | Remove entries from the knowledge base | Learn more |
| kb_list_namespaces | List knowledge base namespaces and their hierarchy | Learn more |
| db_find | Query documents from a collection using filters | Learn more |
| db_insert | Insert one or more documents into a collection | Learn more |
| db_update | Update documents matching a filter | Learn more |
| db_delete | Delete documents matching a filter | Learn more |
| db_count | Count documents in a collection, optionally filtered | Learn more |
| db_list_collections | List all collections with document counts and storage sizes | Learn more |
| trigger_agent | Trigger another agent within the same project | Below |
| trigger_agent_at | Schedule an agent to be triggered at a specific time in the future | Below |
| web_search (+1 run/call) | Search the web for real-time information | Below |
| web_read_page (+1 run/call) | Fetch a web page and return its content as markdown | Below |

How Predefined Tools Work

Predefined tools are built-in capabilities that Connic provides out of the box. Unlike custom tools where you write Python functions, predefined tools are ready to use. Just add their name to your agent's tools list.

  • No Code Required: Just add the tool name to your YAML config. Implementation is handled by Connic.
  • Secure by Default: Tools run in isolated environments with proper authentication and access controls.
  • Environment Scoped: Data is isolated per environment. Dev and prod never mix.
agents/assistant.yaml
version: "1.0"

name: assistant
model: gemini/gemini-2.5-pro
description: "An assistant with knowledge and orchestration capabilities"
system_prompt: |
  You have access to a persistent knowledge base.
  Search it before answering questions.

tools:
  - query_knowledge   # Predefined tool - no code needed
  - store_knowledge
  - trigger_agent

Using in Custom Tools

You can also import and call predefined tools directly from your custom Python tools. This lets you build complex orchestration logic that combines knowledge queries with agent triggers.

from connic.tools import (
    trigger_agent, trigger_agent_at,
    query_knowledge, store_knowledge, delete_knowledge, kb_list_namespaces,
    web_search, web_read_page,
    db_find, db_insert, db_update, db_delete, db_count, db_list_collections,
)
tools/orchestration.py
from connic.tools import trigger_agent, query_knowledge

async def research_and_summarize(topic: str) -> dict:
    """Research a topic and return a summary.
    
    Args:
        topic: The topic to research
    
    Returns:
        A dictionary with the research summary
    """
    # First, check if we have relevant knowledge
    knowledge = await query_knowledge(
        query=f"Information about {topic}",
        max_results=5
    )
    
    # Build context from knowledge base
    context = "\n".join([r["content"] for r in knowledge.get("results", [])])
    
    # Trigger the researcher agent with context
    result = await trigger_agent(
        agent_name="researcher",
        payload={"topic": topic, "context": context}
    )
    
    return {
        "topic": topic,
        "summary": result["response"],
        "sources": len(knowledge.get("results", []))
    }

The custom tool above can be used in your agent YAML just like any other tool:

agents/agent.yaml
tools:
  - orchestration.research_and_summarize

trigger_agent

Orchestrate multiple agents from a single agent

How It Works

The trigger_agent tool lets one agent call another agent within the same project. Use it to build pipelines, delegate specialized tasks, or coordinate complex workflows across multiple agents.

agents/orchestrator.yaml
version: "1.0"

name: orchestrator
model: gemini/gemini-2.5-pro
description: "Coordinates other agents"
system_prompt: |
  You orchestrate tasks by delegating to specialized agents.

tools:
  - trigger_agent

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| agent_name | string | required | Name of the agent to trigger |
| payload | any | required | Data to send to the agent (dict, list, or string) |
| wait_for_response | bool | true | Wait for the agent to complete and return its response |
| timeout_seconds | int | 60 | Max wait time (only applies if wait_for_response=True) |

Return Value

Returns: run_id, status ("completed", "failed", or "timeout"), response (the agent's output), and error (if failed). If wait_for_response=False, only run_id is returned.
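The return shape above can be handled with a small sketch like the following. The `unwrap_run` helper and the `researcher` agent name are illustrative assumptions, not part of the SDK; the `trigger_agent` import is deferred into the function so the helper stands alone.

```python
def unwrap_run(result: dict) -> str:
    """Return the agent's response, raising if the run did not complete.

    Illustrative helper over the documented return shape; not part of the SDK.
    """
    if result["status"] != "completed":
        raise RuntimeError(
            result.get("error")
            or f"run {result['run_id']} ended with status {result['status']}"
        )
    return result["response"]

async def delegate(topic: str) -> str:
    # Deferred import so unwrap_run can be exercised without the SDK installed
    from connic.tools import trigger_agent

    result = await trigger_agent(
        agent_name="researcher",   # assumes an agent with this name exists
        payload={"topic": topic},
        wait_for_response=True,
        timeout_seconds=120,
    )
    return unwrap_run(result)
```

Raising on "failed" and "timeout" keeps error handling in one place instead of scattering status checks across every call site.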

trigger_agent_at

Schedule future agent triggers with delays or timestamps

How It Works

The trigger_agent_at tool schedules another agent to run at a specific time in the future. Use it for delayed follow-ups, scheduled reports, retry-after patterns, or any workflow that needs time-based orchestration. The run is created immediately with a scheduled status and dispatched automatically when the scheduled time arrives.

agents/scheduler.yaml
version: "1.0"

name: scheduler
model: gemini/gemini-2.5-pro
description: "Schedules tasks for future execution"
system_prompt: |
  You schedule tasks by triggering agents at specific times.

tools:
  - trigger_agent_at

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| agent_name | string | required | Name of the agent to trigger |
| payload | any | required | Data to send to the agent (dict, list, or string) |
| delay | dict | None | Relative time offset. Dict with keys: d (days), h (hours), m (minutes), s (seconds). At least one key required. Example: {"h": 2, "m": 30} |
| unix_timestamp | float | None | Absolute Unix timestamp (seconds since epoch) for when to trigger |

Exactly one of delay or unix_timestamp must be provided. Maximum scheduling window is 7 days into the future.
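For reference, the documented delay rules can be mirrored in a small validation helper. This is a sketch of the stated constraints only (valid keys, at least one key, 7-day maximum); `delay_to_seconds` is not part of the SDK.

```python
# Hypothetical helper mirroring the documented `delay` rules; not part of the SDK.
_UNIT_SECONDS = {"d": 86400, "h": 3600, "m": 60, "s": 1}
_MAX_WINDOW = 7 * 86400  # maximum scheduling window: 7 days

def delay_to_seconds(delay: dict) -> int:
    """Convert a {"d", "h", "m", "s"} delay dict to seconds, enforcing the limits."""
    if not delay:
        raise ValueError("delay must contain at least one of d/h/m/s")
    unknown = set(delay) - set(_UNIT_SECONDS)
    if unknown:
        raise ValueError(f"unknown delay keys: {unknown}")
    total = sum(_UNIT_SECONDS[k] * v for k, v in delay.items())
    if not 0 < total <= _MAX_WINDOW:
        raise ValueError("delay must be positive and at most 7 days")
    return total
```

For example, `{"h": 2, "m": 30}` resolves to 9000 seconds, and `{"d": 7}` sits exactly at the 7-day limit.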

Return Value

Returns: run_id, scheduled_at (ISO 8601 UTC timestamp), status ("scheduled"). The tool always returns immediately without waiting for the agent to execute.

Examples

Using a relative delay:

tools/scheduler.py
# Schedule a report in 2 hours and 30 minutes
result = await trigger_agent_at(
    agent_name="report-generator",
    payload={"report_type": "daily"},
    delay={"h": 2, "m": 30}
)
# Returns: {"run_id": "...", "scheduled_at": "2026-03-18T16:30:00+00:00", "status": "scheduled"}

Using an absolute timestamp:

tools/scheduler.py
# Schedule at a specific time (Unix timestamp)
import time
target_time = time.time() + 86400  # 24 hours from now

result = await trigger_agent_at(
    agent_name="cleanup-agent",
    payload={"scope": "all"},
    unix_timestamp=target_time
)

web_search

+1 run per call

Search the web for real-time information

How It Works

The web_search tool allows your agent to search the web for real-time information. It returns a list of relevant search results including titles, URLs, and content snippets. You can optionally geo-target results by country or include recent news articles.

Pricing: Each call to web_search adds 1 additional run to your billing. For example, a run with 2 web searches counts as 3 runs (1 base + 2 searches).

agents/researcher.yaml
version: "1.0"

name: researcher
model: openai/gpt-4o
description: "Research agent with web search"
system_prompt: |
  You are a research assistant with web search capabilities.
  Search the web to find current information and cite sources.

tools:
  - web_search

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| query | string | required | The search query |
| max_results | int | 5 | Number of results to return (max: 10) |
| country | string | None | ISO 3166-1 alpha-2 country code for geo-targeted results (e.g. "DE", "US", "FR") |
| include_news | bool | false | Also search recent news articles. News results are merged into the results list. |

Return Value

Returns a dictionary with:

  • results - List of search results, each containing:
    • title - Page title
    • url - Page URL
    • content - Snippet of page content

web_read_page

+1 run per call

Fetch a web page and return its content as markdown

How It Works

The web_read_page tool fetches a web page and returns its content as clean markdown. This is useful when your agent needs to read the full content of a page, for example after finding relevant URLs via web_search.

Pricing: Each call to web_read_page adds 1 additional run to your billing. For example, a run with 2 page fetches counts as 3 runs (1 base + 2 scrapes).

agents/reader.yaml
version: "1.0"

name: reader
model: openai/gpt-4o
description: "Agent that reads and summarizes web pages"
system_prompt: |
  You can fetch web pages and summarize their content.
  Use web_search to find pages, then web_read_page to read them.

tools:
  - web_search
  - web_read_page

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| url | string | required | The URL of the page to fetch |

Return Value

Returns a dictionary with:

  • markdown - The page content converted to markdown
  • url - The URL that was fetched

Database tools

Persistent schemaless database for your agents

How It Works

The database tools give your agents a built-in persistent database. Each environment has its own isolated database. Collections are created automatically on first insert.

The recommended approach is to wrap these in custom tools so your agent works in domain language (fetch_orders) rather than database primitives (db_find). See the database tools reference for the full pattern.

agents/order-processor.yaml
name: order-processor
model: gemini/gemini-2.5-flash
tools:
  - order_tools.save_order      # custom tool wrapping db_insert
  - order_tools.fetch_orders    # custom tool wrapping db_find
  - order_tools.update_status   # custom tool wrapping db_update
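A minimal sketch of the `order_tools` module referenced above might look like this. The collection name (`orders`), field names, and the exact keyword arguments of `db_insert`/`db_find`/`db_update` (`collection`, `documents`, `filter`, `update`) are assumptions for illustration; check the database tools docs for the actual signatures. Imports are deferred into the functions so the filter builder stands alone.

```python
# tools/order_tools.py (sketch): domain-language wrappers over the db_* primitives

def status_filter(status: str) -> dict:
    """Build the filter dict shared by fetch_orders queries."""
    return {"status": status}

async def save_order(order_id: str, customer: str, total: float) -> dict:
    from connic.tools import db_insert
    return await db_insert(
        collection="orders",  # created automatically on first insert
        documents=[{
            "order_id": order_id,
            "customer": customer,
            "total": total,
            "status": "new",
        }],
    )

async def fetch_orders(status: str) -> dict:
    from connic.tools import db_find
    return await db_find(collection="orders", filter=status_filter(status))

async def update_status(order_id: str, status: str) -> dict:
    from connic.tools import db_update
    return await db_update(
        collection="orders",
        filter={"order_id": order_id},
        update={"status": status},  # merged into matching documents
    )
```

Wrapping the primitives this way keeps the agent's tool list in domain language and confines collection and field names to one module.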

Available tools

| Tool | Description |
| --- | --- |
| db_find | Query documents using filters. Supports sort, pagination, projection, distinct. |
| db_insert | Insert one or more documents. Collection is created automatically on first insert. |
| db_update | Update all documents matching a filter. Merges the update dict into existing documents. |
| db_delete | Delete documents matching a filter. Requires a non-empty filter. |
| db_count | Count documents in a collection, optionally filtered. |
| db_list_collections | List all collections with document counts and storage sizes. |

Full parameter reference, return values, and filter operators: Database tools docs.