Connic

Knowledge, sessions,
and a database, built in

Three storage primitives every agent needs: managed RAG over your documents, persistent conversation sessions, and a schemaless document database. Nothing for you to provision.

Read the storage docs

Knowledge Base

24 entries · 3 namespaces
Source · Entry ID · Namespace
  • invoice-template.pdf · inv_a1b2c3 · policies.finance
  • tax-rules-2026.md · tax_d4e5f6 · policies.finance
  • refund-faq.txt · faq_g7h8i9 · support.faq
  • product-catalog.png · cat_j0k1l2 · products
  • shipping-policy.md · shp_m3n4o5 · support.shipping
  • vendor-contract.pdf · vnd_p6q7r8 · policies.legal
Knowledge

Managed RAG with semantic search

Upload text, markdown, CSV, JSON, YAML, logs, PDFs, and images. Connic chunks, embeds, and namespaces them. Agents query with the predefined query_knowledge tool.

agents/support-agent.yaml
# Give the agent the predefined query_knowledge tool
name: support-agent
model: gemini/gemini-2.5-pro
system_prompt: |
  Answer using the knowledge base.
  Use query_knowledge before responding.
tools:
  - query_knowledge
Source · Chunks · Namespace
  • refund-faq.txt · 27 chunks · support.faq
  • shipping-policy.md · 12 chunks · support.shipping
  • tax-rules-2026.md · 42 chunks · policies.finance
  • vendor-contract.pdf · 31 chunks · policies.legal
  • product-shot.png · 8 chunks · products
Many formats

Text, markdown, CSV, JSON, YAML, logs, PDF, images.

Async ingestion

Files are queued, chunked, and embedded in the background. Track each job in the dashboard.

Scored results

Returns content, entry ID, namespace, and a relevance score. See docs
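For illustration, a tool might rank these scored results like this. The exact return shape is an assumption; only content, entry ID, namespace, and relevance score are documented:

```python
# Hypothetical query_knowledge results: the field names here are
# assumptions based on the documented fields (content, entry ID,
# namespace, relevance score).
results = [
    {"content": "Refunds are issued within 5 business days.",
     "entry_id": "faq_g7h8i9", "namespace": "support.faq", "score": 0.91},
    {"content": "Standard shipping takes 3 to 7 business days.",
     "entry_id": "shp_m3n4o5", "namespace": "support.shipping", "score": 0.62},
]

# Keep only confident matches and order them best-first.
confident = sorted(
    (r for r in results if r["score"] >= 0.7),
    key=lambda r: r["score"],
    reverse=True,
)
top_entry = confident[0]["entry_id"] if confident else None
```

A score threshold like this is a common way to keep low-relevance chunks out of the agent's context.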

Sessions

Multi-turn conversations that survive restarts

Add a session block to an LLM agent and Connic keeps conversation history across requests, keyed by anything you derive from middleware context or the inbound payload.

agents/support-bot.yaml
name: support-bot
type: llm
model: gemini/gemini-2.5-pro
system_prompt: |
  You are a helpful support agent.
  Use the conversation history for context.

# Persist conversation history per chat
session:
  key: context.chat_id
  ttl: 86400  # expire after 24h of inactivity

key is a dot-path that must start with context. (values set in before middleware) or input. (values read from the raw payload). The optional ttl is in seconds (minimum 60); without it, sessions never expire. Without a session block, every request starts fresh. See docs
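The key lookup described above can be sketched as a plain dot-path resolver. This is illustrative only, not Connic's implementation, and resolve_session_key is a hypothetical helper:

```python
# Sketch of resolving a session key dot-path (illustrative, not
# Connic's implementation). The path must start with "context."
# (middleware context) or "input." (raw payload).
def resolve_session_key(path: str, context: dict, payload: dict) -> str:
    root, _, rest = path.partition(".")
    if root == "context":
        source = context
    elif root == "input":
        source = payload
    else:
        raise ValueError("session key must start with 'context.' or 'input.'")
    value = source
    for part in rest.split("."):
        value = value[part]
    return str(value)

# The example config's key, context.chat_id, resolves like this:
key = resolve_session_key("context.chat_id", context={"chat_id": "chat_42"}, payload={})
```

Nested paths such as input.user.id would walk one dictionary level per dot-separated segment.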

user: I want a refund for order ORD-184.
assistant: Looked it up, refund issued.

Conversation history is kept across runs, with a configurable TTL.

user: When will it arrive?
# The agent already knows the order context from the session.
Database

Schemaless collections, no migrations

Every environment includes a managed document database. Collections are created the first time an agent inserts. Query with expressive filter operators.

tools/save_invoice.py
# No setup needed - the collection "invoices" is created
# automatically the first time db_insert runs.
result = await db_insert("invoices", {
    "vendor":       "Acme Corp",
    "total":        4920,
    "currency":     "EUR",
    "processed_at": "2026-04-12T10:30:00Z",
    "raw_event":    {"id": "evt_123", "type": "invoice.paid"},
})
# result["inserted"][0]["_id"] -> auto-generated UUID
tools/list_invoices.py
# Query with filter operators - no SQL, no migrations
result = await db_find(
    "invoices",
    filter={
        "vendor": "Acme Corp",
        "processed_at": {"$gt": "2026-04-01"},
    },
    sort={"processed_at": -1},
    limit=20,
)
documents = result["documents"]
Auto-created collections

No schema setup. The first db_insert creates the collection. Each document gets _id, _created_at, and _updated_at automatically.
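As a sketch of the system fields each stored document receives; Connic adds these server-side, so the stamp helper below is purely illustrative:

```python
import datetime
import uuid

# Illustrative only: Connic adds _id, _created_at, and _updated_at
# server-side when a document is inserted.
def stamp(doc: dict) -> dict:
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {"_id": str(uuid.uuid4()), "_created_at": now, "_updated_at": now, **doc}

saved = stamp({"vendor": "Acme Corp", "total": 4920})
```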

Expressive filters

$eq, $ne, $gt/$gte/$lt/$lte, $in/$nin, $and/$or/$not, $exists, $contains, $elemMatch, $regex. Sort, paginate, project, or list distinct values.
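As a mental model for how these operators match a document, here is a minimal sketch in Python. It is not Connic's query engine; only $eq, $gt, and $in are shown, and matches is a hypothetical helper:

```python
# Minimal sketch of filter-operator matching (illustrative, not
# Connic's implementation). Only $eq, $gt, and $in are handled.
def matches(doc: dict, query: dict) -> bool:
    for field, cond in query.items():
        value = doc.get(field)
        if not isinstance(cond, dict):  # bare value is shorthand for $eq
            cond = {"$eq": cond}
        for op, operand in cond.items():
            if op == "$eq" and value != operand:
                return False
            elif op == "$gt" and not (value is not None and value > operand):
                return False
            elif op == "$in" and value not in operand:
                return False
    return True

doc = {"vendor": "Acme Corp", "total": 4920, "currency": "EUR"}
matches(doc, {"vendor": "Acme Corp", "total": {"$gt": 1000}})  # True
```

A bare value is shorthand for $eq, which is why the db_find example above can mix plain equality with an operator filter in the same query.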

Six predefined tools

db_find, db_insert, db_update, db_delete, db_count, db_list_collections. Browse data and inferred schemas under Storage → Database in the dashboard. See docs

Storage controls built into every project

Environment-scoped isolation, scoped API keys, and a dashboard to inspect everything

Environment-scoped isolation

Knowledge entries, persistent sessions, and database collections are all scoped per environment. Production and staging in the same project keep their data separate by default.

Scoped API keys

REST API keys can be granted granular permissions, including knowledge read and write scopes. Use them to automate ingestion pipelines or sync content from external systems.

Dashboard management

Inspect every primitive from one place: the Knowledge tab tracks ingestion jobs and namespaces, Storage → Sessions lists and clears active sessions, and Storage → Database browses collections, documents, and inferred schemas.

Frequently Asked Questions

Which file formats are supported?

Plain text and markdown (.txt, .md, .markdown), CSV, JSON / JSONL, YAML, log files, PDFs, and images (.png, .jpg, .jpeg, .gif, .webp). Text formats are chunked and embedded; PDFs are extracted with page numbers preserved; images go through vision extraction before being embedded.

How are uploads indexed and queried?

Uploads are accepted immediately and indexed asynchronously as ingestion jobs you can monitor in the dashboard. Agents query the knowledge base with the predefined query_knowledge tool, which performs semantic search and returns matching content with a relevance score, entry ID, and namespace.

How do namespaces work?

Namespaces are dot-separated paths (e.g. policies.hr.leave) up to 10 levels deep. Querying a parent namespace also searches all sub-namespaces. Entry IDs are unique within a namespace, and agents can discover the hierarchy at runtime with the kb_list_namespaces tool.
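The parent-namespace rule can be sketched as a simple prefix check (illustrative, not Connic's implementation; in_scope is a hypothetical helper):

```python
# Sketch of the parent-namespace rule: querying a parent namespace
# also searches every sub-namespace (illustrative only).
def in_scope(entry_ns: str, query_ns: str) -> bool:
    return entry_ns == query_ns or entry_ns.startswith(query_ns + ".")

entries = ["policies.hr.leave", "policies.finance", "support.faq"]
hits = [ns for ns in entries if in_scope(ns, "policies")]
```

The "." in the prefix check matters: it keeps a query for policies from accidentally matching an unrelated namespace like policies2.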

How do sessions work?

Sessions let an LLM agent keep conversation history across requests. Enable them in the agent YAML by setting a session.key (resolved from middleware context.* or input.*) and an optional ttl in seconds (minimum 60). Without a session block, every request starts fresh. Active sessions are managed under Storage → Sessions in the dashboard.

When should I use the database instead of the knowledge base?

The database stores structured documents in named collections and is queried by exact field values using filter operators ($eq, $gt, $in, $and, etc.). The knowledge base stores text and is queried by meaning. Use the database for orders, users, and events; use knowledge for FAQs, docs, and notes.

Do I need to define a schema for the database?

No. Documents are free-form and collections are auto-created the first time db_insert runs. Each document automatically gets _id (a UUID), _created_at, and _updated_at system fields alongside whatever fields you write.

How is data isolated between environments?

The knowledge base, sessions, and database are all scoped per environment. Production and staging in the same project keep separate data.