Bridge (Private Networks)
Securely reach private services from anywhere in Connic (connectors, LLM providers, and your own tools and middlewares) without opening inbound firewall rules.
What is the Connic Bridge?
The Connic Bridge is a lightweight agent that runs inside your private network and creates a secure outbound tunnel to Connic Cloud. Anything in your project (connectors, custom LLM providers, and your own tools and middlewares) can then reach services that are not publicly accessible, as if they were running next to them.
Because the bridge only makes outbound connections, you do not need to open any inbound firewall rules or expose your services to the internet.
When do you need it?
You need the bridge if your target service is:
- Inside a private AWS VPC, GCP VPC, or Azure VNet
- Running on-premises behind a corporate firewall
- Accessible only via private DNS or internal IPs
- Behind an IP allowlist that cannot include Connic's IPs
If your services are publicly reachable (e.g. managed Kafka on Confluent Cloud, AWS SQS via public endpoint), you do not need the bridge.
Architecture
- Your network: your private services (Kafka, PostgreSQL, SQS, ...) and the Connic Bridge, which runs in your VPC.
- Connic Cloud: the Bridge Relay at relay.connic.co, which connectors, LLM providers, tools & middlewares, and MCP servers all use to reach your private services.
- The bridge connects outbound from your network to the relay over TLS.
1. You deploy the Connic Bridge as a Docker container inside your network.
2. The bridge makes an outbound WebSocket connection to the Connic relay (no inbound ports needed).
3. When a connector needs to reach a private service, Connic routes the connection through the relay and bridge.
4. The bridge validates the target against its configured allowed hosts (ALLOWED_HOSTS env var), opens a local TCP connection, and proxies the traffic.
5. All traffic between the bridge and relay is encrypted via TLS (WSS).
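The allowlist check in step 4 can be sketched as follows. This is a hypothetical illustration of the described behavior, not the bridge's actual code; the function names are made up:

```python
# Hypothetical sketch of the ALLOWED_HOSTS check (step 4 above).
# parse_allowed_hosts and is_allowed are illustrative names only.

def parse_allowed_hosts(raw: str) -> set[tuple[str, int]]:
    """Parse an ALLOWED_HOSTS-style string like 'kafka:9092,postgres:5432'."""
    pairs = set()
    for entry in raw.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.add((host, int(port)))
    return pairs

def is_allowed(allowed: set[tuple[str, int]], host: str, port: int) -> bool:
    # Only exact host:port pairs from the configured list are accepted;
    # anything else is rejected before any TCP connection is opened.
    return (host, port) in allowed

allowed = parse_allowed_hosts("kafka:9092,postgres:5432")
print(is_allowed(allowed, "kafka", 9092))   # True
print(is_allowed(allowed, "redis", 6379))   # False
```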
Setup Instructions
Create a Bridge
Go to Project Settings > Bridge and click Add Bridge. Give it a name (e.g. "Production VPC") and copy the token that is displayed. It will only be shown once. You can create as many bridges as you need, each with its own token, to reach different networks or environments.
Run the Connic Bridge
Deploy the bridge inside your private network. It needs to reach both your private services and the internet.
Docker (recommended):

```bash
docker run -d --name connic-bridge \
  -e BRIDGE_TOKEN=cbr_your_token_here \
  -e ALLOWED_HOSTS=kafka:9092,postgres:5432 \
  connicorg/bridge:latest
```

pip:

```bash
pip install connic-bridge

connic-bridge \
  --token cbr_your_token_here \
  --allow kafka:9092 \
  --allow postgres:5432
```

Docker Compose:

```yaml
services:
  connic-bridge:
    image: connicorg/bridge:latest
    restart: always
    environment:
      BRIDGE_TOKEN: cbr_your_token_here
      ALLOWED_HOSTS: kafka:9092,postgres:5432,my-db:5432
      LOG_LEVEL: INFO
```

Once the bridge is online, four different parts of Connic can route through it. Each is configured independently; pick the ones you need.
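Because the bridge only needs outbound access, you can sanity-check connectivity from the deployment host before starting the container. A minimal sketch, assuming the relay hostname from the configuration reference; it only tests plain TCP + TLS reachability on port 443, not the bridge protocol itself:

```python
# Hedged sketch: verify this host can open an outbound TLS connection
# (e.g. to relay.connic.co) before deploying the bridge next to it.
import socket
import ssl

def outbound_tls_ok(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            # A completed handshake is enough to prove outbound TLS works.
            with ctx.wrap_socket(raw, server_hostname=host):
                return True
    except OSError:
        return False
```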
Where bridges are used
Connectors
When creating or editing a connector, choose which bridge to route through in the Bridge dropdown of the Network Access section. Leave it set to None to connect directly without a bridge.
The following connector types support bridge access:
- Apache Kafka (inbound and outbound)
- AWS SQS (inbound and outbound)
- PostgreSQL (inbound via LISTEN/NOTIFY)
- Email / IMAP / SMTP (inbound and outbound)
- AWS S3 (file downloads)
- HTTP Webhook (outbound callbacks)
LLM Providers
For internal LLM endpoints (vLLM, Ollama, a LiteLLM proxy, or any OpenAI-compatible server) that live inside your private network, open Project Settings > LLM Provider, expand the custom provider, and pick a bridge in the Route via Bridge dropdown. Every LLM request from any agent that uses this provider will be tunnelled through the bridge.
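If you would rather call an internal OpenAI-compatible endpoint directly from a custom tool instead of going through the provider configuration, the magic hostname pattern from the Tools & Middlewares section works there too. A stdlib-only sketch; the host `vllm`, the port, the model name, and `BRIDGE_ID` are all placeholders for your own values:

```python
# Hypothetical: calling an internal OpenAI-compatible server (e.g. vLLM)
# from a custom tool via the bridge hostname pattern.
import json
import urllib.request

BRIDGE_ID = "abc123"  # copy from Project Settings > Bridge
base_url = f"http://vllm.cnc-bridge-{BRIDGE_ID}:8000/v1"

def chat(prompt: str) -> str:
    payload = {
        "model": "my-model",
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```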
Tools & Middlewares
Code you write in your project (custom tools, middlewares, tool hooks, and custom guardrails) can reach private services through any of your bridges by addressing them with a magic hostname:
```
<target>.cnc-bridge-<bridge_id>
```
where target is the hostname of the service inside your private network (e.g. postgres-primary, kafka, billing) and bridge_id is copied from Project Settings > Bridge. Each bridge card has a copyable "Custom-tool host" field.
The agent runtime intercepts hostname resolution for that pattern and tunnels the connection through the named bridge. Any standard Python client library works without code changes: psycopg, aiokafka, httpx, requests, redis-py, and so on:
```python
# tools/lookup_order.py
import httpx
import psycopg
from aiokafka import AIOKafkaProducer

BRIDGE_ID = "abc123"  # copy from Project Settings > Bridge

async def lookup_order(order_id: str) -> dict:
    # Postgres in a private VPC
    with psycopg.connect(
        host=f"postgres-primary.cnc-bridge-{BRIDGE_ID}",
        port=5432, dbname="orders", user="reader", password="...",
    ) as conn:
        row = conn.execute(
            "SELECT data FROM orders WHERE id = %s", (order_id,)
        ).fetchone()

    # Internal Kafka topic
    producer = AIOKafkaProducer(
        bootstrap_servers=f"kafka.cnc-bridge-{BRIDGE_ID}:9092"
    )
    await producer.start()
    await producer.send("order-lookups", order_id.encode())
    await producer.stop()

    # Internal HTTP service
    r = httpx.get(f"http://billing.cnc-bridge-{BRIDGE_ID}/v1/orders/{order_id}")
    return {"row": row, "billing": r.json()}
```

You can also import a small helper if you prefer explicit code over string concatenation:

```python
from connic import bridge_host

host = bridge_host("abc123", "postgres-primary")
# -> "postgres-primary.cnc-bridge-abc123"
```

Notes

- The target host:port must be in the bridge agent's ALLOWED_HOSTS, the same list that gates connector access.
- Tunnels are TCP-level. TLS handshakes pass through to the real target, so for HTTPS or TLS databases you must override SNI / server_hostname to the real target name (the magic hostname won't match the certificate).
- Libraries that bypass socket.getaddrinfo (e.g. those built on aiodns) are not intercepted.
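The SNI override can be sketched with the standard library alone. This is a minimal illustration, assuming placeholder hostnames and a placeholder bridge id; real client libraries (psycopg, redis-py, etc.) expose their own TLS options for the same purpose:

```python
# Hypothetical sketch: make TLS verification target the real service
# name instead of the magic bridge hostname. All names are placeholders.
import socket
import ssl

BRIDGE_ID = "abc123"
bridge_hostname = f"postgres-primary.cnc-bridge-{BRIDGE_ID}"  # where we dial
real_hostname = "postgres-primary.internal"                   # name on the cert

ctx = ssl.create_default_context()

def connect_tls(port: int = 5432, timeout: float = 5.0) -> ssl.SSLSocket:
    raw = socket.create_connection((bridge_hostname, port), timeout=timeout)
    # server_hostname sets both the SNI value sent in the handshake and
    # the name checked against the server's certificate.
    return ctx.wrap_socket(raw, server_hostname=real_hostname)
```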
MCP Servers
Private MCP servers that run inside your VPC, on-prem, or behind a corporate firewall can be reached by setting the bridge field on the server entry in your agent YAML. The agent runtime tunnels the MCP HTTP/SSE connection through the bridge.
```yaml
mcp_servers:
  - name: internal-mcp
    url: http://mcp.internal:8080/mcp
    bridge: ${INTERNAL_BRIDGE_ID}
```

The MCP server's host:port must be in the bridge agent's ALLOWED_HOSTS. See Private MCP Servers via Bridge for details.
Configuration Reference
| Variable | Required | Description |
|---|---|---|
| BRIDGE_TOKEN | Yes | Bridge authentication token from the Connic dashboard |
| ALLOWED_HOSTS | Yes | Comma-separated host:port pairs the bridge may connect to |
| RELAY_URL | No | Relay URL (default: wss://relay.connic.co) |
| LOG_LEVEL | No | DEBUG, INFO, WARNING, or ERROR (default: INFO) |
Security
- Outbound-only - the bridge never accepts inbound connections. No ports need to be opened.
- Allowed hosts - you control exactly which services the bridge can reach. Connections to unlisted hosts are rejected.
- Token authentication - each bridge has its own token tied to a single Connic project. Tokens can be rotated at any time, and you can run multiple bridges in different networks for the same project.
- TLS encryption - all communication between the bridge and relay uses WSS (WebSocket over TLS).
Troubleshooting
Bridge shows "Disconnected" in dashboard
Check that the bridge container is running (docker ps) and has outbound internet access. Verify the token is correct and has not been regenerated.
Connector fails with "Bridge not connected"
The connector references a bridge that is not currently connected. Start the matching Connic Bridge agent in your network, or change the connector's Bridge dropdown to a different bridge or to None.
"Host not in allowed hosts list"
The bridge rejected the connection because the target host:port is not in the allowed hosts list. Add it to the ALLOWED_HOSTS environment variable (or --allow flag) of the bridge container and restart it.
Connection timeout to target
The bridge can reach the relay but cannot connect to the target service. Verify that the bridge container can reach the target host:port from within its network (e.g. via docker exec connic-bridge nc -zv kafka 9092).
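The nc check can also be expressed in a few lines of Python, which is handy when nc is not installed in the container. A hedged sketch; run it from the same network as the bridge:

```python
# TCP reachability probe, roughly equivalent to `nc -zv host port`.
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```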
- Connectors Overview - pick a bridge on any connector that supports private network access
- Write Custom Tools - use the bridge hostname pattern from any client library in your tools
- Middleware - reach private services from before/after middleware the same way
- Deployment - run the bridge agent next to your production workloads