
Migrate from LangChain to Production AI Agents

Your LangChain prototype works. Now you need it to handle real traffic. Learn how to migrate existing agent code to a production-grade platform without rewriting from scratch.

March 23, 2026 · 11 min read

Your LangChain prototype works. It impressed the stakeholders, the demo went great, and now someone has asked the question: "When can we ship this to real users?"

That's when things get complicated. LangChain is excellent for experimentation, but moving a LangChain project to production means solving problems the framework was never designed for: isolated execution, concurrency control, deployment pipelines, observability, cost tracking, and retries. Most teams spend months building this infrastructure before their first user ever touches the agent.

This guide walks through how to migrate an existing LangChain or ADK project to Connic, a managed platform that handles the production concerns for you. The key insight: your agent logic does not need to change. Your tools stay the same. What changes is the structure around them.

Why Teams Outgrow Frameworks

LangChain, CrewAI, and Google ADK are agent frameworks. They give you abstractions for defining agents, tools, and chains. What they do not give you is a production runtime.

Here is what teams typically discover when they try to ship a framework-based agent:

No Deployment Story

You have a Python script. Now what? You need Docker, Kubernetes, load balancers, health checks, and rolling deployments. That is months of DevOps work before your first agent run.

No Built-In Observability

When an agent fails at 2am, you need traces, logs, and token-level cost breakdowns. Frameworks give you print statements. Production needs full execution traces with every LLM call, tool invocation, and decision recorded.

No Integration Infrastructure

Your agent needs to trigger from webhooks, process emails, listen to message queues, and write results back to APIs. Each integration is a mini-project of its own.

No Safety Controls

Prompt injection, PII leakage, cost runaway, infinite loops. Production agents need guardrails, iteration limits, and concurrency controls. Frameworks leave this entirely up to you.

The migration is not about abandoning your agent logic. It is about moving it into an environment that handles the 80% of production concerns you should not be building from scratch.

What Changes (and What Does Not)

The most important thing to understand: your tools and business logic stay the same. A Python function that queries your database is still a Python function that queries your database. What changes is how agents are defined and how the project is structured.

LangChain / ADK                   Connic                            Change Required
Agent defined in Python code      Agent defined in YAML             Structure change
Tools as decorated functions      Tools as plain Python functions   Remove decorators
Model in constructor args         Model in YAML config              Move to config
System prompt in Python string    System prompt in YAML             Move to config
Tool business logic               Same tool business logic          No change
Tracing via LangSmith             Built-in traces                   Remove integration
Custom deployment scripts         Git push or CLI deploy            Remove scripts

Before and After

Here is a concrete example. A LangChain support agent with two tools:

Before: LangChain
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent
from langchain.tools import tool

@tool
def search_docs(query: str) -> str:
    """Search the documentation for relevant articles."""
    # Your search logic here
    return results

@tool
def create_ticket(summary: str, priority: str) -> str:
    """Create a support ticket in the system."""
    # Your ticket creation logic here
    return ticket_id

llm = ChatOpenAI(model="gpt-4o")
agent = create_react_agent(
    llm=llm,
    tools=[search_docs, create_ticket],
    prompt="You are a customer support agent..."
)

After migration, this becomes two files: a YAML config and a tools module.

After: agents/support.yaml
version: "1.0"
name: support-agent
description: "Handles customer support queries"
model: openai/gpt-4o
system_prompt: |
  You are a customer support agent. Search the docs first,
  then create a ticket if the issue cannot be resolved.
tools:
  - support.search_docs
  - support.create_ticket
retry_options:
  attempts: 3
  max_delay: 30

After: tools/support.py
def search_docs(query: str) -> str:
    """Search the documentation for relevant articles."""
    # Same logic as before - no changes needed
    return results

def create_ticket(summary: str, priority: str) -> str:
    """Create a support ticket in the system."""
    # Same logic as before - no changes needed
    return ticket_id

Notice what happened: the tool functions are identical. The decorators are gone, the model and prompt moved to YAML, and the agent definition is now declarative configuration instead of imperative code.
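A practical consequence of dropping the decorators: your tools become ordinary functions you can unit-test with no framework harness. A minimal sketch, with a stand-in search implementation (the DOCS lookup below is illustrative, not part of the migration output):

```python
# tools/support.py - stand-in search logic, for illustration only
DOCS = {
    "reset password": "See: Account > Security > Reset Password",
    "billing": "See: Billing FAQ",
}

def search_docs(query: str) -> str:
    """Search the documentation for relevant articles."""
    matches = [text for title, text in DOCS.items() if title in query.lower()]
    return "\n".join(matches) or "No matching articles found."

# A plain pytest-style test: call the function directly, no agent runtime needed
def test_search_docs():
    assert "Reset Password" in search_docs("How do I reset password?")
    assert search_docs("something unrelated") == "No matching articles found."
```

With a framework decorator in place, the same function is wrapped in a tool object and has to be invoked through the framework; as plain Python, it is just a function call.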

What You Gain Immediately

That YAML config now gives you retry handling, deployment pipelines, execution traces, token tracking, and cost monitoring without writing a single line of infrastructure code. The agent deploys with a git push or connic deploy.

Automated Migration with the CLI

For projects with many agents and tools, the Connic CLI includes a migrate command that automates the structural conversion. It scans your Python code, extracts agents and tools, and generates a Connic project with the correct structure.

Terminal
$ pip install connic

$ connic migrate --source ./my-langchain-project --dest ./my-connic-project

  Scanning source project...
  Framework: langchain
  Agents found: 3
  Tools found: 8

  Generated Connic project in ./my-connic-project
  Running validation...

  Migration complete
    Project: ./my-connic-project
    Report:  ./my-connic-project/MIGRATION_REPORT.md

The CLI does the heavy lifting:

Agent Extraction

Finds agent definitions in your code, extracts system prompts, model names, and tool references, and generates YAML configuration files.

Tool Preservation

Extracts tool functions with their dependencies. Removes framework decorators. Resolves cross-file imports so your tools work standalone.

Model Normalization

Converts model references to the standard provider/model format. ChatOpenAI(model="gpt-4o") becomes openai/gpt-4o.
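Conceptually, the normalization is a lookup from framework constructor to provider prefix. A simplified sketch of the idea; the mapping table here is illustrative, not the CLI's actual implementation:

```python
# Illustrative constructor-to-provider mapping, not the CLI's real table
PROVIDER_PREFIXES = {
    "ChatOpenAI": "openai",
    "ChatAnthropic": "anthropic",
    "ChatVertexAI": "google",
}

def normalize_model(constructor: str, model: str) -> str:
    """Convert a framework model reference to provider/model form."""
    provider = PROVIDER_PREFIXES.get(constructor)
    if provider is None:
        raise ValueError(f"Unknown model constructor: {constructor}")
    return f"{provider}/{model}"
```

So normalize_model("ChatOpenAI", "gpt-4o") yields "openai/gpt-4o", the form used in the YAML config above.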

Migration Report

Generates a detailed report listing everything that was migrated and everything that needs manual review. No guesswork about what is left to do.

What Migrates Automatically vs. Manually

Not everything can be migrated automatically. Here is a realistic breakdown:

Automatic

  • Agent definitions (create_agent, create_react_agent, LlmAgent, SequentialAgent)
  • Tool functions (decorators stripped, logic preserved)
  • System prompts extracted from function arguments
  • Model name detection and normalization
  • Cross-file imports and tool dependencies
  • requirements.txt generation from source dependencies
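To make the decorator stripping concrete, here is a toy version of the transformation using Python's standard-library ast module. This is an illustration of the idea only, not the CLI's implementation (which also resolves imports and handles attribute-style decorators):

```python
# Toy sketch of framework-decorator stripping via the stdlib ast module
import ast

FRAMEWORK_DECORATORS = {"tool"}  # illustrative list of decorators to drop

def strip_tool_decorators(source: str) -> str:
    """Remove @tool decorators from function definitions, leaving bodies intact."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            node.decorator_list = [
                d for d in node.decorator_list
                if not (isinstance(d, ast.Name) and d.id in FRAMEWORK_DECORATORS)
            ]
    return ast.unparse(tree)
```

Run on the search_docs example above, the @tool line disappears and the function signature, docstring, and body come through unchanged.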

Manual Review Required

  • Complex orchestration workflows — LangGraph state graphs, parallel execution, and conditional routing need to be restructured as sequential agents or custom tool logic
  • RAG pipelines — Retrieval chains should be converted to use the built-in knowledge base or reimplemented as tools
  • State and memory — Checkpointers and custom stores should be replaced with persistent sessions or the managed database
  • Callbacks and hooks — Framework callbacks should be converted to middleware
  • Tracing integrations — Remove LangSmith or custom tracing code (replaced by built-in observability)
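As an example of the callbacks-to-middleware conversion: a LangChain on_tool_start callback typically becomes a before-hook. The hook signature below is hypothetical, shown only to illustrate the shape of the change; check the middleware documentation for the actual contract:

```python
# middleware/log_tools.py - hypothetical hook shape, for illustration only
import logging

logger = logging.getLogger("agent")

def before_tool(tool_name: str, arguments: dict) -> dict:
    """Log every tool invocation before it runs (replaces an on_tool_start callback)."""
    logger.info("tool=%s args=%s", tool_name, arguments)
    return arguments  # pass arguments through unchanged
```

The logic inside the callback survives; only the registration mechanism changes from a framework callback handler to a file in middleware/.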

The Migration Report Is Your Roadmap

The CLI generates a MIGRATION_REPORT.md that lists every agent migrated, every tool extracted, and every item that needs manual attention. Treat it as a checklist. Work through it item by item, run connic lint after each change, and you will know exactly when you are done.

The Connic Project Structure

After migration, your project follows a clean, opinionated structure. This is the same structure whether you migrate or start fresh:

Project Structure
my-project/
├── agents/              # YAML agent configurations
│   ├── support.yaml
│   └── classifier.yaml
├── tools/               # Python tool functions
│   ├── support.py
│   └── classify.py
├── middleware/          # Before/after hooks
├── guardrails/          # Custom safety checks
├── schemas/             # Output schemas
└── requirements.txt     # Dependencies

The key difference from framework projects: configuration is separated from logic. Agent definitions are declarative YAML. Tool logic is plain Python. There is no framework boilerplate, no runner scripts, and no deployment configuration to maintain.
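To make the guardrails/ directory concrete, here is a sketch of a custom safety check. The function shape is an assumption for illustration, not the platform's exact guardrail contract:

```python
# guardrails/no_email.py - illustrative only; the real guardrail contract may differ
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def check_output(text: str) -> bool:
    """Reject agent output that leaks an email address."""
    return EMAIL_RE.search(text) is None
```

The point is the separation: a safety rule like this lives in its own file as plain Python, not woven into agent code.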

What You Get After Migration

Once your project is migrated, the production concerns are handled for you:

Git-Based Deploys

Push to your repository and the platform builds and deploys automatically. Support for GitHub, GitLab, and Bitbucket. Or use connic deploy from the CLI.

Local Testing

Run connic test to spin up a local test environment. Validate your agents and tools before deploying. Lint with connic lint to catch configuration errors early.

Connectors

Trigger agents from webhooks, email, Telegram, S3 uploads, Kafka, SQS, Stripe events, and more. Deliver results back via outbound connectors. No integration code needed.

Instant Rollbacks

Every deployment is versioned. If something breaks, roll back to the previous version with one click. No downtime, no redeployment.

Step-by-Step Migration Checklist

Whether you use the automated CLI or migrate manually, here is the process:

  • 1. Run the migration — connic migrate --source ./your-project --dest ./connic-project
  • 2. Read the migration report — Address every follow-up item in MIGRATION_REPORT.md
  • 3. Verify agent configs — Open each YAML file in agents/ and confirm the system prompt, model, and tool references
  • 4. Check tool imports — Make sure functions in tools/ have all their dependencies
  • 5. Remove framework code — Delete LangSmith integrations, custom runners, and deployment scripts
  • 6. Restructure complex patterns — Convert RAG pipelines to knowledge tools, callbacks to middleware, state to sessions
  • 7. Lint and test — Run connic lint, then connic test
  • 8. Deploy — Run connic deploy or push to your connected Git repository

The Bottom Line

Migrating from LangChain or ADK is not about rewriting your agents. It is about moving them from a prototyping environment into a production one. Your tools stay the same. Your business logic stays the same. What changes is the infrastructure around them: deployment, observability, retries, guardrails, and integrations that would take months to build yourself.

Most teams complete a migration in days, not weeks. The automated CLI handles the structural conversion. The migration report tells you exactly what needs manual attention. And once you deploy, you get production-grade infrastructure from the first run.

For detailed migration guides, check the LangChain migration docs or the ADK migration docs. If you are starting a new project, the quickstart guide will have you running in under 10 minutes.