From Chatbots to Agents
Most teams start their AI journey with a chatbot — a stateless request/response loop backed by an LLM. It's useful. But it's not an agent. An agent perceives its environment, decides what actions to take, executes those actions, and uses the results to decide what to do next. It can run for minutes or hours, not milliseconds.
LangChain and LangGraph are the two most widely adopted frameworks for building these systems in Python and TypeScript. Understanding when to use each — and how to compose them — is the foundation of production AI automation.
LangChain: The Building Blocks
LangChain provides the primitives: LLM wrappers, prompt templates, output parsers, retrieval chains, and tool integrations. Think of it as the standard library for LLM applications.
The core abstraction is the chain — a sequence of steps where the output of one feeds into the next:
```typescript
import { ChatOpenAI } from "@langchain/openai"
import { PromptTemplate } from "@langchain/core/prompts"
import { StringOutputParser } from "@langchain/core/output_parsers"

const model = new ChatOpenAI({ model: "gpt-4o" })
const prompt = PromptTemplate.fromTemplate("Summarize this support ticket: {ticket}")
const chain = prompt.pipe(model).pipe(new StringOutputParser())

const result = await chain.invoke({ ticket: "User cannot log in after password reset..." })
```

For simple linear workflows, chains are sufficient. For anything that requires branching, loops, or state — you need LangGraph.
LangGraph: Stateful Agent Orchestration
LangGraph models your agent as a directed graph where nodes are functions (tools, LLM calls, logic) and edges define the transitions between them. State is passed between nodes explicitly.
```typescript
import { StateGraph, START, END } from "@langchain/langgraph"

const workflow = new StateGraph({ channels: agentState })
  .addNode("analyze", analyzeTicket)
  .addNode("search_kb", searchKnowledgeBase)
  .addNode("draft_response", draftResponse)
  .addNode("escalate", escalateToHuman)
  .addEdge(START, "analyze") // every graph needs an entry edge from START
  .addEdge("analyze", "search_kb")
  .addConditionalEdges("search_kb", routeByConfidence, {
    high: "draft_response",
    low: "escalate",
  })
  .addEdge("draft_response", END)
  .addEdge("escalate", END)
```

This graph handles a support automation workflow: analyze the ticket, search a knowledge base, and either draft a response or escalate — all based on a confidence score returned by the LLM.
Production Patterns That Actually Work
Human-in-the-loop checkpoints
For high-stakes actions (sending emails, modifying database records, making payments), insert interrupt nodes that pause the graph and wait for human approval before continuing.
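In LangGraph this is configured at compile time. A sketch building on the graph above: `interruptBefore` pauses execution before the named nodes, and a checkpointer is required so the paused run can be resumed later (the `thread_id` and ticket values are placeholders):

```typescript
import { MemorySaver } from "@langchain/langgraph"

// Pause before any node listed in interruptBefore; the checkpointer
// persists the paused state so it can be resumed after approval.
const app = workflow.compile({
  checkpointer: new MemorySaver(),
  interruptBefore: ["escalate"],
})

// Run until the interrupt, then resume with the same thread_id.
const config = { configurable: { thread_id: "ticket-123" } }
await app.invoke({ ticket: "..." }, config)
// ...after human approval, resume from the checkpoint:
await app.invoke(null, config)
```

Swap `MemorySaver` for a durable checkpointer in production; the pattern is the same.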
Persistent state with Redis or PostgreSQL
Long-running agents need durable state. LangGraph's checkpointing interface integrates with Redis and PostgreSQL, so agents survive process restarts.
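A sketch using the official Postgres checkpointer package, `@langchain/langgraph-checkpoint-postgres` (the connection string is a placeholder):

```typescript
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres"

const checkpointer = PostgresSaver.fromConnString(
  "postgresql://user:pass@localhost:5432/agents" // placeholder credentials
)
await checkpointer.setup() // creates the checkpoint tables on first run

const app = workflow.compile({ checkpointer })
// State is written to Postgres after every node, keyed by thread_id,
// so a restarted process can pick up the same thread where it left off.
```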
Parallel tool execution
LangGraph supports fan-out — executing multiple tool calls in parallel and collecting results before proceeding. Critical for agents that need to aggregate data from several APIs simultaneously.
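In the graph itself, fan-out falls out of the edge structure: several edges leaving one node run their targets concurrently, and a shared state channel gathers the results. Conceptually it reduces to `Promise.all` over the tool calls — a self-contained sketch with stubbed tools (`fetchCrm`, `fetchBilling`, and `fetchUsage` are hypothetical stand-ins for real API clients):

```typescript
// Stub tools standing in for real API calls.
const fetchCrm = async (id: string) => ({ source: "crm", account: id })
const fetchBilling = async (id: string) => ({ source: "billing", balance: 42 })
const fetchUsage = async (id: string) => ({ source: "usage", calls: 1337 })

// Fan-out: launch all calls at once, then gather before the next node runs.
async function gatherContext(customerId: string) {
  const [crm, billing, usage] = await Promise.all([
    fetchCrm(customerId),
    fetchBilling(customerId),
    fetchUsage(customerId),
  ])
  return { crm, billing, usage }
}
```

The total latency is that of the slowest call, not the sum — which is the whole point when an agent needs three or four APIs before it can reason about a ticket.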
Retry and fallback logic
Wrap LLM calls with exponential backoff and model fallbacks. If GPT-4o times out, retry with GPT-4o-mini. If the primary knowledge base returns no results, fall back to a broader search.
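The pattern itself needs no framework support — a minimal, self-contained sketch of exponential backoff with a fallback function (LangChain's Runnables expose `withRetry` and `withFallbacks` for the same purpose):

```typescript
// Try `primary` up to maxAttempts times with exponential backoff,
// then fall back to `fallback` (e.g. a cheaper model, a broader search).
async function withBackoffAndFallback<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await primary()
    } catch {
      if (attempt === maxAttempts - 1) break // exhausted: fall through
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt))
    }
  }
  return fallback()
}
```

Usage would look like `withBackoffAndFallback(() => gpt4o.invoke(input), () => gpt4oMini.invoke(input))`, where the model names are placeholders for your own clients.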
Real-World Agent Use Cases at nGrid
- Invoice processing automation: Extract line items → validate against PO system → flag discrepancies → route for approval
- Code review agents: Analyze PR diffs → check against internal coding standards → post structured comments on GitHub
- Customer support triage: Classify tickets → search knowledge base → draft responses for agent review → track resolution
The Bottom Line
The shift from chains to graphs is the shift from scripts to programs. LangGraph gives you the control flow primitives — branching, loops, parallelism, human checkpoints — needed to build agents that handle the messy, non-linear reality of real-world automation.