LangChain

January 30, 2026

LangChain is one of the most widely adopted frameworks for building LLM-powered applications. It provides abstractions for chains, agents, memory, and tool use, plus LangGraph for sophisticated orchestration.

The LangChain Ecosystem

LangChain has evolved into a family of tools:

  - langchain-core: base abstractions for models, prompts, tools, and runnables
  - langchain: the chains, agents, and high-level APIs shown below
  - Per-provider packages (langchain-openai, langchain-anthropic, and others): integrations
  - LangGraph: graph-based orchestration for stateful, multi-step workflows
  - LangSmith: observability, tracing, and evaluation

Quick Start

Create an agent in under 10 lines:

from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "what is the weather in sf"}]
})
print(result["messages"][-1].content)  # the agent's reply is the last message

Core Concepts

Chains

Sequential pipelines of operations:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# llm is any chat model (e.g. ChatAnthropic); document holds the input text
chain = (
    PromptTemplate.from_template("Summarize: {text}")
    | llm
    | StrOutputParser()
)

result = chain.invoke({"text": document})

Agents

LLMs that decide which tools to use:

from langchain.agents import create_react_agent

# search, calculator, and wikipedia are Tool instances; react_prompt is a
# ReAct-style PromptTemplate (e.g. pulled from the LangChain Hub)
agent = create_react_agent(
    llm=llm,
    tools=[search, calculator, wikipedia],
    prompt=react_prompt,
)
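
create_react_agent returns only the reasoning runnable; the legacy API pairs it with an AgentExecutor to actually run the tool-calling loop. A minimal sketch, reusing the tools above:

from langchain.agents import AgentExecutor

# The executor drives the ReAct loop: reason, pick a tool, observe, repeat
executor = AgentExecutor(agent=agent, tools=[search, calculator, wikipedia])
result = executor.invoke({"input": "Who founded Wikipedia, and when?"})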

Memory

Persistence across conversations:

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)
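
Calling the chain twice shows the buffer carrying the first turn into the second (the dialogue is illustrative):

chain.predict(input="Hi, my name is Ada.")
reply = chain.predict(input="What is my name?")
# The buffer replays the first exchange, so the model can answer "Ada"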

LangGraph: Advanced Orchestration

For complex workflows, LangGraph provides graph-based orchestration:

from typing import TypedDict

from langgraph.graph import END, StateGraph

# Define the state shared by every node
class AgentState(TypedDict):
    messages: list
    next_step: str

# Build graph: each node is a callable that takes and returns state
graph = StateGraph(AgentState)
graph.add_node("research", research_agent)
graph.add_node("analyze", analysis_agent)
graph.add_node("write", writing_agent)

# Define edges
graph.add_edge("research", "analyze")
graph.add_edge("analyze", "write")
graph.add_edge("write", END)
graph.set_entry_point("research")

# Compile and run
app = graph.compile()
result = app.invoke({"messages": [user_message]})

LangGraph enables:

  - Cycles and loops that linear chains cannot express
  - Conditional branching between nodes (see the sketch below)
  - Persistent state and checkpointing across steps
  - Human-in-the-loop interrupts for review and approval
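
The branching comes from conditional edges. A sketch, where analysis_is_complete is a hypothetical check on the state and the router's return value names the next node:

def route_after_analysis(state: AgentState) -> str:
    # Hypothetical quality gate: loop back to research until analysis passes
    return "write" if analysis_is_complete(state) else "research"

# Replaces the fixed analyze -> write edge above with a conditional one
graph.add_conditional_edges("analyze", route_after_analysis)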

When to Use What

As one developer put it:

90% of "AI agents" are just cron jobs with claude attached. You don't need langchain. You don't need autogen. You need: a trigger, context, and an action.

Use LangChain when:

  - You want provider-agnostic abstractions for models, prompts, and tools
  - You are prototyping and want batteries-included chains and agents
  - You expect to swap models or vector stores without rewriting glue code

Use LangGraph when:

  - The workflow needs cycles, branching, or multiple cooperating agents
  - You need durable state, checkpointing, or human-in-the-loop pauses
  - Runs must be resumable and inspectable step by step

Skip both when:

  - The task is a single prompt-and-response call
  - A trigger, some context, and one action cover it, as the quote above puts it
  - A few lines against the raw provider SDK would be easier to read and debug

Provider Integrations

LangChain integrates with an extensive range of model providers:

# OpenAI
from langchain_openai import ChatOpenAI

# Anthropic
from langchain_anthropic import ChatAnthropic

# Google
from langchain_google_genai import ChatGoogleGenerativeAI

# And 100+ more...
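
All of these classes share the same chat-model interface, so switching providers is a one-line change. init_chat_model resolves the class from a provider-prefixed string (model names are illustrative):

from langchain.chat_models import init_chat_model

llm = init_chat_model("anthropic:claude-sonnet-4-5-20250929")
# Swapping providers changes only the string:
# llm = init_chat_model("openai:gpt-4o")
print(llm.invoke("Hello").content)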

Tool Creation

Define tools for agent use:

from langchain_core.tools import tool

# db, format_results, and email_client stand in for application-specific code
@tool
def search_database(query: str) -> str:
    """Search the product database for items matching the query."""
    results = db.search(query)
    return format_results(results)

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified address."""
    email_client.send(to, subject, body)
    return f"Email sent to {to}"

LangSmith Integration

Observability for debugging and evaluation:

import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "..."

# All chain/agent runs are now traced
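
Tracing also covers plain Python functions via the langsmith SDK's traceable decorator, useful for custom steps that are not chains (a minimal sketch):

from langsmith import traceable

@traceable
def build_prompt(question: str, context: str) -> str:
    # Appears in LangSmith as its own run, nested under any parent trace
    return f"Context:\n{context}\n\nQuestion: {question}"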

LangSmith provides:

  - Traces of every chain, agent, and tool call, with inputs and outputs
  - Latency, token, and cost breakdowns per run
  - Datasets and evaluations for regression-testing prompts
  - Production monitoring and feedback collection

The Criticism

LangChain is polarizing. Common critiques:

  - Heavy abstraction over what is often a single API call
  - Frequent breaking changes between versions
  - Indirection that makes debugging harder than raw SDK code
  - Documentation that lags the fast-moving API

The counterpoint: for complex applications, the structure pays off.

Best Practices

  1. Start simple: Use raw SDKs until you need LangChain features
  2. Pick your layer: LangChain for quick starts, LangGraph for complexity
  3. Use LangSmith: Observability is essential for production
  4. Version pin: Lock dependencies to avoid breaking changes
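
Pinning in practice, in a requirements.txt (version numbers are illustrative, not recommendations):

langchain==0.3.27
langchain-anthropic==0.3.3
langgraph==0.2.60
langsmith==0.2.10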

See also: Orchestration · Vercel AI SDK · OpenAI Agents SDK