LangChain
January 30, 2026
LangChain is one of the most widely adopted frameworks for building LLM-powered applications. It provides abstractions for chains, agents, memory, and tool use, plus LangGraph for more sophisticated orchestration.
The LangChain Ecosystem
LangChain has evolved into a family of tools:
- LangChain: Core library for chains and basic agents
- LangGraph: Low-level orchestration with graphs and state
- LangSmith: Observability and evaluation platform
- LangServe: Deployment infrastructure
Quick Start
Create an agent in under 10 lines:
from langchain.agents import create_agent
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

agent.invoke({
    "messages": [{"role": "user", "content": "what is the weather in sf"}]
})
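The invoke call returns the final agent state; the reply is usually read off the last message. A small sketch of that, following LangGraph's message-based state layout:

result = agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)
# The final entry in "messages" is the assistant's reply
print(result["messages"][-1].content)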
Core Concepts
Chains
Sequential pipelines of operations:
from langchain_core.prompts import PromptTemplate

# llm is a chat model and output_parser a parser such as StrOutputParser,
# both assumed to be defined elsewhere (see the sketch below)
chain = (
    PromptTemplate.from_template("Summarize: {text}")
    | llm
    | output_parser
)
result = chain.invoke({"text": document})
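Assuming llm is a chat model from one of the provider packages and output_parser is a string parser, a self-contained version of the pipeline above might look like this (model ID as in the quick start; purely illustrative):

from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

llm = ChatAnthropic(model="claude-sonnet-4-5-20250929")

summarize = (
    PromptTemplate.from_template("Summarize: {text}")
    | llm
    | StrOutputParser()  # pulls the plain string out of the chat response
)

print(summarize.invoke({"text": "LangChain is a framework for building LLM applications..."}))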
Agents
LLMs that decide which tools to use:
from langchain.agents import create_react_agent
agent = create_react_agent(
    llm=llm,
    tools=[search, calculator, wikipedia],
    prompt=react_prompt,
)
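In this classic API, the object returned by create_react_agent plans one step at a time; running the full tool loop typically means wrapping it in an AgentExecutor. A sketch under that assumption, reusing the same tools:

from langchain.agents import AgentExecutor

# The executor calls the agent, runs the chosen tool, feeds the result
# back in, and repeats until the agent produces a final answer.
executor = AgentExecutor(agent=agent, tools=[search, calculator, wikipedia], verbose=True)
executor.invoke({"input": "Who wrote Dune, and what is 17 * 23?"})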
Memory
Persistence across conversations:
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)
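A quick usage sketch: the second call can answer because ConversationBufferMemory replays the earlier turns into the prompt.

chain.predict(input="Hi, my name is Alice.")
reply = chain.predict(input="What is my name?")  # the model sees the first turn via memory
print(reply)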
LangGraph: Advanced Orchestration
For complex workflows, LangGraph provides graph-based orchestration:
from typing import TypedDict

from langgraph.graph import StateGraph, END

# Define state
class AgentState(TypedDict):
    messages: list
    next_step: str

# Build graph (research_agent, analysis_agent, and writing_agent are node
# functions that take the state and return updates to it)
graph = StateGraph(AgentState)
graph.add_node("research", research_agent)
graph.add_node("analyze", analysis_agent)
graph.add_node("write", writing_agent)

# Define edges
graph.add_edge("research", "analyze")
graph.add_edge("analyze", "write")
graph.add_edge("write", END)
graph.set_entry_point("research")

# Compile and run
app = graph.compile()
result = app.invoke({"messages": [user_message]})
LangGraph enables:
- Cycles and conditionals (see the sketch after this list)
- Persistent state
- Human-in-the-loop
- Parallel execution
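Conditionals and persistence are the features most workflows reach for first. A minimal sketch extending the graph above, with a hypothetical review_agent node, a hypothetical should_revise router, and LangGraph's in-memory checkpointer:

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, StateGraph

# Variant of the graph above: a "review" node either sends the draft back
# to "write" for another pass (a cycle) or ends the run.
# should_revise(state) is assumed to return the string "revise" or "done".
graph = StateGraph(AgentState)
graph.add_node("research", research_agent)
graph.add_node("analyze", analysis_agent)
graph.add_node("write", writing_agent)
graph.add_node("review", review_agent)

graph.add_edge("research", "analyze")
graph.add_edge("analyze", "write")
graph.add_edge("write", "review")
graph.add_conditional_edges(
    "review",
    should_revise,
    {"revise": "write", "done": END},
)
graph.set_entry_point("research")

# A checkpointer persists state across invocations, keyed by thread_id
app = graph.compile(checkpointer=MemorySaver())
app.invoke(
    {"messages": [user_message]},
    config={"configurable": {"thread_id": "draft-42"}},
)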
When to Use What
90% of "AI agents" are just cron jobs with Claude attached. You don't need LangChain. You don't need AutoGen. You need a trigger, context, and an action. When a framework genuinely is warranted:
Use LangChain when:
- Need quick prototyping
- Want extensive integrations (100+ providers)
- Building RAG applications
- Need memory abstractions
Use LangGraph when:
- Complex multi-step workflows
- Need cycles or conditionals
- Human-in-the-loop required
- Stateful, long-running agents
Skip both when:
- Simple API calls suffice
- Need minimal dependencies
- Building production systems (consider raw SDKs; see the sketch below)
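For the skip-both case it helps to see the baseline: a direct provider SDK call with no framework at all. A minimal sketch using the Anthropic Python SDK (model ID as in the quick start):

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "what is the weather in sf"}],
)
print(message.content[0].text)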
Provider Integrations
LangChain supports extensive providers:
# OpenAI
from langchain_openai import ChatOpenAI
# Anthropic
from langchain_anthropic import ChatAnthropic
# Google
from langchain_google_genai import ChatGoogleGenerativeAI
# And 100+ more...
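Because every chat model class implements the same runnable interface, switching providers is usually a one-line change. A brief sketch (model IDs illustrative):

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

llm = ChatAnthropic(model="claude-sonnet-4-5-20250929")
# llm = ChatOpenAI(model="gpt-4o")  # drop-in swap: same .invoke / .stream interface

print(llm.invoke("Say hello in one word.").content)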
Tool Creation
Define tools for agent use:
from langchain.tools import tool

# db, format_results, and email_client are application-specific helpers
# assumed to be defined elsewhere.

@tool
def search_database(query: str) -> str:
    """Search the product database for items matching the query."""
    results = db.search(query)
    return format_results(results)

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified address."""
    email_client.send(to, subject, body)
    return f"Email sent to {to}"
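Functions decorated with @tool become structured tools: the docstring is the description the model sees, and the tool can be invoked directly or handed to an agent. A short sketch reusing the hypothetical tools above:

from langchain.agents import create_agent

# Inspect what the model will see
print(search_database.name, "-", search_database.description)

# Call the tool directly, which is handy for unit tests
print(search_database.invoke({"query": "wireless keyboard"}))

# Or pass the tools to an agent, as in the quick start
agent = create_agent(
    model="claude-sonnet-4-5-20250929",
    tools=[search_database, send_email],
    system_prompt="You are a customer support assistant",
)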
LangSmith Integration
Observability for debugging and evaluation:
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "..."
# All chain/agent runs are now traced
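Traces can also be grouped under a named project through an environment variable; the project name here is just an example:

os.environ["LANGCHAIN_PROJECT"] = "weather-agent-dev"  # example project name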
LangSmith provides:
- Trace visualization
- Token usage tracking
- Latency analysis
- Evaluation datasets
The Criticism
LangChain is polarizing. Common critiques:
- Abstraction overhead: Too many layers for simple tasks
- Rapid changes: API instability between versions
- Complexity creep: Easy to over-engineer
The counterpoint: for complex applications, the structure pays off.
Best Practices
- Start simple: Use raw SDKs until you need LangChain features
- Pick your layer: LangChain for quick starts, LangGraph for complexity
- Use LangSmith: Observability is essential for production
- Version pin: Lock dependencies to avoid breaking changes (example below)
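For the version-pinning point, a requirements.txt sketch; the pins below are illustrative, not recommended versions:

# requirements.txt (pin what you actually tested against)
langchain~=1.0
langgraph~=1.0
langchain-anthropic~=1.0
langsmith~=0.4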
Sources
- LangChain Documentation — Official docs
- LangGraph — Orchestration framework
- LangSmith — Observability platform
- GitHub: langchain-ai/langchain — Source code
See also: Orchestration · Vercel AI SDK · OpenAI Agents SDK