TanStack AI
January 30, 2026
TanStack AI brings AI capabilities into the TanStack ecosystem — type-safe, server-function-native, and integrated with TanStack Start's execution model.
The TanStack Approach
TanStack Start is a full-stack React framework built on TanStack Router. Its AI integrations follow the same philosophy: type-safe, server-first, and composable.
Rather than a separate AI SDK, TanStack treats AI as another kind of server function:
// Server function that calls an AI model
import { createServerFn } from "@tanstack/start";
import { generateText } from "ai";
import { z } from "zod";

export const generateSummary = createServerFn()
  .input(z.object({ text: z.string() }))
  .handler(async ({ input }) => {
    const response = await generateText({
      model: "anthropic/claude-sonnet-4.5",
      prompt: `Summarize: ${input.text}`,
    });
    return response.text;
  });
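Calling it from the client is then an ordinary awaited function call, with the input and return types inferred from the schema and handler. A minimal sketch (the text passed in is illustrative):
// Somewhere in a component or loader
const summary = await generateSummary({ text: "Long article text…" });
// summary is a string, inferred from the handler's return value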
Key Concepts
Server Functions for AI
AI calls happen on the server, keeping API keys secure and enabling streaming:
// routes/api/chat.ts
import { createServerFn } from "@tanstack/start";
import { streamText } from "ai";
import { z } from "zod";
// messageSchema: a Zod schema describing one chat message (role, content)

export const chat = createServerFn()
  .input(z.object({ messages: z.array(messageSchema) }))
  .handler(async ({ input }) => {
    // Secure: runs server-side, so provider API keys never reach the browser
    const stream = await streamText({
      model: "openai/gpt-4o",
      messages: input.messages,
    });
    return stream;
  });
Type Safety Throughout
Full TypeScript inference from input to output:
const result = await chat({
  messages: [/* type-checked */],
}); // result is typed
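That inference starts at the Zod schema. A minimal sketch of messageSchema, whose exact shape is an assumption (the post never pins it down):
import { z } from "zod";

// Assumed chat-message shape; adapt to your app
const messageSchema = z.object({
  role: z.enum(["user", "assistant"]),
  content: z.string(),
});

// One schema drives both runtime validation and compile-time types
type Message = z.infer<typeof messageSchema>;
// => { role: "user" | "assistant"; content: string }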
Streaming Support
Built-in support for streaming responses to the client:
// Server: stream generation
const stream = await streamText({ /* ... */ });

// Client: consume the stream as it arrives
for await (const chunk of stream) {
  updateUI(chunk); // placeholder: append the chunk to your UI state
}
Integration Patterns
With TanStack Query
Combine AI calls with TanStack Query for caching and mutations:
const mutation = useMutation({
  // Call the generateSummary server function from earlier rather than the model
  // directly, so the provider API key stays on the server
  mutationFn: (prompt: string) => generateSummary({ text: prompt }),
  onSuccess: (data) => {
    // Update cache, notify, etc.
  },
});
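Putting that into a component, the mutation's status flags wire straight into the UI. A sketch; the button markup and labels are illustrative:
import { useMutation } from "@tanstack/react-query";

function SummarizeButton({ text }: { text: string }) {
  const mutation = useMutation({
    // generateSummary is the server function defined at the top of this post
    mutationFn: (input: string) => generateSummary({ text: input }),
  });

  return (
    <button onClick={() => mutation.mutate(text)} disabled={mutation.isPending}>
      {mutation.isPending ? "Summarizing…" : "Summarize"}
    </button>
  );
}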
With TanStack Router
Route-based AI loading:
// routes/analyze/$id.tsx
export const Route = createFileRoute('/analyze/$id')({
  loader: async ({ params }) => {
    const data = await fetchData(params.id);  // your own data fetch
    const analysis = await analyzeData(data); // e.g. a server function wrapping generateText
    return { data, analysis };
  },
});
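The route component then reads the loader result, fully typed, via Route.useLoaderData(). A sketch with illustrative markup:
function AnalyzePage() {
  // Typed { data, analysis } from the loader above
  const { analysis } = Route.useLoaderData();

  return (
    <article>
      <h2>Analysis</h2>
      <p>{analysis}</p>
    </article>
  );
}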
LLMO (LLM Optimization)
TanStack Start includes guides for optimizing content for LLM consumption:
- Structured data for AI parsing
- Semantic HTML for context extraction
- Content organization for retrieval
This helps when your content is consumed by AI systems — both your own and external crawlers.
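As a rough illustration of the first two points, a route component can pair semantic HTML with JSON-LD structured data. The post shape and schema fields below are assumptions, not a TanStack API:
function ArticlePage({ post }: { post: { title: string; summary: string; body: string } }) {
  // JSON-LD gives crawlers and LLMs a machine-readable summary of the page
  const structuredData = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: post.title,
    abstract: post.summary,
  };

  return (
    <article>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(structuredData) }}
      />
      {/* Semantic elements make the structure easier to extract */}
      <h1>{post.title}</h1>
      <p>{post.summary}</p>
      <section>{post.body}</section>
    </article>
  );
}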
When to Use TanStack AI
Good fit:
- Building with TanStack Start
- Already using TanStack Router/Query
- Want server-function-native AI
- Need type safety end-to-end
Consider alternatives when:
- Not using TanStack ecosystem
- Need provider abstraction (use Vercel AI SDK)
- Building complex agent systems (use LangGraph)
Example: Chat Application
// routes/chat.tsx
import { useState } from "react";
import { createServerFn } from "@tanstack/start";
import { streamText } from "ai";
import { z } from "zod";
// messageSchema / Message: the Zod chat-message schema from the Type Safety section

const chat = createServerFn()
  .input(z.object({ messages: z.array(messageSchema) }))
  .handler(async ({ input }) => {
    return streamText({
      model: "anthropic/claude-sonnet-4.5",
      messages: input.messages,
    });
  });

export default function ChatPage() {
  const [messages, setMessages] = useState<Message[]>([]);

  const handleSend = async (content: string) => {
    const newMessages = [...messages, { role: "user" as const, content }];
    setMessages(newMessages);

    // Stream the assistant reply, updating the last message as chunks arrive
    const stream = await chat({ messages: newMessages });
    let assistantText = "";
    for await (const chunk of stream) {
      assistantText += chunk;
      setMessages([...newMessages, { role: "assistant" as const, content: assistantText }]);
    }
  };

  // ChatUI: your own presentational chat component
  return <ChatUI messages={messages} onSend={handleSend} />;
}
Ecosystem Integration
TanStack AI works well with:
- Vercel AI SDK: For the underlying model calls
- TanStack Query: For caching and state management
- TanStack Form: For AI-assisted form validation
- Cloudflare: For edge deployment
Sources
- TanStack Start Docs — Official documentation
- LLMO Guide — LLM optimization
- GitHub: TanStack/router — Source code
See also: Vercel AI SDK · Cloudflare Agents