TanStack AI

January 30, 2026

TanStack AI brings AI capabilities into the TanStack ecosystem — type-safe, server-function-native, and integrated with TanStack Start's execution model.

The TanStack Approach

TanStack Start is a full-stack React framework built on TanStack Router. Its AI integrations follow the same philosophy: type-safe, server-first, and composable.

Rather than a separate AI SDK, TanStack treats AI as another kind of server function:

// Server function that calls an AI model
import { createServerFn } from "@tanstack/start";
import { generateText } from "ai";
import { z } from "zod";

export const generateSummary = createServerFn()
  .input(z.object({ text: z.string() }))
  .handler(async ({ input }) => {
    const response = await generateText({
      model: "anthropic/claude-sonnet-4.5",
      prompt: `Summarize: ${input.text}`,
    });
    return response.text;
  });

Key Concepts

Server Functions for AI

AI calls happen on the server, keeping API keys secure and enabling streaming:

// routes/api/chat.ts
import { createServerFn } from "@tanstack/start";
import { streamText } from "ai";
import { z } from "zod";

// Minimal message shape (role + content)
const messageSchema = z.object({ role: z.string(), content: z.string() });

export const chat = createServerFn()
  .input(z.object({ messages: z.array(messageSchema) }))
  .handler(async ({ input }) => {
    // Secure: runs server-side
    const stream = await streamText({
      model: "openai/gpt-4o",
      messages: input.messages,
    });
    return stream;
  });

Type Safety Throughout

Full TypeScript inference from input to output:

const result = await chat({ 
  messages: [/* type-checked */] 
}); // result is typed
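
For the generateSummary function defined earlier, that inference looks like this (a sketch; articleBody is a placeholder):

// summary is inferred as string, from the handler's return type
const summary = await generateSummary({ text: articleBody });

// @ts-expect-error: missing the required `text` field, caught at compile time
await generateSummary({});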

Streaming Support

Built-in support for streaming responses to the client:

// Server: stream generation
const stream = await streamText({ /* ... */ });

// Client: consume stream
for await (const chunk of stream) {
  updateUI(chunk);
}
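
In a component, that client-side loop typically feeds chunks straight into state. A rough sketch, assuming a streaming server function like the chat one above and plain-text chunks:

// Hypothetical component that renders a response as it streams in
// (useState comes from "react"; chat is the server function defined above)
function StreamingAnswer() {
  const [output, setOutput] = useState("");

  const ask = async (question: string) => {
    setOutput("");
    const stream = await chat({ messages: [{ role: "user", content: question }] });
    for await (const chunk of stream) {
      setOutput((prev) => prev + chunk); // append each chunk as it arrives
    }
  };

  return (
    <div>
      <button onClick={() => ask("What is TanStack Start?")}>Ask</button>
      <pre>{output}</pre>
    </div>
  );
}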

Integration Patterns

With TanStack Query

Combine AI calls with TanStack Query for caching and mutations:

const mutation = useMutation({
  // Call the generateSummary server function defined earlier
  mutationFn: (prompt: string) => generateSummary({ text: prompt }),
  onSuccess: (data) => {
    // Update cache, notify, etc.
  },
});
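
Triggering it from a component is then ordinary TanStack Query usage (a sketch of the JSX; the prompt string and loading text are placeholders):

// Kick off generation and read status and data from the mutation object
<button
  onClick={() => mutation.mutate("Summarize today's release notes")}
  disabled={mutation.isPending}
>
  Generate
</button>
{mutation.isPending ? <p>Generating…</p> : <p>{mutation.data}</p>}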

With TanStack Router

Run AI calls in route loaders so results are ready when the route renders:

// routes/analyze/$id.tsx
import { createFileRoute } from "@tanstack/react-router";

export const Route = createFileRoute('/analyze/$id')({
  loader: async ({ params }) => {
    const data = await fetchData(params.id);
    const analysis = await analyzeData(data);
    return { data, analysis };
  },
});
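
The route component then reads the loader result with full typing via the route's own hook (a sketch; AnalysisView is a placeholder, and the component is assumed to be registered with component: AnalyzePage in the route options):

function AnalyzePage() {
  // data and analysis are typed from the loader's return value
  const { data, analysis } = Route.useLoaderData();
  return <AnalysisView data={data} analysis={analysis} />;
}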

LLMO (LLM Optimization)

TanStack Start includes guides for optimizing content for LLM consumption.

This helps when your content is consumed by AI systems — both your own and external crawlers.
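
A minimal sketch of the idea, reusing the server-function pattern from above (the loadDoc helper and the route itself are hypothetical, not a TanStack Start API):

// Hypothetical server function exposing a plain-Markdown rendition of a doc page,
// which is easier for LLMs and crawlers to ingest than rendered HTML
export const getDocAsMarkdown = createServerFn()
  .input(z.object({ slug: z.string() }))
  .handler(async ({ input }) => {
    const doc = await loadDoc(input.slug); // hypothetical content loader
    return `# ${doc.title}\n\n${doc.markdown}`;
  });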

When to Use TanStack AI

Good fit:

- You're already building on TanStack Start and want AI calls to live in the same type-safe server functions as the rest of your backend
- You want end-to-end TypeScript inference from request input to model output
- You need streaming responses wired into TanStack Query and TanStack Router without extra glue

Consider alternatives when:

- You're not on TanStack Start; a framework-agnostic option such as the Vercel AI SDK may fit better
- You need long-running, stateful agents at the edge, which is what Cloudflare Agents is built for

Example: Chat Application

// routes/chat.tsx
import { useState } from "react";
import { z } from "zod";
import { createServerFn } from "@tanstack/start";
import { streamText } from "ai";

// Message shape shared between the client and the server function
const messageSchema = z.object({
  role: z.string(),
  content: z.string(),
});
type Message = z.infer<typeof messageSchema>;

const chat = createServerFn()
  .input(z.object({ messages: z.array(messageSchema) }))
  .handler(async ({ input }) => {
    return streamText({
      model: "anthropic/claude-sonnet-4.5",
      messages: input.messages,
    });
  });

export default function ChatPage() {
  const [messages, setMessages] = useState<Message[]>([]);
  
  const handleSend = async (content: string) => {
    const newMessages = [...messages, { role: "user", content }];
    setMessages(newMessages);
    
    const stream = await chat({ messages: newMessages });
    for await (const chunk of stream) {
      // Update assistant message progressively
    }
  };
  
  return <ChatUI messages={messages} onSend={handleSend} />;
}
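
One way to fill in the progressive update inside handleSend (a sketch, assuming the stream yields plain-text deltas):

// Append an empty assistant message, then grow it chunk by chunk
setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

for await (const chunk of stream) {
  setMessages((prev) => {
    const next = [...prev];
    const last = next[next.length - 1];
    next[next.length - 1] = { ...last, content: last.content + chunk };
    return next;
  });
}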

Ecosystem Integration

TanStack AI works well with:

- TanStack Query, for caching, mutation state, and request status around AI calls
- TanStack Router, for loading AI-derived data in route loaders
- TanStack Start, for type-safe server functions and streaming responses

See also: Vercel AI SDK · Cloudflare Agents