You know the vibe. Your TanStack Start app is humming along: clean, fast, type-safe, the whole “wow, this is nice” package. Then someone strolls in and goes, “Cool… can we add AI chat?”
And just like that, Adding AI Features to My TanStack Start App stops being a casual idea and turns into “okay, we need a plan before we ship our API keys to the entire internet.”
Here’s the upside though: TanStack Start gives you server routes and server functions, which makes AI integration surprisingly painless, as long as secrets stay on the server and streaming is done right.
If you’re already shipping full-stack TypeScript, you’re basically one endpoint away from something real.
Key takeaways
- Reach for TanStack Start server routes when you want raw HTTP control. Streaming and SSE endpoints love this.
- Use TanStack Start server functions when you want typed, server-only logic you can call from React without hand-rolling `fetch`.
- For a streaming chat UI, you've got two clean paths:
  - AI SDK using `streamText()` and `useChat()`
  - TanStack AI using `chat()` + `toServerSentEventsResponse()` and `useChat()`
- Keep provider keys in `.env`. Validate inputs too. Zod is the easy win.
- If you're talking to OpenAI directly, OpenAI recommends the Responses API for new projects and reports internal gains like ~3% SWE-bench improvement and 40–80% cache utilization improvement versus Chat Completions in their internal tests.
What “Adding AI Features to My TanStack Start App” actually means
Most “AI features” end up being one of these, give or take:
- Streaming chat
- Text transforms
- Tool-using agents
- Multi-model support
TanStack Start helps because it’s already a full-stack framework. SSR. Streaming. Server RPC-ish patterns. TanStack even calls out “SSR, Streaming and Server RPCs” as core features on the Start overview page, which is basically them saying “yes, we built this for apps like yours.”
One small “heads up” I wouldn’t ignore: TanStack Start still shows up as Release Candidate in some ecosystem docs. Convex’s quickstart, for example, explicitly says RC and warns there may be bugs/issues. So I treat production rollouts the same way I treat anything new-ish: feature flags, logs, and a rollback plan ready to go. Boring. Necessary.
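A rollout gate doesn't need to be fancy. Here's a minimal sketch of the kind of flag check I mean (the `AI_CHAT_ENABLED` variable name is my own invention, not from any of these docs):

```ts
// Minimal feature-flag check for gating a new AI endpoint.
// Reads a boolean-ish env var; defaults to "off" so a missing
// config never accidentally exposes the feature.
export function isFeatureEnabled(
  env: Record<string, string | undefined>,
  flag: string,
): boolean {
  return env[flag]?.toLowerCase() === 'true'
}

// Usage inside a server route handler (sketch):
// if (!isFeatureEnabled(process.env, 'AI_CHAT_ENABLED')) {
//   return new Response('Not found', { status: 404 })
// }
```

Defaulting to "off" is the whole point: if the deploy goes sideways, unsetting one env var is your rollback plan.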
Pick your integration: AI SDK vs TanStack AI
Both work. It’s more “what style do you want?” than “which one is correct?”
Option A: AI SDK with TanStack Start
The AI SDK TanStack Start quickstart is refreshingly straightforward.
Create the app:

```
pnpm create @tanstack/start@latest my-ai-app
```

Install packages:

```
pnpm add ai @ai-sdk/react zod
```

Set your key in `.env`:

```
AI_GATEWAY_API_KEY=xxxxxxxxx
```

Why I like this route: `useChat()` on the client and `streamText()` on the server is barely any glue. And streaming tokens makes the UI feel alive. Instant-ish. Like it's paying attention.
Option B: TanStack AI with React integration
TanStack AI’s quick start starts here:
```
pnpm add @tanstack/ai @tanstack/ai-react @tanstack/ai-openai
```

It really leans into a couple of things:

- Streaming via Server-Sent Events using `toServerSentEventsResponse()`
- Modular imports for tree-shaking, so you pull in `chat` and `openaiText` instead of importing the whole universe and accidentally shipping adapters you'll never use
Also worth noting, their docs mention OpenRouter as a simple way to access “300+ models with a single API key” if you’re not thrilled about juggling provider accounts one-by-one.
Adding AI Features to My TanStack Start App with a streaming /api/chat server route
This is the “I want chat working today, not next week” approach. No shame.
Create src/routes/api/chat.ts:
```ts
import { streamText, type UIMessage, convertToModelMessages } from 'ai'
import { createFileRoute } from '@tanstack/react-router'

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages }: { messages: UIMessage[] } = await request.json()

        const result = streamText({
          model: 'anthropic/claude-sonnet-4.5',
          messages: convertToModelMessages(messages),
        })

        return result.toUIMessageStreamResponse()
      },
    },
  },
})
```

What's actually going on here, without the hand-waving:
TanStack Start server routes live alongside your normal routes and handle raw HTTP. The AI SDK treats UIMessage[] as UI-friendly messages with metadata, which is not necessarily what the model wants. So you run convertToModelMessages().
Then streamText() gives you a streaming result, and toUIMessageStreamResponse() turns it into the streamed HTTP response your client hook expects. Nice and tidy.
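To make the UIMessage-vs-model-message distinction concrete, here's a toy version of what that conversion is doing conceptually. This is an illustration only, not the AI SDK's actual implementation; real `UIMessage`s carry more part types (tool calls, files) plus metadata:

```ts
// Toy illustration: UI messages store content as typed "parts",
// while the model just wants a role plus a flat content string.
type ToyUIMessage = {
  id: string
  role: 'user' | 'assistant'
  parts: Array<{ type: 'text'; text: string }>
}

type ToyModelMessage = { role: 'user' | 'assistant'; content: string }

function toModelMessages(messages: ToyUIMessage[]): ToyModelMessage[] {
  return messages.map((m) => ({
    role: m.role,
    // Flatten text parts into one string; drop UI-only fields like id.
    content: m.parts
      .filter((p) => p.type === 'text')
      .map((p) => p.text)
      .join(''),
  }))
}
```

That's the shape of the job: strip UI-only fields, flatten parts, keep roles. The real `convertToModelMessages()` just handles far more cases.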
Client UI using useChat()
In your index route, or a dedicated component, the AI SDK quickstart looks like this:
```tsx
import { createFileRoute } from '@tanstack/react-router'
import { useChat } from '@ai-sdk/react'
import { useState } from 'react'

export const Route = createFileRoute('/')({
  component: Chat,
})

function Chat() {
  const [input, setInput] = useState('')
  const { messages, sendMessage } = useChat()

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'user' ? 'You: ' : 'AI: '}
          {message.parts.map((part, index) =>
            part.type === 'text' ? <span key={index}>{part.text}</span> : null,
          )}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault()
          sendMessage({ text: input })
          setInput('')
        }}
      >
        <input value={input} onChange={(e) => setInput(e.currentTarget.value)} />
      </form>
    </div>
  )
}
```

Streaming is the big UX unlock, honestly. People will tolerate a so-so answer faster than they'll tolerate a blank screen that looks frozen.
Adding AI Features to My TanStack Start App with server functions (non-streaming work)
Not everything needs the “tokens flying in” experience. For stuff like “summarize this doc” or “generate release notes,” I usually prefer a server function so I can call it from React like a normal function and skip wiring up fetch endpoints by hand.
TanStack Start server functions use createServerFn(). They always run on the server, and the client bundle gets an RPC stub.
Example:
```ts
// src/utils/ai.functions.ts
import { createServerFn } from '@tanstack/react-start'
import { generateText } from 'ai'
import { z } from 'zod'

const Input = z.object({ text: z.string().min(1) })

export const summarizeText = createServerFn({ method: 'POST' })
  .inputValidator(Input)
  .handler(async ({ data }) => {
    const { text } = data
    const result = await generateText({
      model: 'anthropic/claude-sonnet-4.5',
      prompt: `Summarize this in 5 bullets:\n\n${text}`,
    })
    return { summary: result.text }
  })
```

That `inputValidator()` step isn't optional in my book. AI endpoints get abused. Validation plus rate limiting is how you sleep at night.
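Rate limiting can start dead simple, too. Here's a sketch of a fixed-window in-memory limiter; fine for a single server instance, though you'd want Redis or similar once you scale out (the limiter itself is my own sketch, not part of TanStack Start or the AI SDK):

```ts
// Fixed-window in-memory rate limiter: allow `limit` requests
// per `windowMs` per key (e.g. a user id or IP address).
function createRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; windowStart: number }>()

  return function allow(key: string, now = Date.now()): boolean {
    const entry = hits.get(key)
    // No entry yet, or the window has expired: start a fresh window.
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now })
      return true
    }
    if (entry.count >= limit) return false
    entry.count++
    return true
  }
}

// Usage in a handler (sketch): reject with a 429 when allow() is false.
// const allowChat = createRateLimiter(20, 60_000) // 20 requests/minute
// if (!allowChat(userId)) return new Response('Too many requests', { status: 429 })
```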
Tools/agents: where TanStack AI gets interesting
If you want the model to call functions like “check weather,” “search docs,” “create ticket,” TanStack AI’s tool architecture is worth a look.
Their guide describes a tool call flow where the server:
- receives messages
- converts tool definitions for the LLM
- streams back tool-call chunks, content chunks, and done
It also supports tool “states” like approval-requested, which is exactly what you want for sensitive actions. Emailing users. Charging money. Deleting data. Stuff you don’t want an overeager model doing on autopilot.
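That approval flow is conceptually a tiny state machine, and it's worth internalizing. Here's a hedged sketch of the idea using my own types, not TanStack AI's actual API:

```ts
// Sketch of a tool-call approval gate: sensitive tools park in
// 'approval-requested' until a human approves or rejects them,
// while harmless tools proceed straight to 'approved'.
type ToolCallState =
  | 'pending'
  | 'approval-requested'
  | 'approved'
  | 'rejected'
  | 'executed'

interface ToolCall {
  name: string
  state: ToolCallState
}

// Hypothetical tool names for illustration.
const SENSITIVE_TOOLS = new Set(['send_email', 'charge_card', 'delete_data'])

function nextState(call: ToolCall): ToolCall {
  if (call.state === 'pending') {
    return {
      ...call,
      state: SENSITIVE_TOOLS.has(call.name) ? 'approval-requested' : 'approved',
    }
  }
  // Other transitions (approve/reject/execute) come from user or server actions.
  return call
}
```

The useful property: the model can *request* `charge_card` all it likes, but nothing executes until a human flips the state.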
One more practical bonus: TanStack AI is designed for bundle optimization. Their tree-shaking guide explicitly recommends importing only what you need, like chat and openaiText, so you don’t accidentally ship image/video adapters “just because.”
Deployment notes (keys, runtimes, and “where does this run?”)
Secrets stay server-side. Always.
- AI SDK gateway uses `AI_GATEWAY_API_KEY` by default (per their TanStack Start quickstart)
- TanStack AI quickstart checks `OPENAI_API_KEY` on the server before streaming
TanStack Start can deploy in a bunch of places. If you’re aiming for Cloudflare Workers, Cloudflare’s guide shows a TanStack Start template and notes the entrypoint is @tanstack/react-start/server-entry, with compatibility_flags: ["nodejs_compat"] in wrangler.jsonc.
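Based on that Cloudflare guide, the relevant `wrangler.jsonc` bits look roughly like this. This is a sketch: the template generates the full file, `name` is whatever you chose, and the `compatibility_date` here is a placeholder:

```jsonc
{
  "name": "my-ai-app",
  // Entrypoint per Cloudflare's TanStack Start guide
  "main": "@tanstack/react-start/server-entry",
  "compatibility_date": "2025-01-01", // placeholder, use your own
  "compatibility_flags": ["nodejs_compat"]
}
```

The `nodejs_compat` flag matters because the AI SDKs lean on Node-ish APIs that plain Workers don't expose by default.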
Common mistakes (I’ve made most of these)
- Putting provider keys in client env. Anything `VITE_*` is basically public.
- Skipping streaming for chat. You can do it, but it feels sluggish.
- No input limits. Cap message length, cap history length, truncate.
- Not handling provider errors. Return JSON errors with status codes. TanStack AI quickstart shows a clean pattern for this.
- Over-importing. With TanStack AI, follow their tree-shaking guidance so you don't bloat the bundle.
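The input-limits bullet deserves a few lines of actual code. Here's a sketch of clamping chat history before it hits the model; the caps are arbitrary numbers I picked, so tune them for your model's context window:

```ts
// Clamp chat input before it reaches the model: keep only the most
// recent N messages, and cap each message's character length.
const MAX_HISTORY = 20 // arbitrary: keep the last 20 messages
const MAX_MESSAGE_CHARS = 4_000 // arbitrary per-message cap

interface ChatMessage {
  role: string
  content: string
}

function clampHistory(messages: ChatMessage[]): ChatMessage[] {
  return messages.slice(-MAX_HISTORY).map((m) => ({
    role: m.role,
    content: m.content.slice(0, MAX_MESSAGE_CHARS),
  }))
}
```

Run this server-side, right after parsing the request body: the client-side limits are UX, the server-side limits are the actual defense.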
Conclusion
Adding AI Features to My TanStack Start App mostly comes down to a clean server boundary, then picking the streaming path you like.
Want the shortest route? AI SDK’s streamText() plus useChat() is hard to beat.
Want tool-heavy agents and tight bundle control? TanStack AI’s modular setup is really nice.
If you try this in your app, tell me what you’re building, and what model/provider combo you landed on. Also, if you’re tracking weekly model churn, my internal post on releases might be handy. GPT-54 is here: coping with weekly LLM updates.
Sources
- AI SDK, Getting Started with TanStack Start (quickstart, route handler, `streamText`, `useChat`, env key, install commands): https://ai-sdk.dev/docs/getting-started/tanstack-start
- AI SDK reference, `streamText()`: https://ai-sdk.dev/docs/reference/ai-sdk-core/stream-text
- AI SDK reference, `generateText()`: https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-text
- TanStack Start docs, Server Routes (raw HTTP endpoints, file routing conventions): https://tanstack.com/start/latest/docs/framework/react/guide/server-routes
- TanStack Start docs, Server Functions (`createServerFn`, validation, server-only execution): https://tanstack.com/start/latest/docs/framework/react/guide/server-functions
- TanStack AI docs, Quick Start (SSE chat endpoint, React `useChat`, OpenAI adapter, OpenRouter "300+ models" tip): https://tanstack.com/ai/latest/docs/getting-started/quick-start
- TanStack AI docs, Tool Architecture (tool call flow, approval flow, tool states): https://tanstack.com/ai/latest/docs/guides/tool-architecture
- TanStack AI docs, Tree-Shaking & Bundle Optimization (modular imports, avoid bundling unused activities/adapters): https://tanstack.com/ai/latest/docs/guides/tree-shaking
- OpenAI developer docs, Responses API migration guide (Responses recommended for new projects; internal stats: ~3% SWE-bench improvement, 40–80% cache utilization improvement): https://developers.openai.com/api/docs/guides/migrate-to-responses/
- Convex docs, TanStack Start quickstart note about Release Candidate stage: https://docs.convex.dev/quickstart/tanstack-start
- Cloudflare Workers docs, TanStack Start framework guide (Workers setup, entrypoint, config): https://developers.cloudflare.com/workers/framework-guides/web-apps/tanstack-start/
- LogRocket overview (third-party context: Start built on Vite/TanStack Router; pros/cons): https://blog.logrocket.com/tanstack-start-overview/