If you have played with the Vercel AI SDK over the past year, you have probably noticed how quickly it keeps growing. Every release tightens the loop between Next.js and the ai package. After building a few streaming chat demos for clients, I landed on a setup that feels reliable, stays friendly to Edge runtimes, and does not take days to wire together. In this guide I will walk through that recipe so you can launch your own chat interface with confidence.

Prerequisites

Before diving in, make sure a few basics are in place:

  • Node.js 18.17 or newer. That is the minimum Next.js 14 supports, and it ships the Web Streams API that the Edge routes below rely on.
  • A Next.js 14 project. The steps below assume the App Router but translate cleanly to the Pages Router too.
  • An API key for whichever model provider you plan to use, such as OPENAI_API_KEY or ANTHROPIC_API_KEY.

If you are starting from scratch, spin up a fresh Next.js project:

pnpm create next-app@latest my-ai-app --typescript --eslint --app
cd my-ai-app

Install the Vercel AI SDK and provider bindings

Add the ai package along with the provider binding that matches the model you prefer. For OpenAI that is the @ai-sdk/openai package, and zod is worth adding now for the tool and structured output examples later in this guide:

pnpm add ai @ai-sdk/openai zod

The ai package ships framework-agnostic helpers. When you install it inside a Next.js project, the React-flavoured utilities such as useChat (exported from ai/react) are ready to go.

Configure environment variables

Create a .env.local file so the server runtime can read your provider secret:

# .env.local
OPENAI_API_KEY="sk-your-key"

Keep this file out of Git. On Vercel you can mirror the key under Project Settings → Environment Variables so deployments pick it up automatically.
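
If you prefer to fail fast instead of discovering a missing key mid-request, a tiny guard helps. This is a hypothetical helper of my own, not part of the SDK:

// lib/env.ts — hypothetical fail-fast check for the provider secret
if (!process.env.OPENAI_API_KEY) {
  throw new Error(
    'OPENAI_API_KEY is not set. Add it to .env.local locally or to your Vercel project settings.'
  )
}

export const env = {
  OPENAI_API_KEY: process.env.OPENAI_API_KEY,
}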

Create a streaming chat route

Handling inference inside an Edge route keeps latency low. The streamText helper wraps the provider client and returns a result you can turn into a streaming Response, so the browser receives tokens without extra glue code.

// app/api/chat/route.ts
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { NextRequest } from 'next/server'
 
export const runtime = 'edge'
 
export async function POST(req: NextRequest) {
  const { messages } = await req.json()
 
  const result = await streamText({
    model: openai('gpt-4o-mini'),
    messages,
    system: 'You are a concise technical assistant who answers in markdown.',
  })
 
  return result.toDataStreamResponse()
}

A few notes from real-world testing:

  • streamText understands the array of chat messages that comes from the client hook, so conversation history flows through automatically (the default request shape is sketched after this list).
  • Provider-specific helpers such as openai apply the right base URL and auth header using the key from your environment.
  • Calling toDataStreamResponse() gives the App Router everything it needs to progressively stream chunks back to the browser.
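
For reference, here is a rough sketch of the JSON body the client hook posts on every submit. The type name is mine, not something the SDK exports:

// A hedged sketch of the default request body useChat sends to the route.
type ChatRequestBody = {
  messages: Array<{
    id?: string
    role: 'system' | 'user' | 'assistant'
    content: string
  }>
}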

Prefer raw text over the data stream protocol? Call result.toTextStreamResponse() instead to stream plain tokens. That variant suits server actions or any consumer that just wants text rather than the AI SDK wire format.
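
As an illustration, here is a minimal consumer for that plain text variant, assuming the route returns toTextStreamResponse(); the function name is just an example:

// Hypothetical plain-fetch consumer for a text-stream response.
async function readAssistantReply(prompt: string) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: [{ role: 'user', content: prompt }] }),
  })

  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  let reply = ''

  while (true) {
    const { value, done } = await reader.read()
    if (done) break
    reply += decoder.decode(value, { stream: true })
  }

  return reply
}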

Build the React chat component

On the client side the useChat hook handles optimistic UI updates, streaming parsing, and error states. Wrap it in a client component so it can respond to user input events.

// app/chat/page.tsx
'use client'
 
import { useChat } from 'ai/react'
 
export default function ChatPage() {
  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    isLoading,
    error,
  } = useChat({ api: '/api/chat' })
 
  return (
    <div className="mx-auto flex h-screen max-w-3xl flex-col gap-4 p-6">
      <header>
        <h1 className="text-3xl font-semibold">Build with the Vercel AI SDK</h1>
        <p className="text-sm text-zinc-400">
          Streaming responses keep the interface responsive, even on slower connections.
        </p>
      </header>
 
      <div className="flex flex-1 flex-col gap-3 overflow-y-auto rounded-lg border border-zinc-800 p-4">
        {messages.map((message) => (
          <article key={message.id} className="space-y-1">
            <p className="text-xs uppercase tracking-wide text-zinc-500">
              {message.role === 'user' ? 'You' : 'Assistant'}
            </p>
            <div className="prose prose-invert text-sm whitespace-pre-wrap">
              {message.content}
            </div>
          </article>
        ))}
        {isLoading && (
          <p className="text-sm text-zinc-500">Thinking…</p>
        )}
        {error && (
          <p className="text-sm text-red-400">{error.message}</p>
        )}
      </div>
 
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          className="flex-1 rounded-md border border-zinc-800 bg-zinc-900 px-3 py-2 text-sm focus:border-pink-500 focus:outline-none"
          placeholder="Ask me something about your data"
        />
        <button
          type="submit"
          disabled={isLoading}
          className="rounded-md bg-pink-600 px-4 py-2 text-sm font-medium text-white hover:bg-pink-500 disabled:cursor-not-allowed disabled:opacity-60"
        >
          Send
        </button>
      </form>
    </div>
  )
}

By default useChat posts to /api/chat, sends the accumulated messages, and expects a streaming response. It also parses the incoming stream and appends tokens to the latest assistant message for you, so the UI stays lightweight without any manual stream handling.
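
The hook also accepts a handful of options. A short sketch follows; the extra body field and its name are assumptions about your own backend rather than SDK requirements:

// Inside a client component: forward extra request data and react to lifecycle events.
const chat = useChat({
  api: '/api/chat',
  // Merged into the JSON body of every request; your route has to read it.
  body: { projectId: 'demo-project' },
  onError: (err) => {
    console.error('chat request failed', err)
  },
  onFinish: (message) => {
    console.log('assistant finished with', message.content.length, 'characters')
  },
})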

Tool calling and structured data

The SDK covers two related needs here: streamText accepts a tools map for function calling, and streamObject pairs with a Zod schema when you need the final answer as validated structured data. Here is a small example that registers a weather tool and streams the result back:

import { streamText, tool } from 'ai'
import { z } from 'zod'
import { openai } from '@ai-sdk/openai'
// fetchWeatherFromAPI represents whatever data source you rely on.
import { fetchWeatherFromAPI } from '@/lib/weather'
 
export async function POST(req: Request) {
  const result = await streamText({
    model: openai('gpt-4o-mini'),
    tools: {
      getWeather: tool({
        description: 'Look up the current temperature for a city',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => {
          const temp = await fetchWeatherFromAPI(city)
          return { summary: `It is ${temp}°C in ${city}`, temperatureC: temp }
        },
      }),
    },
    prompt: 'What is the current temperature in Paris?',
  })
 
  return result.toDataStreamResponse()
}

When the model decides to call getWeather, the SDK handles the function call, executes your implementation, and streams the tool result back as part of the response. If you instead want the final answer as a single object validated against a Zod schema, reach for streamObject, as sketched below.
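
Here is a minimal sketch of that structured variant. The schema and prompt mirror the weather example, and toTextStreamResponse() streams the generated JSON as plain text:

import { streamObject } from 'ai'
import { z } from 'zod'
import { openai } from '@ai-sdk/openai'
 
const WeatherSchema = z.object({
  summary: z.string(),
  temperatureC: z.number(),
})
 
export async function POST(req: Request) {
  const result = await streamObject({
    model: openai('gpt-4o-mini'),
    schema: WeatherSchema,
    prompt: 'Return the current temperature in Paris as structured data.',
  })
 
  // The generated object is streamed incrementally as JSON text.
  return result.toTextStreamResponse()
}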

Observability and deployment tips

  • Keep an eye on token usage. The streamText result exposes usage data once the stream finishes (for example through its onFinish callback), which you can log or forward to your observability tooling; a sketch follows this list.
  • Deploy Edge routes close to the users who will call them. streamText runs happily in the Edge runtime without extra configuration.
  • Remember to polyfill fetch-compatible APIs in tests. The SDK leans on the Web Streams API, so use Node 18.17+ or the undici polyfill locally.
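
Here is a rough sketch of that usage logging, assuming the chat route from earlier; the console.log is a stand-in for whatever logging you actually use:

const result = await streamText({
  model: openai('gpt-4o-mini'),
  messages,
  onFinish: ({ usage, finishReason }) => {
    // Runs on the server once the last chunk has been sent.
    console.log('finished with', finishReason, 'using', usage.totalTokens, 'tokens')
  },
})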

Where to go next

The Vercel AI SDK documentation includes recipes for retrieval-augmented generation, storage with Vercel KV, and even SvelteKit and Remix examples if you like to mix frameworks. With the pieces above you now have a modern streaming chat experience running end to end in Next.js.