## Installation

After installing, this skill will be available to your AI coding assistant. Verify installation:

```shell
npx agent-skills-cli list
```

## Skill Instructions
```yaml
---
name: ai-sdk-6
description: Vercel AI SDK v6 development. Use when building AI agents, chatbots, tool integrations, streaming apps, or structured output with the ai package. Covers ToolLoopAgent, useChat, generateText, streamText, tool approval, smoothStream, provider tools, MCP integration, and Output patterns.
argument-hint: "[question or feature]"
---
```
# Vercel AI SDK v6 Development Guide

Use this skill when developing AI-powered features with Vercel AI SDK v6 (the `ai` package).

## Quick Reference

### Installation

```shell
bun add ai @ai-sdk/openai zod # or @ai-sdk/anthropic, @ai-sdk/google, etc.
```
### Core Functions

| Function | Purpose |
|---|---|
| `generateText` | Non-streaming text generation (plus structured output via `Output`) |
| `streamText` | Streaming text generation (plus structured output via `Output`) |

**v6 Note:** `generateObject`/`streamObject` are deprecated. Use `generateText`/`streamText` with `output: Output.object({ schema })` instead.
### Structured Output (v6)

```typescript
import { generateText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const { output } = await generateText({
  model: anthropic("claude-sonnet-4-6"),
  output: Output.object({
    schema: z.object({
      sentiment: z.enum(["positive", "neutral", "negative"]),
      topics: z.array(z.string()),
    }),
  }),
  prompt: "Analyze this feedback...",
});
```

Output types: `Output.object()`, `Output.array()`, `Output.choice()`, `Output.json()`
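Conceptually, `Output.object({ schema })` validates the model's JSON against the Zod schema before it is returned as `output`. A standalone sketch of that validation step in plain TypeScript (illustrative only, not the SDK's internals; `parseFeedback` and the `Feedback` type are hypothetical names):

```typescript
// Illustrative only: the kind of check Output.object performs on the
// model's JSON before returning `output` (the SDK itself uses Zod).
type Feedback = {
  sentiment: "positive" | "neutral" | "negative";
  topics: string[];
};

function parseFeedback(raw: string): Feedback {
  const value = JSON.parse(raw);
  const sentiments = ["positive", "neutral", "negative"];
  if (!sentiments.includes(value.sentiment)) {
    throw new Error(`invalid sentiment: ${value.sentiment}`);
  }
  if (!Array.isArray(value.topics) || !value.topics.every((t: unknown) => typeof t === "string")) {
    throw new Error("topics must be an array of strings");
  }
  return value as Feedback;
}

const parsed = parseFeedback('{"sentiment":"positive","topics":["pricing","support"]}');
console.log(parsed.topics.length); // 2
```

The practical upshot: with `Output`, schema violations surface as errors instead of silently malformed objects.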
### Agent Class (v6 Key Feature)

```typescript
import { ToolLoopAgent, tool, stepCountIs } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const myAgent = new ToolLoopAgent({
  model: anthropic("claude-sonnet-4-6"),
  instructions: "You are a helpful assistant.",
  tools: {
    getData: tool({
      description: "Fetch data from API",
      inputSchema: z.object({
        query: z.string(),
      }),
      execute: async ({ query }) => {
        // Fetch and return data for `query`
        return { result: "data" };
      },
    }),
  },
  stopWhen: stepCountIs(20), // cap the tool loop at 20 steps
});

// Usage
const { text } = await myAgent.generate({ prompt: "Hello" });
const stream = myAgent.stream({ prompt: "Hello" });
```
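`stopWhen: stepCountIs(20)` ends the tool loop once the agent has taken 20 steps. A minimal sketch of how such a stop condition behaves (illustrative, not the SDK's implementation; `stepCountIsSketch` is a hypothetical name):

```typescript
// Illustrative sketch of a step-count stop condition for a tool loop.
// The real stepCountIs receives richer state; only stepCount matters here.
type StopCondition = (state: { stepCount: number }) => boolean;

function stepCountIsSketch(max: number): StopCondition {
  return ({ stepCount }) => stepCount >= max;
}

const stop = stepCountIsSketch(20);
console.log(stop({ stepCount: 5 }));  // false: keep looping
console.log(stop({ stepCount: 20 })); // true: stop the loop
```

The cap is a safety net: without it, a model that keeps requesting tools could loop indefinitely.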
### API Route with Agent

```typescript
// app/api/chat/route.ts
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/agents/my-agent";

export async function POST(request: Request) {
  const { messages } = await request.json();
  return createAgentUIStreamResponse({
    agent: myAgent,
    uiMessages: messages,
  });
}
```
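The route trusts `request.json()` to contain a `messages` array. If you want a defensive check before handing the body to the agent, one possible sketch (the `extractMessages` helper is hypothetical, not part of the SDK):

```typescript
// Hypothetical guard for the chat route's request body: reject anything
// that is not an object with a `messages` array.
function extractMessages(body: unknown): unknown[] {
  if (
    typeof body !== "object" || body === null ||
    !Array.isArray((body as { messages?: unknown }).messages)
  ) {
    throw new Error("request body must be { messages: [...] }");
  }
  return (body as { messages: unknown[] }).messages;
}

console.log(extractMessages({ messages: [{ role: "user" }] }).length); // 1
```

Throwing here lets the route return a 4xx early instead of failing mid-stream inside the agent.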
### Smooth Streaming

```typescript
import { createAgentUIStreamResponse, smoothStream } from "ai";

return createAgentUIStreamResponse({
  agent: myAgent,
  uiMessages: messages,
  experimental_transform: smoothStream({
    delayInMs: 15,
    chunking: "word", // "word" | "line" | "none"
  }),
});
```
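`chunking: "word"` buffers the model's deltas and flushes them one word at a time, with `delayInMs` between flushes. A rough sketch of word-level chunking (illustrative, not the SDK's implementation; `wordChunks` is a hypothetical name):

```typescript
// Split buffered text into word-sized chunks, roughly the way
// smoothStream's "word" chunking emits them (illustrative sketch).
function wordChunks(text: string): string[] {
  // Each chunk is a word plus its trailing whitespace.
  return text.match(/\S+\s*/g) ?? [];
}

console.log(wordChunks("Hello world, streaming!"));
// ["Hello ", "world, ", "streaming!"]
```

Word chunking trades a little latency for a steadier, typewriter-like feel; `"none"` passes deltas through untouched.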
### useChat Hook (Client)

```tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { useState } from "react";

export function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage, status } = useChat({
    transport: new DefaultChatTransport({
      api: "/api/chat",
    }),
  });

  return (
    <>
      {messages.map((msg) => (
        <div key={msg.id}>
          {msg.parts.map((part, i) =>
            part.type === "text" ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          if (input.trim()) {
            sendMessage({ text: input });
            setInput("");
          }
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={status !== "ready"}
        />
        <button type="submit" disabled={status !== "ready"}>
          Send
        </button>
      </form>
    </>
  );
}
```

**v6 Note:** `useChat` no longer manages input state internally. Use `useState` for controlled inputs.
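Each message's `parts` array mixes text with other part types (tool calls, reasoning, etc.), which is why the render loop filters on `part.type === "text"`. The same filtering can be factored into a small helper (`messageText` is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper: concatenate the text parts of a UIMessage-like
// parts array, skipping tool calls and other non-text parts.
type Part = { type: string; text?: string };

function messageText(parts: Part[]): string {
  return parts
    .filter((p): p is Part & { text: string } => p.type === "text" && typeof p.text === "string")
    .map((p) => p.text)
    .join("");
}

console.log(messageText([
  { type: "text", text: "Hello " },
  { type: "tool-call" },
  { type: "text", text: "world" },
])); // "Hello world"
```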
## Reference Documentation

For detailed information, see:

- `agents.md` - ToolLoopAgent, loop control, workflows
- `core-functions.md` - generateText, streamText, Output patterns
- `tools.md` - Tool definition with Zod schemas
- `ui-hooks.md` - useChat, UIMessage, streaming
- `middleware.md` - Custom middleware patterns
- `mcp.md` - MCP server integration

## Official Documentation

For the latest information, see the AI SDK docs.
