OpenAI Apps MCP: Build ChatGPT apps with MCP servers on Cloudflare Workers. Extend ChatGPT with custom tools and interactive widgets (HTML/JS UI). Use when: developing ChatGPT extensions, implementing MCP servers, or troubleshooting CORS, widget 404s, MIME types, or ASSETS binding errors.
Installation
Usage
After installing, this skill will be available to your AI coding assistant.
Verify installation:
skills list

Skill Instructions
name: OpenAI Apps MCP
description: |
  Build ChatGPT apps with MCP servers on Cloudflare Workers. Extend ChatGPT with custom tools and interactive widgets (HTML/JS UI).
  Use when: developing ChatGPT extensions, implementing MCP servers, or troubleshooting CORS blocking (allow chatgpt.com), widget 404s (missing ui://widget/), wrong MIME type (text/html+skybridge), or ASSETS binding undefined.
allowed-tools: [Read, Write, Edit, Bash, Glob, Grep]
Building OpenAI Apps with Stateless MCP Servers
Status: Production Ready
Last Updated: 2026-01-03
Dependencies: cloudflare-worker-base, hono-routing (optional)
Latest Versions: @modelcontextprotocol/sdk@1.25.1, hono@4.11.3, zod@4.1.13, wrangler@4.54.0
Overview
Build ChatGPT Apps using MCP (Model Context Protocol) servers on Cloudflare Workers. These apps extend ChatGPT with custom tools and interactive widgets (HTML/JS UIs rendered in an iframe).
Architecture: ChatGPT → MCP endpoint (JSON-RPC 2.0) → Tool handlers → Widget resources (HTML)
Status: Apps available to Business/Enterprise/Edu (GA Nov 13, 2025). MCP Apps Extension (SEP-1865) formalized Nov 21, 2025.
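For orientation, one tools/call exchange over that pipeline looks roughly like this. This is a sketch: the payload shape mirrors the quick-start tool below, and the exact requests ChatGPT sends are defined by the MCP specification.
// Request ChatGPT POSTs to /mcp (JSON-RPC 2.0)
{ "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "hello", "arguments": { "name": "Ada" } } }

// Response produced by the tool handler
{ "jsonrpc": "2.0", "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "Hello, Ada!" }],
    "_meta": { "initialData": { "name": "Ada" } }
  } }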
Quick Start
1. Scaffold & Install
npm create cloudflare@latest my-openai-app -- --type hello-world --ts --git --deploy false
cd my-openai-app
npm install @modelcontextprotocol/sdk@1.25.1 hono@4.11.3 zod@4.1.13
npm install -D @cloudflare/vite-plugin@1.17.1 vite@7.2.4
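The Cloudflare Vite plugin needs a vite.config.ts; a minimal sketch is below. The cloudflare() import is the plugin's documented entry point, while the extra rollup input is an assumption about how src/widgets/hello.html ends up in the client asset build (dist/client).
// vite.config.ts (sketch)
import { defineConfig } from 'vite';
import { cloudflare } from '@cloudflare/vite-plugin';

export default defineConfig({
  plugins: [cloudflare()], // reads wrangler.jsonc for the Worker entry and assets
  build: {
    rollupOptions: {
      input: { hello: 'src/widgets/hello.html' } // assumption: bundle the widget into the client build
    }
  }
});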
2. Configure wrangler.jsonc
{
"name": "my-openai-app",
"main": "dist/index.js",
"compatibility_flags": ["nodejs_compat"], // Required for MCP SDK
"assets": {
"directory": "dist/client",
"binding": "ASSETS" // Must match TypeScript
}
}
3. Create MCP Server (src/index.ts)
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
const app = new Hono<{ Bindings: { ASSETS: Fetcher } }>();
// CRITICAL: Must allow chatgpt.com
app.use('/mcp/*', cors({ origin: 'https://chatgpt.com' }));
const mcpServer = new Server(
{ name: 'my-app', version: '1.0.0' },
{ capabilities: { tools: {}, resources: {} } }
);
// Tool registration
mcpServer.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [{
name: 'hello',
description: 'Use this when user wants to see a greeting',
inputSchema: {
type: 'object',
properties: { name: { type: 'string' } },
required: ['name']
},
annotations: {
openai: { outputTemplate: 'ui://widget/hello.html' } // Widget URI
}
}]
}));
// Tool execution
mcpServer.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name === 'hello') {
const { name } = request.params.arguments as { name: string };
return {
content: [{ type: 'text', text: `Hello, ${name}!` }],
_meta: { initialData: { name } } // Passed to widget
};
}
throw new Error(`Unknown tool: ${request.params.name}`);
});
app.post('/mcp', async (c) => {
const body = await c.req.json();
const response = await mcpServer.handleRequest(body);
return c.json(response);
});
app.get('/widgets/*', async (c) => c.env.ASSETS.fetch(c.req.raw));
export default app;
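The quick start declares a resources capability but registers no resources. A sketch of registering the widget so ChatGPT can load it follows; ListResourcesRequestSchema and ReadResourceRequestSchema are standard SDK schemas, while the module-level assets variable and the /widgets/hello.html asset path are simplifying assumptions (assign assets = c.env.ASSETS inside the /mcp route, or construct the server per request).
import { ListResourcesRequestSchema, ReadResourceRequestSchema } from '@modelcontextprotocol/sdk/types.js';

let assets: Fetcher | undefined; // set from c.env.ASSETS in the /mcp route (sketch-level shortcut)

mcpServer.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [{
    uri: 'ui://widget/hello.html',
    name: 'Hello widget',
    mimeType: 'text/html+skybridge' // must be text/html+skybridge, not text/html
  }]
}));

mcpServer.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri !== 'ui://widget/hello.html' || !assets) {
    throw new Error(`Unknown resource: ${request.params.uri}`);
  }
  // The ASSETS binding routes by pathname; the hostname here is arbitrary.
  const res = await assets.fetch('https://assets/widgets/hello.html');
  return {
    contents: [{
      uri: 'ui://widget/hello.html',
      mimeType: 'text/html+skybridge',
      text: await res.text()
    }]
  };
});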
4. Create Widget (src/widgets/hello.html)
<!DOCTYPE html>
<html>
<head>
<style>
body { margin: 0; padding: 20px; font-family: system-ui; }
</style>
</head>
<body>
<div id="greeting">Loading...</div>
<script>
if (window.openai && window.openai.getInitialData) {
const data = window.openai.getInitialData();
document.getElementById('greeting').textContent = `Hello, ${data.name}! 👋`;
}
</script>
</body>
</html>
5. Deploy
npm run build
npx wrangler deploy
npx @modelcontextprotocol/inspector https://my-app.workers.dev/mcp
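Before connecting it to ChatGPT, you can smoke-test the deployed endpoint with a plain JSON-RPC request (use the URL from your deploy output; tools/list is a standard MCP method):
curl -s -X POST https://my-app.workers.dev/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
# Expect a JSON-RPC result listing the "hello" tool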
Critical Requirements
CORS: Must allow https://chatgpt.com on /mcp/* routes
Widget URI: Must use ui://widget/ prefix (e.g., ui://widget/map.html)
MIME Type: Must be text/html+skybridge for HTML resources
Widget Data: Pass via _meta.initialData (accessed via window.openai.getInitialData())
Tool Descriptions: Action-oriented ("Use this when user wants to...")
ASSETS Binding: Serve widgets from ASSETS, not bundled in worker code
SSE: Send heartbeat every 30s (100s timeout on Workers)
Known Issues Prevention
This skill prevents 8 documented issues:
Issue #1: CORS Policy Blocks MCP Endpoint
Error: Access to fetch blocked by CORS policy
Fix: app.use('/mcp/*', cors({ origin: 'https://chatgpt.com' }))
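If preflight (OPTIONS) requests still fail, a slightly more explicit configuration can help; the header list below is an assumption about what clients may send, not an official requirement:
app.use('/mcp/*', cors({
  origin: 'https://chatgpt.com',
  allowMethods: ['GET', 'POST', 'OPTIONS'],
  allowHeaders: ['Content-Type', 'Authorization', 'Mcp-Session-Id'] // assumed client headers
}));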
Issue #2: Widget Returns 404 Not Found
Error: 404 (Not Found) for widget URL
Fix: Use ui://widget/ prefix (not resource:// or /widgets/)
annotations: { openai: { outputTemplate: 'ui://widget/map.html' } }
Issue #3: Widget Displays as Plain Text
Error: HTML source code visible instead of rendered widget
Fix: MIME type must be text/html+skybridge (not text/html)
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
resources: [{ uri: 'ui://widget/map.html', mimeType: 'text/html+skybridge' }]
}));
Issue #4: ASSETS Binding Undefined
Error: TypeError: Cannot read property 'fetch' of undefined
Fix: Binding name in wrangler.jsonc must match TypeScript
{ "assets": { "binding": "ASSETS" } } // wrangler.jsonc
type Bindings = { ASSETS: Fetcher }; // index.ts
Issue #5: SSE Connection Drops After 100 Seconds
Error: SSE stream closes unexpectedly
Fix: Send heartbeat every 30s (Workers time out after 100s of inactivity)
const heartbeat = setInterval(async () => {
await stream.writeSSE({ data: JSON.stringify({ type: 'heartbeat' }), event: 'ping' });
}, 30000);
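A fuller sketch using Hono's streamSSE helper, with cleanup on disconnect (the /mcp/sse route name is an assumption):
import { streamSSE } from 'hono/streaming';

app.get('/mcp/sse', (c) =>
  streamSSE(c, async (stream) => {
    const heartbeat = setInterval(() => {
      // Keeps the connection under the ~100s Workers inactivity timeout
      stream.writeSSE({ data: JSON.stringify({ type: 'heartbeat' }), event: 'ping' });
    }, 30000);
    // ... write real MCP events here ...
    await new Promise<void>((resolve) => stream.onAbort(resolve)); // hold the stream open until the client disconnects
    clearInterval(heartbeat);
  })
);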
Issue #6: ChatGPT Doesn't Suggest Tool
Error: Tool registered but never appears in suggestions
Fix: Use action-oriented descriptions
// ✅ Good: 'Use this when user wants to see a location on a map'
// ❌ Bad: 'Shows a map'
Issue #7: Widget Can't Access Initial Data
Error: window.openai.getInitialData() returns undefined
Fix: Pass data via _meta.initialData
return {
content: [{ type: 'text', text: 'Here is your map' }],
_meta: { initialData: { location: 'SF', zoom: 12 } }
};
Issue #8: Widget Scripts Blocked by CSP
Error: Refused to load script (CSP directive)
Fix: Use inline scripts or same-origin scripts. Third-party CDNs blocked.
<!-- ✅ Works --> <script>console.log('ok');</script>
<!-- ❌ Blocked --> <script src="https://cdn.example.com/lib.js"></script>
MCP SDK 1.25.x Updates (December 2025)
Breaking Changes from @modelcontextprotocol/sdk@1.24.x → 1.25.x:
- Removed loose type exports (Prompts, Resources, Roots, Sampling, Tools) - use specific schemas
- ES2020 target required (previous: ES2018)
- setRequestHandler is now type-safe - incorrect schemas throw type errors
New Features:
- Tasks (v1.24.0+): Long-running operations with progress tracking
- Sampling with Tools (v1.24.0+): Tools can request model sampling
- OAuth Client Credentials (M2M): Machine-to-machine authentication
Migration: If using loose type imports, update to specific schema imports:
// ❌ Old (removed in 1.25.0)
import { Tools } from '@modelcontextprotocol/sdk/types.js';
// ✅ New (1.25.1+)
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
Zod 4.0 Migration Notes (MAJOR UPDATE - July 2025)
Breaking Changes from zod@3.x → 4.x:
- .default() now expects the input type (not the output type); use .prefault() for the old behavior
- ZodError: read error.issues (not error.errors)
- .merge() and .superRefine() are deprecated
- Optional properties with defaults now always apply
Performance: 14x faster string parsing, 7x faster arrays, 6.5x faster objects
Migration: Update validation code:
// Zod 4.x
try {
const validated = schema.parse(data);
} catch (error) {
if (error instanceof z.ZodError) {
return { content: [{ type: 'text', text: error.issues.map(e => e.message).join(', ') }] };
}
}
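In an MCP tool handler the same pattern looks like this; HelloArgs is a hypothetical schema mirroring the quick-start inputSchema, and this variant would replace the CallTool handler shown earlier:
import { z } from 'zod';

const HelloArgs = z.object({ name: z.string().min(1) });

mcpServer.setRequestHandler(CallToolRequestSchema, async (request) => {
  const parsed = HelloArgs.safeParse(request.params.arguments);
  if (!parsed.success) {
    // Zod 4: read error.issues (not error.errors)
    return {
      content: [{ type: 'text', text: parsed.error.issues.map((i) => i.message).join(', ') }],
      isError: true
    };
  }
  return { content: [{ type: 'text', text: `Hello, ${parsed.data.name}!` }] };
});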
Dependencies
{
"dependencies": {
"@modelcontextprotocol/sdk": "^1.25.1",
"hono": "^4.11.3",
"zod": "^4.1.13"
},
"devDependencies": {
"@cloudflare/vite-plugin": "^1.17.1",
"@cloudflare/workers-types": "^4.20260103.0",
"vite": "^7.2.4",
"wrangler": "^4.54.0"
}
}
Official Documentation
- MCP Specification: https://modelcontextprotocol.io/ (Latest: 2025-11-25)
- MCP SDK: https://github.com/modelcontextprotocol/typescript-sdk
- OpenAI Apps SDK: https://developers.openai.com/apps-sdk
- MCP Apps Extension (SEP-1865): https://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-apps/
- Context7 Library ID: /modelcontextprotocol/typescript-sdk
Production Reference
Open Source Example: https://github.com/jezweb/chatgpt-app-sdk (portfolio carousel widget)
- Live in Production: Rendering in ChatGPT Business
- MCP Server: Full JSON-RPC 2.0 implementation with tools + resources (~310 lines)
- Widget Integration: WordPress API → window.openai.toolOutput → React carousel
- Database: D1 (SQLite) for contact form submissions
- Stack: Hono 4 + React 19 + Tailwind v4 + Drizzle ORM
- Key Files:
/src/lib/mcp/server.ts - Complete MCP handler
/src/server/tools/portfolio.ts - Tool with widget annotations
/src/widgets/PortfolioWidget.tsx - Data access pattern
- Verified: All 8 known issues prevented, zero errors in production