
Generative UI and Server-Driven UI: A React Dev's Intro

8 min read

You’ve shipped a React app. Product wants to change the homepage layout for a campaign, but that means a PR, review, and deploy. Marketing wants to A/B test three card designs. A user asks your chatbot something, and the right answer isn’t text, it’s a chart or a form.

Both patterns covered here solve versions of this problem. And they share one key insight: UI decisions don’t have to live in client code.


Server-Driven UI

The server describes the interface. The client renders it.

Instead of hardcoding which components appear when, your server sends a JSON payload:

{
	"components": [
		{ "type": "HeroBanner", "props": { "headline": "Summer Sale" } },
		{ "type": "ProductCarousel", "props": { "ids": [1, 2, 3] } }
	]
}

Your React client holds a registry, mapping type strings to actual components:

const REGISTRY = {
	HeroBanner: HeroBanner,
	ProductCarousel: ProductCarousel,
};

function SDUIScreen({ components }) {
	return components.map(({ type, props }, index) => {
		const Component = REGISTRY[type];
		// Include the index in the key so the same type can appear more than once
		return Component ? <Component key={`${type}-${index}`} {...props} /> : null;
	});
}

That’s it. The layout is now controlled by the server. No client deploy needed.
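On the server side, that payload is usually assembled from config or experiment data rather than hardcoded. A minimal sketch in plain TypeScript (the campaignActive flag and buildHomeScreen name are hypothetical stand-ins for whatever config store or experimentation service you actually use):

```typescript
type ComponentSpec = { type: string; props: Record<string, unknown> };

// Hypothetical flag; in practice this would come from a config store
// or an experimentation service, not a function argument.
function buildHomeScreen(campaignActive: boolean): ComponentSpec[] {
	const components: ComponentSpec[] = [];
	if (campaignActive) {
		components.push({ type: 'HeroBanner', props: { headline: 'Summer Sale' } });
	}
	components.push({ type: 'ProductCarousel', props: { ids: [1, 2, 3] } });
	return components;
}
```

Flip the flag and every client, web and mobile, gets the new layout on its next fetch.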

Why it matters: Airbnb, Uber, and Lyft built significant parts of their products this way. The main driver is mobile: shipping an app update takes days waiting for App Store review. With SDUI, a layout change that used to require a release now just takes a server config update. Uber’s team reported 10x feature velocity after adopting this pattern. Faire, an e-commerce platform, eliminated 90% of their rendering logic and 65% of their code after migrating.

On the web, SDUI shines on high-churn surfaces: homepages, promotions, settings pages — anything product or marketing changes constantly.

The honest tradeoff: your backend now needs to understand UI structure. Complex interactions like drag-and-drop, rich animations, or game-like UIs are hard to express in a JSON schema. Most teams apply SDUI to the surfaces that change often, and use native implementations for performance-critical screens.

Reach for SDUI when:

  • You’re building a mobile app and need to ship without App Store delays
  • Marketing or product controls the layout, not engineering
  • You need instant A/B testing across platforms

Generative UI

An AI model decides which components to render based on what the user asked.

The mental model: you build a library of components, describe each one to the AI, and the AI selects and populates them at runtime.

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await streamText({
	model: openai('gpt-4o'),
	tools: {
		showWeather: {
			description: 'Display current weather for a city',
			parameters: z.object({ city: z.string() }),
			execute: async ({ city }) => fetchWeather(city),
		},
	},
	prompt: "What's the weather in Tokyo?",
});

// When the AI calls showWeather, render your component
// (simplified — exact field names vary across AI SDK versions)
const toolResults = await result.toolResults;
const weather = toolResults.find((t) => t.toolName === 'showWeather');
if (weather) {
	return <WeatherCard data={weather.result} />;
}

The user asked a plain-language question. The AI understood that a WeatherCard — not a text reply — is the right response. You didn’t write conditional logic for this; the model did.

This is the key difference from traditional development: the AI composes the interface contextually rather than you encoding every possible state in advance. You build the components. The AI decides when to show them.

The control spectrum

Not all generative UI is the same. There are roughly three flavors:

  • Static GenUI — the AI picks from a predefined set of components and fills them with data. You control exactly what can render. Safest option, most popular in production.
  • Declarative GenUI — the AI returns a structured UI spec (like Google’s A2UI format), and the client renders it within your design constraints.
  • Open-ended GenUI — the AI generates full HTML or code, run in a sandboxed iframe. Maximum flexibility, maximum risk.

Most production systems use static or declarative patterns. If you’re just starting, static is where to begin.
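In practice, static GenUI ends up looking a lot like the SDUI registry above: a fixed allowlist mapping tool names to components, and anything outside it is dropped. A minimal sketch in plain TypeScript (the names here are hypothetical):

```typescript
type ToolCall = { toolName: string; args: Record<string, unknown> };

// Fixed allowlist: the model can only ever surface these components.
const GENUI_REGISTRY: Record<string, string> = {
	showWeather: 'WeatherCard',
	showChart: 'ChartPanel',
};

// Resolve a model tool call to a component name, or null if the model
// asked for something outside the allowlist.
function resolveComponent(call: ToolCall): string | null {
	return GENUI_REGISTRY[call.toolName] ?? null;
}
```

The model chooses; the registry constrains. That containment is why static GenUI is the safest starting point.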

The honest tradeoff: non-determinism. The same input can produce different UI each run. That’s usually fine for chat interfaces, but it means you need guardrails: clear tool descriptions, constrained parameter schemas, and testing that accounts for variation.
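One cheap guardrail: validate the model's arguments against an allowlist before executing the tool, so a malformed or hallucinated value never reaches your backend. With Zod, the parameters schema in the earlier example does this for you; the sketch below (plain TypeScript, hypothetical names) shows the same idea without a schema library:

```typescript
const ALLOWED_UNITS = ['celsius', 'fahrenheit'] as const;
type Unit = (typeof ALLOWED_UNITS)[number];

// Validate model-produced arguments before executing a tool.
// Returns null (reject) instead of throwing, so the caller can
// fall back to a text reply when the arguments don't check out.
function parseWeatherArgs(raw: unknown): { city: string; unit: Unit } | null {
	if (typeof raw !== 'object' || raw === null) return null;
	const { city, unit } = raw as Record<string, unknown>;
	if (typeof city !== 'string' || city.trim() === '') return null;
	if (!(ALLOWED_UNITS as readonly unknown[]).includes(unit)) return null;
	return { city: city.trim(), unit: unit as Unit };
}
```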

Reach for generative UI when:

  • You’re building an AI chat or assistant feature
  • The interface needs to respond to open-ended user requests
  • You want the AI to pick the right component for the context

Heads up on older content: Vercel’s AI SDK previously had an RSC-based approach using streamUI() and createStreamableUI(). That path is currently paused. If you find tutorials using those APIs, they’re outdated. The current recommended approach pairs server-side tool definitions (as in the streamText example above) with useChat() and tool parts on the client.


At a glance

                      Server-Driven UI                      Generative UI
Who decides layout    Backend server                        AI model
Deterministic?        Yes — same response, same UI          No — varies by inference
Best for              High-churn surfaces, mobile apps      AI chat, agent-driven interfaces
Main benefit          Ship without deploying client code    Contextual, adaptive interfaces
Main risk             Backend/frontend coupling             Non-determinism, hallucinations

Tooling to know

For Generative UI

  • Vercel AI SDK — useChat() + tool definitions. Start here. Works with any LLM provider and is framework-agnostic. Now on v6.
  • CopilotKit — More opinionated framework for adding AI features to existing React apps. Supports all three patterns (static, declarative, open-ended) and the AG-UI protocol adopted by Google, LangChain, and AWS.
  • assistant-ui — Composable chat UI components (shadcn/ui-style). Handles the UI layer so you can focus on tools. YC-backed, 50k+ weekly npm downloads.
  • Hashbrown — You register React components with schemas, and the LLM generates the entire component tree with streamed props. Good option if you want the model to have more layout control.
  • Tambo — Open-source. Register components with Zod prop schemas that automatically become LLM tool definitions. Minimal setup.

For Server-Driven UI

SDUI is more pattern than library. Most teams build their own renderer, but if you want to explore without starting from scratch:

  • NativeBlocks — SDUI platform with a visual editor and React client.
  • DivKit — Yandex’s open-source cross-platform SDUI framework. Production-tested at scale.

Where they’re heading

These two patterns are converging. In December 2025, Google released A2UI — a protocol where an AI agent selects components from a pre-approved catalog. Structurally, it’s identical to an SDUI component registry, but an LLM does the composition instead of a server config. The client still controls what components exist. The AI just decides which ones to use.

That’s the direction the industry is moving: a stable component library (SDUI’s strength) + AI-driven composition (Generative UI’s strength). CopilotKit already supports this pattern through their AG-UI protocol.


Which one to start with

Working on an AI feature — chat, smart search, an assistant of any kind? Start with Generative UI. The Vercel AI SDK generative UI guide walks through the tool-calling pattern with working code. The Vercel Academy tutorial is a good hands-on complement.

Working on a mobile app or a surface that changes constantly? Start with SDUI. Airbnb’s Ghost Platform writeup is the canonical reference. Apollo GraphQL’s three-part SDUI guide covers schema design patterns if you’re using GraphQL.

Neither requires an all-in architectural commitment. Start with one surface, prove the value, and expand from there.

