By Peter Smart

Design After The Page

When every digital experience is generated in real time for each user, what exactly are designers supposed to be making?

For thirty years, we've designed pages. First in print, then on screens. The medium changed, the tools evolved, but the core job stayed the same: author a static state, ship it, watch users navigate through the flows we predetermined.

That assumption—that designers author pages—is about to break.

Google recently released a prototype called Gemini-OS that makes the trajectory viscerally clear. It's a generative interface simulator where you tap a button—any button—and the system generates the next page in real time. No predefined flow. No pre-built windows. The model interprets your last action, assembles a response from components and design rules, and streams it to your screen in under a second.

Nothing pre-baked. Everything conditional.

This is an experiment, not a product. But it's a glimpse of where we're headed. Pages will still exist. You'll still be looking at them. But increasingly, no designer will have authored them directly. Systems will author them, in milliseconds, specifically for each user. And if that's the trajectory, it changes what designers should actually be building right now.

Example courtesy of Google Gemini

To each their own (page)

For decades, we've designed static states. A page exists. A user navigates to it. They see what everyone else sees, with maybe some light personalization: a name in the header, a few reordered products based on past purchases.

That model is starting to crack.

In the emerging paradigm, there's no universal page. There's a starting point—maybe an Egypt landing page on Expedia—but the moment interaction begins, the experience diverges. The system watches what you click. It infers the job you're trying to do. It assembles the next state specifically for you, drawing from components, content modules, and rules about what matters.

Here's a concrete example. You're exploring travel options. You've looked at cultural attractions in Morocco. You've browsed markets in Istanbul. Now you tap "5 nights in Cairo."

Today, you get the same Cairo page everyone gets. You do the mental math yourself: How does this compare to the other places I've seen? What's different about the museums here? Is this worth my limited time?

In the emerging model, the Cairo page is generated for you. It explicitly contextualizes Cairo against what you've already explored: "Cairo's cultural institutions are world-class—here's how they compare to what you looked at in Istanbul. The markets will feel similar to Marrakech, but the scale is different." The system does the mental math because it knows what you're trying to figure out.

No human team predesigned that page. It was assembled in milliseconds from components, content, and behavioral signals.

What designers should actually build

If systems are generating pages, what's left for us?

The answer reshapes the job entirely. Designers stop authoring screens and start authoring the ingredients and rules that intelligent systems use to compose experiences.

This means:

Components designed for AI composition.

  • The building blocks—cards, content modules, interactive elements—need to be built for recombination by a model, not just arrangement by a human in Figma. They need metadata about what they're for, when they're appropriate, what they can be combined with.
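
As a rough illustration, component metadata for machine composition might look something like the sketch below. The shape and field names are hypothetical, not taken from any shipping design system.

```typescript
// Hypothetical sketch: metadata a composition model could consult
// before deciding whether and how to use a component.
interface ComposableComponent {
  id: string;                 // e.g. "comparison-card"
  purpose: string;            // the job this component does for the user
  appropriateWhen: string[];  // situations where it is a good fit
  avoidWhen: string[];        // situations where it is not
  combinesWith: string[];     // ids of components it can sit alongside
  requiredData: string[];     // data the component needs to render honestly
}

const comparisonCard: ComposableComponent = {
  id: "comparison-card",
  purpose: "Contextualize a new option against places the user has already explored",
  appropriateWhen: ["user has viewed two or more comparable destinations"],
  avoidWhen: ["first visit", "no comparable browsing history"],
  combinesWith: ["itinerary-summary", "price-breakdown"],
  requiredData: ["currentDestination", "previouslyViewedDestinations"],
};
```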

Rules and orchestration logic.

  • When should the system be playful versus serious? How aggressive can upsells be in different contexts? What evidence needs to be shown before making a recommendation? These decisions used to be implicit in the layouts we designed. Now they need to be explicit in the logic we encode. (Google's prototype calls this a "UI constitution"—a set of design rules the model consults before generating each screen.)
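
To make that concrete, here is a minimal sketch of orchestration rules expressed in code. The field names and thresholds are illustrative assumptions, not the contents of Google's actual UI constitution.

```typescript
// Hypothetical sketch of orchestration rules a generation step
// could consult before composing each screen.
type Tone = "playful" | "neutral" | "serious";

interface OrchestrationRules {
  tone: (ctx: { taskIsHighStakes: boolean }) => Tone;
  maxUpsellsPerScreen: (ctx: { userIsComparing: boolean }) => number;
  recommendationRequires: string[]; // evidence that must accompany any recommendation
}

const rules: OrchestrationRules = {
  // Serious tone for high-stakes tasks (payments, cancellations), playful otherwise.
  tone: ({ taskIsHighStakes }) => (taskIsHighStakes ? "serious" : "playful"),
  // No upsells while the user is still comparing options.
  maxUpsellsPerScreen: ({ userIsComparing }) => (userIsComparing ? 0 : 1),
  recommendationRequires: ["source of the claim", "at least one alternative"],
};
```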

Constraint systems for brand and tone.

  • If the model is rendering visuals on the fly, "all illustrations must be blue outlines" becomes a bottleneck. Brand moves up a level: you define acceptable ranges of style, continuity requirements, tone guardrails. The system chooses specifics case by case.
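
One way to picture brand "moving up a level" is as ranges and guardrails rather than fixed assets. Again, a hypothetical sketch:

```typescript
// Hypothetical sketch: brand expressed as acceptable ranges and guardrails,
// leaving the specific choice to the generative system case by case.
interface BrandConstraints {
  illustrationStyles: string[];                        // acceptable range, not one mandate
  palette: { allowed: string[]; forbidden: string[] };
  toneGuardrails: string[];
  continuity: string[];                                // what must carry across screens
}

const brand: BrandConstraints = {
  illustrationStyles: ["line", "duotone", "photographic"],
  palette: { allowed: ["#0B3D91", "#F2F2F2"], forbidden: ["#FF0000"] },
  toneGuardrails: ["no sarcasm", "no artificial urgency"],
  continuity: ["type scale", "primary accent color"],
};
```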

Evaluation frameworks.

  • What does "good" look like when no two users see the same thing? Designers need to define success criteria the system can learn from, not just mockups for engineers to build.
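
Those success criteria can themselves be made machine-readable. A hypothetical sketch of what a team might score generated screens against:

```typescript
// Hypothetical sketch: criteria for judging generated screens,
// since no two users see the same layout.
interface EvaluationCriterion {
  name: string;
  question: string;      // what a reviewer (human or model) is asked
  passThreshold: number; // minimum acceptable score, 0 to 1
}

const generativeUICriteria: EvaluationCriterion[] = [
  { name: "task-progress", question: "Does this screen move the user's inferred task forward?", passThreshold: 0.8 },
  { name: "grounding", question: "Is every claim traceable to data the system actually has?", passThreshold: 1.0 },
  { name: "brand-fit", question: "Does the screen stay within the brand constraint ranges?", passThreshold: 0.9 },
];
```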

The Future is Generative

Fantasy demos generative interfaces at MIT's EmTech AI

The ground is already moving

The vast majority of teams are approaching AI the way we've always approached new technology: as something to integrate into existing workflows. Add a chatbot to the support page. Use generative copy for product descriptions. Let the model personalize a few modules while the rest of the page stays fixed.

This makes sense. It's low-risk, easy to ship, and produces visible results right now. But it also means optimizing for a paradigm that's already outdated.

The shift to fully generative interfaces is arriving faster than most teams are planning for. You can try the Gemini-OS demo right now. Google is already shipping adaptive responses that generate layouts on the fly. The infrastructure for this future is being built in the open, and it's moving quickly.

Most design teams have their heads down shipping screens. But eighteen months from now, when generative interfaces are the norm, that focus may look like time spent perfecting a craft the industry has already moved past.

Design's new substrate

The interesting problems are migrating. The decisions that actually shape what users experience are moving to a layer most designers don't have access to or vocabulary for.

Consider: when an AI decides what information to surface, it's drawing from retrieval chains—systems that determine which sources get consulted and how results get ranked. When it decides how to phrase a response, it's following system prompts—invisible instructions that shape tone, boundaries, and behavior. When it remembers (or forgets) what you told it last week, that's context window architecture at work.
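
These layers are smaller and more concrete than they sound. A hypothetical sketch of what they look like when written down (no particular model API implied):

```typescript
// Hypothetical sketch: the "invisible" layers above, written out as
// artifacts a design team could own and review.

// A system prompt is experience design expressed as instructions.
const systemPrompt = `
You are a travel planning assistant.
Tone: warm, concrete, never pushy.
Never recommend an option without naming at least one trade-off.
If you are unsure, say so rather than guessing.
`;

// A retrieval policy decides which sources get consulted and how results rank.
interface RetrievalPolicy {
  sources: string[];
  rankBy: "recency" | "relevance";
  maxResults: number;
}

const retrievalPolicy: RetrievalPolicy = {
  sources: ["editorial-guides", "verified-reviews"],
  rankBy: "relevance",
  maxResults: 5,
};
```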

These are experience design decisions with massive consequences for how products feel, whether they're trusted, whether they actually help people do what they came to do. And right now, they're being made almost entirely without designer input. 

Stay focused on the surface—polishing pages, debating button colors, shipping predetermined flows—and the work will slowly hollow out. The screens will still get made. But the consequential decisions will happen somewhere else, made by people who aren't waiting for designers to catch up.

3 questions for design leaders

If you're leading a design team, here's a starting diagnostic:

  1. What percentage of your team's work would survive a shift to generative interfaces? If most of what you're producing is static screens and predetermined flows, you're building inventory that may be obsolete the moment it ships.

  2. Can your design system be consumed by a model, or only by humans? Components need metadata, semantic structure, usage rules. If your system only makes sense to a designer arranging things in Figma, it's not ready for what's coming.

  3. Where are you in the room when model behavior gets decided? System prompts, retrieval logic, policy boundaries—these are being written right now, probably by engineers or researchers. If your team isn't part of those conversations, you're ceding the most consequential design decisions to people who may not be thinking about user experience at all.

As static pages rapidly give way to continuous flows, the locus of design is shifting. Are you ready to follow it?


