By Peter Smart

What Is An App Now?

When software generates function on demand, what's an app actually made of?

A fitness app today is a collection of features: workout libraries, timers, progress charts, maybe a social feed. Teams spend years refining navigation, polishing flows, debating which metrics deserve prominence on the home screen. The app ships.

Now picture something different.

Someone’s mid-workout and props their phone up to record a set of squats. Something looks off. They mention it. The app asks a few clarifying questions—what weight, whether there’s pain, whether it looks more like this from the side, whether they’re feeling tightness here. Then it overlays a diagram on the video showing exactly what’s happening through the movement, adjusts the plan for the day, and explains why.

That interaction, generated on the fly, may never exist in exactly that form again, for anyone else.

The first is an app as we've understood apps for fifteen years: branded, predefined utility. The second is a container for domain intelligence that generates function on demand.

We’re about to redefine what an app actually is.

The old object

For the past decade, product teams have shipped features. The roadmap is a list of capabilities to build: add workout tracking, add nutrition logging, add social challenges. Each one gets designed, developed, tested, and released. Users navigate to it through menus and tabs.

This model treats the app as a fixed structure, a house. You can rearrange the rooms, add new ones, improve the furniture. But the house has walls.

The system is legible as a finite set of functions. You can enumerate what the product does, quantify what's been shipped, and point to what's missing.

Design work in this paradigm focuses on information architecture, interaction flows, and visual systems. Good product designers obsess over how users move through screens, where friction lives, what deserves prominence. The craft is real and the details matter.

The roadmap assumes a controlled end state. You can name what "done" looks like in advance, then build toward it feature by feature. When something ships, it ships as a fixed artifact—broadly the same for everyone.

The underlying assumption is that you're authoring a static system. Users get what you built, more or less the same way every time.

The new object

The new object is a product whose core value is domain intelligence.

The winners will be the products with the most capable intelligence for helping people achieve specific things.

A fitness app becomes a fitness product. It wins not because of predetermined UI and static functionality, but because it contains the most capable fitness agent—one that can help end users in exactly the way they need.

The value lives in the quality of reasoning the system can do on your behalf. Can it understand your goals, your constraints, your history, your body? Can it synthesize that into guidance that actually works for you, right now, in this moment?

Function gets generated accordingly. The experience becomes different for every person who uses it. Functions exist for the second they’re needed, then dissipate.

The outputs aren’t enumerable in the way a feature list is. But the agent is still definable: a set of skills, capabilities, limitations, rules, and boundaries. What remains constant is the intelligence underneath.
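
To make "definable" concrete, here's a minimal sketch of that constant shape, written in TypeScript purely for illustration. Every name below is hypothetical, not a prescribed schema:

    // Hypothetical sketch: what stays constant underneath the
    // generated, per-moment outputs is a definition like this.
    interface AgentDefinition {
      skills: string[];       // domains it can reason over
      capabilities: string[]; // actions it can take for the user
      limitations: string[];  // what it cannot do
      rules: string[];        // behavior it must always follow
      boundaries: string[];   // lines it must never cross
    }

    const fitnessAgent: AgentDefinition = {
      skills: ["biomechanics", "load progression", "recovery"],
      capabilities: ["diagnose-form", "adjust-plan", "explain-why"],
      limitations: ["no medical diagnosis"],
      rules: ["ask a clarifying question when pain is reported"],
      boundaries: ["defer to a professional for suspected injury"],
    };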

The human interface becomes a surface for expressing needs and receiving entirely tailored responses.

Interfaces still exist. They become outputs of intelligence—generated because a specific user needed something specific in that moment. The overlay explaining what’s happening in the squat is something the system conceived and rendered on the fly, because that’s what the moment required.

[Video: "The Future is Generative", Fantasy demos generative interfaces at MIT's EmTech AI]

What product teams actually build

If the core value shifts to domain intelligence, the design objects change.

You shape domain models: what does this intelligence know, what does it care about, what can it reason over? A fitness agent needs to understand biomechanics, load progression, recovery, injury patterns, motivation. The depth and accuracy of that knowledge becomes the product.
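
As a sketch, the paragraph's three questions could become the shape of the domain model itself. The TypeScript below is illustrative only; every field and value is an assumption:

    // Hypothetical sketch of a domain model: what the intelligence
    // knows, what it cares about, and what it reasons over.
    interface DomainModel {
      knows: string[];       // bodies of knowledge it must hold
      caresAbout: string[];  // signals it weighs in every decision
      reasonsOver: string[]; // user data it connects that knowledge to
    }

    const fitnessDomain: DomainModel = {
      knows: ["biomechanics", "load progression", "recovery", "injury patterns", "motivation"],
      caresAbout: ["safety", "long-term adherence"],
      reasonsOver: ["training history", "reported pain", "goals"],
    };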

You define capability maps: what can this system actually do for people? Diagnose form issues. Adjust training load. Program recovery. Generate explanations. Generate diagrams and images. Generate 3D assets. Render visual overlays on captured video when that’s the clearest way to explain what’s happening. Access health data (with permission). Pull relevant research when it matters and translate it into guidance that fits a specific person.
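
One hedged way to express a capability map is as data the team maintains: each entry names what the system can do, what it consumes, how it may respond, and what access it needs. All names below are assumptions for illustration:

    // Hypothetical capability map for a fitness agent.
    type Output = "text" | "voice" | "diagram" | "video-overlay" | "3d-asset";

    interface Capability {
      name: string;
      inputs: string[];            // what the capability consumes
      outputs: Output[];           // forms its response may take
      requiresPermission?: string; // gated data access, if any
    }

    const capabilityMap: Capability[] = [
      { name: "diagnose-form", inputs: ["captured video"], outputs: ["video-overlay", "text"] },
      { name: "adjust-training-load", inputs: ["history", "goals"], outputs: ["text"] },
      { name: "program-recovery", inputs: ["health data"], outputs: ["text", "diagram"], requiresPermission: "health-data" },
      { name: "translate-research", inputs: ["question", "profile"], outputs: ["text"] },
    ];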

You’re also defining a set of logic, rules, and boundaries: what it can and can’t do, what it should never do, when it must ask clarifying questions, and when it should defer.
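
Rules like these become most useful when they're explicit enough to test. A minimal sketch, with hypothetical names and thresholds:

    // Hypothetical guardrail sketch: rules decide whether the agent
    // can answer directly, must ask a clarifying question, or defers.
    type Decision = "proceed" | "clarify" | "defer";

    interface Context {
      painReported: boolean;
      confidence: number; // 0..1, how sure the system is about what it sees
    }

    function applyRules(ctx: Context): Decision {
      // It should never push through a workout when pain is reported.
      if (ctx.painReported) return "defer";
      // It must ask clarifying questions before acting on low confidence.
      if (ctx.confidence < 0.7) return "clarify";
      return "proceed";
    }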

You design interaction patterns: how do users express what they need, and how does the system respond? Sometimes that’s text. Sometimes it’s voice. Sometimes it’s a generated visualization. The modality flexes based on what the moment requires.
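
Modality choice can be one small, explicit decision per response. A sketch, assuming the moment is described by a couple of signals:

    // Hypothetical sketch: the response modality flexes with the moment.
    type Modality = "text" | "voice" | "generated-visualization";

    interface Moment {
      handsFree: boolean;       // e.g. mid-set, phone propped up
      spatialQuestion: boolean; // is the answer about position or movement?
    }

    function chooseModality(moment: Moment): Modality {
      // Movement and position are clearest shown, not described.
      if (moment.spatialQuestion) return "generated-visualization";
      // Mid-workout, hands are busy; speak the answer.
      if (moment.handsFree) return "voice";
      return "text";
    }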

The roadmap shifts

Roadmaps are no longer Jira boards filled with discrete functions and concrete specifications. They become plans for enriching what the underlying intelligence can do.

This changes how you talk to stakeholders. You say: "We're teaching the system to reason about nutrition in the context of your training goals." The capability might surface in a dozen different ways depending on what users need in the moment. The value lives in the intelligence behind it.

Where to start

You don't need a fully autonomous agent to begin thinking this way.

First: go back to first principles. What is your user ultimately trying to achieve?

Second: map the many different ways users would want to interact, ask, and receive help—across multi-modal, multi-form factor experiences.

Third: work backward into the model capabilities required to meet those needs, not in one predetermined flow but in multiple forms. Define the access it needs, and the boundaries it must respect.
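
A hedged sketch of that backward mapping, with every goal, capability, and boundary below invented for illustration:

    // Hypothetical sketch: one user need, worked backward into the
    // capabilities, access, and boundaries it implies.
    interface UserNeed {
      goal: string;      // what the user is ultimately trying to achieve
      moments: string[]; // the forms in which they might ask for help
    }

    interface Requirement {
      capability: string;
      access: string[];     // what the system must be allowed to see
      boundaries: string[]; // what it must respect while doing it
    }

    const need: UserNeed = {
      goal: "squat safely with progressive load",
      moments: ["mid-workout video check", "voice question", "weekly plan review"],
    };

    const derived: Requirement[] = [
      {
        capability: "analyze movement from captured video",
        access: ["camera", "training history"],
        boundaries: ["never diagnose injury; defer to a professional"],
      },
      {
        capability: "re-plan the day's session",
        access: ["current plan", "reported pain"],
        boundaries: ["respect recovery limits"],
      },
    ];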

Design remains essential

The work moves from designing discrete surfaces to designing the intelligence substrate underneath them.

Just as we covered in Design After The Page, what we're imbuing these intelligences with isn't just capabilities. It's patterns, design logic, components, and atomic building blocks, so the intelligence can draw from the right primitives to assemble the right experience in real time.

The apps that matter in five years won’t be the ones with the most features. They’ll be the ones whose intelligence is deep, trustworthy, and domain-specific enough to be reliably useful.
