We're at a genuinely weird inflection point in frontend development. AI can generate entire interfaces now. LLMs can reason about data and layout. And yet — most SaaS products still ship hand-crafted React apps, each building its own UI, its own accessibility layer, its own theme system, its own responsive breakpoints. Not every service, but the vast majority.
That's a lot of duplicated effort for what's essentially the same job — showing a human some data and letting them do stuff with it.
I've been thinking about this a lot lately, and I built a proof of concept to test an idea: what if the browser itself generated the UI?
Where we are right now
The industry is circling this idea from multiple angles, but nobody's quite landed on it yet.
Server-driven UI has been around for a while — Airbnb and others pioneered it for mobile, where app store review cycles make shipping UI changes painful. The server sends down a JSON tree describing what to render, and the client just follows instructions. It's clever, but the server is still calling the shots.
Google recently shipped Natively Adaptive Interfaces — a framework that uses AI agents to make accessibility a default rather than an afterthought. Really cool idea, and the right instinct. But it's still operating within a single app's boundaries. Your accessibility preferences don't carry between Google's products and, say, your project management tool.
Then there's the generative UI wave — CopilotKit, Vercel's AI SDK, and others building frameworks where LLMs generate components on the fly. These are powerful developer tools, but they're still developer tools. The generation happens at build time or on the server. The service is still in control.
See the pattern? Every approach keeps the power on the service side.
Flip it
Here's the idea behind the adaptive browser: what if the generation happened on your side?
Instead of a service shipping you a finished frontend, it publishes a manifest — a structured description of what it can do. Its capabilities, endpoints, data shapes, what actions are available. Think of it like an API spec, but semantic. Not just "here's a GET endpoint" but "here's a list of repositories, they're sortable by stars and language, you can create, delete, star, or fork them."
Your browser takes that manifest, calls the actual APIs, gets real data back, and then generates the UI based on your preferences. Your font size. Your colour scheme. Your preferred layout (tables vs cards vs kanban). Your accessibility needs. All applied universally, across every service.
The manifest for something like GitHub looks roughly like this — a service describes its capabilities and the browser figures out the rest:
```yaml
service:
  name: "GitHub"
  domain: "api.github.com"
  capabilities:
    - id: "repositories"
      endpoints:
        - path: "/user/repos"
          semantic: "list"
          entity: "repository"
          sortable_fields: [name, updated_at, stargazers_count]
          actions: [create, delete, star, fork]
```
The browser takes that, fetches the data, and generates a bespoke interface — using an LLM to reason about the best way to present it given who you are and what you're trying to do.
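To make that step concrete, here's a minimal sketch of the browser-agent side in TypeScript. All the type names and the `buildUiPrompt` helper are hypothetical — the real adaptive browser's internals may look nothing like this — but it shows the core move: combining a service capability with user preferences into an instruction the LLM can act on.

```typescript
// Hypothetical shapes — these mirror the manifest sketch above,
// not any real adaptive-browser API.
interface Capability {
  id: string;
  entity: string;
  sortableFields: string[];
  actions: string[];
}

interface Preferences {
  layout: "table" | "cards" | "kanban";
  colourScheme: "light" | "dark" | "high-contrast";
  fontSize: number;
}

// Combine what the service can do with how the user wants it shown.
// In a real agent this string would be handed to an LLM alongside
// the fetched data; here we just construct it.
function buildUiPrompt(cap: Capability, prefs: Preferences): string {
  return [
    `Render a ${prefs.layout} view of ${cap.entity} items.`,
    `Sortable fields: ${cap.sortableFields.join(", ")}.`,
    `Available actions: ${cap.actions.join(", ")}.`,
    `Apply a ${prefs.colourScheme} colour scheme at ${prefs.fontSize}px base font size.`,
  ].join(" ");
}

const prompt = buildUiPrompt(
  {
    id: "repositories",
    entity: "repository",
    sortableFields: ["name", "updated_at", "stargazers_count"],
    actions: ["create", "delete", "star", "fork"],
  },
  { layout: "cards", colourScheme: "dark", fontSize: 18 },
);
console.log(prompt);
```

The point of the sketch is the direction of data flow: the capability comes from the service, the preferences come from the user, and only the browser ever sees both.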
Why this matters more than it sounds
When I was building the app store and integrations platforms at Xero, one of the constant headaches was that every third-party integration had its own UI patterns. Users had to learn a new interface for every app they connected. If the browser was generating the UI from a shared set of preferences, that problem just… goes away.
Accessibility is the big one though. Right now, accessibility is a feature that gets bolted on — and often badly. When the browser generates the UI, accessibility isn't a feature. It's the default. Your preferences — high contrast, keyboard-first navigation, screen reader optimisation, larger text — apply everywhere. Not because every developer remembered to implement them, but because they're baked into how the UI gets generated in the first place.
Customisation becomes genuinely personal too. Not "pick from three themes the developer made" but "this is how I interact with software, full stop."
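As a rough illustration of what "this is how I interact with software" might look like in practice, here's a hypothetical preferences file in the same YAML style as the manifest above. Every field name here is invented for the example:

```yaml
preferences:
  typography:
    base_font_size: 18
    font_family: "system"
  colour:
    scheme: "high-contrast"
  layout:
    lists: "cards"          # tables | cards | kanban
  accessibility:
    keyboard_first: true
    screen_reader_optimised: true
  safety:
    confirm_destructive_actions: true
```

The same file would apply to GitHub, your project management tool, and everything else the browser renders — that's the whole point.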
The trade-off is real though
Frontend complexity drops dramatically, but the complexity doesn't disappear — it moves behind the API. And honestly, it probably increases.
API design becomes way more important. You can't just throw together some REST endpoints and call it a day. Your manifest needs to be semantic — describing what the data means, not just what shape it is. Data contracts between services matter more. Versioning matters more.
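To see the difference, compare a conventional spec entry with a semantic one. A REST spec tells you `/user/repos` returns an array of objects; a semantic manifest tells you what those objects mean. Extending the earlier manifest sketch (the `fields` and `version` keys here are illustrative, not part of any standard):

```yaml
endpoints:
  - path: "/user/repos"
    semantic: "list"
    entity: "repository"
    version: "2024-01"            # explicit contract versioning
    fields:
      stargazers_count:
        meaning: "popularity"     # lets the browser pick a sensible default sort
        unit: "count"
      updated_at:
        meaning: "recency"
        unit: "timestamp"
```

A browser agent can't decide that starring belongs on a card's action menu, or that stars are a sensible default sort, from a bare JSON schema. It can from annotations like these.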
```mermaid
graph LR
    A[Service] -->|Publishes manifest + APIs| B[Browser Agent]
    C[User Preferences] --> B
    D[Org Guardrails] --> B
    B -->|Generates| E[Bespoke UI]
```
But here's the thing — this trade-off pushes us somewhere genuinely interesting. If every service needs to describe itself semantically through APIs and manifests, those APIs become the actual product surface. Not the frontend. The APIs.
And once APIs are the product surface, sharing context between platforms becomes the interesting problem. Your project management tool knows what you're working on. Your email client knows who you're talking to. Your code editor knows what you're building. Right now, none of these talk to each other in any meaningful way because they're all locked behind their own UIs. In a manifest-driven world, that context flows through the APIs — and your browser can stitch it all together into something coherent.
Where this is headed (IMHO)
I reckon we're about 3-5 years from this being mainstream. The pieces are all there — LLMs that can reason about UI, standardisation efforts around sending UI intent over APIs, and a growing expectation from users that software should adapt to them, not the other way around.
The services that win in this world won't be the ones with the prettiest hand-crafted UI. They'll be the ones with the best APIs, the richest manifests, and the most useful data. The frontend becomes a generated output, not a hand-crafted input.
Organisations will set preference guardrails — "our people can use dark or light mode, must have destructive action confirmations, these fields are always visible" — while individuals customise within those bounds. Your browser becomes your agent, not just a renderer.
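Resolving "customise within those bounds" is a simple merge problem. Here's one possible sketch in TypeScript — the shapes and the `applyGuardrails` function are entirely hypothetical, just to show that guardrails can be enforced mechanically rather than by policy documents:

```typescript
// Hypothetical shapes: org guardrails constrain what individual
// preferences may set.
interface Guardrails {
  allowedColourSchemes: string[];      // e.g. ["dark", "light"]
  requireDestructiveConfirm: boolean;  // user cannot switch this off
  alwaysVisibleFields: string[];       // merged into the user's choices
}

interface UserPrefs {
  colourScheme: string;
  confirmDestructiveActions: boolean;
  visibleFields: string[];
}

// Resolve effective preferences: the user customises freely, but only
// within the bounds the organisation has set.
function applyGuardrails(prefs: UserPrefs, rails: Guardrails): UserPrefs {
  return {
    // fall back to the first allowed scheme if the user's pick is disallowed
    colourScheme: rails.allowedColourSchemes.includes(prefs.colourScheme)
      ? prefs.colourScheme
      : rails.allowedColourSchemes[0],
    // org requirement wins; user can only opt *in* to extra confirmations
    confirmDestructiveActions:
      rails.requireDestructiveConfirm || prefs.confirmDestructiveActions,
    // org-mandated fields are always included, deduplicated
    visibleFields: [...new Set([...rails.alwaysVisibleFields, ...prefs.visibleFields])],
  };
}
```

The browser would run something like this once, before any UI generation, so the LLM only ever sees the already-resolved preferences.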
I built the adaptive browser as a proof of concept to test this thinking — it uses Claude to generate UIs from a GitHub manifest and user preferences defined in YAML. It's rough, but the direction feels right.
The frontend isn't dying. But what we think of as "frontend development" is about to change. The interesting work moves to API design, semantic data contracts, and building browsers smart enough to be genuine user agents.