Revolutionizing the desktop since 1975
Spring 2025
In 1968, Douglas Engelbart demonstrated a computer system called the oN-Line System, or NLS. NLS was the source of many computing firsts: among other things, Engelbart showed video conferencing, collaborative text editing, embedded graphics, copying and pasting, and hypertext - all of it accessible through a mouse and keyboard. This event has since become known as "The Mother of All Demos".
Recently, Amelia Wattenberger published an article with ideas about a possible future for user interfaces. In short, she asks for interfaces with more tactile "friction", praises multi-modality and suggests variations in both input devices and feedback options. We'll return to her text, but first, let's look at some other visions of future user interfaces.
Xerox Alto
Smalltalk-76 (from 1976), displaying a paint program written in the language. It looks curiously like Microsoft Paint - or, more correctly, it's the other way around.
The Xerox Alto wasn't so much a vision as an actual product. Inspired by Engelbart and others, a team of researchers at Xerox PARC set about creating computers for the office of the future. This resulted in the Alto, a machine that would probably be recognizable to a lot of present day computer users. The Alto was the origin of Smalltalk, a programming language with its own GUI environment - the one that inspired Steve Jobs to commission the Lisa. Looking at that GUI now, it's baffling how little desktop interfaces have changed since then, almost 50 years ago.
A man and his Alto. The screen was in portrait mode: Xerox was all about paper.
In a commercial for the Alto, we meet a man - some kind of upper middle management, presumably - going about his daily business. He works in a spacious private office and, using the Alto, he can read and send email and produce laser printouts. Eventually, the Alto conjures up a high resolution image of flowers. The man wonders why, and the computer replies - with text on screen - that it's the man's wedding anniversary. "I forgot," says the man, to which the Alto replies, "It's okay, we're only human."
Despite being a very advanced system for its time, the Alto was of course incapable of such banter - and yet, the commercial's producers saw fit to include it, in order to spice things up.
Sun Starfire
Commercials like that for the Alto always feel a bit stuffy and contrived. Upping the ante in several ways, Sun Microsystems' 1994 commercial for Starfire (an imaginary future computer) is a cringe-filled orgy of stilted acting and terrible writing. The protagonist once more seems to be upper middle management, and she's working on a presentation of an electric car. Future! She also engages in a bunch of strange and/or morally questionable activities. We'll probably never know why the producers decided to give her a cold, or why she spends so much of her time spying on co-workers using the live CCTV function on her expensive computer. But I digress.
The commercial presents several concepts that - as in Engelbart's demo - are now commonplace: tablet computers, video conferencing, touch screens, AI-augmented image and video editing, and instant scanning (today we'd probably just photograph the document with our smartphone). Granted, imminent mainstream adoption of such functions was fairly obvious in 1994.
A woman and her ginormous Starfire. Note the apparent lack of a keyboard. A mouse is present, however, despite the massive touchscreen.
Our hero is working on her presentation in a grotesquely spacious private office, which is probably necessary considering the sheer size of the Starfire. Actually operating it involves a lot of swiping and tapping on its humongous screen - mixed with talking to it, of course.
The Office of the Future
There are many other, similar visions of our digital future. Plenty of them include touch interfaces and speaking to our computers. In fact, many of these visions are curiously reminiscent of how computers work in Star Trek. There's some vague touch-type interface (LCARS), but you can also speak to the computer - which will reply in conversational English. And of course there's video conferencing.
Star Trek: The quintessential computer of the future. Touchy, talky and, er, video-y.
Some of the more outlandish suggestions in the Starfire commercial (such as a roll of film slowly pouring a texture onto a 3D model) are clearly intended to raise eyebrows rather than to reflect workable interfaces. Other concepts recur in visions of future UIs, like giant touch displays and the constant desire to talk to our machines.
These suggestions also seem genuine, which brings us to Wattenberger's recent article.
Tactile computing
Wattenberger touches on some relevant issues, especially regarding the prevalence of touchscreens and their inherent lack of tactile feedback. Alas, I think the reasoning presented conflates a desire for "genuine" experiences with something that can be competitive in a global, mass market economy. The former doesn't necessarily disappear because of the latter, but "friction" isn't something that's going to sell a lot of software.
To riff on Wattenberger's own example: Most doughs aren't kneaded by humans. We have machines for that - even in our homes - and the majority of our bread is produced at an industrial scale. That doesn't mean we can't knead our own dough if we want to, or that an artisanal bakery can't do well in an affluent neighborhood.
Computers are successful not because they introduce friction, but because they reduce it. If friction were a selling point, we'd give up spreadsheet software today and go back to the tactile way - using paper worksheets.
We're still allowed to dream, of course. I absolutely agree that more aspects of everyday life should be more tactile than an unyielding flatscreen. Most of that already exists, in the form of actual push buttons, keyboards, joysticks, levers, sliders and turning knobs. It's a question of hardware rather than software. Which is where the harsh reality of capitalism comes in and ruins everything: if it's cheaper with a touchscreen, and at all feasible, it's going to be a touchscreen. Not just in regular smart devices, but in cars, elevators, coffee machines and stovetops as well. Given this, I'd rather we spent some energy on making those interfaces as good as humanly possible, instead of trying to reinvent things that already work well.
The interface suggestions made toward the end of the text are what usually crop up when talking about revolutionizing the way we work with computers: an infinite canvas and - harking back to the Alto, Starfire and Star Trek alike - some voice control.
Design and Reality
The infinite canvas isn't necessarily bad, but the unstructured sprawl of mixed information it often leads to seems to offer little value to the vast majority of computer users. Add structure and bounds to the canvas, and the idea does appeal to a wider audience: spreadsheets are very popular. But there's a limited number of spreadsheet-type tasks, and at some point we need a different kind of interface and another way to structure our data. Wouter van Oortmerssen (of CryEngine and WebAssembly fame) released TreeSheets some 16 years ago. While it's an interesting concept, it doesn't seem to have caught on.
Traits like structure and relationships also come into play when we work with information. We seem to need a mental model - a spatial psychological abstraction - of our data to know from which angle to attack it. Such a model can be hard to build when everything exists in a vast 2D space without clear boundaries. That's where tree views, lists and grids come in. Most users aren't performing the kind of work that lends itself to an infinite canvas anyway: File an order, take the next call, file another one. In short: if infinite canvases really were that great, I think we'd see a lot more of them.
Enormous computer screens, like that of the Starfire, are now cheaper than ever to buy. Touch technology is everywhere. Thankfully, manufacturers have realized that the combination is untenable: Imagine having to raise your arm to swipe, pinch and tap across an ultra-wide screen several times per minute. Touch works best on small surfaces, even if it looks impressive on a bigger screen. For a while, manufacturers wanted us to buy touchscreen laptops. I haven't seen one of those in the wild for several years now - but pure tablets are commonplace. It seems that if there's a keyboard and mouse around, we pick them over touch for most desktop-type tasks.
Voice control always adds a certain futuristic vibe to a demo. In reality, several circumstances conspire to make it effectively pointless. Our best LLMs are a far cry from the AIs in Star Trek, but even for simpler tasks, the use cases appear limited. I work with developers and other IT professionals all day, hang out with similar people in my free time, ride a commute jam-packed with project-managing office dwellers, and regularly find myself in big cities - but I haven't heard a single "Hey, Siri" in years. As with the scarcity of touchscreen laptops, I'm sure that means something.
Infinite canvases, laissez-faire touchscreens and voice control seem very designer-y to me. By that I mean they reflect how a graphic designer would like to work with a computer at certain points in the creative process.
But we shouldn't build entire paradigms, or even just individual interfaces, based on the assumption that everyone else is using computers the same way we ourselves do. Most people don't conceptualize graphic design ideas or freestyle pretend corporate presentations. Some are controlling an industrial process, editing a feature film, designing an airplane or writing code. Most, however, are probably filing a customer complaint, ordering food, booking a flight, or doing some accounting.
Some people have the luxury of working in their own private offices - or at home. Most computer users don't. They're confined to open-plan offices, hotel lobbies, bustling trading desks, loud workshops or busy stores. Most of them probably perform tasks where speaking - even in a quiet and solitary environment - is less efficient than typing, tapping or clicking.
The same goes for multi-sensory interfaces in general. Voice control has many apparent drawbacks, but so do UIs with audio feedback. Ever been in an open-plan office where people have audio notifications enabled for incoming chat messages? Welcome to hell.
Even in a spacious private office, audio signals can be annoying. They interfere with music, for example, which is something a lot of people use their computers for both at work and at home. Generally, relying on a single type of feedback is a bad UI idea: Apart from poor accessibility, it reduces the user's freedom to customize their working environment. Some, like me, are annoyed by sound. Others seem to love it, and others still require it if they're to use a computer at all.
The Actual Future
The desktop user interface is a mainstay of computing. Bread and butter, if you will. A pointer, icons, windows, menus and buttons, controlled using a keyboard and a mouse. Ingenious simplicity.
For almost half a century now, we haven't really managed to come up with something better, and that's not for lack of trying. This fact seems to annoy a lot of people looking for a problem to solve - which every so often leads to something rather silly: removing scrollbars from desktop programs, for example, only to realize that people no longer understand there's scrollable content, and then cooking up some other way of scrolling instead of simply reinstating the scrollbars.
Blundering about like that isn't inventing a new paradigm, no matter what individual designers claim - it's just making an existing one worse. But is there a real new paradigm around the corner? Is the desktop going to disappear? I honestly don't know, but I sincerely doubt it. It's probably just going to get slightly worse in some aspects, and hopefully slightly better in others.
Truly new paradigms do appear from time to time, mostly out of necessity. The smartphone is a great example. It needs to pack a lot of different input methods into one tiny little surface, and touch is great for that. The results aren't always optimal, but they're good enough compromises, vastly improving the flexibility of such a small device.
To me, trying to reinvent the desktop experience feels a bit like complaining about steering wheels in cars. To others, it's a great opportunity to confuse change with improvement and different with better. With that said, I'm on board with Wattenberger's wish for slightly more tactile computers. In fact, if I'm being honest, I'd probably want computers and computing to be more like it actually was in 1994, and much less like our current Starfirey world.
Since that's probably not going to happen, I instead urge UI designers to consider the powerful concepts of consistency and familiarity just a bit more often than they presently seem to do.