Our interfaces have lost their senses

Original link: https://wattenberger.com/thoughts/our-interfaces-have-lost-their-senses

Our digital interfaces have flattened, prioritizing the computer's simplicity over richer, more human-centered experiences. We moved from physical interactions like punch cards and toggle switches to text-based commands, and finally to touchscreens—direct, but confining us to the "flatland" of glass displays. AI chatbots reduce sensory input even further, replacing interactive controls with text prompts. This "Great Flattening" has stripped away meaningful friction and, with it, satisfaction. The author argues for a return to interfaces that engage our full sensory capabilities, proposing ones that combine text, visuals, sound, and haptic feedback, respond to ambient signals, and support multimodal interaction (typing, clicking, gesturing, speaking). Such richer interfaces would let us collaborate on tangible artifacts, support multiple concurrent modalities, and respond to context—organizing information and helping us think better. The goal is to rebuild the bridge between humans and computers: interfaces that move with us, speak our language, and adapt to our bodies.

A Hacker News thread discusses an article criticizing modern user interfaces for their lack of sensory engagement, arguing they have become too "flat" and disconnected from physical reality. Commenters debated the article's points: some agreed that UI inconsistency and the prioritizing of aesthetics over usability are real problems that lead to user fatigue, while others defended current interfaces, citing advances in modalities such as haptics. One major point of contention was the article's use of AI-generated images, which some saw as contradicting its critique of soulless design. The discussion also explored the role of "friction" in UI design, questioning whether streamlined, low-effort interfaces are always desirable, or whether some degree of tactile or kinesthetic engagement enhances the user experience. Bret Victor's work on interaction design was mentioned as relevant. Overall, the thread shows a range of perspectives on the evolution of user interfaces and the trade-offs between simplicity, functionality, and sensory richness.

  • Original article

    Think about how you experience the world—

    you touch, you hear, you move.

    But our digital world has been getting flatter, more muted.

    Reduced to text under glass screens.

    This shift made interfaces simpler.
    But was that really the goal?

    An interface is the bridge between us and our computers.

    It's how we tell computers what we want,

    and it's how computers communicate back to us.

    The shape should fit how we work,

    for ergonomics and ease of use

    and it should fit how the computer works.

    for simplicity and a good mental model

    Recently, we've been too focused on fitting to the computer's shape, and not enough to our own bodies.

    The Great Flattening

    Computers used to be physical beasts.

    We programmed them by punching cards, plugging in wires, and flipping switches. Programmers walked among banks of switches and cables, physically choreographing their logic. Being on a computer used to be a full-body experience.

    Then came terminals and command lines. Physical knobs turned into typed commands—more powerful, but our digital world became less embodied.

    We brought back some of the tactile controls with GUIs—graphical user interfaces. We skeuomorphed the heck out of our screens, with digital switches, flat sliders, and folder icons. But we kept some of the functionality in the physical world, with slots to stick disks into and big ol' power buttons.

    Then came touchscreens.
    What a beautiful thing! We get to poke things directly!
    But now we live in a flat land, with everything behind a glass display case.

    With increasing amounts of AI chatbots, we're losing even more: texture, color, shape.
    Instead of interactive controls, we have a text input.
    Want to edit an image? Type a command.
    Adjust a setting? Type into a text box.
    Learn something? Read another block of text.

    The Joy of Doing

    We've been successfully removing all friction from our apps — think about how effortless it is to scroll through a social feed. But is that what we want? Compare the feeling of doomscrolling to kneading dough, playing an instrument, sketching... these take effort, but they're also deeply satisfying. When you strip away too much friction, meaning and satisfaction go with it.

    Think about how you use physical tools. Drawing isn't just moving your hand—it's the feel of the pencil against paper, the tiny adjustments of pressure, the sound of graphite scratching. You shift your body to reach the other side of the canvas. You erase with your other hand. You step back to see the whole picture.

    We made painting feel like typing,

    but we should have made typing feel like painting.

    Putting the you back in UI

    So how might our interfaces look if we shaped them to fit us?

    We use our hands to sculpt, our eyes to scan, our ears to catch patterns.

    Our computers can communicate to us in many different formats, each with their own strengths:

    Text

    Great for depth, detail, and precision.

    But it doesn't always have to be in full paragraphs. How about showing key points first, then letting users expand?

    Visualizations

    Ideal for spatial relationships, trends, and quick insights.

    Can we show more content spatially? Or encode it in charts or colors?

    Sound

    Perfect for alerts and background awareness. Also, patterns.

    Why are most web UIs silent? Can we use subtle chimes or sonification to highlight patterns?
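As one hypothetical way sonification could work in a web UI: map each value in a data series to a pitch, so a rising trend is heard as a rising melody. The mapping function and frequency range below are illustrative choices, not from the original essay; the playback uses the browser's standard Web Audio API.

```javascript
// Illustrative sketch: sonify a data series by mapping values to pitches.
const MIN_HZ = 220; // A3 — arbitrary lower bound for the melody
const MAX_HZ = 880; // A5 — arbitrary upper bound

// Pure mapping: scale a value within [min, max] linearly to a frequency.
function toneFor(value, min, max) {
  const t = max === min ? 0 : (value - min) / (max - min);
  return MIN_HZ + t * (MAX_HZ - MIN_HZ);
}

// In a browser, play the series as short sequential tones.
function sonify(series, stepMs = 150) {
  const ctx = new AudioContext(); // Web Audio API (browser only)
  const min = Math.min(...series);
  const max = Math.max(...series);
  series.forEach((value, i) => {
    const osc = ctx.createOscillator();
    osc.frequency.value = toneFor(value, min, max);
    osc.connect(ctx.destination);
    osc.start(ctx.currentTime + (i * stepMs) / 1000);
    osc.stop(ctx.currentTime + ((i + 1) * stepMs) / 1000);
  });
}
```

A dashboard could call `sonify(weeklySignups)` so a glance away from the screen still conveys the trend's shape.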

    Haptics

    Provides passive feedback (vibrations, force).

    Here's one I always forget about! We can vibrate phones to alert or convey patterns.
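A minimal sketch of what "conveying patterns" by vibration might mean, assuming a browser that supports the standard Vibration API (`navigator.vibrate`): encode a series of values as buzz durations, with longer buzzes for bigger values. The encoding scheme is an assumption for illustration, not something from the essay.

```javascript
// Illustrative sketch: encode a data series as a vibration pattern —
// longer buzzes for larger values, separated by fixed silent gaps.
function vibrationPattern(values, maxMs = 300, pauseMs = 100) {
  const max = Math.max(...values);
  const pattern = [];
  for (const v of values) {
    pattern.push(Math.round((v / max) * maxMs)); // buzz duration, scaled
    pattern.push(pauseMs);                       // silent gap
  }
  pattern.pop(); // drop the trailing pause
  return pattern;
}

// In a supporting browser, buzz out last week's activity, say:
// navigator.vibrate(vibrationPattern([3, 9, 6]));
```

The Vibration API takes exactly this alternating buzz/pause array of milliseconds, so the pure encoding function is the only piece a designer needs to decide on.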

    And what about the reverse! We can communicate to our computers in many different ways, each with their own strengths:

    Typing

    Precise, detailed, and familiar

    Good for composing long-form thoughts, keyboard shortcuts, and rough direction.

    Clicking & Dragging

    Direct, fine-grained control.

    Great for spatial tasks (design, organization) and pointing at things-on-a-screen.

    Tapping, Swiping, Pinching

    Intuitive for direct manipulation.

    Great for mobile, but do we have to limit gestures to mimicking mouse interactions?


    Gesturing

    Hands-free, fluid, and expressive.

    Could be powerful for accessibility, quick actions, and complex fine control—reliable detection feels very possible at this time.

    Speaking

    Easy for loose thoughts.

    LLMs have made speech more viable—can we let users think out loud or navigate roughly with their voice?

    And the real magic happens when we combine different modalities. You can't read and listen and speak at the same time—try reading this excerpt while talking about your day:

    If it had not rained on a certain May morning Valancy Stirling’s whole life would have been entirely different. She would have gone, with the rest of her clan, to Aunt Wellington’s engagement picnic and Dr. Trent would have gone to Montreal. But it did rain and you shall hear what happened to her because of it.

    But you can talk while clicking,

    listen while reading,

    look at an image while spinning a knob,

    gesture while talking.

    Let's build interfaces that let us multitask across senses.

    Rebuilding the bridge

    So, what might a richer interface look like? I have strong conviction that our future interfaces should:

    • let us collaborate on tangible artifacts, not just ephemeral chat logs.
    • support multiple concurrent modalities—voice, gestures, visuals, spatial components.
    • respond to ambient signals—detecting context, organizing information, helping us think better.

    Last year, I did a rough exploration of what this could look like for a thought organizing tool. One that listened as you talked or typed, and organized your rambling thoughts into cards.

    This interface is very rough, but it felt like a different way of working with technology—especially how it let me bumble through rough ideas one second, then responded to commands like "re-group my cards" or "add 3 cards about this" the next.
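A tool like that needs a small command layer that turns loose spoken or typed phrases into structured actions. The sketch below is a hypothetical version of that layer — the action names and matching rules are illustrative, not the author's actual implementation; only the example phrases come from the essay.

```javascript
// Illustrative sketch: route loose phrases to structured card actions.
// Anything unrecognized is treated as rambling to capture as a new card.
function parseCommand(input) {
  const text = input.trim().toLowerCase();
  let m;
  if (/\bre-?group\b/.test(text)) {
    return { action: "regroup" };
  }
  if ((m = text.match(/\badd (\d+) cards?(?: about (.+))?/))) {
    return { action: "add", count: Number(m[1]), topic: m[2] ?? null };
  }
  return { action: "capture", text: input.trim() };
}
```

The interesting design property is the fallback: the same input box accepts both rambling and commands, so the user never has to switch modes.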

    I would love to see more explorations like this!

    Our interfaces have lost their senses

    All day, we poke, swipe, and scroll through flat, silent screens. But we're more than just eyes and a pointer finger. We think with our hands, our ears, our bodies.

    The future of computing is being designed right now. Can we build something richer—something that moves with us, speaks our language, and molds to our bodies?
