We're Not Innovating, We're Just Forgetting Slower

Original link: https://www.elektormagazine.com/articles/opinion-no-innovation-forgetting-slower

The author argues that we have traded real progress and understanding for the illusion of innovation, resulting in unreliable technology and the loss of fundamental knowledge. He contrasts his 41-year-old TI-99/4A computer, which still boots instantly, with modern "smart" technology that routinely fails under needless complexity and abstraction. He criticizes the trend of repackaging old concepts under new acronyms (such as "edge computing" and "serverless"), as well as the overhyping of AI by people who lack deep understanding of it. The "maker movement" also comes under fire for prioritizing aesthetics over real engineering. This systemic problem produces developers who can use tools without understanding their principles, leaving them dependent on abstractions that eventually break. The author calls for a return to principles-based education, simple and reliable design, and a rejection of novelty for novelty's sake. We must prioritize understanding and lasting quality over fleeting trends and complex, opaque systems.

This Hacker News thread discusses a recent article arguing that modern technology prioritizes abstraction and complexity over understanding and reliability, leading to a decline in innovation. The original article uses the TI-99/4A computer as an example of simpler, better-understood technology, in contrast to contemporary reliance on abstractions and dependencies. Commenters debated whether the author's points were valid or simply nostalgia. Some argued in favor of specialization and abstraction; others countered that modern tech stacks are too complex, leading to fragility and a lack of understanding. Several highlighted the importance of understanding underlying systems rather than blindly applying complex tools, while others criticized the "stochastic parrot" view of large language models. Some argued it is important to build on prior knowledge instead of reinventing things, pointing to over-engineering and poor maintainability. A few saw capitalism's profit incentives as a contributing factor.

Original Article

We’ve mistaken complexity for progress — and forgotten how things really work. A 41-year-old home computer still boots instantly, while today’s “smart” tech buckles under its own abstractions.

In our rush to repackage yesterday's ideas with today's branding, we've lost the joy of truly knowing how things work.

My Texas Instruments TI-99/4A Home Computer still boots. Forty-one years after I first plugged it into our family’s wood-grain TV, it fires up in less than two seconds, ready to accept commands in TI BASIC. No updates required. No cloud connectivity. No subscription fees. No ads. Just pure, deterministic computing that does exactly what you tell it to do, every single time.

Meanwhile, my Google Nest Wi-Fi router is no longer able to create a PPPoE connection using my fibre modem after a “helpful” update from my ISP. But my PCs can still create a PPPoE connection, so I’ve wired the modem straight into the PC. Which means no Google Nest Wi-Fi. Which means all those Wi-Fi lightbulbs in my home? Not working right now. My fault — I know.

This isn't nostalgia talking — it's a recognition that we’ve traded reliability and understanding for the illusion of progress. Today’s innovation cycle has become a kind of collective amnesia, where every few years we rediscover fundamental concepts, slap a new acronym on them, and pretend we’ve revolutionized computing. Edge computing? That’s just distributed processing with better marketing. Microservices? Welcome to the return of modular programming, now with 300% more YAML configuration files. Serverless? Congratulations, you’ve rediscovered time-sharing, except now you pay by the millisecond.

The Longevity Test

There’s something deeply humbling about opening a 40-year-old piece of electronics and finding components you can actually identify. Resistors, capacitors, integrated circuits with part numbers you can look up. Compare that to today’s black-box system-on-chip designs, where a single failure means the entire device becomes e-waste. We’ve optimized for manufacturing efficiency and planned obsolescence while abandoning the radical idea that users might want to understand — or, heaven forbid, repair — their tools.

The VHS player in my basement could be fixed with a screwdriver and a service manual (OK, sometimes an oscilloscope). Meanwhile, my Wi-Fi router requires a PhD in reverse engineering just to figure out why it won’t connect to the internet. (Google is less than helpful with a dumbed-down user interface that basically tells you that “something went wrong.”) We’ve mistaken complexity for sophistication and abstraction for advancement.

This isn't just about hardware. Software has followed the same trajectory, piling abstraction upon abstraction until we’ve created a tower of dependencies so precarious that updating a single package can break an entire application. Modern developers debug through seventeen layers of frameworks to discover that their problem is a missing semicolon in a configuration file generated by a tool that abstracts away another tool that was created to simplify a process that was perfectly straightforward twenty years ago.

The AI Hype Machine

Nothing illustrates our forgetting problem quite like the current AI discourse. Large language models are impressive statistical text predictors — genuinely useful tools that excel at pattern matching and interpolation. But, listening to the breathless coverage from tech journos who couldn’t explain the difference between RAM and storage if their click-through rates depended on it, you’d think we’d achieved sentience in a server rack.

The same publications that use “disruptive AI” unironically are the ones that need to Google “what is a neural network” every time they write about machine learning. They've turned every statistical improvement into a breathless announcement about the future of humanity, completely missing the fact that we’re applying decades-old mathematical concepts with more compute power. It’s less magic and more linear algebra at scale.
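To make "linear algebra at scale" concrete, here is a minimal, illustrative sketch of what one next-token prediction step reduces to: a matrix-vector product followed by a softmax. The vectors here are random placeholders (a real model learns them, and stacks many such layers), but the per-step arithmetic is the same.

```python
import math
import random

random.seed(0)

# Toy dimensions; real models use tens of thousands of tokens and
# thousands of hidden dimensions, but the operations are identical.
vocab_size, d_model = 50, 16

# Placeholder context vector and output projection matrix.
hidden = [random.gauss(0, 1) for _ in range(d_model)]
W_out = [[random.gauss(0, 1) for _ in range(d_model)]
         for _ in range(vocab_size)]

# Matrix-vector multiply: one raw score (logit) per vocabulary entry.
logits = [sum(w * h for w, h in zip(row, hidden)) for row in W_out]

# Softmax turns the logits into a probability distribution over
# possible next tokens (subtracting the max for numerical stability).
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: predict the highest-probability token.
next_token = probs.index(max(probs))
```

No step in this loop is new mathematics; the recent leap comes from the scale of the matrices and the compute used to train them.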

This wouldn’t matter if it were just marketing hyperbole, but the misunderstanding has real consequences. Companies are making billion-dollar bets on technologies they don’t understand, while actual researchers struggle to separate legitimate progress from venture capital fever dreams. We’re drowning in noise generated by people who mistake familiarity with terminology for comprehension of the underlying principles.


Maker Culture vs. Making Things

The “maker movement” has become another victim of our forgetting problem. What started as a legitimate return to hands-on engineering has been co-opted by influencer culture, where the goal isn’t to build something useful but to generate content about building something photogenic. Scroll through maker TikTok and you'll find endless videos of people hot-gluing LEDs to fruit, calling it “innovative hardware hacking” while completely missing the actual engineering happening in labs and workshops around the world.

Real making is messy. It involves understanding material properties, thermal characteristics, failure modes, and the thousand small decisions that separate a working prototype from an expensive paperweight. It requires reading datasheets, calculating tolerances, and debugging problems that don’t have Stack Overflow answers. None of this photographs well or fits into a sixty-second video format.

The shallow version of maker culture treats engineering like a lifestyle brand — all the aesthetic appeal of technical competence without the years of study required to develop it. It’s cosplay for people who want to look innovative without doing the unglamorous work of actually understanding how things function.

The Knowledge Drain

We’re creating a generation of developers and engineers who can use tools brilliantly but can't explain how those tools work. They can deploy applications to Kubernetes clusters but couldn’t design a simple op-amp circuit. They can train neural networks but struggle with basic statistics. They can build responsive web applications but have never touched assembly language or understood what happens between pressing Enter and seeing output on screen.

This isn't their fault — it's a systemic problem. Our education and industry reward breadth over depth, familiarity over fluency. We’ve optimized for shipping features quickly rather than understanding systems thoroughly. The result is a kind of technical learned helplessness, where practitioners become dependent on abstractions they can’t peer beneath.

When those abstractions inevitably break — and they always do — we’re left debugging systems we never really understood in the first place. The error messages — if we’re lucky enough to get one — might as well be written in hieroglyphics because we’ve lost the contextual knowledge needed to interpret them.

What We Need Now

We need editors who know what a Bode plot is. We need technical writing that assumes intelligence rather than ignorance. We need educational systems that teach principles alongside tools, theory alongside practice. Most importantly, we need to stop mistaking novelty for innovation and complexity for progress.

The best engineering solutions are often elegantly simple. They work reliably, fail predictably, and can be understood by the people who use them. They don't require constant updates or cloud connectivity or subscription services. They just work, year after year, doing exactly what they were designed to do.

That TI-99/4A still boots because it was designed by people who understood every component, every circuit, every line of code. It works because it was built to work, not to generate quarterly revenue or collect user data or enable some elaborate software-as-a-service business model.

We used to build things that lasted. We can do it again—but first, we need to stop forgetting what we already know.


The author spent fifteen minutes trying to get his smart doorbell to connect to Wi-Fi while writing this. The irony was not lost on him.
