The Slow Death of the Power User

Original link: https://fireborn.mataroa.blog/blog/the-slow-death-of-the-power-user/

## The Vanishing Proficient User

A crucial skill is disappearing: the ability to genuinely *understand* the technology we use. Once commonplace, the capacity to quickly grasp how a system works, to debug problems, and to read error messages is giving way to passive consumption. This is no accident; over the past two decades the largest technology companies have deliberately engineered the shift, repositioning users as consumers dependent on closed, controlled platforms. Today's generation often lacks basic computing knowledge: no mental model of the filesystem, no ability to perform fundamental tasks such as connecting to a server, no grasp of networking basics. The problem extends to developers, who increasingly lean on abstractions without understanding the machinery beneath them. Smartphones, iOS and Android above all, accelerated the trend by prioritizing curated experiences over user control. They deliver convenience, but they cultivate dependence on platforms that restrict modification and put vendor control first. The loss is not merely one of individual skill; it is the loss of our ability to audit, adapt, and hold platforms accountable. Declining technical competence erodes our resilience and stifles innovation. Reclaiming this knowledge takes individual effort: actively learning how your tools work, embracing open protocols, and resisting the pull of purely managed experiences, even as the industry actively discourages all of it.

## The Decline of the Power User - Summary

A recent blog post and the Hacker News discussion that followed lament the dwindling number of technically proficient computer users. The core argument: modern software prioritizes ease of use and abstraction, shielding users from any understanding of how things work, from fundamental networking concepts such as DNS and SSH down to basic system operations. The result is dependence on opaque systems and the loss of practical knowledge.

Commenters debated whether this is a genuine loss or a natural evolution. Some agreed, pointing to mounting complexity and the "engineered dependence" that locks users into ecosystems. Others countered that advanced skills were *always* niche, and that serving ordinary users is both more profitable and more beneficial. A key point was the shift toward commoditized IT, where understanding the underlying mechanics is as unnecessary for everyday use as understanding an engine is for driving a car. That dependence, however, can create vulnerabilities and drive up costs when expertise is suddenly needed. The discussion also touched on the role of AI, increasingly complex software architectures, and the part platforms such as Apple have played in the trend. Ultimately the debate centers on whether simplifying technology empowers users or diminishes their understanding and control.

Original Article

There’s a certain kind of person who’s becoming extinct. You’ve probably met one. Maybe you are one. Someone who actually understood the tools they used. Someone who could sit down at an unfamiliar system, poke at it for twenty minutes, and have a working mental model of what it was doing and why. Someone who read error messages instead of dismissing them. Someone who, when something broke, treated it as a puzzle rather than a betrayal.

That person is dying off. And nobody in the industry seems to care. In fact, most of them are actively celebrating the funeral while billing it as progress.

This isn’t an accident. This is the result of two decades of deliberate, calculated effort by the largest technology companies on earth to turn users into consumers, instruments into appliances, and technical literacy into a niche hobby for weirdos. They succeeded beyond their wildest expectations. Congratulations to everyone involved. You’ve built a generation that can’t extract a zip file without a dedicated app and calls it innovation.


We Raised a Generation That Doesn’t Know How Anything Works

The average person who grew up with smartphones has a fundamentally broken mental model of computing. Not broken in the sense that they can’t operate their devices — they can, with terrifying efficiency. Broken in the sense that their understanding stops at the glass. They know how to use apps. They do not know what apps are. They know files exist somewhere, in the cloud maybe, or possibly inside the app itself — the distinction isn’t clear to them and they’ve never needed it to be.

The concept of a filesystem — of hierarchical storage that you own, that lives on hardware you control, that persists independently of any company’s servers — is genuinely alien to them. Not because it’s complicated. A child can understand that files live in folders. But they’ve never had to understand it because the platforms they grew up on hid it from them. iOS shipped without a user-accessible filesystem for over a decade. Google Drive abstracts away the folder metaphor entirely if you let it. iCloud will “optimize” your local storage, which is a polite way of saying it will silently move your files to Apple’s servers and give you a ghost of them on your own machine, and most users have no idea this is happening or what it means.

Ask a twenty-two-year-old to connect to a remote server via SSH. Ask them to explain what DNS is at a conceptual level. Ask them to tell you the difference between their router’s public IP and the local IP of their laptop. Ask them to open a terminal and list the contents of a directory. These are not advanced topics. Twenty years ago these were things you learned in the first week of any serious engagement with computers. Today they’re exotic knowledge that even a lot of working software developers don’t have, because you can go a long way in modern development without ever leaving the managed abstractions your platform provides.
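None of this requires exotic tooling; the standard library of any mainstream language is enough to see these concepts at work. A minimal sketch in Python (the UDP trick is wrapped so it degrades gracefully offline, and no packets are actually sent):

```python
import os
import socket

# DNS at the conceptual level: a hostname is just a key that resolvers map
# to IP addresses. "localhost" resolves locally, so this works offline:
print(socket.gethostbyname("localhost"))  # 127.0.0.1
# For a real site you would resolve the same way, and the lookup would go
# to whatever DNS server your OS is configured to use:
#   socket.gethostbyname("example.com")

# Your laptop's *local* IP is the address the OS would use for an outbound
# route; your router's *public* IP is a different address entirely, visible
# only from outside your network. Connecting a UDP socket sends no packets,
# it just asks the OS which local address it would pick:
try:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("192.0.2.1", 9))  # TEST-NET address, never actually reached
        print("local IP:", s.getsockname()[0])
except OSError:
    print("no route available")

# And listing the contents of a directory, the thing that was once week-one
# knowledge, is a single call once you know a filesystem is a tree:
print(sorted(os.listdir(".")))
```

Twenty lines, all of it inspectable, none of it hidden behind glass.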

And that’s the real damage. It’s not just end users who don’t know this stuff. It’s developers. People who write software for a living who’ve never had to think about what happens between their API call and the response. Who’ve never had to debug something at the network layer. Who’ve never had to read a full stack trace and understand every frame of it. Because the frameworks handle all of that, and the frameworks are good enough, and figuring out how things actually work is optional.

Optional until it isn’t. Optional until something breaks in a way the framework didn’t anticipate. Optional until you’re trying to understand why your app is making twenty network requests when it should be making three, and you have no tools to answer that question because you never learned that such tools exist. Wireshark has been free for decades. Most developers have never opened it. That is not a neutral fact about the state of the profession.


Mobile Platforms Did the Most Damage, and They Did It on Purpose

The smartphone didn’t just shift computing to a smaller screen. It replaced a computing paradigm — one built on ownership, modification, and composability — with a consumption paradigm built on managed access, curated experience, and dependency. And it did so with the full, deliberate, enthusiastic participation of every major platform vendor.

iOS set the template. Apple shipped a device in 2007 that was, by any reasonable technical measure, a computer. It had a CPU, RAM, persistent storage, a network stack, and a real operating system descended from BSD Unix. By every cultural and legal measure, however, Apple treated it as something else entirely: an appliance that you licensed rather than owned, that ran software only Apple approved, that couldn’t be meaningfully modified, and that communicated only through channels Apple controlled. No filesystem access. No inter-app communication beyond what Apple chose to expose. No background processes without explicit, limited, grudging permission. No ability to install software from any source other than the App Store — which Apple created, controls, taxes at thirty percent, and can pull your app from at any time for any reason with no meaningful appeals process.

All of this was sold as a feature. “It just works.” Safety. Privacy. User experience. What it actually was, was control — Apple’s control over what you could do with hardware you supposedly bought. And the genius move, the move that should make any serious observer furious, was convincing users that this control was being exercised on their behalf.

It wasn’t. It was being exercised to ensure that Apple could extract maximum revenue from the platform, that no competing software distribution model could gain a foothold, and that users would remain permanently dependent on Apple’s ecosystem. The safety argument is post-hoc rationalization. The App Review process has let through thousands of scam apps, predatory subscription traps, and privacy-violating analytics SDKs — Sensor Tower was caught red-handed running data-harvesting SDKs inside apps in the App Store, for years. What review consistently blocks is not unsafe things. It’s competitive things. Emulators. Third-party browsers that use different rendering engines rather than Apple’s mandatory WebKit wrapper. Payment systems that don’t pay Apple’s cut. Cloud gaming services that would let users run code Apple didn’t approve. The pattern is legible if you’re paying attention: safety is the stated reason, revenue protection is the operational reality.

Android played the same game with better PR. Google launched Android as an open platform, and for a few years it genuinely was. You could sideload APKs trivially. You could root your device and replace the entire OS. Manufacturers shipped custom builds. The ecosystem was messy and fragmented and occasionally awful and genuinely interesting. Then, gradually, systematically, Google started closing it down.

First came the CTS — the Compatibility Test Suite — which manufacturers had to pass to ship Google’s apps. Fine in principle. Google controls what “compatible” means in practice. Then Play Protect, which technically scans for malware but treats every sideloaded app as a threat by default and nags you about it repeatedly. Then a long series of API deprecations that broke the kinds of deep-system access power users relied on: the automation apps, the proper file managers, the backup tools that actually worked at the filesystem level, the accessibility hacks that could do things the official accessibility APIs couldn’t. Then came changes to make bootloader unlocking harder and to push more device-specific security keys into hardware that can’t be bypassed. Then the Play Integrity API — arguably the most user-hostile single API decision Google has ever made — which lets apps query whether your device has been modified in any way and refuse to run if it has. Unlock your bootloader, which is the most basic possible act of taking ownership of your own hardware, and a growing list of apps — banking apps, payment apps, streaming apps — will detect this and refuse to function. You paid for the phone. You own the phone. Google and its partners have decided that ownership does not include the right to modify it.

The direction of travel is unmistakable. The destination is iOS. The messaging is different — Google will never say “you can’t do that” with Apple’s blunt confidence — but the endpoint is the same: a platform where the vendor’s preferences take absolute precedence over user autonomy, and where “open” is a marketing claim that survives in the documentation but not in the lived experience of anyone who tries to actually exercise it.

The users who grew up on these platforms don’t know what they’re missing. They’ve never used a system where they were genuinely in control. The idea that you should be able to run arbitrary code on hardware you paid for is foreign to them — not rejected, but simply absent as a concept. They’ll defend the restrictions without prompting because they’ve internalized the vendor’s framing so thoroughly that they experience the cage as comfortable. “I don’t want to root my phone, that sounds scary.” Cool. You’ve successfully trained yourself to be afraid of ownership. The platform vendors are proud of you.


The Culture Rotted and Nobody Noticed Until It Was Gone

Technology culture used to celebrate technical competence. Not as gatekeeping, not as elitism — as genuine, infectious enthusiasm for understanding how systems worked. The BBS scene in the eighties ran on self-taught systems operators who understood their hardware and their network protocols well enough to build infrastructure that had never existed before. The early web had a “view source” ethos: you saw something interesting, you looked at how it was built, you learned from it, you made something of your own. This was the entire pedagogical model of the early web and it worked extraordinarily well. The modding communities around Doom and Quake produced people who went on to build game engines. The ROM hacking community produced people who understood binary formats and executable structures better than most professional reverse engineers. The jailbreak scene for the original iPhone — a community of people spending nights and weekends figuring out how to take ownership of hardware they’d paid for — produced security researchers who’ve been finding iOS vulnerabilities ever since.

These were not professional circles. You didn’t need a CS degree. You needed curiosity and stubbornness and a tolerance for reading things that were too long and trying things that didn’t work on the first ten attempts. The culture valued that and passed it down. Kids learned by watching, by lurking in forums, by getting their stupid questions answered by people who then expected them to answer someone else’s stupid questions eventually. The knowledge propagated because the culture treated knowledge as worth propagating.

That culture didn’t die because the knowledge became irrelevant. It died because it became economically inconvenient. The platforms that replaced the open internet — YouTube, Reddit, Discord, eventually TikTok — are consumption platforms. Their business model requires passive engagement. A user who spends three hours going down a documentation rabbit hole, breaking things in a terminal, and actually understanding something is worth less to them than a user who watches three hours of content. They don’t ban technical material. They algorithmically deprioritize anything that demands active engagement, they reward passive consumption, and they shape the culture of their platform accordingly over years and years until the culture that emerges is one that treats passive consumption as the default relationship with technology.

The YouTube tutorial is the perfect emblem of this rot. Tutorials are not documentation. A tutorial teaches you to perform a specific sequence of steps to achieve a specific outcome. The steps are usually correct for the specific scenario the tutorial covers. If your scenario differs — if something’s changed, if you get an error the tutorial didn’t anticipate, if you’re using a different version — the tutorial has given you no tools to respond. Documentation teaches you to understand a system: what its components are, how they interact, what the configuration options mean and why they exist, what the error messages indicate. One produces people who can follow instructions. The other produces people who understand what they’re doing. The industry has enthusiastically replaced the latter with the former and called it democratization.

The man page is dead for most users. The RFC is unread by most developers who depend on the protocols it describes. Stack Overflow, which used to be a genuinely valuable resource for understanding why things behaved certain ways, has become a paste-and-pray operation: scan for a code snippet that looks related to your problem, copy it, run it, hope it works. When it doesn’t, find another snippet. The understanding never enters the loop. LLMs have accelerated this to a degree that should make anyone who cares about software quality genuinely alarmed. You can now write complete programs without understanding what a single line of them does, and the programs will often work well enough in the happy path that you’ll never know how thoroughly you don’t understand what you’ve built until something goes wrong in production at two in the morning and you are completely without tools to respond.

This is what the culture has normalized: outcomes without understanding, solutions without models. And the response when you point this out is “okay but who has time for that,” as if understanding were a productivity cost rather than the entire point.


The “Big Brother Knows Best” Capitulation Is the Worst Part

I want to be precise here because people get defensive fast.

The problem is not, primarily, that services collect data. The problem is that users have been convinced to treat pervasive surveillance infrastructure as benign or beneficial, and to respond to any criticism of it as paranoia, technical elitism, or failure to appreciate convenience. The learned helplessness is the crisis. The data collection is the symptom.

Apple tells you that you can’t install software from outside the App Store because it’s dangerous, and people nod. These are the same people who would lose their minds if their city government told them they could only buy food from vendors the city had approved, licensed, and taxed at thirty percent of every transaction — who understand instinctively that such a system is about control and extraction rather than safety. They accept the identical arrangement from a private company without complaint because the phone is pretty and the UX is smooth and the alternative sounds hard.

Google processes your email to serve you targeted advertising. These are your emails. They contain information about your medical situation, your finances, your relationship conflicts, your private communications with people who absolutely did not consent to Google reading their messages to you. Google’s systems build behavioral models from this and those models are sold to advertisers. “Serving you better” is the stated purpose. It is a fiction thin enough to see through in direct sunlight. “I have nothing to hide” is the response, which is not an argument — it’s a thought-terminating cliché that makes it socially awkward to point out that privacy is not about criminality, it’s about power. Whoever holds your behavioral data holds power over you. That’s true whether or not you’ve done anything wrong.

Microsoft’s Recall feature — announced for Copilot+ PCs, rolled back after public outrage, and then quietly re-introduced — takes screenshots of your screen every few seconds, runs OCR on them, and makes the indexed text searchable. This creates a complete, timestamped record of everything you have ever done on your computer. The security concerns are obvious: a single piece of malware with appropriate privileges now has access to your entire computing history. But the deeper problem isn’t the security. The deeper problem is that this is a surveillance product, it was designed as a surveillance product, it is useful for advertising and behavioral profiling purposes as a surveillance product, and it was announced as a productivity feature. That framing succeeded. Most of the coverage treated it as a productivity story with security concerns to be addressed, not as an inherently deranged idea that should never have existed. The window has shifted so far toward normalized surveillance that Microsoft could announce this without losing significant market share and with most enterprise customers still evaluating the product on its merits.

The algorithm situation is the one that most directly affects daily life and receives the least serious scrutiny. Every major platform uses recommendation systems that are, in the most literal sense, making decisions about what information you encounter. What news exists in your world. Which of your friends’ thoughts reach you. Which ideas get surfaced and which get buried. These systems are explicitly not neutral — they’re optimized for engagement, which empirically correlates with outrage, anxiety, conflict, and tribal reinforcement, because those emotional states produce the behavioral signals the engagement metrics reward. The platforms are making your information diet worse on purpose, because worse converts to engagement, and engagement converts to revenue.
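The mechanism is not mysterious. Stripped of the machine learning, an engagement-optimized feed is a sort by predicted reaction rather than by anything you chose. A deliberately simplified, hypothetical sketch (the posts and scores are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # stand-in for "probability you react to this"
    timestamp: int

posts = [
    Post("friend", "quiet life update", 0.05, 300),
    Post("stranger", "outrage bait headline", 0.90, 100),
    Post("friend", "thoughtful long post", 0.10, 200),
]

# A chronological feed: you see what your subscriptions produced, newest first.
chronological = sorted(posts, key=lambda p: -p.timestamp)

# An engagement-optimized feed: the platform's predicted reaction decides.
# Note what rises to the top: the content engineered to provoke.
algorithmic = sorted(posts, key=lambda p: -p.predicted_engagement)

print(chronological[0].text)  # quiet life update
print(algorithmic[0].text)    # outrage bait headline
```

Real systems replace the hardcoded score with a model trained on your behavior, but the objective function, and therefore the outcome, is the same.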

The correct response to this is to reject the algorithmic curation model and use information architectures that don’t depend on it. RSS still works. Direct subscriptions still work. You can still bookmark websites and go to them directly. You can run your own feed reader. You can join communities that don’t have engagement-optimized recommendation systems. All of this is possible and most of it is free.
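RSS in particular is simple enough that the core of a feed reader fits on one page of standard-library code. A minimal sketch, parsing a feed with nothing but `xml.etree` (the sample document below is invented for illustration; in practice you would fetch it with `urllib.request.urlopen(feed_url).read()`):

```python
import xml.etree.ElementTree as ET

# A canned RSS 2.0 document keeps the sketch self-contained and offline.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def read_feed(xml_text: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs: the entire job of a feed reader's core."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]

for title, link in read_feed(SAMPLE_FEED):
    print(title, "->", link)
```

No recommendation system, no ranking, no platform in the middle: just the sources you subscribed to, in the order they published.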

The actual response is to try to game the algorithm. To figure out what the system wants and feed it signals that will produce better outputs. To treat the algorithm as a given rather than a choice. This is the capitulation in pure form: not just accepting the system, but optimizing your behavior around it, internalizing its logic, and experiencing the idea of opting out as exotic or impractical. The managed experience has become so normalized that the alternative — direct, unmediated access to information from sources you chose — sounds like extra work.

It is extra work. A small amount of extra work, the first time, and then it’s just how you use the internet. The question of whether that work is worth doing is actually a question about whether you want to control your information environment or whether you prefer to have a corporation control it for you. Most people, when the question is put that way, will say they want control. But the platforms have been very effective at ensuring the question is never put that way.


What We’re Actually Losing, Concretely

“Technical literacy is valuable” is the kind of claim people agree with and ignore. Let me be specific about the damage.

We’re losing the ability to audit. A person who understands their tools can notice when those tools start behaving badly. They can run a packet capture with tcpdump or Wireshark and see what their phone is actually transmitting. They can look at what their DNS resolver is returning. They can read the permissions an app requests and reason about whether those permissions make sense for what the app claims to do. They can notice when an update changes behavior in ways that benefit the developer at the user’s expense. Most people have none of these capabilities and depend entirely on external review — journalists, academic security researchers, occasionally regulators — which is slow, incomplete, paid for by advertising revenue from the same companies being reviewed, and easily captured. The number of apps caught doing obviously bad things — exfiltrating contact lists, running location tracking in the background without any legitimate purpose, phoning home with behavioral data — and continuing to have millions of users afterward, because those users had no mechanism to detect the behavior themselves, is not small. It is not a footnote. It is the normal operating condition of the app economy. Technical literacy is a prerequisite for meaningful consent. Without it, accepting a privacy policy is not consent. It’s surrender to a document you can’t evaluate.

We’re losing resilience. Communities with high concentrations of technical competence can adapt when platforms change or die. They migrate. They self-host. They fork. When Google killed Reader, the technical community had self-hosted alternatives running within weeks. When Twitter’s API became hostile to third-party clients, developers built ActivityPub implementations and federated alternatives. When a platform shifts its terms in ways that make it untenable, technically competent users can leave and rebuild elsewhere, carrying their data with them, because they understand their data as something they own rather than something that lives in the platform. Communities without those skills get stranded. The graveyard of services that people built workflows and communities around — and then lost when the company pivoted or shut down or got acquired — should be a constant reminder that platform dependency is not a stable long-term strategy. It mostly isn’t, because the loss is distributed and the lessons don’t generalize. You grieve your specific service and migrate to a different managed service and start the cycle again.

We’re losing the builder pipeline. This one compounds over time and the compounding is already visible. Power users become developers. Tinkerers become engineers. The kid who roots their Android phone and breaks it and fixes it and then writes a script to automate something the official interface doesn’t support — that kid, ten years later, has intuitions about system behavior that you cannot get from a bootcamp and cannot get from building inside managed platforms your entire career. They know what it means when something is running slower than it should. They have hypotheses about failure modes before they start debugging because they’ve caused those failure modes themselves. They understand that abstractions are leaky and that the leak is usually where the interesting problems are.

Close off the tinkering and you close off the pipeline. What you get instead is a generation of developers who’ve only ever worked within platform constraints, who’ve never pushed against the edges of the abstractions they’ve been given, who treat framework behavior as ground truth rather than implementation detail. They build more constrained platforms, because the constraints are all they know, for the next generation to be hemmed in by. The technical capability of the field decays, quietly, generation by generation, because the informal education pathway — break things, fix them, understand them — has been systematically closed by platforms that have every financial incentive to keep it closed.

We’re losing the adversarial capacity to hold platforms accountable. This is the one that matters most and gets talked about least. The open-source movement, the early security research community, the hacker culture in the original sense — these were not just about building things. They were a check on the power of institutions. When IBM tried to close the PC platform, the clone manufacturers and the DOS ecosystem that emerged from them broke the lock. When phone companies tried to prevent customers from attaching third-party devices to their networks, the hacker community and subsequent regulatory action broke that lock too. When the early internet’s institutional gatekeepers tried to control what protocols could run on it, the end-to-end principle and the culture of routing around obstacles broke those locks.

The technology industry’s current consolidation into a small number of platform monopolies is only possible because the adversarial capacity to break platform lock-in has atrophied. There are still people doing it — the open-source community is still building, the security research community is still finding vulnerabilities, the right-to-repair movement is still fighting — but the cultural mass behind those efforts has collapsed. They’re fighting a rearguard action against an industry that has successfully convinced most of its users that platform control is a feature, not a bug.


Nobody Is Coming to Save This

The industry isn’t going to fix this. Every financial incentive points the other way. Confused, dependent users are more profitable than competent, autonomous ones. Lock-in is more valuable than interoperability. Opacity is more valuable than transparency. The architecture of modern consumer technology has been optimized against user competence with extraordinary success, and every quarterly earnings report validates the approach.

Regulators aren’t going to fix it. They’re fighting over app store fees while the underlying issue — the right of users to own and control the devices they’ve paid for — gets no serious legislative traction in most jurisdictions. The EU’s Digital Markets Act has done some real work on interoperability requirements and is being fought by every affected platform with everything they have, because the platforms understand that the real threat is not the specific provisions but the principle that user autonomy is a value the law should protect.

Educators aren’t going to fix it. Most digital literacy curricula teach application use. How to use Google Workspace. How to spot a phishing email. “Coding” in the form of block-based visual programming that produces no transferable understanding of how software actually works. The schools that teach real systems thinking, real network knowledge, real debugging skills — those schools cost money and are not where most people go.

The technical community is mostly not going to fix it either, because most of it has retreated into professional specialization and has largely given up on the broader project of maintaining technical literacy outside the profession. The open-source community does important work maintaining alternative infrastructure. It communicates almost entirely with itself.

So what’s left is individual stubbornness. Which is not nothing. Organized individual stubbornness, pointed in the right direction, is how every important counter-cultural technical movement has worked.

Learn how your tools actually work. Not just how to operate them. Use the command line. Set up a home server and break it and fix it. Root a phone or, if you’re on a platform where that’s been made impossibly difficult, buy something where it isn’t. Run a Linux install on bare metal and deal with the driver problems. Learn to read a network capture. Understand what your browser is sending with every request — the dev tools have been there the whole time. Host something yourself instead of using the managed service. Use open protocols where they exist: XMPP, ActivityPub, RSS, SMTP — these are old and unglamorous and they work and you own your data when you use them. Feed the federated alternatives even when they’re worse than the centralized ones, because they’re worse partly due to network effects and network effects respond to participation.
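Seeing what a request actually contains is a good first exercise, because an HTTP request is just text. A hypothetical sketch that assembles one by hand, so there is nothing for a framework to hide (the `user_agent` default is made up):

```python
def build_request(host: str, path: str = "/",
                  user_agent: str = "curiosity/1.0") -> bytes:
    """Assemble the literal bytes a browser-like client sends for a GET.

    Every header your browser adds, including cookies, the referrer, and
    fingerprintable client hints, travels in exactly this format, and all
    of it is visible in the dev tools or in a packet capture.
    """
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        f"User-Agent: {user_agent}",
        "Accept: */*",
        "Connection: close",
        "",  # a blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

raw = build_request("example.com")
print(raw.decode("ascii"))
# To actually send it: open a TCP socket to (host, 80) and write these
# bytes, which is ultimately all urllib, requests, and your browser do.
```

Once you have seen the wire format, the dev tools' network tab stops being a wall of noise and starts being a transcript you can read.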

This is not about purity. Nobody is asking you to reject every managed service on principle or run Gentoo on everything. It’s about maintaining enough technical competence that you are a participant in the systems you depend on rather than a permanent subject of them. It’s about being able to make informed choices instead of having choices made for you by systems optimized for someone else’s revenue.

The power user isn’t dead. The skills exist. The communities exist — smaller, grayer, more scattered, fighting an institutional headwind that grows stronger every year. But they exist, and the knowledge is still propagating in the spaces the platforms haven’t fully colonized.

The trajectory is bad. Every generation of new users arrives knowing less and expecting less. Every generation of new developers builds on more layers of managed abstraction and understands fewer of them. Every year it gets harder to explain why ownership matters, why understanding matters, why the convenience-for-control trade is a bad deal even when the convenience is genuinely excellent — because the people you’re explaining it to have lived their entire lives inside the control and experienced it as freedom.

The obituary for the power user is being written right now. The people writing it are the same ones who sold you the phone, designed the app store, wrote the terms of service you didn’t read, and built the algorithm that decided you didn’t need to see this.

They are probably right about the timeline. They’ve been right about most things. The market has validated them at every step.

That is not an argument for giving up. It is an argument for being considerably angrier about it than most people currently are.
