Why are executives enamored with AI, but ICs aren't?

Original link: https://johnjwang.com/post/2026/03/27/why-are-executives-enabled-with-ai-but-ics-arent/

There is a notable gap between how executives and individual contributors (ICs) perceive AI. Executives broadly embrace it, even mandating its use, while ICs tend to be skeptical. This stems from different work experience: executives are used to dealing with non-deterministic systems (unpredictable events and human factors) and see AI as one more manageable element of chaos; they are already accustomed to aligning the different "utility functions" inside a complex organization. ICs, however, are evaluated mainly on *deterministic* tasks: delivering precise, reliable results. AI introduces non-determinism into exactly that domain, often producing output less accurate than a skilled human's and requiring substantial correction. It shifts their role from *doing* the work to *managing* the AI, devaluing the skills they have honed. Moreover, AI's promise of higher productivity can feel threatening to ICs, implying their expertise is becoming less important. Adoption correlates with company culture: companies that prioritize speed see more IC buy-in, while quality-focused companies often meet resistance. Ultimately, AI fits executives' experience of managing complexity but undermines the deterministic environment that ICs depend on and are measured by.

## The AI Divide: Executives vs. Individual Contributors

Recent discussion has highlighted a gap in enthusiasm for AI between executives and individual contributors (ICs). The core argument is that executives see AI as a tool for optimizing operations, getting more output from existing staff, and reinforcing their beliefs about organizational and strategic value: essentially, confirming their worldview. They often lack the detailed knowledge needed to judge the quality of AI output. ICs, by contrast, are more skeptical. Some are excited about AI augmenting their work, especially for tasks like debugging and boilerplate code; others worry about job displacement and skill devaluation, seeing AI as a threat to their expertise. Several factors drive the divide: AI demos often overpromise, executives may not understand AI's limitations, and there are power dynamics at play (AI may reduce the need for middle management). Ultimately, the discussion points to a fundamental difference in perspective: executives focus on cost savings and efficiency, while ICs grapple with the practical reality and the potential impact on their work.

Original text

I think there’s pretty clearly a divide in AI perception between executives and individual contributors (ICs). Executives seem to love it and evangelize it (going so far as to create mandates at their companies for AI usage). But ICs are typically much more skeptical of it. You can see the divide show up everywhere from Hacker News comment threads to internal Slack debates about adopting coding agents.

Here’s my current hypothesis for why the divide is so big: executives have always had to deal with non-determinism and focus on non-deterministic system design, while individual contributors are evaluated on their execution of deterministic tasks.

Executives have always had to deal with non-determinism. That’s par for the course:

  • People being out sick or taking time off unexpectedly
  • Someone not finishing an important project and not talking about it until far too late in the process
  • People reacting to an announcement in an unexpected way
  • A feature being built in a way that doesn’t make sense with respect to the rest of the product, but does technically achieve objectives.

More generally, if you’ve ever taken a Chaos Theory class in math, you’ll know that nonlinear, chaotic systems emerge when individual agents in a system are all acting with different inputs, utility functions, etc. Systems become slightly easier to manage if you’re able to make those utility functions consistent (you’re able to get a grasp on system dynamics).

A manager’s job is to create a model of the world and align everyone’s utility functions, knowing that there’s a large amount of non-determinism in complex systems. So it makes sense that as a manager, you’re ok with a decent amount of this.
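The utility-function argument above can be sketched as a toy simulation. This is purely illustrative (the model, `run_org`, and all of its numbers are my own assumptions, not anything from the post): when agents optimize different targets, the aggregate outcome is high-variance; aligning the targets collapses that variance down to execution noise, which is what makes the system easier to manage.

```python
import random

def run_org(n_agents=50, aligned=False, seed=0):
    """Toy model: each agent pushes toward its own target, plus a bit of
    execution noise. Returns the mean and variance of the agents' moves."""
    rng = random.Random(seed)
    shared_target = 1.0
    moves = []
    for _ in range(n_agents):
        # Misaligned orgs: every agent optimizes a different utility function.
        target = shared_target if aligned else rng.uniform(-1.0, 1.0)
        noise = rng.gauss(0, 0.1)  # individual execution noise
        moves.append(target + noise)
    mean = sum(moves) / n_agents
    var = sum((m - mean) ** 2 for m in moves) / n_agents
    return mean, var

# Misaligned: large spread in what people are actually optimizing.
_, var_misaligned = run_org(aligned=False)
# Aligned: the spread collapses to pure execution noise.
_, var_aligned = run_org(aligned=True)
assert var_aligned < var_misaligned
```

The point of the sketch is the variance comparison, not the specific numbers: alignment doesn't remove randomness, it just makes the system's dynamics graspable.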

AI is something that is non-deterministic but has a lot of characteristics of a well behaved chaotic system (specifically a system where you can understand the general behavior of the system, even if you cannot predict the specific outcomes at any point in time).

For example:

  • LLMs generally continue their work and provide an output regardless of time of day, how difficult the task is, how much information is available
  • LLMs’ deficiencies have well-defined failure modes (e.g. hallucinations, an inability to operate outside their context, and especially poor outcomes when not given enough context)
  • The types of tasks that an LLM can accomplish are relatively well known, and the capability envelope is getting mapped out quickly. This is different from humans, where each person has a different set of strengths and weaknesses that you need to uncover over time.
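The "well behaved chaotic system" idea can be made concrete with a toy stand-in (my own illustration; the 80% success rate is an assumed number, and `noisy_task` is a hypothetical stand-in for a single LLM call, not a real API): any individual outcome is unpredictable, but the aggregate behavior is stable enough to plan around.

```python
import random

def noisy_task(rng):
    """Stand-in for a single LLM call: the outcome of any one call is
    unpredictable, but the success probability is stable and measurable."""
    return rng.random() < 0.8  # assumed 80% per-call success rate

rng = random.Random(42)
runs = [noisy_task(rng) for _ in range(10_000)]
rate = sum(runs) / len(runs)

# Any single call is effectively a coin flip, but over many calls the
# aggregate success rate converges tightly around the assumed 80%.
assert abs(rate - 0.8) < 0.02
```

This is the executive's vantage point in miniature: you don't need to predict any one outcome, only to understand the distribution well enough to design the surrounding process.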

Many of these properties are more deterministic than large human systems, which makes AI incredibly attractive for an executive who is already used to this and likely has put a large amount of effort into adding determinism into their systems already (e.g. by adding processes and structure in the form of levels and ladders, standard operating procedures, etc.).

ICs are generally much more focused on particular problems that have specific inputs and outcomes. Correctness is easier to determine, and how good you are at your job can largely be described by quality and speed, where the weights on those two depend on which organization you’re in. This changes as you move up the ladder (a staff engineer is expected to tackle large, ambiguous business problems), but for most ICs, the world is relatively well defined.

ICs deal with plenty of non-determinism in practice (unclear requirements, flaky systems, shifting priorities), but the way they’re evaluated pushes in the other direction. An IC’s value often comes from being reliably precise (e.g. writing correct code, getting the analysis right, producing a design that holds up under scrutiny). The more deterministic your output, the better you are at your job.

AI introduces non-determinism into exactly this space, and from an IC’s perspective, there are good reasons to be skeptical:

  • It’s not as good as they are at their job. A highly trained human focused on a specific task will often beat an LLM, especially if that task is long running, requires connecting multiple systems, or demands precise domain intuition. If you’re an expert and you’re handed a tool that does a mediocre version of your work, the overhead of fixing its mistakes can genuinely cost more than doing it yourself.
  • It changes what their job is. You go from doing the work yourself to managing something that does the work. The skills that got you hired (deep focus, precision, domain knowledge) aren’t necessarily the skills that make you good at that. That’s a disorienting shift.
  • It’s tied to self worth. Work accounts for the majority of a person’s waking hours. When executives talk about AI making everyone more productive, ICs can hear that as the things you’ve spent years getting good at are about to matter less. Whether or not that’s what’s actually being said, it’s a reasonable thing to feel.

One note: organizations that bias towards speed over quality tend to see more IC adoption of AI (e.g. my network of engineers at startups are, on the whole, adopting AI and using it to speed quite a few things up, though not necessarily to make things higher quality). Organizations that bias towards quality often see the opposite. AI doesn’t reliably raise quality (or at least it’s quite difficult to make it do so), and it can sometimes make quality on specific tasks worse, because these ICs are typically very well trained for their specific tasks.

The difference in AI perception comes down to what work looks like at different parts of the stack. Executives manage non-deterministic systems and have built their careers around it. ICs operate in a more deterministic world and are evaluated on their ability to deliver precise, reliable output. AI fits neatly into the first worldview and awkwardly into the second.

I think this framing explains a lot of the friction that shows up when companies try to roll out AI adoption broadly. The same tool looks fundamentally different depending on what your job actually asks of you.
