Years ago, I was working on the editorial side of what was then a hot new media company, and I found myself spending more and more time with Johan, the lead programmer, and his team, asking them a lot of annoying questions – it was all so new, certainly to me. One day I was standing over Johan’s left shoulder, mesmerized by whatever new video game he was obsessing over that week…when suddenly, out of nowhere, a spreadsheet and a pie chart appeared on his screen.
“Whatcha got there, Johan?” asked Jim, Johan’s boss, peering over a sheaf of print-outs as he sharked past the cubicle.
“Hey, just looking at some numbers,” Johan replied. He had hit the “game key” in the nick of time. In those days, every video game had a game key – ALT-G, if memory serves – that called up a slight variation of the same spreadsheet and pie chart.
This would never happen today. First, you’re probably not working in a cubicle, and if you are, it’s not a game key you’d hit to give your boss the impression that you’re doing productive work…it’s the “AI key.”
“Tech Firms Aren’t Just Encouraging Their Workers to Use AI. They’re Enforcing It.”
This article appeared in the February 24 edition of The Wall Street Journal, under the subtitle: “From startups to giants, including Meta and Google, companies are factoring AI use into performance reviews and trying to track productivity gains.”
Across industries, companies are now enforcing AI use through performance reviews, dashboards that track adoption, and explicit mandates that tie it to compensation and promotion. What began in Silicon Valley has rapidly spread to consulting firms, banks, manufacturers, hospitals, and even government agencies.
As you’d expect, Meta, Google, Amazon, and Microsoft were the first to move from encouragement to enforcement. Employees at these firms now see AI usage metrics appear in quarterly reviews. Non-adopters have reported stalled promotions or explicit warnings that “AI fluency” is a core competency (The Wall Street Journal, Feb 2026, reporting on internal policies).
The trend has jumped sectors. PwC requires every consultant to complete an “AI + Human Skillset” curriculum and incorporates usage into evaluations (Business Insider, Feb 5, 2026). Colgate-Palmolive’s “AI evangelist” tracks adoption across global teams. Major banks have begun tying bonuses to the number of AI-assisted analyses completed. Even some hospitals now require doctors and nurses to use AI-assisted diagnostic tools for certain procedures.
Why the shift to mandates?
Executives cite three main drivers: intense competitive pressure to keep pace with rivals, investor demands for visible returns on massive AI investments, and internal data showing that voluntary adoption plateaus at around 30–40% of employees. “We’ve made it clear: AI is no longer optional. Every employee is expected to use it, and it’s now part of how we evaluate performance,” said Accenture CEO Julie Sweet (Fortune, March 2026).
The claimed benefits are real…on paper. Early internal metrics at several companies show 10–25% gains in task speed for routine work. Cross-functional teams using AI report faster ideation and fewer silos. But the drawbacks and unintended consequences are mounting.
Mandatory AI adoption may deliver productivity benefits, but recent research reveals significant drawbacks that undermine organizational health.
Surveillance and autonomy erosion. By 2025, 70% of large companies monitored employee activity, with 68% of employees opposing AI-powered surveillance and 59% saying digital tracking damages workplace trust. AI monitoring systems now track keystroke patterns, mouse movements, email content, and even biometric data, including stress levels. Amazon employees report that surveillance produces “fear and anxiety, which creates a dangerous work environment.”
Burnout and intensified demands. AI meant to reduce workload is paradoxically accelerating burnout. Research found that AI leads to fatigue, burnout, and a growing sense that work is harder to step away from as organizational expectations for speed rise. A South Korean study shows AI adoption significantly increases job stress and burnout, while 63% of workers report AI-related fatigue driven by stress and heavy workloads.
Collapsing trust. Recent research revealed that while AI usage jumped 13% in 2025, worker confidence plummeted 18%, creating a “toxic relationship” as employees receive tools without training or support. Deloitte’s TrustID Index showed trust in company-provided generative AI fell 31% between May and July 2025, with trust in agentic AI systems dropping 89%.
Retention risks. Despite widespread AI adoption, 56% of workers report receiving no recent skills development, and 85% say they would be more loyal to employers that invest in continuing education – leaving top performers increasingly vulnerable to departure. Analysts also warn of an impending “seniority cliff”: companies that stop hiring juniors eliminate the pipeline for developing senior talent with deep institutional knowledge.
Critics argue the enforcement model is shortsighted.
“You can force usage, but you can’t force wisdom,” said Dr. Ethan Mollick, professor at the Wharton School and author of Co-Intelligence (interview, March 2026). “When AI becomes compulsory, people stop experimenting and start complying — and that’s when the real mistakes happen.” Yet the train has left the station. In boardrooms and earnings calls, executives are increasingly judged on how aggressively they have embedded AI into daily operations.
The message is clear: in 2026, using AI is part of your job. The question companies are only beginning to confront is whether forcing the technology will ultimately make their workforces more cohesive, smarter, and more efficient, or simply more exhausted, distrustful, and replaceable.
