AI has fixed my productivity

Original link: https://blog.dmcc.io/journal/ai-has-fixed-my-productivity/

## AI and Productivity: CEOs May Be Missing the Point

A recent survey found that many CEOs report no productivity gains from AI, leading some to declare AI a failure. That verdict, though, likely reflects *how* AI has been deployed rather than its inherent capabilities. The problem is not a lack of potential but a failure to integrate AI thoughtfully into workflows.

Rather than large, company-wide rollouts with inadequate training, real gains come from small, targeted applications. Tools that help with meeting notes, code scaffolding, and research compilation can save substantial amounts of time, cumulatively improving individual productivity. These "frictionless" improvements are hard to quantify in traditional reporting, so CEOs overlook them.

Relying on third-party AI also raises privacy concerns, since these tools require data as input. The productivity benefits come with data-sharing and governance challenges that many organisations have yet to address.

Ultimately, the gap is not in the technology but in skills. Using AI effectively requires individuals to experiment and adapt it to their specific needs, a process that does not scale the way a software purchase does. AI has not failed; most companies simply have not yet learned *how* to use it well.

## AI and Productivity: A Mixed Experience

A recent Hacker News discussion, prompted by a blog post ([dmcc.io](https://dmcc.io)), centres on AI's impact on productivity. The poster claims significant gains from replacing Big Tech tools with self-hosted alternatives and from leaning on AI, though without much concrete detail.

Many commenters agreed, reporting substantial time savings and smoother workflows thanks to AI coding assistants (Copilot, Cursor, Claude Code) and meeting-transcription services (Granola, Hyprnote). Some estimated that AI assistance lets them produce code 5-10x faster, and even improves code and documentation quality.

There was scepticism as well. One commenter cited MIT research suggesting that AI pilots tend to benefit individuals rather than the company, possibly masking on-the-clock leisure. Others questioned whether merely *generating* code faster amounts to better solutions, stressing the importance of understanding user needs. A shared concern was that AI's gains are "granular and personal," hard to quantify, and may be inflating valuations.

Ultimately, the discussion highlights a shift in which AI is becoming a core part of many workflows, even as its long-term impact and affordability remain uncertain.

Original article

Thousands of CEOs say AI hasn't improved productivity. I think they're measuring the wrong things.

A Fortune survey doing the rounds this week has thousands of CEOs admitting that AI has had no measurable impact on employment or productivity. It’s being treated as vindication by the sceptics and a crisis by the vendors. I read it and thought: these people are using AI wrong.

I use AI tools every day. Claude helps me write code. OpenClaw handles the kind of loose, conversational thinking I used to do on paper or in my head. Granola transcribes my meetings and a plugin I built pipes the notes straight into Obsidian. My email gets triaged before I look at it. Research gets compiled in minutes instead of hours. This stuff has genuinely changed how I work, and I don’t think I could go back.

The CEO survey doesn’t prove AI is failing. It proves that most organisations have no idea how to deploy it.

What actually changed

The gains aren’t where the enterprise pitch decks said they’d be. Nobody handed me an AI tool that “transformed my workflow” in one go. What happened was slower and more specific: a dozen small frictions disappeared, and the cumulative effect was significant.

Meeting notes are the obvious one. Before Granola, I’d either scribble while half-listening or pay attention and try to reconstruct things afterwards from memory. Both were bad. Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That’s 20 minutes a day I got back, every day, without thinking about it.
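The post doesn't show the plugin that pipes Granola notes into Obsidian, so the following is only a rough sketch of that kind of glue. It assumes (my assumption, not the author's description) that the transcription tool exports Markdown files into a local folder, which a script can then copy into the vault:

```python
# Hypothetical sketch: sync meeting-note exports into an Obsidian vault.
# Assumes the transcription tool writes Markdown files into EXPORT_DIR;
# the author's actual Granola-to-Obsidian plugin is not shown in the post.
import shutil
import time
from datetime import date
from pathlib import Path

EXPORT_DIR = Path.home() / "Granola" / "exports"    # assumed export location
VAULT_DIR = Path.home() / "Obsidian" / "Meetings"   # assumed vault folder

def sync_once() -> int:
    """Copy any export not yet present in the vault; return the number copied."""
    VAULT_DIR.mkdir(parents=True, exist_ok=True)
    copied = 0
    for note in EXPORT_DIR.glob("*.md"):
        target = VAULT_DIR / f"{date.today().isoformat()} {note.name}"
        if not target.exists():
            shutil.copy2(note, target)
            copied += 1
    return copied

if __name__ == "__main__":
    # Poll every minute; a real plugin would hook the tool's own export events.
    while True:
        sync_once()
        time.sleep(60)
```

The shape of the pipe is the point: transcript in, dated Markdown note out, no manual step in between.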

Code generation changed my relationship with side projects entirely. I’ve shipped things this year that I simply wouldn’t have started before: small tools, automations, scripts that solve a specific problem in an afternoon instead of a weekend. The AI doesn’t write production-quality code on its own, but it gets me from “I know what I want” to “I have something running” in minutes instead of hours. That speed difference matters. It’s the difference between “I’ll build that someday” and actually building it.

Summarising long documents, compiling research, triaging email: none of these are exciting. But they used to eat real time. Now they don’t. The compound effect of reclaiming 30 or 40 minutes across a day is that my actual focus hours go further. I wrote about protecting those hours last year, and AI tools have turned out to be one of the better ways to do it.
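The post doesn't describe how the email triage is wired up, so here is a minimal sketch of what an LLM-backed triage step might look like, assuming the messages have already been fetched and using the Anthropic Python SDK; the model name, labels, and sample inbox are illustrative, not the author's setup:

```python
# Hypothetical email-triage sketch, not the author's actual configuration.
# Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

LABELS = ("reply-today", "read-later", "archive")

def triage(subject: str, snippet: str) -> str:
    """Ask the model for a single triage label for one email."""
    prompt = (
        f"Classify this email into exactly one of: {', '.join(LABELS)}.\n"
        f"Subject: {subject}\n"
        f"Preview: {snippet}\n"
        "Answer with the label only."
    )
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=10,
        messages=[{"role": "user", "content": prompt}],
    )
    label = msg.content[0].text.strip().lower()
    return label if label in LABELS else "read-later"  # safe fallback

if __name__ == "__main__":
    inbox = [
        {"subject": "Invoice overdue", "snippet": "Payment was due last week."},
        {"subject": "Weekly newsletter", "snippet": "Top links this week."},
    ]
    for mail in inbox:
        print(mail["subject"], "->", triage(mail["subject"], mail["snippet"]))
```

In a real setup the label would drive a rule in the mail client (move, flag, snooze) rather than just being printed.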

Why the survey got it wrong

The CEO survey is measuring organisational productivity, which is a completely different thing from individual productivity. Most companies deployed AI by buying enterprise licences and hoping for the best. Copilot seats for every developer. ChatGPT access for every department. No training, no workflow integration, no clarity on what problems the tools were supposed to solve.

That’s not an AI failure. That’s a deployment failure. It’s a silly analogy, but you wouldn’t buy everyone in the company a piano and then wonder why they aren’t all musicians a month later. Yet that’s essentially what happened with AI in most organisations, and it hopefully illustrates the point.

The productivity gains I’ve found came from figuring out, through months of trial and error, exactly where AI fits into my specific workflow. Not the generic “write me an email” stuff. The narrow, targeted things: transcription, code scaffolding, document summarisation, research triage. Each one required experimentation to get right. Most people in most companies haven’t done that work, and their employers aren’t helping them do it.

There’s also a measurement problem. My 20 minutes saved on meeting notes doesn’t show up in a quarterly report. The side project I shipped in a day instead of a week doesn’t register as a productivity metric. The compounding effect of less friction across dozens of small tasks is invisible to anyone looking at spreadsheets. CEOs are looking for step-change improvements because that’s what they were sold. The actual gains are granular and personal, which makes them hard to count and easy to dismiss.

The uncomfortable bit

None of this is free. Every AI tool that makes me more productive does so by ingesting my work. My meeting transcripts, my code, my half-formed ideas, my entire stream of consciousness on a given day: all of it flows through systems I don’t own and can’t audit.

I’ve spent the past year moving away from surveillance platforms. I replaced Google Photos with Ente, Gmail with Migadu, WhatsApp with Signal. I run my own XMPP server. I self-host my password manager. And yet I willingly feed more context into AI tools each day than Google ever passively collected from me.

It’s a contradiction I haven’t resolved. The productivity gains are real enough that I’m not willing to give them up, but the privacy cost is real too, and I notice it. For companies putting their entire workforce’s output through third-party AI, the data governance implications are enormous. Most organisations haven’t thought about this seriously, which is another reason the CEO survey results look the way they do: they adopted the tools without understanding what they were trading.

I’ve settled into an uneasy position: AI for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It’s not philosophically clean. It’s just honest.

The real gap

The gap isn’t between AI’s potential and its capability. The tools are good enough. The gap is between having access to AI and knowing how to use it well. That’s an individual skill, built through experimentation, and it doesn’t scale the way enterprise software purchases do.

I’ll keep using these tools. They’ve made me measurably more productive in ways I can point to: time saved, projects shipped, focus protected. The CEOs in that survey aren’t wrong about what they’re seeing in their organisations. They’re just wrong about what it means. AI hasn’t failed. Most companies just haven’t figured it out yet.
