I know you didn't write this

Original link: https://ammil.industries/i-know-you-didnt-write-this/

A recent experience highlighted a growing problem: “workslop” – using AI to *generate* work products and then requiring someone else to spend substantial effort to *verify* them. The author received a project plan from a co-worker and was initially impressed by its detail, only to discover through the document history that it had been generated entirely by AI. This prompted a realisation: while AI can *generate* content quickly, the critical work now lies in *verification* – ensuring accuracy, thought, and accountability.

This shift inverts traditional systems, in which computation was hard but verification was easy. Now generation is trivial, while verifying AI output is costly and demands scepticism. The core issue is not AI’s capability but the etiquette of its use – undisclosed AI use erodes trust and the social contract of collaborative work.

While AI can help those who lack resources with tasks such as transcription or marketing, it should not replace genuine effort. Professionals such as engineers and writers ultimately need to *own* their work – to understand it and be able to defend it. The focus is shifting from creation to conversation – continually questioning the provenance of, and the effort behind, the information we consume.

## A Hacker News discussion of AI-assisted work and trust

A recent Hacker News thread sparked a debate about the ethics of using AI tools like GPT to assist with work, particularly around transparency. The original poster shared a frustrating experience: discovering that a colleague appeared to be pasting prompts into an AI and presenting the output as their own work, without saying so. This raised concerns about a lack of accountability and a degree of disrespect.

Many commenters echoed the same point, highlighting the transfer of work – AI saves the creator time but adds a verification burden for reviewers. Several users described similar workflows in which they draft in tools like Obsidian or Vim and *then* paste into a collaboration platform, raising questions about edit history and authenticity.

The heart of the discussion was trust and accountability. While many acknowledged AI’s usefulness, the consensus was that presenting AI-generated content as entirely one’s own original work is problematic. Suggestions ranged from explicitly labelling AI-assisted content to simply judging work on its merits, but a common theme was that you need to be accountable for, and understand, the work you present to others. Ultimately, the conversation underlined the pressing need to adjust workplace norms and expectations in an era when AI assistance is always at hand.

Original article

I received a link from a co-worker with the accompanying note:

I put together a plan for the project, take a look.

Taking a quick scan through the linked document, I’m pleased to see that there’s some substance to it. And look, there are tables and step-by-step instructions. At the bottom there are risks and potential mitigations. They have definitely put together a plan and it’s definitely in this document.

Later, I poured another cup of coffee and actually read the document, and something twigged in a corner of my brain. Suspicions aroused, I clicked on the “Document History” button in the top right and saw a clean history: an empty document – and then, wham – a fully-formed plan, as if it had just spilled out of someone’s brain, straight onto the screen, ready to share.

So it’s definitely AI. I felt betrayed and a little foolish. But why? If this LLM has slurped up the entirety of human written output, shouldn’t this plan be better than what one person could ever dream up? Perhaps that’s exactly the thought process they had when they turned to their LLM of choice.

I recall looking back at the note to double- and triple-check that they didn’t call out the use of AI. If this was their best attempt then, to save face, I’d have to write the plan myself.

Regardless of their intent, I realised something subtle had happened. Any time saved by (their) AI prompting gets consumed by verification overhead; the work just gets passed along to someone else – in this case, me.

Have you been the victim of AI workslop?

A recent, well-covered article in Harvard Business Review explores the newly coined category of “workslop” – working materials produced by leaning on AI. The study provides extensive examples of people reaching for AI with the direct outcome of greatly increasing the amount of collective work required.

That increased work is verification – figuring out whether someone actually thought about what they sent you – and it rhymes with a completely different domain.

At the core of the cryptographic systems that keep our information private online are mathematical constructs that are easy to verify but hard to compute.
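To make that asymmetry concrete, here’s a minimal sketch in Python of a proof-of-work-style hash puzzle; the `solve` and `verify` helpers and the difficulty setting are illustrative choices, not part of any particular system. At this difficulty, finding a solution takes on the order of a million hash attempts, while checking a claimed solution takes a single call.

```python
import hashlib

# Illustrative hash puzzle: find a nonce so that SHA-256(prefix + nonce)
# starts with `difficulty` zero hex digits. Solving takes roughly
# 16**difficulty attempts on average; verifying takes exactly one hash.

def solve(prefix: str, difficulty: int = 5) -> int:
    """Brute-force a nonce (hard to compute)."""
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(f"{prefix}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

def verify(prefix: str, nonce: int, difficulty: int = 5) -> bool:
    """Check a claimed nonce (easy to verify)."""
    digest = hashlib.sha256(f"{prefix}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("project-plan")         # slow: the expensive side
assert verify("project-plan", nonce)  # fast: a single hash
```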

With AI writing, we’ve inverted this: generation is trivial, verification is expensive. We still read, but we read differently: guards up, trust withheld, looking for tells. The document history button becomes mandatory due diligence.

It’s just not nice

Using AI when writing for others is fundamentally about etiquette. It’s not polite to share purely AI-generated writing without disclosing its provenance. In most cases we’re looking for an equitable exchange of ideas. If you know in your heart of hearts that you didn’t put the work in, you’re undermining the social contract between you and your reader.

By passing off AI output as your own work, you inevitably become passive: an observer of the act of creating, an assistant to the creator.

If you can’t explain what you’ve written, do you have any right to share it? There’s a reason most PhD candidates defend their work orally.

Why should I bother to read something you didn’t bother to write?

Accountability-shirking as a Service

In serious engineering circles we’re reaching consensus that developers are held accountable for all code committed and shared, regardless of how it was produced.

Other work is in different territory. Side projects, throwaway code, single-use applications – building something you lack the skills to create otherwise. But if you ship it and people use it, you’ve created an implicit promise: that you can maintain, debug, and extend what you’ve built. If AI assembled it and you can’t answer basic questions about how it works, you’ve misled users about what they can depend on. The work document and the shipped app both create dependencies – one on your strategic thinking, one on your technical follow-through.

Engineers who have embraced coding assistants to do the messy bit of actually putting code into the editor see concrete, if modest, productivity boosts.

The same is happening for writers. Unless pressured by unrealistic expectations or deadlines (or, in some cases, pleading ignorance of the risks), professional writers will converge on the same view as software engineers. Anything worth writing has to be written.1

Writers and other professionals want to do good work and be recognised for their good work. This leads us to explore where AI aids that work and understand where it impedes it. It doesn’t help that we’re working this all out as we go along.

Despite the name, conversion work – not generation – is where generative AI justifies itself. In journalism, Jason Koebler @ 404 Media notes:

YouTube’s transcript feature is an incredible reporting tool that has allowed me to do stories that would have never been possible even a few years ago. YouTube’s built-in translations and subtitles, and its transcript tool are some of the only reasons that I was able to do this investigation into Indian AI slop creators, which allowed me to get the gist of what was happening in a given video before we handed them to human translators to get exact translations.

When the team did the admirable thing of translating important reporting on ICE into Spanish, they turned to human translators for that extra certainty. Some people would be happy with the LLM translation. That’s their line. For responsible, authentic journalism, 404 Media took the higher road.

In “Good Bot, Bad Bot”, Paul Ford compliments the proposal to use AI to help academics package and market their work to a non-technical audience.

He notes:

It makes economic sense. Researchers who aren’t affiliated with giant companies or large research labs at universities often have few resources to promote their research. And for the most part, biology postdocs cannot write good posts—not least in their native language, but especially in multiple languages. AI won’t be as good at posting as a thoughtful human, but it will likely be better at fun, emoji-laden social media posts than, say, an actuarial scientist adjunct who speaks English as their fourth language.

It’s refreshing that Ford acknowledges the pragmatic realities. Promotional posts aren’t the research itself – marketing your paper isn’t the same as writing it. That’s Ford’s line. The economic reality of underfunded academics means embracing AI in ways that might actually be welcome.

Undisclosed AI is becoming the default assumption. Reading anything is now an act of faith that someone thought about the results longer than it took to fire off a prompt.

Faced with hunting for the author’s fingerprints every time, will we get tired of the guessing game?

Verification today often leads to difficult conversations about the nature of work and effort, authenticity and etiquette. Those conversations are the work now.

Thanks to Sarah Moir, Harrison Neuert & Geoff Storbeck for their invaluable feedback.
