Stop Sloppypasta

Original link: https://stopsloppypasta.ai/

Sharing unedited AI-generated text undermines communication and understanding. Like junk food, it costs almost nothing to produce but places a disproportionate burden on readers, who must still spend real effort to interpret it. This imbalance stems from LLMs removing the effort traditionally involved in writing, a process essential to both the author's comprehension and the reader's trust. The ease of AI output encourages verbosity and introduces uncertainty: responses can be confidently wrong ("hallucinations") and lack a clear source of accountability. Receiving raw AI output erodes trust because it obscures whether the information has been verified, shifting the dynamic from "trust but verify" to constant suspicion. Ultimately, relying on unedited AI reduces the sender's learning while imposing cognitive burden and distrust on the recipient, creating a vicious cycle of diminished credibility and increased effort.

A Hacker News discussion centers on the growing problem of "sloppypasta": blindly copy-pasting unverified output from large language models (LLMs) such as ChatGPT into workplace communication. The term, coined by user `namnnumbr`, describes the practice as discourteous because it offloads the work of verification and refinement onto the recipient. The conversation focuses on *how* to address a colleague's behavior without causing offense. One user sought advice on politely discouraging the practice, worried that it might come across as criticizing the colleague's professionalism or falsely accusing them of using AI. Commenters distinguish "sloppypasta" from simply providing a link (LMGTFY), arguing it is closer to sharing unvetted search results. The core problem is the lack of critical thinking, and the burden of polishing raw AI-generated content being placed on others.

Original Text

Sharing raw AI output is like eating junk food: it's easy and may feel good, but it's not in your best interest. You'll negatively influence your relationship with the recipient, and do yourself a disservice by reducing your own comprehension.

Before LLMs, writing took effort. Authors spent time and effort considering and selecting their words with intention, effort that was matched by the effort the audience spent reading. This balance is broken with LLMs: the effort to produce text is effectively free, but the effort required to read the text hasn't changed. The increasing verbosity of LLMs further widens the effort asymmetry. In some circumstances (like pasting raw LLM output into a chat thread), the sloppypasta effectively becomes a filibuster, crowding out the existing conversation and blocking the viewport.

Writing is thinking. The writing process forces the author to work through their thoughts, building their comprehension and retention. Multiple studies have found that delegating tasks to LLMs creates cognitive debt. Shortcutting thinking with LLMs ultimately reduces comprehension of and recall about the delegated subject.

Before LLMs, trust was the default. Authors wrote from their personal expertise and perspective, and readers could judge an author's understanding of the subject based on the coherence of their writing. LLMs generate the most probable next token given an overarching goal to be helpful, which explains their propensity for hallucination (confabulation) and why many people feel that LLMs are bullshit generators. Modern LLMs are typically provided tools to look up grounding information, which reduces (but does not eradicate) the likelihood that they outright make up facts in their responses. But that still doesn't solve the trust problem; the reader still has no way to know what the sender checked and what they didn't. LLM responses, therefore, cannot be trusted by default, and they compound the effort asymmetry on the reader by adding a verification tax.
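As a toy illustration of why fluency and truth come apart, here is a minimal sketch of weighted next-token sampling. The continuations and probabilities are invented for demonstration (a real LLM derives them from learned weights over a large vocabulary), but the key point survives the simplification: the sampling step ranks plausibility, never truth.

```python
import random

# Hypothetical continuations for the prefix "Einstein was born"; the strings
# and probabilities below are made up purely for illustration.
next_token_probs = {
    "in 1879 in Ulm": 0.45,         # fluent, and happens to be correct
    "in 1881 in Munich": 0.35,      # equally fluent, but false: a confabulation
    "after winning a prize": 0.20,  # fluent nonsense
}

def sample_next(probs: dict[str, float]) -> str:
    """Pick a continuation weighted by probability. Nothing in this step
    consults a source of facts, which is why errors come out confidently."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print("Einstein was born", sample_next(next_token_probs))
```

Grounding tools effectively reshape these probabilities toward verified continuations, but, as noted above, they only reduce the weight on the false ones; they don't remove them.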

Beyond accuracy, LLMs write authoritatively with the tone and confidence of an expert. This adds further uncertainty to the reader's burden; they have no way to gauge the sender's actual level of expertise with the subject matter. The result is a further erosion of trust, because the AI's voice removes signal that recipients previously used to distinguish expertise from plausible-sounding slop.

Formerly, "Trust but verify" ruled. Readers would trust until that trust was broken; the author was trustworthy or they weren't. However, shared LLM output obfuscates the chain of trust. Did the prompter do the appropriate due diligence to validate the LLM response? If problems or errors are discovered, who is to blame, the prompter or the AI? Was it an oversight, a missed verification step, or was verification skipped altogether? The uncertainty means the recipient doesn't know what they can trust, what has or has not been verified; they must treat everything as untrusted. Just like the Boy Who Cried Wolf, once the trust is broken, the uncertainty spreads to all future messages from the sender.

Assumptions of balanced effort and presumed trust are no longer guaranteed in a post-LLM world. Sloppypasta creates a compounding negative feedback loop where the sender forfeits learning and credibility while the recipient burns effort and loses trust. Receiving raw AI output feels bad due to the cognitive dissonance of having these assumptions violated.
