I avoid using LLMs as a publisher and writer

Original link: https://lifehacky.net/prompt-0b953c089b44

Recent research highlights the potential downsides of relying on large language models (such as ChatGPT) for writing. An MIT study using EEG scans found that starting a piece of writing with an LLM *reduces* brain connectivity and originality, and even hampers users when they later write on their own. Participants struggled to quote their work accurately and felt little ownership of LLM-generated content. Another British study confirmed that LLMs can improve speed and quality, but at the cost of independent learning and critical thinking, fostering "metacognitive laziness." The author argues that genuine understanding comes from the associative thinking sparked by engaging with original texts, thinking that summarization tools destroy. The concern extends to fields such as open-source intelligence (OSINT), where overreliance on AI threatens analytical skills. Ultimately, the author finds LLM-generated content inauthentic: it feels "processed" and disconnected from real thought. While acknowledging that LLMs can help with tasks such as polishing communication, he places greater value on retaining personal authorship and the cognitive benefits of writing independently.

I am a book publisher who loves technology. Technology can empower people. I have been using LLM chatbots ever since they became widely available. Our publishing house works with translators and regularly tests machine translation. I recently completed two courses on artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental model (for predicting bestsellers :). I consider machine learning a remarkable invention and a catalyst for progress. Nevertheless, I still have some reservations.

Now for my more detailed arguments.

A recent study by MIT researchers (Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task) demonstrated that using LLMs when writing essays reduces the originality of the resulting work. More notably, when measured using an EEG, LLMs also diminish brain connectivity compared to when participants were allowed to use only their brains or a search engine. People who used LLMs for the first three tasks and then had to write an essay without an LLM, using only their brains, had the worst results. “In contrast, the LLM-to-Brain group, which had previously been exposed to LLM use, showed less coordinated neural effort in most bands and also a bias toward LLM-specific vocabulary,” the study reports. Using an LLM only after completing the work, on the other hand, can enhance both the quality of the result and brain connectivity, but starting with an LLM seems like a tricky choice.

It was intriguing that participants using LLMs were unable to accurately cite from their own work and were also the least likely to consider it “their own.” Participants who used only their brains or a search engine, on the other hand, quoted reliably in most cases and showed no weakened sense of ownership. This is consistent with my experience.

A study by British researchers (Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance) came to similar conclusions: LLMs can help individuals achieve faster and better results, but they weaken the ability to learn independently by making people less accustomed to thinking for themselves.

Both papers are worth reading. But I warn you: if you employ AI summarization on the first study, you will be asking for the revenge of the authors, as described in Time magazine :)

Incidentally, automatic summarization is one of the things I also avoid. In the flood of information, the offer of a condensed summary of a book or essay might seem like the greatest invention since sliced bread. The problem, however, concerns both the practical value and the joy of reading: in my opinion, the most rewarding thing about reading is that you encounter (and learn) things by occasionally triggering thematically distant associations while reading the original text. But these emerge only thanks to your comprehensive involvement and your personal neural network, which LLMs don't know the first thing about. You read a book about business in which the author mentions that he refused to move while his dog was ill, and you recognize a fundamental emotional connection with your own plans to move your company; and so a story begins that you start to tell (yourself). Summarization would obliterate this flash of insight: potentially genuine insight and other chance associations would be replaced by a totally generic and unemotional narrative.

I was also deeply touched by a real-life illustration of the end of critical thinking in the OSINT community. This type of volunteer work relies heavily on analytical reasoning as its main tool. The author explains how the gradual delegation of tasks to ML tools has insidiously undermined its key processes: source validation, consideration of multiple perspectives, hypothesis formation, and independent thinking. He states that it has led to a decline in the quality of detection work and of relationships within the community as a whole. It is, incidentally, paradoxical that the acronym OSINT stands for open-source intelligence.

I often think of the essay Writes and Write-Nots by Paul Graham (an authority in, among other fields, artificial intelligence), who argues in his uniquely light yet profound style that writing is thinking, and thinking must be cultivated. According to Graham, in the new world with AI there will be only people who can write well and those who cannot write at all.

Hmm, I don’t intend to end up in the latter group.

The thing is, I sense something peculiar in generated text. The odor of ultra-processed food. I perceive the hint of a cabaret magician who has learned a few tricks to satisfy and appease my curiosity. And then there's the hypocrisy.*

Even if I manually edit entire passages previously produced by the language model, I can still perceive its little calculators clacking away between the lines. (Yes, they are here with us in this room).

So for now, I simply cannot and do not want to use LLMs for writing, alas: the text would not be mine, as I would not be the text.

But that doesn’t mean I’m elevating myself above others, if that’s how it sounded.

I understand and can see that these feelings are justified for many chatbot users. We all have different expertise, sensitivities, and needs. If you're not fixated on literature (e.g. as a publisher, author, or editor), or if writing simply isn't your ambition, LLMs can be a welcome aid in formulating, developing, and refining text. They will probably enhance the quality of the output and make it more comprehensible without compromising it. Among other things, LLMs can perfectly smooth out the sharp edges in an escalating email exchange or come up with arguments for negotiating with a difficult client, so why not give them a try?

* In his book Story, distinguished author and storytelling lecturer Robert McKee explains that we can empathize with villains, madmen, and desperate people, but hypocrites are inherently abhorrent to us. “The audience never aligns with a hypocrite,” he writes, and I agree with him.
