We Automated Bullshit

Original link: https://www.cst.cam.ac.uk/blog/afb21/oops-we-automated-bullshit

In response to recent discussion of artificial intelligence (AI) and its potential impact on humanity, Cambridge academic Alan Blackwell offers a critical perspective on AI and its relationship to “automated bullshit”. While mainstream narratives often present AI as a tool that can solve pressing problems through data collection and interpretation, Blackwell highlights the problem that AI algorithms generate text which imitates human language patterns but lacks the grounding in truth needed for sound decisions. Chatbot technologies such as ChatGPT have attracted controversy over concerns about accuracy and accountability; such systems are trained on vast text archives, including social media platforms and news sites, which raises serious questions about their credibility and reliability. Invoking philosopher Harry Frankfurt’s celebrated critique of “bullshit”, Blackwell argues that the people promoting these automated tools, such as Elon Musk and British PM Rishi Sunak, fail to acknowledge the distinction between evidence-based reasoning and mere guesswork or propaganda. Furthermore, as the sociologist David Graeber documented, the proliferation of “bullshit jobs” (meaningless administrative work in corporate or academic settings) deepens divisions between social groups according to what they contribute, placing a premium on skills such as digital literacy, creativity and communication rather than factual knowledge of particular subject areas. Ultimately, Blackwell suggests that policymakers should explore regulatory frameworks to govern the production and spread of automated material, stressing the need to put quality before quantity, especially given the challenges that misinformation campaigns, fake news and disregard for factual credibility pose to contemporary discourse.

Dan McQuillan’s analysis highlights the social and political implications of AI technologies, ChatGPT in particular. He argues that contemporary AI can be understood as a project for automating administrative violence and extending austerity. His critique raises critical concerns regarding the true purpose of AI technologies, suggesting that, instead of focusing solely on their technical applications, we need to consider their broader impact on society. Rather than quibbling over trivial details, McQuillan urges us to look at the bigger picture: to investigate the basic processes involved in building and maintaining these systems, and to address the fundamental social questions that AI development raises. He also finds it alarming that enormous quantities of energy are burned to keep these large automated propaganda factories and bullshit machines running, given their exponential growth and continued expansion in recent years, and he asks us to critically assess the motives behind this trend and whether investing energy at such unprecedented cost can be justified. Throughout history, technological innovation has brought major social and economic consequences, and as digital technologies become embedded in everyday life, analysing their effects on individuals and communities becomes essential. While the arrival of AI and related technologies has undoubtedly brought advances, there is growing concern about how far those benefits offset the costs, particularly the harms to socially vulnerable groups, so a careful, objective and comprehensive examination of the consequences of AI deployment is vital. Additionally, policymakers and researchers should prioritize the creation of mechanisms to enable equitable access and ensure responsible usage. Ultimately, as McQuillan and other analysts suggest, we must recognise the long-term social and environmental costs of developing and deploying these technologies and weigh them alongside conventional cost-benefit analysis. It is time to stop fixating on details, take in the wider view, and examine the long-term consequences. Perhaps, as McQuillan puts it, “when you feel lost in the maze, just follow the yellow brick road to the nearest exit”, and we should adopt a similar mindset towards AI technology. Let’s face it: these developments offer significant opportunities for learning and teaching. However, as society continues to explore how best to use those opportunities, it is important to remember that...

Original article

ChatGPT is a bullshit generator. To understand AI, we should think harder about bullshit

This post appears in a blog created in 2019 to focus on AI in Africa. Long before the release of ChatGPT, many wondered why AI would be relevant to Africans. But I’m writing in a week when US President Biden published an executive order on AI, and British PM Rishi Sunak listened enthusiastically to Elon Musk promising a magical AI future where nobody needs to work. When the richest man in the world talks up a storm with NATO leaders, Africa will get blown around in those political and economic winds. 

Since my fieldwork in Africa, I’ve learned to ask different questions about AI, and in recent months, I’ve started to feel like the boy who questions the emperor’s new clothes. The problem I see, apparently not reported in coverage of Sunak’s AI Summit, is that AI literally produces bullshit. (Do I mean “literally”? My friends complain that I take everything literally, but I’m not a kleptomaniac). 

MIT Professor of AI Rodney Brooks summarises the working principle of ChatGPT as “it just makes up stuff that sounds good”. This is mathematically accurate, where “sounds good” is an algorithm to imitate text found on the internet, while “makes up” is the basic randomness of relying on predictive text rather than logic or facts. Other leading researchers and professors of AI say the same things, with more technical detail, as in the famous “stochastic parrots” paper by Emily Bender, Timnit Gebru and their colleagues, or Murray Shanahan’s explanation of the text prediction principles (references below).
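
To see how little machinery “makes up stuff that sounds good” requires, here is a minimal sketch in Python: a toy bigram model, invented purely for illustration and nothing like ChatGPT’s actual scale or architecture, that learns which word tends to follow which in a corpus and then emits text by weighted random choice. Nothing in it represents facts or logic.

```python
import random
from collections import defaultdict

# Toy bigram language model (an illustrative sketch, not ChatGPT's code).
# "Sounds good" = continuations seen often in the corpus;
# "makes up"    = weighted random choice, with no notion of truth.

corpus = (
    "the minister said the policy is a great success and "
    "the minister said the policy is a triumph of common sense"
).split()

# Count word -> next-word occurrences; duplicates in the list act as weights.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(seed: str, length: int = 12) -> str:
    """Emit plausible-looking text by sampling the learned statistics."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))  # randomness, not reasoning
    return " ".join(words)

print(generate("the"))  # e.g. "the policy is a triumph of common sense"
```

ChatGPT replaces the bigram table with a neural network trained on vastly more text, but the decoding step is the same in spirit: sample whatever continuation is statistically plausible.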

“Godfather of AI” Geoff Hinton, in recent public talks, explains that one of the greatest risks is not that chatbots will become super-intelligent, but that they will generate text that is super-persuasive without being intelligent, in the manner of Donald Trump or Boris Johnson. In a world where evidence and logic are not respected in public debate, Hinton imagines that systems operating without evidence or logic could become our overlords by becoming superhumanly persuasive, imitating and supplanting the worst kinds of political leader.

If this isn’t about evidence or logic, what is the scientific principle involved? Quite simply, we are talking about bullshit. Philosopher Harry Frankfurt, in his classic text On Bullshit, explains that the bullshitter “does not reject the authority of truth, as the liar does […] He pays no attention to it at all.” This is exactly what senior AI researchers such as Brooks, Bender, Shanahan and Hinton are telling us, when they explain how ChatGPT works. The problem, as Frankfurt explains, is that “[b]y virtue of this, bullshit is a greater enemy of the truth than lies are” (p. 61). At a time when a public enquiry is reporting the astonishing behaviour of our most senior leaders during the Covid pandemic, the British people wonder how we came to elect such bullshitters to lead us. But as Frankfurt observes, “Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about” (p. 63).

Perhaps this explains why the latest leader of the same government is so impressed by AI, and by billionaires promoting automated bullshit generators? But Frankfurt’s book is not the only classic text on bullshit. David Graeber’s famous analysis of Bullshit Jobs explains precisely why Elon Musk’s claim to the British PM is so revealing of the true nature of AI. Graeber revealed that over 30% of British workers believe their own job contributes nothing of any value to society. These are people who spend their lives writing pointless reports, relaying messages from one person to another, or listening to complaints they can do nothing about. Every part of their job could easily be done by ChatGPT. 

Graeber observes that aspects of university education prepare young people to expect little more from life, training them to submit to bureaucratic processes, while writing reams of text that few will ever read. In the kind of education that produces a Boris Johnson, verbal fluency and entertaining arguments may be rewarded more highly than close attention to the truth. As Graeber says, we train people for bullshit jobs by training them to generate bullshit. So is Elon Musk right, that nobody will have to work any more once AI is achieved? Perhaps so, if producing bullshit is the only kind of work we need - and you can see how the owner of (eX)Twitter might see the world that way.

AI systems like ChatGPT are trained with text from Twitter, Facebook, Reddit, and other huge archives of bullshit, alongside plenty of actual facts (including Wikipedia and text ripped off from professional writers). But there is no algorithm in ChatGPT to check which parts are true. The output is literally bullshit, exactly as defined by philosopher Harry Frankfurt, and as we would expect from Graeber’s study of bullshit jobs. Just as Twitter encourages bullshitting politicians who don’t care whether what they say is true or false, the archives of what they have said can be used to train automatic bullshit generators. 
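
As a sketch of that absence, consider a generic sampling loop (the names below are illustrative stand-ins, not ChatGPT’s real internals). Every token is accepted because it is probable given the training archive, never because it is true; any fact-checking stage would have to be bolted on from outside, since nothing in the generation objective itself rewards truth.

```python
import random

# A generic autoregressive decoding loop (all names are made-up stand-ins,
# not ChatGPT's actual internals). The point is what is absent.

def next_token_distribution(context):
    # Stand-in for a model trained on web text: it only knows which
    # continuations are statistically likely, not which are factual.
    return {"success": 0.5, "triumph": 0.3, "disaster": 0.2}

def generate(context, steps=3):
    for _ in range(steps):
        dist = next_token_distribution(context)
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        context.append(token)
        # No verification step exists here: no lookup against a source of
        # facts, no logic engine, no flag marking "this part is false".
    return context

print(generate(["the", "policy", "is", "a"]))
```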

The problem isn’t AI. What we need to regulate is the bullshit. Perhaps the next British PM should convene a summit on bullshit, to figure out whose jobs are worthwhile, and which ones we could happily lose?

If you'd like to read more, my new book on Designing Alternatives to AI, to be published by MIT Press in 2024, is available as an online free preview.
 

References

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021), 610-623.

Harry G. Frankfurt, On Bullshit. Princeton University Press, 2005.

David Graeber, Bullshit Jobs: A Theory. Penguin Books, 2019.

Murray Shanahan, "Talking About Large Language Models." arXiv preprint arXiv:2212.03551 (2022).

Glenn Zorpette, "Just Calm Down About GPT-4 Already. And stop confusing performance with competence, says Rodney Brooks," IEEE Spectrum (17 May 2023).

 
