(comments)
Original link: https://news.ycombinator.com/item?id=38302635
Dan McQuillan's analysis highlights the social and political implications of AI technologies, ChatGPT in particular. He argues that contemporary AI can be viewed as a project for automating administrative violence and expanding austerity. His critique raises critical concerns regarding the true purpose of AI technologies, suggesting that, instead of focusing solely on their technical applications, we need to consider their broader impact on society.
McQuillan suggests that rather than quibbling over trivial details, we should look at the bigger picture, which includes investigating the underlying processes involved in building and maintaining these systems and addressing the fundamental social questions tied to AI development. Finally, he argues that, given how these large automated propaganda factories and bullshit machines have grown exponentially in recent decades and continue to expand, burning enormous amounts of energy to keep them running is cause for concern. McQuillan asks us to critically assess the motives behind this trend and to decide whether investing energy at such unprecedented cost is justified.
Throughout history, technological innovation has had major social and economic impacts. As we progressively embed digital technologies in everyday life, it becomes essential to analyze their effects on individuals and communities. While the emergence of AI and related technologies has undoubtedly brought progress, there is growing concern about the extent to which these benefits offset the costs, particularly the negative effects on society's most vulnerable groups. A careful, objective, and comprehensive examination of the consequences of AI deployment is therefore essential. Additionally, policymakers and researchers should prioritize the creation of mechanisms to enable equitable access and ensure responsible usage.
Ultimately, as McQuillan and other analysts suggest, we must recognize the long-term social and environmental costs associated with developing and deploying these technologies and weigh those factors alongside traditional cost-benefit analysis. It is time we stopped fixating on the details, took in the broader picture, and examined the long-term consequences. As McQuillan puts it, "when you feel lost in a maze, just follow the yellow brick road to the nearest exit"; perhaps we should adopt a similar mindset toward AI technologies. Let's face it: these developments offer significant opportunities for learning and teaching. However, as society continues to explore how best to make use of these opportunities, it is important to remember that
However, at some point you have to admit that the LLM does generate good answers. They might be answers that merely happen to pass the smell test, but they are good answers nonetheless; for instance, when you ask it for a snippet of code and it gets it right.
And here is the crucial thing: you need to already know what you're doing to know whether the LLM got it right. I'm no historian, but I can ask cGPT for an essay about the causes of the Great War. When I get the answer, it sounds right to me. I don't know whether the essay covers the things an actual historian would find important; all I know is that it gives me the vanilla answer that a layman who has read a little would think was the right answer.
Now there's another issue this brings up. Most of us are experts in one field only. What is stopping the LLM from fooling me in every field I don't know anything about? I'd best be wary of using it outside my area of expertise.
So in their current iteration, I think LLMs are a shortcutting tool for experts. I can tell when one spits out a snippet of code that is correct and when it's wrong; someone who wasn't working in my domain would get fooled.
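To make that concrete, here is a minimal sketch of what "I can tell when the snippet is correct" looks like in practice: treat the model's output as untrusted and check it against cases you already know the answer to. The slugify function below is a hypothetical stand-in for something an LLM might hand you, not output from any particular model; the tests encode the domain knowledge the expert brings.

    # Sketch of the "expert verifies the snippet" workflow described above.
    # slugify() stands in for a hypothetical LLM-generated snippet.
    import re
    import unittest

    def slugify(text: str) -> str:
        # Hypothetical LLM output: lowercase, collapse runs of
        # non-alphanumeric characters into a single hyphen, trim hyphens.
        text = text.lower()
        text = re.sub(r"[^a-z0-9]+", "-", text)
        return text.strip("-")

    class TestSlugify(unittest.TestCase):
        # These assertions encode what the reviewer already knows a correct
        # slugify must do; without that prior knowledge there is nothing
        # to check the model's output against.
        def test_basic(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_collapses_separators(self):
            self.assertEqual(slugify("a  --  b"), "a-b")

        def test_trims_edges(self):
            self.assertEqual(slugify("  trailing punctuation!!! "), "trailing-punctuation")

    if __name__ == "__main__":
        unittest.main()

The asymmetry is the point: writing those three assertions takes domain knowledge, and accepting the snippet without them is exactly the "getting fooled" case.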