There's Just One Problem: AI Isn't Intelligent, And That's A Systemic Risk

Original link: https://www.zerohedge.com/technology/theres-just-one-problem-ai-isnt-intelligent-and-thats-systemic-risk

Charles Hugh Smith discusses the limitations of artificial intelligence (AI), arguing that it merely mimics human intelligence rather than being genuinely intelligent. He highlights the mistaken belief that AI's advance will lead to a utopian society of limitless control over Nature and unrestrained consumption of abundant resources. In reality, he explains, AI only appears intelligent because it imitates human behavior based on vast amounts of data; it lacks any real understanding or self-awareness. Because of this limitation, AI struggles to distinguish fact from fiction, producing hallucinations and confused responses. Moreover, AI's reliance on predetermined algorithms reduces its adaptability to changing circumstances, leaving it inflexible when solving complex problems or identifying genuine faults within a system. Overall, Smith argues that while AI may offer some utility, it cannot deliver true intelligence, and it poses significant challenges in the form of systemic failures and ethical dilemmas. He further stresses that the widespread adoption of AI threatens our intellectual development by pushing us to depend heavily on AI for information and knowledge. As a result, people lose critical-thinking skills, creativity, and the ability to analyze situations in depth. Ultimately, this dependence on AI weakens both individuals and society, leaving people less resilient and less knowledgeable in unexpected crises. In short, while AI may appear revolutionary, it ultimately fails to deliver meaningful benefits to individuals or society; instead, it brings a range of risks, ethical dilemmas, and obstacles to intellectual development and progress. Finally, Smith argues that over-reliance on AI could deepen existing socioeconomic divides, creating a neofeudal system in which the wealthy enjoy exclusive access to human-provided services while everyone else settles for inferior, error-prone AI substitutes. The proliferation of AI may also erode the value of unique and scarce assets by making them easy to copy or replicate at scale. In addition, Smith predicts that AI will not solve humanity's most pressing challenges--pollution, climate change, and resource scarcity--but will instead worsen them by adding further systemic risk and waste. In sum, the author portrays AI as an exciting yet perilous frontier whose promise falls short of expectations.


Original article

Authored by Charles Hugh Smith via OfTwoMinds blog,

Mimicry of intelligence isn't intelligence, and so while AI mimicry is a powerful tool, it isn't intelligent.

The mythology of Technology has a special altar for AI, artificial intelligence, which is reverently worshiped as the source of astonishing cost reductions (as human labor is replaced by AI) and the limitless expansion of consumption and profits. AI is the blissful perfection of technology's natural advance to ever greater powers.

The consensus holds that the advance of AI will lead to a utopia of essentially limitless control of Nature and a cornucopia of leisure and abundance.

If we pull aside the mythology's curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn't intelligence, and so while AI mimicry is a powerful tool, it isn't intelligent.

The current iterations of Generative AI--large language models (LLMs) and machine learning--mimic our natural language ability by processing millions of examples of human writing and speech and extracting what algorithms select as the best answers to queries.

These AI programs have no understanding of the context or the meaning of the subject; they mine human knowledge to distill an answer. This is potentially useful but not intelligence.

The AI programs have limited capacity to discern truth from falsehood, hence their propensity to hallucinate fictions as facts. They are incapable of discerning the difference between statistical variations and fatal errors, and layering on precautionary measures adds additional complexity that becomes another point of failure.

As for machine learning, AI can project plausible solutions to computationally demanding problems such as how proteins fold, but this brute-force computational black-box is opaque and therefore of limited value: the program doesn't actually understand protein folding in the way humans understand it, and we don't understand how the program arrived at its solution.

Since AI doesn't actually understand the context, it is limited to the options embedded in its programming and algorithms. We discern these limits in AI-based apps and bots, which have no awareness of the actual problem. For example, our Internet connection is down due to a corrupted system update, but because this possibility wasn't included in the app's universe of problems to solve, the AI app/bot dutifully reports the system is functioning perfectly even though it is broken. (This is an example from real life.)

In essence, every layer of this mining / mimicry creates additional points of failure: the inability to identify the difference between fact and fiction or between allowable error rates and fatal errors, the added complexity of precautionary measures and the black-box opacity all generate risks of normal accidents cascading into systems failure.

There is also the systemic risk generated by relying on black-box AI to operate systems to the point that humans lose the capacity to modify or rebuild the systems. This over-reliance on AI programs creates the risk of cascading failure not just of digital systems but the real-world infrastructure that now depends on digital systems.

There is an even more pernicious result of depending on AI for solutions. Just as the addictive nature of mobile phones, social media and Internet content has disrupted our ability to concentrate, focus and learn difficult material--a devastating decline in learning for children and teens--AI offers up a cornucopia of snackable factoids, snippets of coding, computer-generated TV commercials, articles and entire books that no longer require us to have any deep knowledge of subjects and processes. Lacking this understanding, we're no longer equipped to pursue skeptical inquiry or create content or coding from scratch.

Indeed, the arduous process of acquiring this knowledge now seems needless: the AI bot can do it all, quickly, cheaply and accurately. This creates two problems: 1) when black-box AI programs fail, we no longer know enough to diagnose and fix the failure, or do the work ourselves, and 2) we have lost the ability to understand that in many cases, there is no answer or solution that is the last word: the "answer" demands interpretation of facts, events, processes and knowledge bases that are inherently ambiguous.

We no longer recognize that the AI answer to a query is not a fact per se; it's an interpretation of reality that's presented as a fact, and the AI solution is only one of many pathways, each of which has intrinsic tradeoffs that generate unforeseeable costs and consequences down the road.

To discern the difference between an interpretation and a supposed fact requires a sea of knowledge that is both wide and deep, and in losing the drive and capacity to learn difficult material, we've lost the capacity to even recognize what we've lost: those with little real knowledge lack the foundation needed to understand AI's answer in the proper context.

The net result is that we become less capable and less knowledgeable, blind to the risks created by our loss of competency, while the AI programs introduce systemic risks we cannot foresee or forestall. AI degrades the quality of every product and system, for mimicry does not generate definitive answers, solutions and insights; it only generates an illusion of definitive answers, solutions and insights, which we foolishly confuse with actual intelligence.

While the neofeudal corporate-state cheers the profits to be reaped by culling human labor on a mass scale, the mining / mimicry of human knowledge has limits. Relying on the AI programs to eliminate all fatal errors is itself a fatal error, and so humans must remain in the decision loop (the OODA loop of observe, orient, decide, act).

Once AI programs engage in life-safety or healthcare processes, every entity connected to the AI program is exposed to open-ended (joint and several) liability should injurious or fatal errors occur.

If we boil off the mythology and hyperbole, we're left with another neofeudal structure: the wealthy will be served by humans, and the rest of us will be stuck with low-quality, error-prone AI service with no recourse.

The expectation of AI promoters is that Generative AI will reap trillions of dollars in profits from cost savings and new products / services. This story doesn't map the real world, in which every AI software tool is easily copied / distributed and so it will be impossible to protect any scarcity value, which is the essential dynamic in maintaining the pricing power needed to reap outsized profits.

There is little value in software tools that everyone possesses unless a monopoly restricts distribution, and little value in the content auto-generated by these tools: the millions of AI-generated songs, films, press releases, essays, research papers, etc. will overwhelm any potential audience, reducing the value of all AI-generated content to zero.

The promoters claim the mass culling of jobs will magically be offset by entire new industries created by AI, echoing the transition from farm labor to factory jobs. But the AI dragon will eat its own tail, for it creates few jobs or profits that can be taxed to pay people for not working (Universal Basic Income).

Perhaps the most consequential limit to AI is that it will do nothing to reverse humanity's most pressing problems. It can't clean up the Great Pacific Trash Gyre, or limit the 450 million tons of mostly unrecycled plastic spewed every year, or reverse climate change, or clean low-Earth orbits of the thousands of high-velocity bits of dangerous detritus, or remake the highly profitable "waste is growth" Landfill Economy into a sustainable global system, or eliminate all the sources of what I term Anti-Progress. It will simply add new sources of systemic risk, waste and neofeudal exploitation.

*  *  *

Become a $3/month patron of my work via patreon.com.

Subscribe to my Substack for free
