Is Artificial Intelligence The Next Easy-Money Bust?

Original link: https://www.zerohedge.com/technology/artificial-intelligence-next-easy-money-bust

The article discusses the growth trend of artificial intelligence (AI). The author explains that while AI has existed since the 1980s, it has recently attracted renewed interest thanks to advances in large language models (LLMs). LLMs are described as complex statistical engines that generate responses based on input data. The author uses examples to show the strengths and weaknesses of LLM systems, emphasizing that they perform well on simple requests but struggle with more complex problems or ambiguous input. The author raises concerns about AI's practicality, noting that it demands enormous computing resources while delivering limited returns in end-user benefit. He criticizes the AI industry's excessive spending and the technology's lack of clear applications. He cites examples such as Air Canada's AI implementation leading to misleading information and litigation, and efforts by U.S. courts to forbid the use of AI. The author also argues that the AI industry is over-reliant on expensive hardware and consumes vast amounts of electricity, questioning whether future hardware advances can offset rising costs. He believes the AI industry may reach a saturation point and, despite companies continuing to invest heavily in AI research, could ultimately suffer major financial losses. In conclusion, the author questions AI's real value, arguing that it represents more of a marketing gimmick fueled by prolonged easy money than a genuinely transformative technology with broad applicability and tangible benefits.


Original Article

Authored by Justin Murray via The Mises Institute,

Since early 2022, the big buzz in the tech industry, and among the general public, has been “artificial intelligence.” While the concept isn’t new - AI has been the term used to describe how computers play games since at least the 1980s - it has once again captured the public’s imagination.

Before getting into the meat of the article, a brief primer is necessary. When talking about AI, it’s important to understand what is meant. AI can be broken down into seven broad categories. Most of the seven are, at best, hypothetical and do not exist. The type of AI everyone is interested in falls under the category of Limited Memory AI. These are where large language models (LLMs) reside. Since this isn’t a paper on the details, think of LLMs as complex statistical guessing machines. You type in a sentence and it will output something based on the loaded training data that statistically lines up with what you requested.
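The “statistical guessing machine” idea can be illustrated with a deliberately tiny sketch: a toy next-word model that counts which word follows which in its training data, then generates by always picking the most frequent successor. Real LLMs are vastly more sophisticated, but the underlying principle — predict the statistically likely continuation from training data — is the same. The corpus and names here are invented for illustration:

```python
from collections import defaultdict

def train(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Generate text by repeatedly picking the most frequent successor."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # never saw this word in training; nothing to guess
        out.append(max(followers, key=followers.get))
    return " ".join(out)

model = train("the bean goes to the party and the bean jumps")
print(generate(model, "the"))  # echoes the most common training pattern
```

The model has no notion of beans or parties; it simply replays whichever word sequence was statistically dominant in its training text.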

Based on this technology, LLMs can produce (at least on the surface) impressive results. For example, ask ChatGPT 4.0 (the latest version at the time of writing) the following logic puzzle:

This is a party: {}

This is a jumping bean: B

The jumping bean wants to go to the party.

It will output, with some word flair, {B}. Impressive, right? It can do this same thing no matter what two characters you use in the party and whatever character you desire to go to the party. This has been used as a demonstration of the power of artificial intelligence.

However, do this:

This is a party: B

This is a jumping bean: {}

The jumping bean wants to go to the party.

When I asked this, I was expecting the system to, at minimum, give me an answer similar to the one above; instead, it gave me two answers: B{} and {}B. Neither is correct, since the logic puzzle is unsolvable, at least in terms of how computers operate. The correct answer, to a human, would be I{}3.

To understand what’s going on under the hood, here’s the next example:

Dis be ah pahtah: []

Messa wanna boogie woogie: M

Meesa be da boom chicka boom.

This silly Jar Jar Binks-phrased statement, if given to a human, makes no sense since the three statements aren’t related and there isn’t a logic puzzle present. Yet, GPT4 went through the motions and said that I’m now the party. This is because—for all its complexity—the system is still algorithmically driven. It sees the phrasing, looks in its database, sees what a ton of people previously typed with similar phrasing (because OpenAI prompted a ton of people to try), and pumps out the same format. It’s a result similar to what a first-year programming student could produce.
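The pattern-matching behavior described here can be mimicked in a few lines of naive code. This is a toy sketch — obviously nothing like how GPT-4 works internally — but it produces the same surface result, including the nonsensical answers to the reversed puzzle, because it matches the template without understanding it:

```python
import re

def solve_party_puzzle(prompt):
    """Naively answer the 'jumping bean' puzzle by template matching:
    grab whatever follows 'party:' and 'jumping bean:', then jam the
    bean into the party delimiters -- with zero understanding of
    whether the combination makes any sense."""
    party = re.search(r"party:\s*(\S+)", prompt)
    bean = re.search(r"jumping bean:\s*(\S+)", prompt)
    if not party or not bean:
        return None
    delims, b = party.group(1), bean.group(1)
    if len(delims) == 2:  # looks like a bracket pair, so nest the bean
        return delims[0] + b + delims[1]
    # degenerate case: the "party" isn't a container, but match anyway
    return delims + b

prompt = ("This is a party: {}\n"
          "This is a jumping bean: B\n"
          "The jumping bean wants to go to the party.")
print(solve_party_puzzle(prompt))  # {B}
```

Feed it the reversed puzzle (party: B, bean: {}) and it cheerfully returns B{} — the same confidently wrong answer the article describes, for the same reason: the template fits, so an answer comes out.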

Major Limitations

The silly example above exposes tremendous limitations in current AI systems. They work great if you ask for something simple and predictable, but fall apart when you ask for something only slightly more complex, like trying to get an image generator to produce the image you wanted from a simple four-sentence paragraph. As the industry itself admits, there is a lot of work to be done even as advancements are being made.

The problem? The whole AI experiment is ludicrously expensive, and the costs accelerate well beyond the advancements in utility. OpenAI—the current leader in LLMs—is on track to lose $5 billion this year, representing half of its total capital investment. The losses only expand as the company signs up more customers and as its models improve.

There is a surprising lack of viable applications for which this technology can be used. Attempts to implement it in substantive ways have backfired badly. Air Canada’s AI-assisted customer service gave away discounted airfare, and a Canadian court held that the company is liable for anything an AI assistant tells a customer. The legal profession is—piecemeal—being forbidden from using AI in court cases across the U.S. after a string of high-profile incidents of AI programs fabricating documents. Major demonstrations were later discovered to be heavily faked. Google’s new AI summary at the top of the search page takes roughly 10 times more energy to produce than the search itself and has near-zero end-user utility. Revenues in the AI space are almost exclusively concentrated in hardware, with little end-user money in sight. There’s also the shocking energy requirement needed to operate it all.

To make matters worse, further development will likely only get more expensive, not cheaper. The hardware industry is at the tail end of its advancement potential. Processor designers ran out of the clock-speed lever to pull nearly two decades ago, while single-thread performance peaked in 2015. Processor design has since been getting by mostly on increasing logic core counts via shrinking transistors, but that lever, too, is expected to be exhausted next year when the 2nm process comes online. What this means is that, starting as early as next year, AI can’t rely on hardware efficiency gains to close the cost gap, since we’re already close to the maximum theoretical limit without radically redesigning how processors work. New customers require new capacity, so every time another business signs on, the costs go up, making it questionable whether there will ever be a volume inflection point.
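The “no volume inflection point” claim boils down to a simple cost model: classic software has a large fixed cost but near-zero marginal cost, so per-customer cost collapses at scale, while an inference-heavy AI service carries a large marginal cost per customer that flat hardware efficiency cannot erode. A back-of-the-envelope sketch, with all dollar figures purely hypothetical:

```python
# Per-customer cost = amortized fixed cost + marginal (serving) cost.
# Classic software: huge fixed cost, near-zero marginal cost.
# AI service: similar fixed cost, but each customer adds real compute
# cost that stalled hardware gains can no longer shrink.
# All numbers below are invented for illustration.
def cost_per_customer(fixed_cost, marginal_cost, customers):
    return fixed_cost / customers + marginal_cost

for n in (1_000, 100_000, 10_000_000):
    software = cost_per_customer(50_000_000, 0.01, n)
    ai = cost_per_customer(50_000_000, 20.0, n)
    print(f"{n:>10,} customers: software ${software:,.2f}/yr, AI ${ai:,.2f}/yr")
```

Under these assumptions, the software cost per customer falls toward pennies as volume grows, while the AI service’s cost floors at its per-customer compute bill — so more customers mean proportionally more cost, not a path to profitability.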

With these revelations, a prudent businessman would cut his losses in the AI space. The rapidly expanding costs, along with the questionable utility, of the technology makes it look like a major money-losing enterprise.

Yet AI investments have only expanded. What is going on?

Big Tech Easy Money

What we’re seeing is a significant repercussion of the long easy-money era, which, despite the formal Fed interest rate hikes, is still ongoing. The tech industry in particular has been a major beneficiary of the easy-money phenomenon. Easy money has been going on for so long that entire industries, tech in particular, are built and designed around it. This is how food delivery apps, which have never posted a profit and are on track to lose an eye-watering $20 billion just in 2024, keep going. The tech industry will pile in billions to invest in questionable business plans just because it has the veneer of software somewhere in the background.

I’m seeing a lot of the same patterns in the AI boom as I saw years ago with the WeWork fiasco. Both attempt to sell solutions to mundane problems. Neither scales well to the customer base. Both, despite being formally capital-driven, are highly subject to variable operating costs that can’t easily be unwound. Both apply an extra layer of expense to do little more than the exact same thing as was done before.

Despite this, companies like Google and Microsoft are willing to pour tremendous amounts of resources into the project. The main reason is that, to them, the resources are relatively trivial. The major tech firms—flush with decades of cheap money—have enough cash on hand to outright buy the entire global AI industry. A $5 billion loss is a drop in the bucket for a company like Microsoft. The fear of missing out is greater than the cost of a few dollars from the war chest.

However, easy money has its limits. Estimates put the 2025 investment at $200 billion, which—even for juggernauts like Alphabet—isn’t chump change. Even this pales in comparison to some of the more ludicrous projections, like global AI revenues reaching $1.3 trillion by 2032. The easy money today doesn’t care where that revenue is supposed to come from. The easy money will, however, give out when reality hits and the revenues don’t show up. How much is the market willing to pay for what AI does? The recent wave of AI phones hasn’t exactly arrested the long-run decline in smartphone sales, for example.

At some point, investors will start asking why these major tech firms are blowing giant wads of cash on dead-end projects and not giving it back as dividends. Losses can’t be sustained indefinitely.

The big difference in the current easy-money wave is that the ones who feel the pain when the bust happens won’t be the usual suspects. Big players like Microsoft and Nvidia will still be around, but they’ll show lower profits as the AI hype dies down. They siphoned up the easy money, spent it on a prestige project, and will not face the repercussions of the failure. There likely won’t be a spectacular company collapse like we saw in the 2009 era; what we will see instead are substantial layoffs in the previously prestigious tech space, and the bust will litter the landscape with failed small startups. In fact, the layoffs have already started.

Of course, I could always be wrong on this. Maybe AI really is legitimate and there will be $1.3 trillion in consumer dollars chasing AI products and services in the next five years. Maybe AI will end up succeeding where 3D televisions, home delivery meal kits, and AR glasses have failed.

I am, however, not terribly optimistic. The tech industry is in the midst of an easy-money-fueled party. My proof? The last truly big piece of disruptive technology the world experienced—the iPhone—turned 17 not all that long ago. The tech industry has been chasing that next disruptive product ever since and has turned up nothing. Without the easy money, it wouldn’t have been able to keep it up for this long.
