What AI Is Really For

Original link: https://www.chrbutler.com/what-ai-is-really-for

## AI: Hype, Bubble, and the Power Grab

After three years immersed in AI, the author concludes that it is a useful but badly overhyped technology, likely headed for a catastrophic bubble burst. While small-scale applications show promise (information synthesis, for example), attempts to automate workflows wholesale or replace entire functions tend to be costly and ineffective. The author, a designer and co-founder of an AI-dependent venture, notes that AI struggles to replicate existing creative work and remains less efficient than human professionals in areas like design and UI.

The core problem is not AI's potential but the unsustainable financial investment driven by the pursuit of artificial general intelligence (AGI). The largest companies have poured in enormous sums, forming a mutually dependent system with no viable monetization model, much like the dot-com bubble.

Beyond the financial risk, the author worries about AI's effect on truth and social trust, a concern amplified by its ability to generate convincing falsehoods. The deeper worry, though, is that the hype masks a consolidation of land, energy, and water through massive datacenter construction, effectively creating privately owned, infrastructure-rich entities with enormous political power that may escape democratic governance. This "Privatism" represents a fundamental shift in power, whether or not AI delivers on its grand promises.

## AI: Beyond the Hype (Hacker News Discussion Summary)

An article from chrbutler.com on AI's real purpose and value prompted discussion on Hacker News. While artificial general intelligence (AGI) draws the hype, many commenters argued that AI's current strength lies in practical applications, especially in software engineering, where custom agents are improving productivity. A key point was the disconnect between what users are promised (task automation) and what drives investment (AGI and future profits). Commenters raised concerns about AI's tendency to present wrong information confidently, with accuracy estimates ranging from 70% to 95% depending on conditions. Several participants highlighted AI's potential to overcome language barriers, effectively lifting the "curse of Babel" through better machine translation, though others warned against losing cultural nuance and distinctive ways of communicating. More pessimistic voices argued that the AI boom is cover for acquiring critical resources (land, water, and energy), inflating GPU demand and enabling financial maneuvering. Ultimately, the discussion suggests AI is a powerful and evolving technology whose impact may prove more pragmatic than AGI's lofty goals.

## Original Text

After three years of immersion in AI, I have come to a relatively simple conclusion: it’s a useful technology that is very likely overhyped to the point of catastrophe.

The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe. This is a classic bubble scenario. We’ll all take a hit when the air is let out, and given the historic concentration of the market compared to previous bubbles, the hit will really hurt. The worst case scenario is that the people with the most money at stake in AI know it’s not what they say it is. If this is true, we get the bubble and fraud with compound motives. I have an idea about one of them that I’ll get to toward the end of this essay. But first, let’s start with the hype.

As a designer, I’ve found the promise of AI to be seriously overblown. In fact, most of the AI use cases in design tend to feel like straw men to me. I’ve often found myself watching a video about using AI “end to end” in design only to conclude that the process would never hold up in real work. This is usually because the process depicted assumes total control from end to end — the way it might work when creating, say, a demonstration project for a portfolio, or inventing a brand from scratch with only yourself as a decision-maker. But inserting generative AI in the midst of existing design systems rarely benefits anyone.

It can take enormous amounts of time to replicate existing imagery with prompt engineering, only to have your tool of choice hiccup every now and again or just miss some specific aspect of what a person had created previously. I can think of many examples from my own team’s client work: custom illustrative styles that are difficult to replicate, text-and-image layering that is impossible to replicate, direct connections between images and text that even the most explicit prompts fail to make. A similar problem happens with layout. Generative AI can help with ideating layout, but fails to deliver efficiently within existing design systems. Yes, there are plenty of AI tools that will generate a layout and offer one-click transport to Figma, where you nearly always have to rebuild it to integrate it properly with whatever was there beforehand. When it comes to layout and UI, every competent designer I know will produce a better page or screen faster doing it themselves than involving any AI tool. No caveats.

My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain; the larger the use case, the larger the expense. In most of the larger use cases I have observed — where AI is leveraged to automate entire workflows, capture end-to-end operational data, or replace an entire function — the outlay of work is equal to or greater than the savings. The time we think we’ll save by using AI tends to be spent on doing something else with AI.

(Before I continue, know also that I am a co-founder of a completely AI-dependent venture, Magnolia. Beyond the design-specific use cases I’ve described, I know what it means to build software that uses AI in a far more complex manner. The investment is enormous, and the maintenance — the effort required to maintain a level of quality and accuracy of output that can compete with general purpose AI tools like ChatGPT or even AI research tools like Perplexity — is even more so. This directly supports my argument because the only reason to even create such a venture is to capitalize on the promise of AI and the normalization of “knowledge work” around it. That may be too steep a hill to climb.)
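
To make that maintenance burden concrete, here is a minimal sketch (not Magnolia's actual code) of the kind of output-quality regression harness an AI-dependent product ends up needing. The golden cases, model name, and pass criterion are illustrative assumptions; only the OpenAI-style chat completions call reflects a real API.

```python
# A toy regression harness: rerun a curated set of "golden" prompts after
# every model upgrade or prompt tweak, and block the release if quality dips.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# (prompt, substring any acceptable answer must contain) -- illustrative only
GOLDEN_CASES = [
    ("Summarize in one sentence: revenue rose 12% while churn fell to 3%.", "12%"),
    ("What year did the dot-com bubble burst? Answer with the year only.", "2000"),
]

def pass_rate(model: str = "gpt-4o-mini") -> float:
    """Return the fraction of golden cases the model still answers acceptably."""
    passed = 0
    for prompt, must_contain in GOLDEN_CASES:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        text = reply.choices[0].message.content or ""
        passed += must_contain in text
    return passed / len(GOLDEN_CASES)

if __name__ == "__main__":
    print(f"pass rate: {pass_rate():.0%}")
```

Multiply this by hundreds of cases, rubric-based grading instead of substring checks, and a model landscape that shifts monthly, and the cost of merely standing still becomes clear.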

Much has already been made of the MIT study noting the preponderance of AI initiative failures in corporate environments. Those that expect a uniform application of AI and a uniform, generalized ROI see failure, while those who identify isolated applications with specific targets experience success. The former tends to be a reaction to hype, the latter an outworking of real understanding. There are dozens of small-scale applications that have large-scale effects, most of which I’d categorize as information synthesis — search, summarization, analysis. Magnolia (and any other new, AI-focused venture) fits right in there. But the sweeping, work-wide transformation? That’s the part that doesn’t hold up.
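
As one hedged illustration of that small-scale, information-synthesis category, here is a minimal semantic-search sketch. The documents and query are invented, and the embedding model name assumes an OpenAI-style embeddings endpoint; any embedding model would serve the same role.

```python
# Toy semantic search: embed a handful of documents once, embed the query,
# and return the document whose vector is most similar to the query's.
import math
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Q3 design review notes: the navigation redesign was approved.",
    "Brand guidelines covering illustration style, color, and typography.",
    "Incident postmortem: checkout latency spike traced to a cache miss.",
]

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query: str) -> str:
    """Return the stored document most relevant to the query."""
    doc_vectors = embed(DOCS)
    query_vector = embed([query])[0]
    scores = [cosine(query_vector, d) for d in doc_vectors]
    return DOCS[max(range(len(DOCS)), key=lambda i: scores[i])]

print(search("what are our rules for illustration?"))
```

Nothing here transforms anyone's job; it just makes a pile of documents slightly more searchable, which is exactly the scale at which the gains show up.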

Of course, we should expect AI to increase its usefulness over time as adoption calibrates — this is the pattern with any new technology. But calibration doesn’t mean indefinite growth, and this is where the financial picture becomes troubling. The top seven companies by market value all have mutually dependent investments in AI and one another. The more money that gets injected into this combined venture, the more everyone expects to extract. But there has yet to be a viable model to monetize AI that gets anywhere close to the desired market capitalization. This is Ed Zitron’s whole thing.

This is also the same reckoning that a dot-com inflated market faced twenty-five years ago. It was obvious that we had a useful technology on our hands, but it wasn’t obvious to enough people that it wasn’t a magic money machine.

Looking back, another product hype cycle that came right afterward sums this bubble problem up in a much shorter timescale: The Segway was hyped by venture capitalists as a technology that would change how cities were built. People actually said that. But when everyone saw that it was a scooter, that suddenly sounded awfully silly. Today, we hear that AI will change how all work is done by everyone — a much broader pronouncement than even the design of all cities. I think it’s likely to come closer than the Segway to delivering on its hype, but when the hype is that grand, the delta between a scooter and a normal technology is, at this point, a trillion-dollar gap.

The AI bubble, as measured by the state of the financial market, is much, much bigger than any we’ve seen before. Even Sam Altman has acknowledged we’re likely in a bubble, shrugging it off like a billion-dollar miscalculation on a trillion-dollar balance sheet. The valuation numbers he is immersed in are extraordinarily large — and speculative — so, no wonder, but the market is dangerously imbalanced in its dependence upon them. A sudden burst or even a slower deflation will be a very big deal, and, unfortunately, we should expect it — even if AI doesn’t fail as a venture completely.

Meanwhile, generative AI presents a few other broader challenges to the integrity of our society. First is to truth. We’ve already seen how internet technologies can be used to manipulate a population’s understanding of reality. The last ten years have practically been defined by filter bubbles, alternative facts, and weaponized social media — without AI. AI can do all of that better, faster, and with more precision. A culture-wide degradation of trust in our major global networks leaves us vulnerable to lies of all kinds from all kinds of sources, with no standard by which to vet the things we see, hear, or read.

I really don’t like this, and to my mind, it represents, on its own, a good reason to back off from AI. Society is more than just a market. It’s a fabric of minds, all of which are vulnerable to losing coherence in the midst of AI output. Given the stated purpose of AI, such a thing would be collateral damage, you know, like testing a nuclear bomb in the town square.

But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?

There is a vast chasm between what we, the users, and they, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust themselves.

And yet, as much as I doubt what we are sold in AI, I feel the same about what they — the billionaire investors in an AI future — are sold as well. I doubt the AGI promise, not just because we keep moving the goalposts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise, and measurable pursuit. Unlike previous audacious scientific goals, like mapping the human genome, AGI has never been defined precisely enough to achieve. To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.

Again, I think that AI is probably just a normal technology, riding a normal hype wave.

And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.

I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. The deals made to develop this kind of property are political — they affect cities and states more than just about any other business run within their borders.
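
To put "outburn cities" in rough numbers, a back-of-envelope comparison (the 1 GW campus size and the roughly 1.2 kW average continuous draw of a US household are my assumptions, not figures from the essay):

$$
\frac{1\ \mathrm{GW}}{1.2\ \mathrm{kW\ per\ household}} \approx 8.3 \times 10^{5}\ \text{households}
$$

That is on the order of the residential electrical load of a metro area of around two million people, running around the clock.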

AI companies say they need datacenters to deliver on their ground-level, day-to-day user promises while simultaneously claiming they’re nearly at AGI. That’s quite a contradiction. A datacenter takes years to construct. How will today’s plans ever enable a company like OpenAI to catch up with what they already claim is a computational deficit that demands more datacenters? And yet, these deals are made. There’s a logic hole here that’s easily filled by the possibility that AI is a fitting front for consolidation of resources and power. The value of AI can drop to nothing, but owning the land and the flow of water through it won’t.

When the list of people who own this property is as short as it is, you have a very peculiar imbalance of power that almost creates an independent nation within a nation. Globalism eroded borders by crossing them; this new thing — this Privatism — erodes them from within. Remember, datacenters are built on large pieces of land, drawing more heavily from existing infrastructure and natural resources than they give back to the immediately surrounding community, so much so that they often measure up to municipal status without having the populace or governance that connects actual cities and towns to the systems that comprise our country.

When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.

The scale has already been tipped. I don’t worry about the end of work so much as I worry about what comes after — when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments. I know, you can picture me wildly gesticulating at my crazy board of pins and string, but I’m really just following the money and the power to their logical conclusion.

Maybe AI will do everything humans do. Maybe it will usher in a new society defined by something other than the balancing of labor units and wealth units. Maybe AGI — these days defined as a general intelligence that exceeds humankind in all contexts — will emerge and “justify” all of this. Maybe.

I’m more than open to being wrong; I’d prefer it. But I’ve been watching technology long enough to know that when something requires this much money, this much hype, and this many contradictions to explain itself, it’s worth asking what else might be going on. The market concentration and incestuous investment shell game is real. The infrastructure is real. The land deals are real. The resulting shifts in power are real. Whether AI lives up to its promise or not, those things won’t go away, and sooner rather than later we will find ourselves citizens of a very new kind of place, one we may not like.


2025-11-18  
Filed under: Essays