The Influentists: AI hype without proof

Original link: https://carette.xyz/posts/influentists/

A recent viral tweet from Google's Jaana Dogan (Rakyll) sparked debate about AI dramatically accelerating software development, initially implying that a complex project could be built in hours from nothing but a problem description. However, subsequent clarifications showed that this "success" relied heavily on Rakyll's pre-existing architectural work and deep domain expertise, and that the result was merely a proof of concept rather than a production-ready solution. The incident highlights a growing trend the author calls "The Influentists": influential figures who prioritize hype over solid evidence. These people rely on "trust me" rhetoric, lack reproducible results, and exploit strategic ambiguity, typically issuing clarifying statements afterward. Similar claims from Microsoft and Anthropic about AI rewriting codebases or achieving AGI fall into the same category. The author argues that this "hype first, truth later" approach creates unrealistic expectations, especially for junior developers, and distracts from a focus on verifiable results. To counter unchecked influence and foster a more grounded understanding of AI capabilities, a shift toward prioritizing evidence and reproducibility is essential.

A Hacker News discussion centered on skepticism toward current AI hype, particularly claims of transformative productivity gains made without substantive evidence. The original article (from carette.xyz, titled "The Influentists") sparked debate over whether AI's benefits are overstated. Users shared experiences of AI-generated code and ideas often being suboptimal, especially in specialized domains such as Spark, where performance differences translate into significant cost changes. One user emphasized the critical role of *existing* expertise in spotting and correcting flawed AI suggestions. Another commenter noted that the initial excitement stemmed from a domain expert successfully building a proof of concept (PoC), while acknowledging the lack of broader evidence. A final comment linked to an article on the futility of "debunking" hype, comparing it to arguing with advertisements. Overall, the conversation reflects a growing focus within the AI space on realistic expectations and demonstrable results rather than inflated promises.

Original text

Last week, the developer community was busy discussing a single tweet:

The author is Jaana Dogan (known as Rakyll), a highly respected figure in the Google ecosystem, in the open-source world, and in my heart (thank you Rakyll for your great Go blog posts).

At first glance, the tweet suggests an enormous shift in the software industry: the ability to build in just one hour what previously required weeks or months for a team of software engineers, using just the description of the problem. The tweet was overly dramatic in my opinion, but impressive nonetheless!

The post triggered an immediate wave of "doom-posting," with many fearing for the future of software engineering (as has happened every week for the past year now). However, as the conversation accumulated replies and quotes on social networks, Rakyll released a follow-up thread to provide context:

This response thread revealed a story far less miraculous than the original tweet suggested. Let’s analyze it.

Crucially, the foundational “thinking” had already been performed by Rakyll herself, who guided the AI using architectural concepts (honed over several weeks or months of prior effort) rather than the AI thinking and inventing the “product” from scratch.
Furthermore, the resulting project was strictly a proof-of-concept that falls far short of a production-ready system capable of managing real-world complexity.
And finally, this success hinged on Rakyll's implicit domain knowledge and deep expertise. The last point is often (strategically?) omitted from these "magic" viral demonstrations in order to make the tool appear far more autonomous than it truly is.

Hmm. Now, this is far less exciting…

Under influence #

This pattern of “hype first and context later” is actually part of a growing trend.

I call the individuals participating in that trend "The Influentists". These people are members of a scientific or technical community who leverage their large audiences to propagate claims that are, at best, unproven and, at worst, intentionally misleading.

But how can we spot them?

I personally identify these "Influentists" by three traits that characterize their public discourse.
The first is a reliance on "trust-me-bro" culture, where anecdotal experiences are framed as universal, objective truths to generate hype. This sentiment is perfectly captured by the "I'm not joking and this isn't funny" tone of Rakyll's original tweet, and also by the dramatic "I've never felt that much behind as a programmer" from Andrej Karpathy's tweet. The second is an absence of reproducible proof: these individuals rarely share the code, data, or methodology behind their viral "wins", an omission made easier than ever in the current LLM era. And the third is strategic ambiguity: they word their claims with enough vagueness to pivot toward a "clarification" if the technical community challenges their accuracy.

A Growing Pattern #

Rakyll is far from alone. We see this “hype-first” approach across major AI firms like Anthropic, OpenAI, or Microsoft.

Consider Galen Hunt, a Distinguished Engineer at Microsoft. He recently made waves by claiming a goal to rewrite Microsoft’s massive C/C++ codebases into Rust by 2030 using AI.

When the industry pointed out the near-impossible complexity of this task and asked for clarity about popular, critical products like Microsoft Windows, he was forced to clarify that it was only a "research project".

Similarly, engineers from Anthropic and OpenAI often post teasers about "AGI being achieved internally", only to release, months later, models that disappoint the crowd.

Likewise, many other companies lie about what they are solving or intend to solve.

The Cost of Unchecked Influence #

When leaders at major labs propagate these hype-based results, it can create a "technical debt of expectations" for the rest of us. Junior developers see these viral threads and feel they are failing because they can't reproduce a year of work in an hour, not realizing the "magic" was actually a highly curated prototype guided by a decade of hidden expertise.

We must stop granting automatic authority to those who rely on hype, or vibes, rather than evidence.
If a tool or methodology were truly as revolutionary as claimed, then it wouldn’t need a viral thread to prove its worth because the results would speak for themselves.

The tech community must shift its admiration back toward reproducible results and away from this “trust-me-bro” culture.
