I'm Getting a Whiff of Iain Banks' Culture

原始链接: https://probablydance.com/2026/03/07/im-getting-a-whiff-of-iain-banks-culture/

Recent US foreign interventions, from the operations in Venezuela and Mexico to the escalating conflict with Iran, show an unsettling pattern: they succeed beyond what conventional advantages would predict. This invites a comparison to playing against a superhuman AI, not as an individual but as a nation. The feeling comes not from any obviously brilliant strategy but from everything working *slightly* better than it should: a persistent, subtle edge reminiscent of the skill AI has shown in games like Go and StarCraft. AI excels at keeping the full picture in view, executing precise micro-actions, and, crucially, staying ahead by tracking countless variables at once, something human intelligence struggles to do. This suggests AI is accelerating US military capability, as military leaders have acknowledged. Existing technology plays a part, but the shift feels qualitative, echoing the fictional AI-run Culture civilization, which can deploy overwhelming force with unnerving efficiency. We are in a brief window in which one country holds this advantage, but the gap is closing. The worry is not only the power of AI but also the rapid, unpredictable evolution of the technology and the possibility that humans become irrelevant to the outcomes, which demands urgent thought about how to navigate the transition.

This Hacker News discussion centers on an article arguing that recent geopolitical events, particularly the operations against Iran and Venezuela, hint at a Culture-like influence (a reference to Iain Banks' sci-fi series, in which superintelligent AIs run society). Most commenters disagree with the analogy, however. Many argue that the US actions described look more like a *dysfunctional* Culture, driven by self-interest and lacking moral restraint. Others point out that impressive intelligence gathering and operational execution predate advanced AI like ChatGPT, citing historical examples such as the Entebbe raid and the operation against Hezbollah's pagers. One key point raised is that AI can pursue goals relentlessly, beyond human endurance and analytical capacity, even in complex strategy games. Others criticize the article as speculative, based on "feel" rather than concrete evidence, arguing that the Iran operation was impulsive and poorly planned rather than AI-orchestrated.

Original Article

The US has been acting powerful recently, and it reminded me of this question: what does it feel like to fight against a powerful AI? Not for normal people, for whom there's no difference between competing against a strong human or a strong AI (you lose hard either way), but for the world's best humans. We got a sense of the answer before LLMs were a thing, when the frontier research labs were working on game RL:

Fighting against a powerful AI feels like you’re weirdly underpowered somehow. Everything the AI does just works slightly better than it should.

If you’re not a strong human player, the closest feeling is when you play a game with lots of randomness against a really strong player. It will appear as if that strong player just keeps on getting lucky somehow.

I’m getting a similar sense for the recent US foreign interventions and wars. They all seem to work slightly better than they should. It finally clicked for me when Dario Amodei said “This technology can radically accelerate what our military can do. I’ve talked to admirals, I’ve talked to generals, I’ve talked to combatant commanders who say this has revolutionized what we can do.”

The things I’m referring to are the raid that captured Maduro in Venezuela (Claude was used), the current war with Iran (Claude was used), the killing of a drug boss in Mexico (unclear if AI was used but US intelligence helped Mexico).

The commentators in the AlphaGo match with Lee Sedol didn’t know what to make of most games. The AI wasn’t doing anything obviously brilliant, there were lots of little fights all over the board where the outcome wasn’t quite clear, but they just all worked a little better for AlphaGo than expected. So gradually Lee Sedol’s position changed from “this is tough, hard to tell how this is going but at least I’m feeling good about these areas” to “hmm I’m struggling, maybe I’m a bit behind but it’s not clear” to suddenly “oh I lost”.

I don’t know Go, but I got a clearer sense from the StarCraft 2 matches. In some skirmishes the AI would take damage, in others the human would. But somehow it always felt like the human was in more trouble. In some fights the human clearly came out ahead but then mysteriously just one minute later the AI had a clear advantage. It was able to quickly recover and constantly put pressure on the human. It all looked very stressful, because even when you think you do well as a human, it works out a little less well than expected and whatever the AI does works a little better than expected.

And where have we seen this pattern before? In sci-fi of course. In particular I’m thinking of Iain Banks’ Culture, the ostensibly human civilization that’s actually run entirely by AIs. Alien civilizations keep on wanting to pick fights with them for reasons and keep on being surprised by how hard the harmless-seeming Culture can whoop your ass if you make it mad.

I always thought of the Culture as closest to the European Union: Seemingly harmless but if anyone ever picked a fight with them, they’d find out that the EU can get its act together very quickly and can very quickly stand up the strongest army in the world. But obviously the real EU has never come close to the Culture because nothing human ever comes close to the potential of AIs. It would be as if Russia picked a fight with Poland, gained ground for a week, feeling good, only to suddenly find all of its IT systems hacked and access to nuclear bombs revoked, bombs dropping on Moscow the next day and an army in Moscow another two days later. The Culture takes a week to get its act together and then whoops your ass so hard you don’t even know what’s happening.

But now I’m getting a whiff of the power of the Culture for the first time, and it’s from the US. Going into another country, kidnapping their leader and getting away with it is exactly the kind of overpowered move that the Culture would be able to pull off. Bombing cities all over Iran, knocking out the entire leadership within two days, while the air-defense systems supplied by China do absolutely nothing is another example. If this was a video game these would be strategies done by high level players, but they’re not supposed to work that well.

It would be foolish to think this is entirely due to AI. The US had a high-tech advantage for a while. Turns out the F-35 is actually good. But even a couple of years ago the US regularly messed up when it tried to operate with high precision. We saw in Iraq and Afghanistan that being overpowered doesn't work out as well in practice as it does in theory. So I think AI is the most likely candidate for the shift to "it worked better than it should have."

So how specifically do you get to a point where everything works slightly better than it should? We saw two different approaches in Go and StarCraft 2:

  • In Go the AI was having little fights all over the map, in a way that combined to a few extra points at the end. It would defend a little bit here, attack a little bit there. It was able to keep the overall picture in its head, not feeling the pressure to resolve things too early. (I haven’t played Go, but I know I get frustrated in strategy games if I have to deal with multiple fights in different parts of the map at once)
  • In StarCraft 2 we saw the same thing, but we also saw that the AI could have perfect micro when it counts, like playing with wounded stalkers in the frontline because it could get them out of danger just in time. Humans could also do that in theory but in practice you can’t quickly click perfectly like that.

So the two angles are “having a better high-level view” and “having better micro control.”

Another source of success for the Culture is that they’re over-prepared for fighting. (not for their first big war, but in later books) And this is also part of the story we hear in Iran. Normally there’s just too much going on in the world and you can’t possibly keep track of all of it. Famously the US had prior intelligence on 9/11 but didn’t really put the pieces together. (there’s a whole Wikipedia article about it which has phrases like “Rice listened but was unconvinced, having other priorities on which to focus.”) But AI has almost no limits on what it can keep track of. You can always spin up another agent. So when something important comes up, chances are that some AI was keeping track of it and can raise an alert. You’ll never miss opportunities just because you had other priorities to focus on.

So the third angle is: Being over-prepared because you can follow up on many more things at once.
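To make the "spin up another agent" idea concrete, here is a minimal sketch of what it could look like in code: one cheap watcher task per topic, each following its own thread of events and raising an alert when something important shows up. Everything here (the topic names, the feed, the notion of an "important" event) is hypothetical illustration, not anything from the article.

```python
import asyncio

async def watch(topic, feed, alerts):
    # Each agent follows a single topic indefinitely, so no thread of
    # events gets dropped for lack of attention.
    async for event in feed(topic):
        if event["important"]:
            alerts.append((topic, event["what"]))

async def fake_feed(topic):
    # Stand-in for a real event stream; only the last event matters here.
    for i in range(3):
        yield {"important": i == 2, "what": f"{topic}-{i}"}

async def main(topics):
    alerts = []
    # Spin up one agent per topic; more topics just means more agents.
    await asyncio.gather(*(watch(t, fake_feed, alerts) for t in topics))
    return alerts

print(asyncio.run(main(["ports", "flights", "finance"])))
```

The point of the sketch is the scaling property: a human analyst has a hard cap on how many of these loops they can run in their head at once, while adding another watcher here costs almost nothing.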

What does all of this mean for the world? It means we’re in a weird temporary phase where one country has control of a game-changing technology while others are not far behind (sadly not the EU. I’m thinking of China, especially with H200s). You get to play at a higher level, but only for a short time and only in specific ways. In a year others will have caught up, but by then you’ll have new capabilities that you didn’t have a year ago. If this was a game you’d saturate at some point (you just can’t play StarCraft that much better than the best humans), but in real life the game keeps on changing. New pieces keep on coming into play and the old pieces become irrelevant. You can’t do this for long before the humans become irrelevant to the outcomes, and then you’re fully in Culture territory. I personally wouldn’t mind living in the Culture, but it seems scary to rush towards it without a good plan for how we’ll survive the transition.

I don’t have a good angle for working on that plan, maybe others do. For now my contribution is just to point out that we seem to be in the early stages of overpowered AI, and to make people notice what that feels like.
