Palantir Wants to Reinstate the Draft

Original link: https://reason.com/2026/04/20/this-big-tech-firm-wants-to-reinstate-the-draft/

Palantir, a data analytics company, recently released a 22-point manifesto laying out its vision of a technology-driven national revival. The document, a preview of a book by the company's CEO, argues that Silicon Valley owes a "moral debt" to the nation's defense and that tech companies should prioritize building "weapons" over consumer apps. It calls for restoring national service as a "universal duty" and criticizes what it sees as cultural "decadence." The manifesto has a strongly nationalist bent, arguing that tolerance should have limits and that some cultures are "dysfunctional," while at the same time demanding "grace" for the elite and shielding them from scrutiny. It dismisses debate over AI weapons as unproductive and pushes for expanded surveillance to fight crime. Critics point to the document's elitism, its contradictions, and its troubling implications for privacy and individual liberty, reading it as a suggestion that tech companies should operate at the state's will and that society should prize shared risk and cost above all, possibly through universal conscription. Its release has sparked debate over technology's role in national security and the balance between innovation and government control.

## Palantir and the Draft Debate: Summary

A recent Palantir proposal suggests the U.S. should reconsider the all-volunteer force in favor of a system in which "everyone shares in the risk and the cost" of war, possibly through a draft or mandatory civic service. This sparked debate on Hacker News, where commenters were largely skeptical. Many argued that a draft is fair only if those who advocate for war, particularly the rich and powerful, are also required to serve rather than being shielded by influence or privilege. Commenters raised concerns about historical inequities in conscription, such as during the Vietnam War, when disadvantaged groups bore a disproportionate share of the burden. Some stressed the need to make the *actual* costs of war public, arguing that honest accounting might deter unnecessary conflicts. Others questioned Palantir's motives, noting the company's ties to surveillance and its potential to profit from expanded military engagement. A recurring theme was the importance of a strong social contract: mandatory service is viable only if citizens believe the state genuinely serves *their* interests. Some proposed alternative mechanisms, such as congressional war referendums requiring broad public approval before military action.
Original article

"We should, as a society, seriously consider moving away from an all-volunteer force and only fight the next war if everyone shares in the risk and the cost," says military contractor and all-around surveillance-enabler Palantir.

The big data company—whose analysis tools help power everything from "predictive policing" in U.S. cities to military operations in Gaza—recently released a 22-point manifesto that's perhaps best described as bootlicking, though icky, elitist, and ultranationalistic would also do.

Palantir posted the document to X on Saturday, calling it a brief summary of The Technological Republic, a 2025 book by Palantir co-founder and CEO Alexander C. Karp and head of corporate and legal affairs Nicholas W. Zamiska. You can read the full manifesto here.

From the start, the company's view of the proper relationship between private entities and the government becomes clear.

"Silicon Valley owes a moral debt to the country that made its rise possible," it states. "The engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation."

A debt to the country that must be paid back by participating in national defense? That sure sounds like a suggestion that tech companies must do the state's bidding as a thank you for being allowed to exist and thrive—which would be an amazing distortion of how a liberal society should work. (It's hard not to see this through the lens of the Pentagon's recent dispute with Anthropic.)

Even the idea that tech companies owe "the country" as a whole—or individual Americans anything more than the goods and services we pay for—is a weird thing to suggest and a little too "collective good" for my liking.

Overall, the document drips with melodramatic language ("the tyranny of the apps"), conservative dog whistles (cultural "decadence"), and some jarring contradictions. For instance, we must have more tolerance for religious beliefs, it says—but also "resist the shallow temptation of a vacant and hollow pluralism."

There's a lot of bizarre shade thrown at other tech companies and/or people's satisfaction with them. "Is the iPhone our greatest creative if not crowning achievement as a civilization?" it asks at one point. "Free email is not enough," it says in another. ("The thesis is simple: Silicon Valley should stop building apps and start building weapons," commented one anonymous X user in response.)

Merely making products that people like and find useful is decadent, Palantir suggests. What tech companies should really be doing is delivering "security." And by security, we seem to be talking about robot weapons and a domestic police state. "Silicon Valley must play a role in addressing violent crime," it says in point 17.

And in point five: "The question is not whether A.I. weapons will be built; it is who will build them and for what purpose." Having "theatrical debates" about AI weapons is a waste of time, the manifesto declares.

Just as tech companies must be conscripted to serve national interests, so must individuals, suggests Palantir. "National service should be a universal duty," says point six.

But while us plebes must serve the country, heaven forbid we're rude to the elite. "We should show far more grace towards those who have subjected themselves to public life," it says. We should not "snicker" at Elon Musk. We should fight "the ruthless exposure of the private lives of public figures."

Such pleas for reverence, grace, and "tolerance for the complexities and contradictions of the human psyche" come into play when the manifesto mentions tech leaders and public officials. But while Palantir suggests it's wrong to act like billionaires should "simply stay in their lane of enriching themselves," it decries regular people who would "look to the political arena to nourish their soul and sense of self." (Stay in your lane, regular people!) 

Meanwhile, it suggests, we need to allow more "criticism and value judgements" when it comes to "middling" cultures and subcultures and we need more leeway to stereotype whole countries and cultures as inferior.

"Some cultures have produced vital advances; others remain dysfunctional and regressive," it says. "We, in America and more broadly the West, have for the past half century resisted defining national cultures in the name of inclusivity. But inclusion into what?"


In The News

Reese Witherspoon wants women to learn to use AI. In a new video, the actress tells us that the women in her book club weren't using artificial intelligence and she worries that women are going to get left behind.

If Palantir hadn't just released the worst manifesto of the year, I was going to spend the bulk of this newsletter ranting about this video and similar sentiments. You've been spared that.

Witherspoon has taken a lot of flak for this video. The criticisms are largely misguided—rooted in a generalized animosity toward AI. But I am amused and puzzled by this idea that everyone must "learn AI" in some general, amorphous way.

Specific AI tools might be useful for specific people or professions. But there's this burgeoning chorus that women need to learn AI. Moms need to learn AI. Journalists and educators and unicycle riders need to learn AI. And the message gets totally divorced from any particulars. Use AI for what? Which systems? Which tools? In what capacity?

It's a mantra devoid of real meaning, delivered with the utmost gravity. Hype disguised as hope for the future. I don't know quite what it means about our current state of AI acceptance or reality, but I find it fascinating.


On Substack

How much privacy do you expect? Writing at The Freeman, Naomi Brockwell details how our government gets around the Fourth Amendment to collect people's data:

How did we end up in a world where law enforcement can get access to the intimate details of our lives, with seemingly no guardrails, even though the Constitution was supposed to prevent exactly that?

The answer lies in a set of outdated legal tests and doctrines built decades ago…before the Internet, before smartphones, before our messages, movements, relationships, and private lives migrated online and were turned into data.

The most destructive of these is something called the third-party doctrine. And it's powered by a broader legal framework called the "reasonable expectation of privacy" test: a framework that was supposed to protect us, but has instead given the government an ever-expanding ability to access our lives without a warrant.

Together, these outdated legal tools have helped erase the protections that were supposed to shield us from arbitrary government intrusion, and opened the door to a potentially dystopian future of unchecked surveillance and control.


Read This Thread

AI knows you wrote that. "I think that people should probably assume that text of any significant length which they wrote will be reliably possible to attribute to them, some time very soon," suggests Kelsey Piper on X. The implications of that are actually pretty huge. Piper explains her reasoning for this prediction in this thread:


More Sex & Tech News

• Elon Musk is calling for "universal HIGH INCOME via checks issued by the Federal government."

• AI discourse is out of touch, suggest Jerusalem Demsas and Maibritt Henkel. Evidence suggests that "AI is largely a general-purpose digital advisor for everything from 'What is this rash on my leg?' to 'What's a healthy, cheap meal I can make for my family in under 30 minutes?'" Rather than serving primarily to kill jobs and education (as a lot of media would have you believe), "the most common use cases for AI seem practical and largely unobjectionable."

• Some sex workers take issue with the TV show Euphoria's portrayal of their jobs.

• Mark Meador, a commissioner at the Federal Trade Commission, is doom-mongering about social media based on bad interpretations of a recent study. Last summer, Reason's Jack Nicastro looked at what the study really found.

• Why it doesn't make sense to use terms like "addiction" when we're talking about social media.

• "Marriage bootcamp" and other bad ideas to promote family formation.

• The internet is being invaded by AI avatars for Trump.
