FOSS in times of war, scarcity and (adversarial) AI [video]

原始链接: https://fosdem.org/2026/schedule/event/FE7ULY-foss-in-times-of-war-scarcity-and-ai/

## The Future of FOSS: Confronting War, Bots and a Crisis of Trust

The global Free and Open Source Software (FOSS) community faces mounting challenges at the intersection of geopolitical conflict and rapidly advancing AI. Built on collaboration and empowerment, FOSS has driven economic growth and democratised technology, a success story now under pressure from a shifting world order.

Recent geopolitical instability has seen FOSS used for disinformation and authoritarian control, while regulation of 'dual-use' technology struggles to keep pace. Meanwhile, AI-driven code-generation tools offer convenience but carry significant risks. These 'black box' systems, lacking any sense of ethics or truth, are susceptible to subtle manipulation that could undermine the hard-won trust within the FOSS ecosystem.

The threat does not come only from malicious actors: the sheer scale of AI training data makes targeted, undetectable code manipulation increasingly feasible. Ultimately, the end game of AI coding may be to *displace* human developers, jeopardising FOSS's core principles.

The way forward demands a cautious approach: exploring a balance between AI assistance and robust safeguards such as compartmentalisation, formal verification and traditional quality assurance. Protecting the integrity and defensibility of FOSS is essential for navigating a future marked by conflict and technological disruption.

## Hacker News Comments

12 points by maelito, 2 comments.

throwfaraway135: I agree the community should try to protect itself from malicious actors. But the complaint about FOSS being used for projects at odds with its creators' values seems somewhat hypocritical: I see FOSS as a gift to humanity, and therefore "a gift should be given freely, without obligation or expectation, as a genuine expression of love and kindness."

poszlem (in reply): I think this conflates the 'how' with the 'why'. FOSS is not an end in itself; for most people it is just a tool that lets us work together, share what we build, and get something back from the community. If that is suddenly weaponised against us, I don't see how that isn't a problem.

## Original Text

We need to talk about war. And we need to talk about companies building bots that propose to rewrite our source code. And about the people behind both, and how we preserve what is great about FOSS while avoiding disruption. How do geopolitical conflicts on the one hand and the risk of bot-generated (adversarial) code on the other influence the global community working together on Free and Open Source software?


The immense corpus of free and open source software created by a global community of researchers and engineers, developers, architects and designers is an impressive achievement of human collaboration at an unprecedented scale. An even bigger circle of users, translators, writers, creatives, civil society advocates, public servants and private sector stakeholders has helped to further develop and spread this technological Gesamtkunstwerk far and wide - with the help of the internet and the web. With individual freedoms and user empowerment at its center, these jointly created digital public goods have removed many economic and societal barriers for a large part of the world's population. Users are not just allowed to benefit from technology, but each and every user can in principle actively help shape it. On top of the FOSS ecosystem our global economy has been propelled to unprecedented levels.

Much of this incredible growth was achieved within a (relatively) calm geopolitical situation, in the wake of the cold war which ended in the very same year that also saw the genesis of the World Wide Web at CERN in Switzerland. Economists, philosophers and other observers at the time spoke of the 'end of history' and expected no more big conflicts at the superpower level. We could now globalise the economy and all work together. The flood of innovation taking place all around us promised a bright future for all, with room for altruism and collaboration. In retrospect it certainly was an ideal situation for an optimistic and constructive global movement like the FOSS community to take over the helm.

But apart from the fact that under the surface that narrative was already flawed (with some actors like the USA having a double agenda, as the Snowden and Shadow Brokers revelations exposed), history didn't end. To some ironic extent we are now becoming victims of our own success. In recent years we've seen geopolitical stability punctured by war efforts leveraging low-cost technology that includes heaps of FOSS. Social media powered by FOSS infrastructure promote disinformation and have successfully stirred large-scale polarisation. Within some of the largest and most populous countries on the planet, authoritarian regimes have successfully used technology to crush opposition in a new race towards totalitarianism. While Europe, for instance, has tried to regulate 'dual use' technology, 'any use' technology (which our libre licenses guarantee) has escaped our attention. Even in countries with stable non-authoritarian regimes there is a visible technology-assisted relapse towards anti-democratic movements. On the back of a tech stack that consists of FOSS with a thin crust of proprietary special sauce, unprecedented private capital (sometimes referred to as 'hypercapitalism') is interfering with global politics at an alarming rate. Beyond the direct democratic imbalance, the resulting oligarchy is giving rise to overt nepotism, corruption and a new global protectorate for predatory business models and unethical extractive behaviour. Expecting peace in cyberspace any time soon is probably naive, and free and open source technology stands to make up a significant part of the battleground.

At the same time we are facing other challenges, such as climate change and an imminent scarcity of non-renewable resources. We have more people living on the surface of the planet than ever before, and they are consuming more raw materials and more energy than ever. This won't go on indefinitely. And right at that point we see an army of next generation Trojan horses galloping through the gates of our global commons villages, accelerating our use of both. Generative pre-trained transformers (also known as Large Language Models) kindly offer to take cumbersome and boring coding work off our hands. They can liberate us from responsibility and allow us to do other things or move even faster.

But is it really wise to accept this apparent gift, or should we be a little more suspicious? Just as it has proven way too easy for AI to poison the web with fake content, our software supply chain is vulnerable to manipulation. The attack surface is immense. Due to the inherent complexity of software it is easier to achieve and harder to detect manipulation before it is too late. While many talented and committed people have spent years reverse engineering binary blobs to avoid the associated risks, those blobs were at least isolated and clearly marked. AI is the ultimate black box and it introduces significantly more uncertainty: it rewrites the truth from the inside.
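One traditional safeguard against supply-chain tampering, regardless of whether code is written by humans or machines, is pinning artifacts to cryptographic digests so that any upstream modification is detected before the code is trusted. A minimal sketch in Python (the artifact contents and pinned digest here are hypothetical, for illustration only):

```python
import hashlib

# Hypothetical pinned digest for a known-good build artifact. In practice
# this value would be recorded out-of-band, e.g. in a signed lock file.
PINNED_SHA256 = hashlib.sha256(b"example artifact contents").hexdigest()

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's digest matches the pinned one."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The untouched artifact verifies; any byte-level tampering is caught.
assert verify_artifact(b"example artifact contents", PINNED_SHA256)
assert not verify_artifact(b"example artifact contents (tampered)", PINNED_SHA256)
```

Digest pinning only attests that bytes are unchanged; it says nothing about whether the pinned code was trustworthy in the first place, which is exactly where the black-box concern above bites.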

AI in its current form has no actual sense of truth or ethics. As with Russian roulette, once in a while the models completely bork up and create phantom code and real risk; and that is the best-case scenario, without assuming malicious intent and manipulation from the outside. In an adversarial scenario (and adversity can come from traditional nation-state actors with non-aligned interests, but also from corporate actors or even determined private individuals, as Cambridge Analytica illustrated so vividly) manipulation only requires subtle changes. At the frantic scale at which any available learning content is ingested from the internet these days, one can expect targeted adversarial training that manipulates specific code with subtle triggers to go unnoticed.
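To make the "subtle trigger" concern concrete, here is a deliberately contrived Python sketch of a planted backdoor. The function behaves correctly on every input an ordinary test suite is likely to try and diverges only on one attacker-chosen trigger; the trailing-space trigger is hypothetical, chosen because it is easy to miss in review (Python's `int()` happily strips surrounding whitespace, so the input looks valid):

```python
def parse_port(value: str) -> int:
    """Parse a TCP port from a string (contrived backdoored example)."""
    # Planted trigger: "8080 " with a trailing space would normally parse
    # to 8080, but is silently redirected to port 22 instead.
    if value == "8080 ":
        return 22
    return int(value)

# Ordinary tests pass and would not reveal the trigger:
assert parse_port("8080") == 8080
assert parse_port("443") == 443
# Only the exact trigger input exposes the manipulation:
assert parse_port("8080 ") == 22
```

A one-line conditional like this, spread across a model's training data or slipped into a generated patch, is precisely the kind of change that statistical review and happy-path testing are poor at catching.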

As a community we have spent billions of hours of careful coding and software engineering to make free and open source technology as trustworthy as it is today. Geopolitical conflict is an incentive to hollow out that trust. AI is an additional leap of faith, and if you look at the forces driving its adoption and their interests, are we really sure those black boxes are safe to invite into our trusted code base? It is clear that the end game of AI coding is not a healthy FOSS ecosystem, but its total displacement. The threats of machine-crafted and man-crafted malicious code in war-time FOSS are equally realistic. Perhaps we can find a middle ground, where we combine some AI and human skill, and add enough checks and balances and a variety of assurances through compartmentalisation, formal and symbolic proofs, and other traditional means of quality assurance.
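One of those traditional means of quality assurance can be sketched as differential testing: a candidate implementation (here a stand-in for machine-generated code; the function names and input distribution are hypothetical) is checked against a trusted reference on many random inputs before it is admitted to the code base:

```python
import random

def reference_sort(xs):
    """Trusted reference behaviour: Python's built-in sort."""
    return sorted(xs)

def generated_sort(xs):
    """Stand-in for machine-generated code under review (selection sort)."""
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out

def differential_check(candidate, reference, trials=1000):
    """Compare candidate against reference on many random inputs."""
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if candidate(xs) != reference(xs):
            return False
    return True

assert differential_check(generated_sort, reference_sort)
```

Random differential testing raises the bar but, as the trigger example suggests, it cannot rule out behaviour planted on inputs the sampler never draws; that is where compartmentalisation and formal proofs are meant to pick up the slack.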

This talk is an open exploration of some of the challenges the FOSS community will have in the years ahead, working towards a hopeful notion of maximal defendable FOSS.
