AI and the Future of American Politics

Original link: https://www.schneier.com/blog/archives/2025/10/ai-and-the-future-of-american-politics.html

## AI's Growing Influence on American Elections

AI is expected to significantly shape the 2026 U.S. elections, judging from warning signs already seen in elections worldwide. Unlike in 2024, AI's role now extends well beyond disinformation: it is being actively integrated into every layer of campaigning, including organizing and civic engagement, largely without regulation.

Professional campaign teams are using AI to improve efficiency, automating tasks such as fundraising and ad targeting. Organizers are exploring its potential to facilitate deliberation and build movements, while citizens are using it for self-expression and information sharing. A partisan divide is emerging: Republicans are investing heavily in AI for ad production and strategy, while Democrats face a funding gap, even as progressive-aligned startups innovate in areas like fundraising.

The most worrying aspect, however, is not deepfakes but AI's potential for surveillance and manipulation, especially at the government level. The absence of regulation leaves experimentation unconstrained, and powerful AI lobbyists actively oppose oversight, raising serious concerns about safeguarding the democratic process. Ultimately, AI's impact in 2026 will depend on how these different actors wield the technology, creating an unpredictable and potentially destabilizing electoral environment.

## AI and American Politics: Discussion Summary

A Hacker News discussion of Bruce Schneier's essay on AI's potential impact on American politics reveals a nuanced picture. While commenters worry about AI-driven disinformation and manipulation, many expect its effects to be modest.

The prevailing view is that AI will mainly *amplify* existing political tactics, making current strategies (such as targeted advertising to swing voters) more efficient rather than fundamentally changing the landscape. Many voters are already firmly committed to a party or decide based on core issues, leaving them less susceptible to AI influence. Some even argue that AI could *reward* authenticity, favoring candidates who engage in less polished, more down-to-earth interactions.

Concerns remain, however, about AI-powered troll armies and the possibility of manipulating training data to bias AI models into vehicles for pervasive propaganda. A key point raised: the real problem is not AI itself but media consolidation and the influence of money in politics, both of which AI may exacerbate. One researcher proposed a novel solution: developing AI with an intrinsic "pro-human conscience" rather than relying on external controls. Ultimately, the discussion reflects skepticism that AI will revolutionize politics, viewing it instead as an amplifier of existing trends and power structures.

Original text

Two years ago, Americans anxious about the forthcoming 2024 presidential election were considering the malevolent force of an election influencer: artificial intelligence. Over the past several years, we have seen plenty of warning signs from elections worldwide demonstrating how AI can be used to propagate misinformation and alter the political landscape, whether by trolls on social media, foreign influencers, or even a street magician. AI is poised to play a more volatile role than ever before in America’s next federal election in 2026. We can already see how different groups of political actors are approaching AI. Professional campaigners are using AI to accelerate the traditional tactics of electioneering; organizers are using it to reinvent how movements are built; and citizens are using it both to express themselves and amplify their side’s messaging. Because there are so few rules, and so little prospect of regulatory action, around AI’s role in politics, there is no oversight of these activities, and no safeguards against the dramatic potential impacts for our democracy.

The Campaigners

Campaigners—messengers, ad buyers, fundraisers, and strategists—are focused on efficiency and optimization. To them, AI is a way to augment or even replace expensive humans who traditionally perform tasks like personalizing emails, texting donation solicitations, and deciding what platforms and audiences to target.

This is an incremental evolution of the computerization of campaigning that has been underway for decades. For example, the progressive campaign infrastructure group Tech for Campaigns claims it used AI in the 2024 cycle to reduce the time spent drafting fundraising solicitations by one-third. If AI is working well here, you won’t notice the difference between an annoying campaign solicitation written by a human staffer and an annoying one written by AI.

But AI is scaling these capabilities, which is likely to make them even more ubiquitous. This will make the biggest difference for challengers to incumbents in safe seats, who see AI as both a tactically useful tool and an attention-grabbing way to get their race into the headlines. Jason Palmer, the little-known Democratic primary challenger to Joe Biden, successfully won the American Samoa primary while extensively leveraging AI avatars for campaigning.

Such tactics were sometimes deployed as publicity stunts in the 2024 cycle; they were firsts that got attention. Pennsylvania Democratic Congressional candidate Shamaine Daniels became the first to use a conversational AI robocaller in 2023. Two long-shot challengers to Rep. Don Beyer used an AI avatar to represent the incumbent in a live debate last October after he declined to participate. In 2026, voters who have seen years of the official White House X account posting deepfaked memes of Donald Trump will be desensitized to the use of AI in political communications.

Strategists are also turning to AI to interpret public opinion data and provide more fine-grained insight into the perspective of different voters. This might sound like AIs replacing people in opinion polls, but it is really a continuation of the evolution of political polling into a data-driven science over the last several decades.

A recent survey by the American Association of Political Consultants found that a majority of their members’ firms already use AI regularly in their work, and more than 40 percent believe it will “fundamentally transform” the future of their profession. If these emerging AI tools become popular in the midterms, it won’t just be a few candidates from the tightest national races texting you three times a day. It may also be the member of Congress in the safe district next to you, and your state representative, and your school board members.

The development and use of AI in campaigning is different depending on what side of the aisle you look at. On the Republican side, Push Digital Group is going “all in” on a new AI initiative, using the technology to create hundreds of ad variants for their clients automatically, as well as assisting with strategy, targeting, and data analysis. On the other side, the National Democratic Training Committee recently released a playbook for using AI. Quiller is building an AI-powered fundraising platform aimed at drastically reducing the time campaigns spend producing emails and texts. Progressive-aligned startups Chorus AI and BattlegroundAI are offering AI tools for automatically generating ads for use on social media and other digital platforms. DonorAtlas automates data collection on potential donors, and RivalMind AI focuses on political research and strategy, automating the production of candidate dossiers.

For now, there seems to be an investment gap between Democratic- and Republican-aligned technology innovators. Progressive venture fund Higher Ground Labs boasts $50 million in deployed investments since 2017 and a significant focus on AI. Republican-aligned counterparts operate on a much smaller scale. Startup Caucus has announced one investment—of $50,000—since 2022. The Center for Campaign Innovation funds research projects and events, not companies. This echoes a longstanding gap in campaign technology between Democratic- and Republican-aligned fundraising platforms ActBlue and WinRed, which has landed the former in Republicans’ political crosshairs.

Of course, not all campaign technology innovations will be visible. In 2016, the Trump campaign vocally eschewed using data to drive campaign strategy and appeared to be falling way behind on ad spending, but was—we learned in retrospect—actually leaning heavily into digital advertising and making use of new controversial mechanisms for accessing and exploiting voters’ social media data with vendor Cambridge Analytica. The most impactful uses of AI in the 2026 midterms may not be known until 2027 or beyond.

The Organizers

Beyond the realm of political consultants driving ad buys and fundraising appeals, organizers are using AI in ways that feel more radically new.

The hypothetical potential of AI to drive political movements was illustrated in 2022 when a Danish artist collective used an AI model to found a political party, the Synthetic Party, and generate its policy goals. This was more of an art project than a popular movement, but it demonstrated that AIs—synthesizing the expressions and policy interests of humans—can formulate a political platform. In 2025, Denmark hosted a “summit” of eight such AI political agents where attendees could witness “continuously orchestrate[d] algorithmic micro-assemblies, spontaneous deliberations, and impromptu policy-making” by the participating AIs.

The more viable version of this concept lies in the use of AIs to facilitate deliberation. AIs are being used to help legislators collect input from constituents and to hold large-scale citizen assemblies. This kind of AI-driven “sensemaking” may play a powerful role in the future of public policy. Some research has suggested that AI can be as or more effective than humans in helping people find common ground on controversial policy issues.

Another movement for “Public AI” is focused on wresting AI from the hands of corporations to put people, through their governments, in control. Civic technologists in national governments from Singapore, Japan, Sweden, and Switzerland are building their own alternatives to Big Tech AI models, for use in public administration and distribution as a public good.

Labor organizers have a particularly interesting relationship to AI. At the same time that they are galvanizing mass resistance against the replacement or endangerment of human workers by AI, many are racing to leverage the technology in their own work to build power.

Some entrepreneurial organizers have used AI in the past few years as tools for activating, connecting, answering questions for, and providing guidance to their members. In the UK, the Centre for Responsible Union AI studies and promotes the use of AI by unions; they’ve published several case studies. The UK Public and Commercial Services Union has used AI to help their reps simulate recruitment conversations before going into the field. The Belgian union ACV-CVS has used AI to sort hundreds of emails per day from members to help them respond more efficiently. Software companies such as Quorum are increasingly offering AI-driven products to cater to the needs of organizers and grassroots campaigns.

But unions have also leveraged AI for its symbolic power. In the U.S., the Screen Actors Guild held up the specter of AI displacement of creative labor to attract public attention and sympathy, and the ETUC (the European confederation of trade unions) developed a policy platform for responding to AI.

Finally, some union organizers have leveraged AI in more provocative ways. Some have applied it to hacking the “bossware” AI to subvert the exploitative intent or disrupt the anti-union practices of their managers.

The Citizens

Many of the tasks we’ve talked about so far are familiar use cases to anyone working in office and management settings: writing emails, providing user (or voter, or member) support, doing research.

But even mundane tasks, when automated at scale and targeted at specific ends, can be pernicious. AI is not neutral. It can be applied by many actors for many purposes. In the hands of the most numerous and diverse actors in a democracy—the citizens—that has profound implications.

Conservative activists in Georgia and Florida have used a tool named EagleAI to automate challenging voter registration en masse (although the tool’s creator later denied that it uses AI). In a nonpartisan electoral management context with access to accurate data sources, such automated review of electoral registrations might be useful and effective. In this hyperpartisan context, AI merely serves to amplify the proclivities of activists at the extreme of their movements. This trend will continue unabated in 2026.

Of course, citizens can use AI to safeguard the integrity of elections. In Ghana’s 2024 presidential election, civic organizations used an AI tool to automatically detect and mitigate electoral disinformation spread on social media. The same year, Kenyan protesters developed specialized chatbots to distribute information about a controversial finance bill in Parliament and instances of government corruption.

So far, the biggest way Americans have leveraged AI in politics is in self-expression. About ten million Americans have used the chatbot Resistbot to help draft and send messages to their elected leaders. It’s hard to find statistics on how widely adopted tools like this are, but researchers have estimated that, as of 2024, about one in five consumer complaints to the U.S. Consumer Financial Protection Bureau was written with the assistance of AI.

OpenAI operates security programs to disrupt foreign influence operations and maintains restrictions on political use in its terms of service, but this is hardly sufficient to deter use of AI technologies for whatever purpose. And widely available free models give anyone the ability to attempt this on their own.

But this could change. The most ominous sign of AI’s potential to disrupt elections is not the deepfakes and misinformation. Rather, it may be the use of AI by the Trump administration to surveil and punish political speech on social media and other online platforms. The scalability and sophistication of AI tools give governments with authoritarian intent unprecedented power to police and selectively limit political speech.

What About the Midterms?

These examples illustrate AI’s pluripotent role as a force multiplier. The same technology used by different actors—campaigners, organizers, citizens, and governments—leads to wildly different impacts. We can’t know for sure what the net result will be. In the end, it will be the interactions and intersections of these uses that matter, and their unstable dynamics will make future elections even more unpredictable than in the past.

For now, the decisions of how and when to use AI lie largely with individuals and the political entities they lead. Whether or not you personally trust AI to write an email for you or make a decision about you hardly matters. If a campaign, an interest group, or a fellow citizen trusts it for that purpose, they are free to use it.

It seems unlikely that Congress or the Trump administration will put guardrails around the use of AI in politics. AI companies have rapidly emerged as among the biggest lobbyists in Washington, reportedly dumping $100 million toward preventing regulation, with a focus on influencing candidate behavior before the midterm elections. The Trump administration seems open and responsive to their appeals.

The ultimate effect of AI on the midterms will largely depend on the experimentation happening now. Candidates and organizations across the political spectrum have ample opportunity—but a ticking clock—to find effective ways to use the technology. Those that do will have little to stop them from exploiting it.

This essay was written with Nathan E. Sanders, and originally appeared in The American Prospect.

Posted on October 13, 2025 at 7:04 AM
