Mythos is the best cybersecurity news in a decade

Original link: https://sfstandard.com/opinion/2026/05/06/mythos-cybersecurity-ai/

Everything about the way Anthropic’s Mythos model was presented to the public suggested it was bad news for cybersecurity. The company itself intimated that Mythos was too dangerous to release to the public because of its ability to find and exploit software vulnerabilities; the news coverage suggested that government and private-sector officials alike were in a state of panic about the new capabilities; and when some online users apparently gained unauthorized access to the model, it was treated as a potentially catastrophic breach.

It’s a slightly strange story to watch unfold with such an apocalyptic tilt, given that from everything Anthropic says, it has created the most powerful tool for cyber defense that we’ve perhaps ever possessed. 

Undoubtedly, there are good reasons to monitor such tools carefully and to be thoughtful about who gets access to them and when. The ability to discover vulnerabilities in software and exploit them is the basis for almost every significant, sophisticated technical compromise, ranging from cyber-espionage to disruptive and destructive cyberattacks. So making that process faster, easier, and available to a wider range of people could, of course, mean more serious cyberattacks coming from more quarters.

But we were already witnessing more serious cyberattacks coming from more quarters, well before the development of Mythos. And, more to the point, we were grappling with an asymmetry in cybersecurity, wherein the accepted wisdom has long been that it’s easier to compromise software than it is to secure it. The gist of that asymmetry is that defenders have to find all the vulnerabilities in their code to make it secure, while attackers have to find and exploit only one vulnerability to launch an attack, and therefore the latter group will always have an advantage over the former. We assume that software released to the public will have vulnerabilities, that researchers and adversaries will find those vulnerabilities and report or exploit them, that they will then have to be patched, and that we will continue repeating that cycle in perpetuity.

AI tools like Mythos suggest a possible alternative: What if finding every vulnerability in a piece of software were just as fast and easy as finding a few of them, thanks to automation? What if those vulnerabilities could be comprehensively catalogued and patched prior to the release of software? What if attackers and defenders were so reliant on the same tools that neither had any advantage over the other when it came to finding vulnerabilities?

It would be one of the most radical — and promising — paradigm shifts in cybersecurity since the advent of public key cryptography.

The fact that it’s possible to envision a path to a world where cyber defense has the upper hand is nothing short of remarkable. That’s especially true at a moment when more countries are turning to offensive cyber operations to shore up the vulnerabilities in their digital critical infrastructure, accepting that they won’t be able to protect their networks and should instead try to dissuade adversaries from attacking them by threatening counter cyberattacks. For a few years, it has seemed like the governments with the greatest cyber capabilities — led by the U.S. and China — have been increasingly focused on compromising one another’s computer systems and preparing or positioning themselves for potential disruptive cyberattacks in the future. If it were possible to meaningfully secure our critical infrastructure with automated tools, rather than just threatening to attack everyone else’s in order to stop them from attacking us, it would be a safer, more stable status quo for everyone.

One fear is that as these AI tools continue to improve, there will always be a new model with the ability to find even more complicated vulnerabilities and design ever more sophisticated ways of exploiting them. In that case, rather than a more stable status quo, we’re looking at a steady state much closer to where we are now, but instead of racing to find new vulnerabilities themselves, we’ll instead see governments and criminals racing to develop AI models that can identify vulnerabilities faster than their opponents. But it’s not entirely clear that there are so many vulnerabilities in every piece of software that newer, better AI will always be able to find some that were missed by previous models. It’s possible that the progress Anthropic reported with Mythos will level out, that there are a finite number of vulnerabilities in software, and that at some point AI will have managed to find effectively all of them.

Another fear is that only major companies — and criminals — will have access to the best AI tools for finding vulnerabilities, creating even more dramatic discrepancies in the quality of code coming out of Big Tech versus small or independent software developers. For instance, serious vulnerabilities in open-source projects like Apache Log4j and the OpenSSL cryptography library have caused major cybersecurity problems because they are so widely used and have so few developers and resources to devote to security efforts. More recently, an AI scanning tool reportedly identified a years-old vulnerability in the open-source operating system Linux, highlighting how much value other open-source projects might be able to derive from this technology. But that’s a reason to make these tools more widely available, not less. If open-source software could be as secure as the software produced by companies that employ thousands of security engineers, there would be tremendous benefits for everyone. That includes big companies, many of which rely on some open-source libraries and suffer considerably when open-source vulnerabilities come to light.

There are good reasons to be worried about the ways that emerging AI technologies will affect cybersecurity. There’s a huge amount of software deployed that will need to be reviewed and patched — and this is a slow and complicated process. That’s one reason to be cautious in the deployment of tools like Mythos and to work with companies beforehand, as Anthropic is doing, to make sure their systems are patched before any bad actors can use those tools to find and exploit vulnerabilities. On top of that, the integration of even more software — and more complicated software — into our infrastructure and daily lives will create vast attack surfaces that can be exploited by adversaries looking to make money or steal secrets or break things. 

But with the exception of some fraudsters sending out fake invoices and other social engineering efforts, almost all of those adversaries, regardless of their end goal, will start by looking for software vulnerabilities they can exploit to plant malware on the systems they’re targeting. If we can really, truly change the calculus of how hard it is to find one vulnerability to exploit, compared with finding all of them to patch, it will revolutionize our ability to protect ourselves against all of those threats, even as more software is being rolled out for new AI systems in every domain.

Most of the decisions that will determine whether countries and companies manage to capitalize on the promise of AI tools for cybersecurity will be process and policy decisions, not technical ones. The issue is not whether tech companies can build AI tools with better technical safeguards to prevent people from using them to exploit technical vulnerabilities — they can’t. The same features that make these tools invaluable to attackers are what make them critical for defense. The issues that will matter moving forward are whom companies and governments let use these tools first; how much of a head start those early users need to patch their code before access expands to a broader circle; how to test and roll out patches faster and more effectively; and how to make sure that the developers and maintainers of critical software who can’t afford access to the best tools are able to use them. The conversations that both the private and public sectors most need to be having are about the policies and governance structures that will apply to these models — how to design them, how to evolve them, and how to learn from our inevitable mistakes.

If we fail to have those conversations, we risk missing out on one of the greatest opportunities we’ve ever had to secure the computer systems that are integral to our daily lives.
