
What Anthropic’s Mythos Means for the Future of Cybersecurity

April 28, 2026, 08:06

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.

The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.

We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.

How AI Is Changing Cybersecurity

We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.

The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.

We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.

Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.

So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also gives us guidance on how to protect such systems in an era of powerful AI vulnerability-finding tools.

Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.

Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.
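The taxonomy above is essentially a grid along two axes: how easy a vulnerability is to verify, and how easy the affected system is to patch. As a rough sketch, it might be encoded like this; the example systems and the recommended responses are our illustrative assumptions, not a formal standard:

```python
# Illustrative 2x2 taxonomy: ease of verifying a vulnerability vs. ease of
# patching the affected system, mapped to a defensive posture.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    easy_to_verify: bool
    easy_to_patch: bool

def recommended_response(p: SystemProfile) -> str:
    """Map a system's position in the grid to a defensive posture."""
    if p.easy_to_verify and p.easy_to_patch:
        return "automate: find, verify, and patch continuously"
    if p.easy_to_verify and not p.easy_to_patch:
        return "isolate: wrap behind restrictive, constantly updated layers"
    if not p.easy_to_verify and p.easy_to_patch:
        return "reproduce: invest in staging environments to confirm findings"
    return "contain: least privilege plus traceability for every component"

examples = [
    SystemProfile("cloud web app on a standard stack", True, True),
    SystemProfile("industrial IoT controller", True, False),
    SystemProfile("large distributed cloud platform", False, True),
]
for ex in examples:
    print(f"{ex.name}: {recommended_response(ex)}")
```

The point of the sketch is only that the right defense falls out of where a system sits in the grid, not that real systems fall neatly into four buckets.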

Rethinking Software Security Practices

This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.
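A "VulnOps" stage might look roughly like the loop below. Everything here is hypothetical: `attempt_exploit` stands in for whatever AI tooling actually replays a candidate exploit against a staging deployment. The point is the repetition, re-testing each finding until the flaky false positives are separated from reliably reproducible vulnerabilities:

```python
# Hypothetical VulnOps loop: re-test AI-reported vulnerabilities against a
# real (staging) stack until each is confirmed or rejected.
def vulnops(candidates, attempt_exploit, retries=3):
    """Classify candidate findings by replaying each exploit several times.

    candidates      -- findings reported by an AI scanner
    attempt_exploit -- callable(finding) -> bool, True if the exploit
                       reproduces against the staging stack
    """
    confirmed, false_positives = [], []
    for finding in candidates:
        # A finding counts as real only if it reproduces on every retry;
        # anything flaky is treated as a false positive for human review.
        successes = sum(attempt_exploit(finding) for _ in range(retries))
        (confirmed if successes == retries else false_positives).append(finding)
    return confirmed, false_positives

# Toy usage: pretend only "CVE-A" reproduces against staging.
real, noise = vulnops(["CVE-A", "CVE-B"], lambda f: f == "CVE-A")
print(real, noise)  # ['CVE-A'] ['CVE-B']
```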

Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.

Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.

Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.

This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.

  • ✇Cybersecurity News
  • The Friend Request from Pyongyang: How APT37 Hijacks Facebook to Deploy RokRAT
    The post The Friend Request from Pyongyang: How APT37 Hijacks Facebook to Deploy RokRAT appeared first on Daily CyberSecurity.

Cybersecurity in the Age of Instant Software

April 7, 2026, 14:07

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: "instant software." Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when you’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.

AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.

In this essay, I want to take an optimistic view of AI’s progress, and to speculate what AI-dominated cybersecurity in an age of instant software might look like. There are a number of unknowns that will factor into how the arms race between attacker and defender might play out.

How flaw discovery might work

On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both government and criminal hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations can increasingly run powerful AI models locally, AI companies monitoring and disrupting malicious AI use will become increasingly irrelevant.

Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.

Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.

Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.

All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.

Automating patch creation

But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.

How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.

AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a good example of this.

Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it easier for an AI to train on writing secure code.

We can envision a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.

Patching lags and legacy software

For new software—both commercial and instant—this future favors the defender. For commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Sometimes it is incapable of being patched. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.

I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.

Today, there is a time lag between when a vendor issues a patch and customers install that update. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.

Toward self-healing

In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.

For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications for compatibility, the right to repair, and liability. Any solutions here are in the realm of policy, not technology.

If the defense can find, but can’t reliably patch, flaws in legacy software, that’s where attackers will focus their efforts. If that’s the case, we can imagine continuously evolving AI-powered intrusion detection, scanning inputs and blocking malicious attacks before they reach vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.

The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.

There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.

Vulnerability economics

Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.

This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and create a new exploit. They can vulnerability hunt cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.

But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find "nobody but us" zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.

We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.

Up the stack

Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.

What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.

Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will ever be able to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a "trusting trust problem."

No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.

This essay originally appeared in CSO.

EDITED TO ADD: Two essays were published after I wrote this. Both are good illustrations of where we are regarding AI vulnerability discovery. Things are changing very fast.


AI Found Twelve New Vulnerabilities in OpenSSL

February 18, 2026, 09:03

The title of the post is "What AI Security Research Looks Like When It Works," and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and for which exploits were quickly developed online. OpenSSL rated it HIGH severity; NIST’s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, missed for over a quarter century by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.


When you shouldn’t patch: Managing your risk factors

February 12, 2025, 11:00

Look at any article with advice about best practices for cybersecurity, and about third or fourth on that list, you’ll find something about applying patches and updates quickly and regularly. Patching for known vulnerabilities is about as standard as it gets for good cybersecurity hygiene, right up there with using multi-factor authentication and thinking before you click on links in emails from unknown senders.

So imagine my surprise when attending Qualys QSC24 in San Diego to hear a number of conference speakers say that patching shouldn’t be an automatic reaction. In fact, they say, there are times when it is better not to patch at all.

No, you don’t need to fix everything, says Dilip Bachwani, Chief Technology Officer with Qualys.

“It’s not practical,” Bachwani adds. “Even if there is a vulnerability, it may not apply in your environment.” It could be an application that isn’t an internet-facing asset or something secured through other controls.

Knowing your risk factor

The knee-jerk reaction when a new patch is released is to get it installed as quickly as possible to prevent a vulnerability from turning into a cyber incident. However, Bachwani and his Qualys colleagues stress that security teams need to take a step back and evaluate their organization’s risk threshold.

What that evaluation will first discover is a lot of vulnerabilities across their infrastructure. A study by Coalition expected the total number of common vulnerabilities and exposures (CVEs) to increase by 25% in 2024, to 34,888 vulnerabilities, or nearly 3,000 per month.

“New vulnerabilities are published at a rapid rate and growing,” Tiago Henriques, Coalition’s Head of Research, says. “Most organizations are experiencing alert fatigue and confusion about what to patch first to limit their overall exposure and risk.”

With the steady increase in the number of CVEs, it is easy to think that every vulnerability is critical — and if every vulnerability is given an equal risk value, patching becomes overwhelming. The researchers at Qualys recommend prioritizing the risk involved with each vulnerability so that you can determine what should be patched first and what might not need to be patched at all.
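One way to make "prioritize the risk" concrete is a simple scoring function that weighs severity against exposure and asset criticality. The weights and field names below are our illustrative assumptions, not Qualys’s actual methodology:

```python
# Illustrative risk-prioritization score: severity alone does not decide
# patch order; exposure and asset criticality scale it up or down.
def risk_score(cvss, internet_facing, asset_criticality, mitigated):
    """Return a 0-10 priority score for a vulnerability.

    cvss              -- base CVSS score, 0.0-10.0
    internet_facing   -- True if the affected asset is reachable externally
    asset_criticality -- 0.0 (disposable) to 1.0 (mission-critical)
    mitigated         -- True if compensating controls already apply
    """
    score = cvss * asset_criticality
    if internet_facing:
        score *= 1.5          # externally reachable: raise priority
    if mitigated:
        score *= 0.3          # compensating controls: lower priority
    return min(score, 10.0)

# A critical CVE on an isolated, mitigated internal app can rank below
# a medium CVE on an exposed, mission-critical one.
print(risk_score(9.8, False, 0.4, True))   # ~1.18
print(risk_score(6.5, True, 1.0, False))   # 9.75
```

Under any such scheme, the ordering matters more than the absolute numbers: the lowest-scoring findings are the candidates for "patch later" or "don’t patch at all."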

How to prioritize your organization’s vulnerabilities

Prioritizing vulnerabilities requires knowing all of your assets across the organization and identifying and monitoring the attack surface. However, Qualys research found that only 9% of companies are actively monitoring 100% of their attack surface. Shadow IT, third-party vendors and their risks, digital transformations made too quickly without assessing the technologies and assets added, and a failure to recognize emerging threat vectors are just some of the reasons why organizations are unable to properly monitor their attack surface.

Deploying an attack surface management program will identify what technologies are attached to your network, where they reside, and which assets need protection. The critical requirements of an attack surface management program are:

  • Visibility across hybrid IT
  • Rapid identification of dynamic cybersecurity needs
  • Unauthorized software tracking in real-time
  • Finding and remediating blind spots

The more familiar you become with the systems accessing your network, the easier it will be to know your corporate assets and prioritize their importance. When levels of risk tolerance are assigned to these assets, it will then be easier to prioritize critical and non-critical vulnerabilities to be patched or, in some cases, not patched.

When to slow down the patching process

Patching protocols should be unique to your organization, based on your internal measures of what is mission-critical and your risk tolerance. Whereas one organization may decide that the most critical vulnerabilities must be patched immediately, another may find that seven days is the right time frame to reduce risk for the most important assets. Patch management programs tier their assets, beginning with the most critical ones that can’t afford downtime if something goes wrong, down through secondary tiers with longer wait times.

But there are times when it is smart to slow down patching, or to skip a patch entirely. They include:

  • An important and time-sensitive project is in progress and requires uninterrupted computer time
  • There are reports of bugs in the patch, or it creates compatibility problems with the application in a testing sample
  • The vulnerable software is limited in scope within the organization and can be isolated
  • Other mitigating controls can be put in place
  • The application never uses the functions with the known vulnerability
  • The costs of patching outweigh the benefits. If the code is outdated and needs to be rewritten, for example, then it doesn’t make sense to take the time and expense to apply the patch.
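The criteria above read as a checklist, and a team could encode them directly. The field names below are our invention for illustration; any single applicable reason is grounds to defer and document the decision:

```python
# Hypothetical encoding of the "when to slow down patching" checklist.
# Returns every reason that applies, so the deferral can be documented.
def should_defer_patch(ctx: dict) -> list:
    """Return the list of reasons to defer a patch; empty means patch on schedule."""
    checks = {
        "critical project needs uninterrupted uptime": ctx.get("critical_project_running"),
        "patch has known bugs or compatibility problems": ctx.get("patch_buggy_in_testing"),
        "vulnerable software is limited in scope and isolable": ctx.get("isolable"),
        "other mitigating controls are in place": ctx.get("mitigating_controls"),
        "vulnerable functions are never used": ctx.get("vulnerable_code_unused"),
        "patching costs outweigh the benefits": ctx.get("cost_exceeds_benefit"),
    }
    return [reason for reason, applies in checks.items() if applies]

reasons = should_defer_patch({"patch_buggy_in_testing": True})
print(reasons)  # ['patch has known bugs or compatibility problems']
```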

Cybersecurity insurance and patching

With the increase in CVEs and the ever-looming threat of a cyber incident, many organizations are looking at how to maximize their cybersecurity insurance. Given the strict rules and audits required to be eligible for cybersecurity insurance, will an approach of patching only when truly necessary downgrade your organization in the eyes of insurance companies?

Bachwani says no. “I actually think a solution like this will enable cyber insurers to be more effective.”

The way the insurance marketplace works today is that it is less focused on the company’s internal data and more on the organization’s overall cybersecurity posture.

“If I’m able to clearly demonstrate that we internally have really good hygiene, my insurance should be lower,” says Bachwani.

To patch or not to patch?

In the end, the decision on whether or not to patch will come down to one singular issue: What is the value to the business of patching or not patching? And that is determined by the organization’s risk tolerance. Recognizing the consequences of downtime or a cyber incident will help prioritize critical vulnerabilities that require time and resources to patch. But being willing to accept that you can’t patch everything will give your team the space to focus on the bigger risks.

The post When you shouldn’t patch: Managing your risk factors appeared first on Security Intelligence.
