
Anthropic launches Claude Security to counter rapid AI-Powered exploits

May 1, 2026, 06:15

Anthropic launched Claude Security to counter faster AI-driven cyberattacks, as tools like Mythos enable near-instant exploitation by threat actors.

Anthropic introduced Claude Security to help defenders keep up with a surge in AI-powered cyberattacks. As models like Mythos drastically reduce the time needed to exploit vulnerabilities, similar tools will likely spread among criminals and nation-state actors. Claude Security aims to give security teams the capabilities needed to respond to this new, faster threat landscape.

“Claude Security is now in public beta for Claude Enterprise customers. Scan code for vulnerabilities and generate proposed fixes with Opus 4.7, on the Claude Platform, or through technology and services partners building with Claude.” reads the announcement.

Claude Security is now in public beta for Enterprise users, giving organizations advanced tools to detect and fix software vulnerabilities. As AI rapidly improves, new models can not only find flaws but also exploit them automatically, reducing the time window between discovery and attack. Anthropic recently introduced Claude Mythos, capable of matching top experts in identifying and exploiting weaknesses.

With Claude Security, companies can use the powerful Claude Opus 4.7 model to scan code, uncover complex issues, and generate targeted fixes. Already tested by hundreds of organizations, the tool now offers scheduled scans, easier integration, and better tracking, without requiring complex setup.

Anthropic is also integrating its technology into major security platforms through partners like CrowdStrike, Microsoft Security, and Palo Alto Networks, alongside consulting firms such as Deloitte and Accenture. As AI accelerates cyber threats, the goal is to equip defenders with equally advanced capabilities to keep pace.

Claude Security is easy to use: users select a repository or specific code scope and launch a scan directly from Claude. The system analyzes code like a security expert, understanding how components interact, tracing data flows, and identifying real vulnerabilities rather than relying only on known patterns.

After scanning, it delivers detailed findings with confidence levels, severity, impact, and reproduction steps, along with clear instructions to fix issues.
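To make the shape of such a finding concrete, here is a hypothetical schema mirroring the fields the article lists (confidence, severity, impact, reproduction steps, fix). The field names are illustrative, not Claude Security's actual output format:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """Hypothetical shape of one scan finding, per the fields described."""
    title: str
    severity: str          # e.g. "critical", "high", "medium", "low"
    confidence: float      # 0.0-1.0 confidence score
    impact: str
    repro_steps: list[str] = field(default_factory=list)
    fix: str = ""          # clear instructions to fix the issue

f = Finding(
    title="SQL injection in /login",
    severity="high",
    confidence=0.92,
    impact="Authentication bypass",
    repro_steps=["POST /login with payload ' OR 1=1--"],
    fix="Use parameterized queries",
)
print(f.severity, f.confidence)  # high 0.92
```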

Based on feedback from hundreds of organizations, Anthropic improved detection accuracy, reduced false positives, and added confidence scoring. Teams can now move from scan to fix much faster, sometimes in one session. Scheduled scans also provide continuous security coverage instead of one-time checks.

“With this release, we’ve also added the ability to target a scan at a particular directory within a repository, dismiss findings with documented reasons (so that future reviewers can trust prior triage decisions), export findings as CSV or Markdown for existing tracking and audit systems, and send scan results to Slack, Jira, or other tools via webhooks.” concludes the announcement.
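As a sketch of that last export step, findings can be pushed to a Slack-style incoming webhook with nothing but the standard library. The webhook URL and the findings structure here are invented for illustration; this is not Claude Security's actual API:

```python
import json
import urllib.request

def build_payload(findings: list[dict]) -> bytes:
    """Format findings as the JSON body of a Slack-style incoming webhook."""
    lines = [f"[{f['severity'].upper()}] {f['title']}" for f in findings]
    return json.dumps({"text": "Scan results:\n" + "\n".join(lines)}).encode()

def post_findings(webhook_url: str, findings: list[dict]) -> None:
    """POST the summary; real code should add error handling and timeouts."""
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(findings),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

findings = [{"severity": "high", "title": "SQL injection in /login"}]
# post_findings("https://hooks.slack.com/services/T000/B000/XXX", findings)
```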

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, Claude Security)

Mozilla Fixes 271 Firefox Bugs Using Anthropic’s Mythos AI

April 22, 2026, 15:50

Mozilla says Firefox 150 patches 271 vulnerabilities found with Anthropic’s restricted Mythos AI, highlighting how quickly AI-driven bug hunting is accelerating.

The post Mozilla Fixes 271 Firefox Bugs Using Anthropic’s Mythos AI appeared first on TechRepublic.


The US NSA is using Anthropic’s Claude Mythos despite supply chain risk

April 21, 2026, 07:26

Axios reports the National Security Agency uses Anthropic's Mythos model despite Department of Defense concerns, blurring the line between AI as a risk and AI as a defense.

The reported use of Anthropic’s Mythos model by the U.S. National Security Agency is a reminder that the line between AI as a defensive tool and AI as a security risk is getting harder to draw. According to Axios, the NSA is already using Mythos Preview even while the Department of Defense has formally treated Anthropic as a supply-chain risk and pushed to cut ties with the company.

“The National Security Agency is using Anthropic’s most powerful model yet, Mythos Preview, despite top officials at the Department of Defense — which oversees the NSA — insisting the company is a “supply chain risk,” two sources tell Axios.”

That tension captures a larger reality: governments want the most capable cybersecurity tools available, even when those tools raise concerns about misuse, governance, and strategic dependence.

Mythos is considered sensitive not just because it’s a powerful AI model, but because it’s especially strong in cybersecurity. Access is limited due to concerns it could be misused for attacks. At the same time, it’s useful for finding vulnerabilities, making it both a helpful defense tool and a potential risk—highlighting a key tension in AI security.

“Anthropic CEO Dario Amodei met White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent on Friday to discuss the use of Mythos within government and Anthropic’s wider plans and security practices.” continues Axios. “Sources said next steps after the meeting were expected to focus on how departments other than the Pentagon engage with the model. Both sides described the meeting as productive.”

The NSA story also highlights a basic policy problem: agencies can criticize a vendor in public or in court while still relying on the same vendor’s technology in practice. Reuters reported the Axios claims, while other outlets noted that the UK’s AI Security Institute also has access to Mythos. This suggests that the real competition is not only between governments and AI companies, but also between procurement caution and operational urgency. When cyber defense demands speed, stability, and scale, the newest model can become too valuable to ignore.

Anthropic says Claude Mythos is a major leap beyond its Haiku, Sonnet, and Opus models, introducing a new top tier called Copybara. It stands out for strong agentic coding and reasoning skills, achieving top scores in software tasks and enabling advanced cybersecurity capabilities.

Project Glasswing is a joint effort led by Anthropic with major tech and security firms (Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks) to protect critical software using advanced AI.

It leverages Claude Mythos Preview, a powerful model capable of finding and exploiting vulnerabilities at a level beyond most humans.

The goal is to use these capabilities defensively, helping organizations detect and fix flaws before attackers can exploit them. Anthropic is sharing access with partners and funding the initiative to strengthen both proprietary and open-source software security.

Glasswing brings together major tech and security companies to use Mythos defensively, helping secure critical software and infrastructure. Anthropic plans to limit access for now, hoping to improve global cybersecurity before such powerful tools become widely available.

Modern software underpins critical systems like banking, healthcare, energy, and government, but it has always contained vulnerabilities—some severe enough to enable cyberattacks, data theft, and disruption. These threats are already costly and widespread, with global cybercrime estimated at around $500 billion annually and often driven by state-backed actors.

With advanced AI models like Claude Mythos, the effort and expertise needed to find and exploit flaws has dropped sharply. These models can identify long-hidden vulnerabilities and develop sophisticated exploits, sometimes outperforming human experts. This raises serious risks, as attacks could become faster, more frequent, and more damaging.

However, the same capabilities can be used defensively. Initiatives like Project Glasswing aim to harness AI to detect and fix vulnerabilities at scale, helping secure critical infrastructure. The challenge now is to deploy these tools responsibly and quickly, ensuring defenders stay ahead in an AI-driven cybersecurity landscape.

Anthropic is investing $100M in usage credits and funding open-source security projects, while sharing findings to improve industry-wide defenses. The initiative aims to expand collaboration across tech, security, and governments to develop best practices and strengthen cybersecurity in the AI era.

For governments, the immediate lesson is uncomfortable but straightforward. They need strong AI tools to defend networks, but they also need procurement rules, audit trails, and usage boundaries that keep those tools from becoming opaque dependencies. The Pentagon’s feud with Anthropic shows what happens when those boundaries are not aligned. If an agency says a vendor is too risky for broad use but still wants the model for its own missions, the issue is no longer just technical. It becomes one of trust, accountability, and national strategy.

In the end, the NSA–Anthropic story is less about one model and more about the future of cyber power. The organizations that can safely deploy frontier AI will move faster in defense, but they will also face greater pressure to justify how these tools are controlled. Mythos may be a glimpse of what’s coming: a world where the most capable cyber systems are also the most contested, and where operational need often outruns policy comfort.


Pierluigi Paganini

(SecurityAffairs – hacking, Claude Mythos)


AI Model Claude Opus turns bugs into exploits for just $2,283

April 20, 2026, 05:24

Claude Opus created a working Chrome exploit for $2,283, showing that widely available AI models can already find and weaponize vulnerabilities.

Claude Opus managed to produce a functional Chrome exploit for just $2,283, raising concerns about how easily AI can be used to find and exploit vulnerabilities.

Below is the cost of the experiment:

Model                           | Tokens                       | Cost
Claude Opus 4.6 (high)          | 2,140M                       | $2,014
Claude Opus 4.6 (high-thinking) | 189M                         | $267
Claude Sonnet / GPT-5.4 (minor) | —                            | ~$2
Total                           | 2,330M across 1,765 requests | $2,283
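The arithmetic behind the headline figure can be checked directly from the table's line items (the minor Sonnet/GPT-5.4 token count was not reported separately, and the published token total is rounded):

```python
# Line items from the published cost table: (model, tokens in millions, cost in USD).
runs = [
    ("Claude Opus 4.6 (high)", 2140, 2014),
    ("Claude Opus 4.6 (high-thinking)", 189, 267),
    ("Claude Sonnet / GPT-5.4 (minor)", 0, 2),  # tokens not reported
]

total_tokens = sum(tokens for _, tokens, _ in runs)  # ~2,330M after rounding
total_cost = sum(cost for _, _, cost in runs)
print(f"~{total_tokens}M tokens, ${total_cost:,}")
```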

While Anthropic held back its more advanced Mythos model over safety fears, even earlier, widely accessible models like Opus 4.6 can already generate real attack code, showing that the risk is not theoretical but already here.

“I pointed Claude Opus at Discord’s bundled Chrome (version 138, nine major versions behind upstream) and asked it to build a full V8 exploit chain. The V8 OOB we used was from Chrome 146, the same version Anthropic’s own Claude Desktop is running.” wrote Mohan Pedhapati, CTO of Hacktron. “A week of back and forth, 2.3 billion tokens, $2,283 in API costs, and about ~20 hours of me unsticking it from dead ends. It popped calc.”

Building the Chrome exploit cost about $6,283 all told (the $2,283 API bill plus roughly $4,000 worth of operator time), but the return can easily exceed that. Programs like Google’s v8CTF pay $10,000 per valid exploit, and past submissions earned $5,000, with even higher offers appearing privately. Similar bugs could bring large rewards from companies like Anthropic. Overall, the cost already pays off in legitimate bug bounty programs, and could be far more profitable in underground markets.

Anthropic’s Mythos announcement sparked debate, with some calling it hype and others raising alarms. Beyond the noise, it highlights a real issue: AI models can already turn patches into working exploits, as shown with Chrome’s V8. The real risk lies in slow patching: outdated systems become easy targets. Whether Mythos lives up to the hype or not, progress won’t stop. Sooner or later, even low-skilled attackers with access to AI tools will exploit unpatched software.

The experts pointed out that Electron apps like Discord, Slack, and Teams bundle their own Chromium versions, often lagging weeks or months behind updates. This creates “patch gaps” where known V8 vulnerabilities remain exploitable. Researchers have already shown real-world exploits, including remote code execution on Discord. Many apps still run outdated versions, sometimes missing key protections like sandboxing, making full exploit chains easier. As a result, widely used applications remain exposed to known flaws long after patches exist upstream.

“I picked Discord as my target. It only needs two bugs for a full chain since there’s no sandbox on the main window. It’s sitting on Chrome 138, nine major versions behind current.” continues Pedhapati. “You’d still need an XSS on discord.com to deliver the payload. I’ll leave how hard that is as an exercise for the reader.”

Pedhapati explained that Claude Opus still needs heavy human guidance to build exploits. It often gets stuck, loses context, guesses instead of verifying, and even changes the goal when it can’t solve a problem. It doesn’t recover on its own, so the operator must step in, debug issues, and guide it forward. Setting up the right environment and managing sessions also takes significant effort.

Even with these limits, the trend is clear: future models will need less supervision. As AI speeds up exploit development, it shrinks the time needed to weaponize bugs, while patching still lags. This gap will likely increase real-world attacks.

Security patches themselves reveal vulnerabilities, and AI can quickly turn them into exploits. Open-source code makes this easier, since fixes appear publicly before updates spread. You can’t hide these changes anymore; AI can scan and analyze everything.

“Every patch is basically an exploit hint. A security patch in Chromium or the Linux kernel tells you exactly what was broken. Reverse-engineering patches used to take skill and time. Now you can throw tokens at the problem and, with a decent operator nudging it past stuck points, get to a working exploit much faster.” continues the expert.
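To illustrate how mechanically a patch points at the bug, a few lines of Python can pull the touched files out of a unified diff, which is the first signal of where to look. The diff below is invented for the example:

```python
import re

# Fabricated security patch in unified-diff format, for illustration only.
PATCH = """\
--- a/src/objects/js-array.cc
+++ b/src/objects/js-array.cc
@@ -142,7 +142,8 @@
-  uint32_t new_len = length;
+  // Validate length before use to avoid an OOB write.
+  uint32_t new_len = ClampLength(length);
"""

def changed_files(diff: str) -> list[str]:
    """Files touched by a unified diff: the first hint of where the bug lives."""
    return re.findall(r"^\+\+\+ b/(.+)$", diff, flags=re.M)

print(changed_files(PATCH))  # ['src/objects/js-array.cc']
```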

The real advantage goes to small, skilled teams. One expert can manage multiple AI-driven exploit efforts at once, greatly increasing their impact compared to less capable attackers.

The researcher doubts AI progress will slow and warns that simply saying “patch faster” isn’t enough. Teams should build security into development from the start, track all dependencies to know what they run, and enforce automatic updates to remove delays. He also suggests rethinking how and when patches get published, since public fixes can quickly turn into exploit blueprints for attackers using AI.

“This sounds crazy, but maybe Chrome, or any open source software, shouldn’t publish V8 patches before the stable release ships. Every public commit is a starting gun for anyone with an API key and strong team members who can weaponize exploits.” he concludes.


Pierluigi Paganini

(SecurityAffairs – hacking, Claude)


Project Glasswing powered by Claude Mythos: defending software before hackers do

April 8, 2026, 08:18

Anthropic unveiled Claude Mythos, a powerful AI for cybersecurity that could also be misused to enhance cyberattacks.

Anthropic has unveiled Claude Mythos, a new AI model designed to strengthen cybersecurity through Project Glasswing, aiming to secure critical software before it can be abused.

Interest in Mythos grew after a leak of nearly 3,000 internal files revealed details of the project, which Anthropic later confirmed. The company has now officially introduced Mythos Preview, positioning it as a major step forward in AI: powerful, but potentially risky if it falls into the wrong hands.

Anthropic says Claude Mythos is a major leap beyond its Haiku, Sonnet, and Opus models, introducing a new top tier called Copybara. It stands out for strong agentic coding and reasoning skills, achieving top scores in software tasks and enabling advanced cybersecurity capabilities.

Project Glasswing is a joint effort led by Anthropic with major tech and security firms (Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks) to protect critical software using advanced AI.

It leverages Claude Mythos Preview, a powerful model capable of finding and exploiting vulnerabilities at a level beyond most humans.

The goal is to use these capabilities defensively, helping organizations detect and fix flaws before attackers can exploit them. Anthropic is sharing access with partners and funding the initiative to strengthen both proprietary and open-source software security.

“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient.” said Anthony Grieco, SVP & Chief Security & Trust Officer, Cisco. “Providers of technology must aggressively adopt new approaches now, and customers need to be ready to deploy. That is why Cisco joined Project Glasswing—this work is too important and too urgent to do alone.”

While Anthropic develops AI for broader scientific goals, it recognizes the risk of abuse, especially after observing early AI-driven cyber espionage campaigns. The concern is that such capabilities could soon enable faster and more advanced attacks than defenders can handle.

“Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.” reads the announcement by Anthropic. “The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.”

Glasswing brings together major tech and security companies to use Mythos defensively, helping secure critical software and infrastructure. Anthropic plans to limit access for now, hoping to improve global cybersecurity before such powerful tools become widely available.

Modern software underpins critical systems like banking, healthcare, energy, and government, but it has always contained vulnerabilities—some severe enough to enable cyberattacks, data theft, and disruption. These threats are already costly and widespread, with global cybercrime estimated at around $500 billion annually and often driven by state-backed actors.

With advanced AI models like Claude Mythos, the effort and expertise needed to find and exploit flaws has dropped sharply. These models can identify long-hidden vulnerabilities and develop sophisticated exploits, sometimes outperforming human experts. This raises serious risks, as attacks could become faster, more frequent, and more damaging.

However, the same capabilities can be used defensively. Initiatives like Project Glasswing aim to harness AI to detect and fix vulnerabilities at scale, helping secure critical infrastructure. The challenge now is to deploy these tools responsibly and quickly, ensuring defenders stay ahead in an AI-driven cybersecurity landscape.

Anthropic is investing $100M in usage credits and funding open-source security projects, while sharing findings to improve industry-wide defenses. The initiative aims to expand collaboration across tech, security, and governments to develop best practices and strengthen cybersecurity in the AI era.

“We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry.” concludes the report. “In the medium term, an independent, third-party body—one that can bring together private- and public-sector organizations—might be”


Pierluigi Paganini

(SecurityAffairs – hacking, Claude Mythos)
