Braintrust warned customers to rotate API keys after hackers breached an AWS account, exposing secrets tied to cloud-based AI models.
AI observability startup Braintrust warned customers to rotate API keys after attackers gained unauthorized access to one of the company’s AWS accounts, potentially exposing secrets used to connect to cloud-based AI models.
The company said it discovered suspicious activity on May 4 and immediately locked down the affected account, restricted access to related systems, and rotated internal credentials. The firm launched an investigation into the security incident.
“We’ve identified a security incident that involved unauthorized access to one of our AWS accounts. We are actively investigating, and we have engaged incident response experts,” reads the security breach notice published by the company. “We have contained the incident by locking down the compromised account, auditing and restricting access across related systems, rotating internal secrets, and engaging incident response experts to support our investigation. As a precaution, we recommend that all customers rotate any org-level AI provider keys used with Braintrust.”
Braintrust notified customers the following day and shared indicators of compromise and remediation guidance.
Although Braintrust says the impact appears limited, experts warn the breach highlights growing AI supply chain risks, as AI platforms increasingly store valuable API credentials targeted by attackers.
The potential exposure could affect organizations relying on Braintrust to manage AI provider keys across services and applications.
Researchers note that once threat actors obtain valid API keys, they can abuse AI services while appearing as legitimate users, often bypassing traditional security controls.
“To date, we’ve confirmed the issue affected one customer. Three additional customers reported suspicious spikes in AI provider usage, and we’re investigating those alongside them,” continues the notice. “We have not identified broader customer exposure based on our investigation to date, but as a precaution we informed all org admins with stored AI provider secrets in Braintrust. The investigation is ongoing.”
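Usage anomalies like those spikes are detectable with very little tooling. Below is a minimal Python sketch of flagging outlier days in an AI provider's usage export; the (date, tokens) record shape is an assumption for illustration, not Braintrust's or any provider's actual schema.

```python
from statistics import mean, pstdev

def flag_usage_spikes(daily_token_counts, threshold_sigma=3.0):
    """Flag days whose AI-provider token usage deviates sharply
    from the historical mean, a rough proxy for key abuse.

    daily_token_counts: list of (date_str, tokens_used) tuples,
    ordered oldest to newest. Field names are illustrative; adapt
    them to whatever your provider's usage export actually returns.
    """
    counts = [tokens for _, tokens in daily_token_counts]
    if len(counts) < 7:
        return []  # not enough history to establish a baseline
    baseline, spread = mean(counts[:-1]), pstdev(counts[:-1])
    cutoff = baseline + threshold_sigma * max(spread, 1.0)
    return [(day, tokens) for day, tokens in daily_token_counts
            if tokens > cutoff]

# Example: a sudden 10x spike on the last day gets flagged.
history = [("2026-05-0%d" % d, 50_000) for d in range(1, 8)]
history.append(("2026-05-08", 520_000))
print(flag_usage_spikes(history))
```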
The incident also reflects a broader trend of attackers targeting cloud accounts and SaaS providers to gain indirect access to downstream customers and interconnected AI infrastructure.
The company plans to add new safeguards, including timestamps and user attribution for API key changes, while the investigation into the incident remains ongoing.
AWS Rex adds runtime guardrails for agentic AI, but security leaders still need data-layer controls to satisfy compliance and audit demands.
The post AWS Rex Is a Big Step for Agentic AI Security, But Not the Final Layer appeared first on TechRepublic.
The post The 4GB Secret: Why Chrome is Surreptitiously Downloading AI Models to Your Hard Drive appeared first on Daily CyberSecurity.
The post Cloudflare Cuts 20% of Staff to Pivot Toward an “AI-First Agentic” Future appeared first on Daily CyberSecurity.
The post The End of Whiteboard Coding: Google to Allow Gemini AI in Software Engineering Interviews appeared first on Daily CyberSecurity.
The post The InstallFix Trap: Fake Claude AI Google Ads Drop Fileless RedLine Malware on Developers appeared first on Daily CyberSecurity.
The Pentagon is integrating AI into military operations, transforming cybersecurity, targeting, and command systems into a unified warfare architecture.
May 2026 marks a turning point in the evolution of modern warfare: the convergence of artificial intelligence, cybersecurity, and conventional military power is no longer theoretical. It is becoming an operational reality.
The Pentagon has signed agreements with major technology companies, including OpenAI, Google, Microsoft, Amazon, and SpaceX, to integrate advanced AI models into classified military networks. The stated goal is clear: transform the United States into an “AI-first” military force capable of maintaining decision superiority across every battlefield domain.
Under this strategy, AI is no longer treated as a laboratory tool or analytical assistant. It is moving directly into the military chain of command, intelligence analysis, logistics, targeting, and operational planning. More than 1.3 million Department of Defense employees are already using the GenAI.mil platform, dramatically cutting processes that once took months down to just days.
The Pentagon’s doctrine reflects a major cultural shift: code and combat are no longer separate domains. Cybersecurity itself is now considered a combat capability. The ability to deploy, secure, update, and operate AI models inside classified environments has become part of national defense infrastructure.
The contracts signed with technology providers include “lawful operational use” clauses, requiring vendors to accept any use considered legitimate by the Pentagon, including autonomous weapons systems and intelligence operations. This raises profound ethical and geopolitical questions.
At the same time, the U.S. military is pushing for deep integration across defense systems. Through the Army’s new “Right to Integrate” initiative, manufacturers of missiles, drones, radars, and sensors are being asked to open their software interfaces so AI agents can connect systems in real time. The inspiration comes largely from Ukraine, where open APIs allowed rapid battlefield integration between drones, sensors, and fire-control systems.
However, this transformation creates a dangerous paradox: the same openness that enables speed and flexibility also expands the attack surface. Every API, cloud platform, and AI integration point can potentially become an entry point for sophisticated adversaries such as China, Russia, or state-sponsored APT groups.
A compromised AI-enabled military ecosystem could allow attackers to inject false sensor data, manipulate targeting systems, degrade drone communications, study operational decision patterns, or even hijack autonomous weapons platforms. In this context, software vulnerabilities and supply-chain weaknesses are no longer merely IT problems; they become military objectives.
Washington is also increasingly concerned about the cyber risks posed by advanced AI models themselves. According to reports, the White House is considering new oversight mechanisms for frontier AI systems capable of autonomously discovering software vulnerabilities or automating cyberattacks at scale. Officials fear that uncontrolled deployment of such models could lead to mass exploitation of critical infrastructure, financial systems, or global supply chains.
The strategic implications extend beyond military technology. Major cloud providers such as Amazon, Microsoft, and Google are gradually becoming part of the American defense architecture. Civilian digital infrastructure is evolving into a structural extension of military power.
This raises difficult questions for Europe and Italy. In a world where most cloud, AI, and cybersecurity infrastructures are controlled by American companies, what does technological sovereignty really mean? Sovereignty is no longer just about producing chips or funding startups. It is about controlling the digital infrastructure that supports national defense, determining who can update AI systems operating on classified networks, and deciding who sets the operational rules of software during crises.
The United States, Israel, and China are already integrating AI into military doctrine at high speed. Europe risks remaining trapped between regulation and technological dependence unless it develops its own industrial capabilities, operational autonomy, and independent evaluation frameworks.
The message coming from Washington is unmistakable: the future of strategic power will depend on who controls AI models, data, interfaces, and software-driven operational systems. In modern warfare, software has become a battlefield domain, and the speed of code deployment increasingly matters as much as firepower itself.
A more detailed analysis is available in Italian here.
In this weekly roundup from The Cyber Express, the global cybersecurity landscape continues to show rapid and uneven change, shaped by both regulatory shifts and escalating cyber threats. Governments are tightening oversight of new technologies such as artificial intelligence, while threat actors are simultaneously refining their techniques to exploit businesses, infrastructure, and end users across multiple platforms.
This edition of cybersecurity news brings together some of the most important developments of the week, ranging from significant amendments to the European Union’s AI Act to the expansion of malware campaigns into macOS environments and the discovery of a critical vulnerability in widely used enterprise firewall software. It also covers major sentencing in a global ransomware case and a fresh warning from the FBI about the growing scale of cyber-enabled cargo theft targeting logistics and supply chain organizations.
The Cyber Express Weekly Roundup
EU Updates AI Act with Simpler Rules and New AI Content Bans
In a significant regulatory update, the European Union has agreed to revise parts of the EU AI Act. The updated framework aims to simplify compliance requirements for businesses while simultaneously introducing stricter restrictions on harmful AI-generated content. Read more...
ClickFix Malware Campaign Expands to macOS
Another key development is the expansion of the ClickFix malware campaign beyond Windows systems. Security researchers at Microsoft have confirmed that the operation is now targeting macOS users with deceptive troubleshooting content. Read more...
Critical PAN-OS Flaw Found in Palo Alto Networks Firewalls
A critical security flaw has been identified in Palo Alto Networks’ PAN-OS firewall software. Tracked as CVE-2026-0300, the vulnerability carries a CVSS score of 9.3, indicating severe risk. The issue originates from a buffer overflow vulnerability in the User-ID Authentication Portal. Read more...
Latvian Cybercriminal Sentenced in Global Ransomware Case
Latvian national Deniss Zolotarjovs has been sentenced to 102 months in prison for his role in a large-scale ransomware operation. According to the U.S. Department of Justice, the group operated under multiple ransomware brands, including Conti, Royal, Akira, and Karakurt. Between 2021 and 2023, the organization carried out attacks against more than 54 companies worldwide, using data theft and encryption-based extortion tactics to pressure victims into paying ransom demands. Read more...
FBI Warns of Rising Cyber-Enabled Cargo Theft
The FBI has issued an alert regarding a sharp rise in cyber-enabled cargo theft. Criminal actors are using impersonation techniques to pose as legitimate logistics providers, allowing them to intercept and redirect freight shipments. The agency noted that logistics, shipping, and insurance companies have been targeted since at least 2024. Read more...
Weekly Takeaway
This week’s The Cyber Express weekly roundup highlights the growing convergence of regulatory change, advanced malware threats, critical infrastructure vulnerabilities, ransomware enforcement actions, and supply chain fraud. As the global cybersecurity landscape continues to evolve, organizations across all sectors remain under increasing pressure to strengthen defenses and adapt to emerging risks.
Researchers tracked a large AI‑themed investment scam campaign involving more than 15,000 domains. It uses cloaking and deepfakes to hide from security tools while targeting ordinary users.
Criminals abused the Keitaro ad-tracking platform as part of a cloaking system so real victims see scam content, while security scanners, ad reviewers, and some random visitors see harmless pages, making the operation hard to detect and shut down.
Keitaro is a commercial tracking platform originally meant for digital marketers to manage ad campaigns, test which ads work best, and route visitors to different landing pages.
Because it is feature-rich, easy to spin up on regular hosting, and built to filter and route traffic, criminals found they could abuse those capabilities to run scams at scale.
Traffic starts in many places. The scammers used compromised websites, spam emails, social media posts, and online ads, all quietly routing through the same tracking infrastructure.
The scam sites typically promise “Smart AI Trading Technology” or “Intelligent Trading Solutions” and claim consistently high returns, often reinforced with deepfake images or fabricated media to look more credible.
Some parts of the campaign now use deepfake videos and fake interviews with well-known public figures, making it look as if a celebrity or finance expert personally endorses the platform.
Once you follow a link, the cloaking part of the operation kicks in. Cloaking is the trick that makes these scams so hard to see from the outside.
When you click an ad or link, your visit passes through a traffic distribution system (TDS), a kind of router for web visitors that decides which page you see. In these cases, the TDS is connected to the tracker.
The system checks things like:
Your country/region
Your device and browser
Where you came from (Facebook ad, Google ad, email link, etc.)
Sometimes your IP address reputation or other subtle fingerprints
You’re shown the real investment scam landing page only if you match the “ideal victim” profile (for example, a regular consumer in a target country coming from a social media ad).
Everyone else, like a security researcher, ad platform reviewer, or automated scanner, gets shown a benign page, like a generic blog or placeholder site.
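To make the mechanism concrete, here is a hedged Python sketch of the routing decision such a TDS makes. The field names and page names are invented for illustration; they are not Keitaro's actual configuration, though commercial trackers expose equivalent geo, referrer, and IP-reputation filters.

```python
# Hypothetical reconstruction of a TDS cloaking decision.
# All field and page names are illustrative placeholders.

TARGET_COUNTRIES = {"DE", "NL", "AU"}          # example target geos
AD_REFERRERS = ("facebook.com", "google.")     # paid-traffic sources

def route_visitor(visitor: dict) -> str:
    """Return which landing page a visitor is routed to."""
    if visitor.get("is_datacenter_ip"):        # scanners, sandboxes, crawlers
        return "benign_blog.html"
    if visitor.get("country") not in TARGET_COUNTRIES:
        return "benign_blog.html"
    if not any(src in visitor.get("referrer", "") for src in AD_REFERRERS):
        return "benign_blog.html"
    return "ai_trading_scam.html"              # the "ideal victim" path

# A consumer arriving from a Facebook ad in a target country sees the scam.
print(route_visitor({"country": "DE",
                     "referrer": "https://facebook.com/ads",
                     "is_datacenter_ip": False}))
```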
How to stay safe
The best way to stay safe is to stay informed about the tricks scammers use. Learn to spot the red flags that almost always give away scams and phishing emails, and remember:
There is no such thing as a risk-free, consistently profitable investment. If you’re looking to invest, navigate directly to known, regulated financial institutions.
Deepfakes are very convincing nowadays, so you will hardly be able to tell the difference between the real celebrity and their deepfake persona.
Don’t act upon unsolicited investment advice, whether it reaches you by email, social media, or sponsored search results.
Researchers have discovered a new malvertising campaign using a fake Claude AI website to plant a new, undocumented backdoor named Beagle on user devices.
The post Orbital Ambitions: Anthropic Taps Musk’s “Colossus” to Double Claude’s Power and Eye the Stars appeared first on Daily CyberSecurity.
A new Mirai‑based botnet, xlabs_v1, hijacks ADB‑exposed IoT devices for powerful DDoS attacks, with 21 flooding methods and DDoS‑for‑hire use.
A new Mirai‑derived botnet called xlabs_v1 is hijacking internet‑exposed devices running Android Debug Bridge (ADB) and using them for large‑scale DDoS attacks. Hunt.io discovered the bot on an unsecured server; it includes 21 flood techniques across TCP, UDP, and raw protocols, allowing it to bypass basic protections. It appears to be sold as a DDoS‑for‑hire service, especially for targeting game and Minecraft servers.
During routine monitoring, researchers spotted an exposed directory on a Netherlands‑hosted server (176.65[.]139.44) used for bulletproof hosting. The operator had left their entire toolkit publicly accessible over TCP/80 with no authentication, allowing investigators to index everything before the attacker realized it was exposed.
Open access to the server revealed a six‑file toolkit instead of a login page, exposing binaries and text files with no authentication. Two files were auto‑tagged as malicious: arm7 (Mirai) and payloads.txt (exploit content), suggesting the operator was using analyst‑grade tools on an unsecured host. The directory held about 200 KB of data, including the packed ARM bot, an unstripped x86‑64 debug build, ADB infection one‑liners, a SOCKS5 proxy, and a placeholder targets file. The debug build’s intact symbols made reconstructing the bot’s behavior straightforward.
“The xlabs_v1 codebase reads as a focused commercial product rather than an opportunistic Mirai derivative. Its twenty-one flood variants, ChaCha20 string protection, OpenNIC-aware DNS resolution, and Speedtest-driven bandwidth profiling are subsystems aimed at a single outcome: keeping a fleet of compromised IoT devices reachable, accountable, and profitable for the operator. Everything else in the binary serves that goal or protects it,” reads the report published by Hunt.io.
xlabs_v1 botnet is built entirely for commercial DDoS‑for‑hire operations, with no added features like credential theft that could increase detection risk. Its core function is to receive attack commands and launch one of 21 flood variants, many aimed at game servers, including RakNet floods for Minecraft and OpenVPN‑shaped UDP traffic to evade filters. Delivered through ADB exploits, the ARMv7 bot targets Android TVs, set‑top boxes, and IoT hardware, part of a global surface of more than 4 million devices with TCP/5555 exposed.
“Infection vector is Android Debug Bridge on TCP/5555, with multi-architecture builds covering ARM, MIPS, x86-64, ARC, and Android APK, meaning any internet-exposed device running ADB is a potential target: Android TV boxes, set-top boxes, smart TVs, residential routers, and any IoT-grade hardware shipping with ADB enabled by default,” continues the report.
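Given that infection vector, defenders can audit their own networks for exposed ADB with a simple reachability check. The sketch below is a minimal example assuming you only scan hosts you are authorized to test; a completed TCP connection alone proves only that the port is reachable, not that the ADB service is unauthenticated.

```python
import socket

def adb_port_open(host: str, port: int = 5555, timeout: float = 2.0) -> bool:
    """Check whether a host accepts TCP connections on the ADB port.

    Pair a positive result with `adb connect <host>` from the official
    platform-tools to confirm an unauthenticated ADB service.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Audit your own LAN devices for exposed ADB (TCP/5555).
# Addresses below are placeholders for devices you own.
for device in ("192.168.1.30", "192.168.1.31"):
    print(device, "ADB port open:", adb_port_open(device))
```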
Once installed, the bot hides infection tags, profiles each device’s bandwidth by opening 8,192 TCP sockets, and reports Mbps to its panel so the operator can assign price tiers. It also kills competing botnets by scanning /proc, terminating rival processes, and removing malware on port 24936.
For resilience, xlabs_v1 resolves its C2 via OpenNIC, falls back to a firewall‑punching SOCKS‑style listener on TCP/26721, and masks itself as /bin/bash to evade casual inspection. Sensitive strings, including the C2 domain xlabslover.lol, the operator handle Tadashi, and the agent tag xlabs_v1, are encrypted with ChaCha20 but easily recovered due to key reuse.
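That key reuse is what makes string recovery cheap for analysts. The sketch below, using PyCryptodome, shows why: ChaCha20 is a stream cipher, so a single recovered (key, nonce) pair decrypts every string protected with it. The key, nonce, and ciphertext here are placeholders, not the actual xlabs_v1 material.

```python
# Minimal sketch of recovering ChaCha20-protected strings once the
# reused key and nonce have been pulled from a binary (e.g., from an
# unstripped debug build). Requires PyCryptodome.
from Crypto.Cipher import ChaCha20

def decrypt_string(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # Decryption applies the same keystream XOR as encryption, so one
    # (key, nonce) pair unlocks every string that reused it.
    return ChaCha20.new(key=key, nonce=nonce).decrypt(ciphertext)

key = bytes(range(32))     # placeholder 32-byte key taken from the binary
nonce = b"\x00" * 8        # placeholder 8-byte nonce
blob = ChaCha20.new(key=key, nonce=nonce).encrypt(b"xlabslover.lol")
print(decrypt_string(key, nonce, blob))   # b'xlabslover.lol'
```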
Its command‑and‑control uses a custom TCP protocol, supporting bandwidth probes, updates, self‑restart, and attack dispatch. Together, these techniques reveal a sophisticated, commercially motivated DDoS botnet engineered for persistence, evasion, and profit.
Analysis of the xlabs_v1 botnet’s infrastructure begins with its C2 domain, xlabslover[.]lol, which resolves to a single IP in the Netherlands hosted by Offshore LC. The domain uses Ultahost nameservers, a provider often linked to bulletproof hosting, and shows no prior malware detections, suggesting a recently deployed C2.
Pivoting from the domain to its IP (176.65.139[.]134) reveals SSH as the only open port, plus past honeypot activity involving HTTP and .env‑file scanning. SSL history shows unusual self‑signed certificates, including one with the CN “Godisgood”, previously used on another IP in Germany, indicating the same operator managing multiple servers.
Three hosts within the 176.65.139.0/24 netblock appear tied to the botnet: .44 (staging), .42 (distribution), and .9 (additional distribution). Hunt.io captured open directories on these systems containing Mirai‑tagged binaries, multi‑architecture payloads, and ADB exploitation scripts.
Historical scans confirmed Mirai C2 activity in late March and early April 2026, consistent with the botnet’s active deployment period and revealing a consolidated, bulletproof infrastructure supporting xlabs_v1.
The operator behind the botnet uses the handle Tadashi, embedded in each build, while the botnet brand xlabs_v1 appears in every C2 registration, hinting at future versions. A development tag, aterna, shows earlier branding before release. OSINT searches linking “Tadashi,” “xlabs,” and “xlabslover” may reveal the operator’s DDoS‑for‑hire storefront. A decrypted banner also exposes hostility toward a rival fork, xlab 2, suggesting a code split or underground feud. Nearby infrastructure in the same netblock has hosted cryptojacking tools, though overlap with the xlabs operation remains unconfirmed.
“In commercial-criminal terms, xlabs_v1 is mid-tier. It is more sophisticated than the typical script-kiddie Mirai fork (which would lack the ChaCha20 layer, the multi-architecture binary set, the bandwidth profiling, and the registered-attack diversity), but less sophisticated than the top tier of commercial DDoS-for-hire operations (which would use TLS on the C2 channel, would not ship a debug build to production paths, would rotate cryptographic material across builds, and would not ship a hard-coded competitor-rivalry banner),” concludes the report. “This operator is competing on price and attack variety, not technical sophistication. Consumer IoT devices, residential routers, and small game-server operators are the target. Treat it accordingly.”
AI tools designed to assist developers are no longer staying in the background. They are starting to shape what actually gets built and deployed.
They open pull requests.
They modify dependencies.
They generate infrastructure templates.
They interact directly with repositories and CI/CD pipelines.
At some point, this stops being assistance.
It becomes participation.
And participation changes the problem.
When assistance becomes participation
The shift from generative to agentic behavior is the inflection point.
Earlier tools operated inside a tight loop. A developer prompted. The system suggested. The developer reviewed. Nothing moved without human intent.
That boundary is eroding.
Newer systems propose changes, update libraries, remediate vulnerabilities and interact with development pipelines with limited human intervention. They don’t just accelerate developers. They begin to shape the artifacts that move through the software supply chain — code, dependencies, configurations and infrastructure definitions.
That makes them something different.
Not tools.
Participants.
And once something participates in the supply chain, it inherits the same question every other participant does:
How is it governed?
A simple scenario
Consider a common pattern already emerging in many environments.
An AI system identifies a vulnerable dependency.
It opens a pull request updating the library.
A workflow triggers automated tests.
The change is promoted into a staging environment.
Four steps.
No human review.
No explicit governance checkpoint.
Each step is individually valid. Nothing looks wrong in isolation.
But taken together, they create something fundamentally different: A system that can change enterprise software without human intent being re-established at any point. Research from Black Duck found that while 95% of organizations now use AI in their development process, only 24% properly evaluate AI-generated code for security and quality risks.
This is autonomous change propagation across the software supply chain.
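One way to break that chain is a policy gate that refuses to promote AI-authored changes without a recorded human approval. The sketch below is illustrative only; the PR metadata shape and bot identity names are assumptions, to be mapped onto whatever your SCM's API actually returns.

```python
# A minimal sketch of re-establishing human intent in the pipeline.
# Bot identities and the PR record shape are hypothetical examples.

AI_AUTHORS = {"dependabot[bot]", "ai-remediation-agent"}

def may_promote(pr: dict) -> bool:
    """Allow promotion only if the change has human intent behind it."""
    authored_by_ai = pr["author"] in AI_AUTHORS
    human_approvals = [r for r in pr["reviews"]
                       if r["state"] == "APPROVED"
                       and r["reviewer"] not in AI_AUTHORS]
    return (not authored_by_ai) or bool(human_approvals)

pr = {"author": "ai-remediation-agent",
      "reviews": [{"state": "APPROVED", "reviewer": "ai-remediation-agent"}]}
print(may_promote(pr))  # False: an AI approving an AI is not a control
```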
The “human-in-the-loop” fallacy
Many organizations rely on a “human-in-the-loop” (HITL) requirement as a safety mechanism for AI-generated code.
At low volumes, this works.
At scale, it breaks.
When an AI system generates dozens of pull requests in a short window, review becomes a throughput problem, not a control. The cognitive load of validating machine-generated logic exceeds what a human can realistically govern.
What remains is not oversight, but a checkpoint.
And checkpoints without effective review are not controls.
The governance gap
Most governance models assume a stable truth: Humans are the primary actors.
Controls tie identity to individuals, approvals to intent and audit trails to accountability.
Even automation systems are treated as extensions of human intent — predictable, bounded and deterministic.
AI systems break that model.
They can generate new logic, act on it and propagate changes across systems. Yet in most environments, they are still governed as if they were static tools.
That mismatch is the gap.
Machine identity is no longer what it was
One way to see this clearly is through identity.
Every interaction an AI system has — repository access, pipeline execution, API calls — requires credentials. In practice, these systems operate as machine identities.
But they are not traditional machine identities.
A service account executes predefined logic. Its behavior is known in advance. Its risk is bounded by what it was configured to do.
An AI-driven system is different. It generates the logic it then executes.
It can propose new code paths, interact with new systems and trigger actions that were not explicitly predefined at the time access was granted.
That is a category change.
Not just a new identity type, but a new attack surface: Identities that can generate the behavior they are authorized to execute.
The World Economic Forum has identified this class of non-human identity as one of the fastest-growing and least-governed security risks in enterprise AI adoption.
Measuring exposure before solving it
Most organizations already track access-related metrics. Those metrics were designed for human-driven systems.
They are no longer sufficient.
If AI systems are participating in the software supply chain, organizations need to measure where and how that participation introduces risk.
A few signals matter immediately:
AI-generated artifact footprint: What portion of code, dependencies or infrastructure definitions in production originates from AI-assisted processes?
Authority scope of AI systems: What systems can these identities access — and what actions can they take across repositories and pipelines?
Autonomous change rate: How often are changes introduced and propagated without explicit human review?
Cross-system interaction surface: How many systems does a single AI workflow touch as part of normal operation?
Auditability of AI-driven actions: Can changes be traced cleanly to a system, workflow and triggering context?
These are not abstract concerns. They are measurable.
And until they are measured, they are not governed.
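The autonomous change rate, for example, falls out of data most organizations already have. Here is a minimal sketch, assuming a merge log with ai_authored and human_reviewed flags; the record shape is invented for illustration and would be derived from your SCM's audit trail.

```python
# Rough sketch of the "autonomous change rate" signal: the share of
# merged changes that reached production without explicit human review.

def autonomous_change_rate(changes: list[dict]) -> float:
    """Fraction of merged changes with an AI author and no human review."""
    merged = [c for c in changes if c["merged"]]
    if not merged:
        return 0.0
    autonomous = [c for c in merged
                  if c["ai_authored"] and not c["human_reviewed"]]
    return len(autonomous) / len(merged)

log = [
    {"merged": True, "ai_authored": True,  "human_reviewed": False},
    {"merged": True, "ai_authored": True,  "human_reviewed": True},
    {"merged": True, "ai_authored": False, "human_reviewed": True},
]
print(f"autonomous change rate: {autonomous_change_rate(log):.0%}")  # 33%
```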
The regulatory imperative
This is not just a technical shift. It is a governance and liability shift.
As regulatory expectations evolve — from AI accountability frameworks to cybersecurity disclosure requirements — organizations are increasingly responsible for explaining and controlling automated decisions inside their environments.
If an AI-driven change introduces a vulnerability or leads to a material incident, “the system generated it” will not be an acceptable answer.
Accountability will still sit with the enterprise.
That raises the bar: Governance must extend to how autonomous systems act, not just how they are accessed.
The architecture gap
The issue is not that any one control is missing.
It is that AI systems operate across the seams of systems designed to govern within their own boundaries.
Repositories enforce code controls.
Pipelines enforce deployment controls.
Identity systems enforce access controls.
Security tools enforce policy checks.
Each works as designed.
But AI systems move across all of them.
They read from one system, generate changes, trigger another and influence a third. Authority is exercised across systems, while governance remains within them.
That is the architectural gap.
A different governance model
Most organizations will respond to this shift by trying to extend existing access controls. That instinct is understandable — and insufficient.
The problem is no longer just who or what can access a system. It is how control is maintained when authority can generate new actions dynamically.
This requires a different model of governance.
One that treats software systems as actors whose behavior must be bounded, observed and continuously evaluated across workflows — not just permitted or denied at a point of access. Governance becomes less about static permissions and more about controlling the shape and impact of actions across systems.
That is the shift.
Conclusion
The conversation around AI in software development often focuses on productivity.
But as AI systems begin to participate in producing and modifying enterprise software, the more important question becomes governance.
AI is not just accelerating the software development lifecycle. It is becoming part of the software supply chain itself.
And that changes the problem.
The challenge for CIOs is no longer just managing developers, tools or pipelines. It is understanding and governing the authority that software systems exercise across them.
Because in a world where software can act on behalf of the enterprise, governance is no longer just about access.
It is about authority — what systems are allowed to do, and how that authority is controlled and measured over time.
When financial tech vendor FIS announced its new AI agent for detecting financial crimes on Tuesday, it made much of its embedding of a team of forward deployed engineers (FDEs) from Anthropic to make it happen. It’s just one of the dozen or so companies working with Anthropic on developing agents for financial services using new connectors and so-called “ready-to-run” templates Anthropic announced the same day.
Enterprise CIOs are increasingly paying for the services of AI vendors’ FDEs, given their own data quality issues and the complexity of working with AI models.
But how and why such teams are brought in can make the difference between whether the enterprise is helped to get to the next AI level or becomes a hostage to never-ending consulting costs.
FIS listed the Bank of Montreal (BMO) and Amalgamated Bank as the first two companies to deploy its agent, which it said will compress anti-money-laundering investigations from hours to minutes, assembling evidence across a bank’s core systems and surfacing the riskiest cases for review with full auditability and traceability of decisions. “Anthropic’s Applied AI team and forward-deployed engineers (FDEs) are embedded with FIS to co-design the Financial Crimes AI Agent and transfer knowledge so FIS can build and scale additional agents independently over time,” it said.
Aman Mahapatra, chief strategy officer for Tribeca Softtech, a New York City-based technology consulting firm, suggests CIOs follow the money when evaluating similar work with AI vendors.
“The structurally interesting thing about the FIS-Anthropic model is who actually pays the FDE cost. This is the question CIOs should be asking but mostly are not,” Mahapatra said.
The cost of FDEs could put some AI projects in jeopardy, according to a recent report by Alex Coqueiro, a senior director analyst with Gartner. He predicted that by 2028, “70% of enterprises will be forced to abandon agentic AI solutions from FDE-led engagements because of high vendor costs and lack of internal skills to evolve them independently.”
Service, not software
He argued that the problem is not entirely the fault of the AI vendor. Many IT operations don’t put in the necessary preparatory work to clean their data and to make it AI-friendly. Internal corporate politics/personalities is another critical factor.
“The domain experts most critical to FDE success have the strongest incentive to undermine it. An expert who perceives the FDE as capturing their expertise for agentic automation will give the official process instead of the real one, and the AI agent built on it will fail on the exact edge cases they chose not to mention,” Coqueiro said in the report. “Flat FDE effort across successive deployments is the signal that an engagement has produced a dependency, not a capability. When effort does not decrease as use cases mature, the organization is paying consulting rates for operations it should own.”
In the case of FIS’s work with Anthropic, said Mahapatra, “BMO and Amalgamated are not writing direct checks to Anthropic for forward-deployed engineers at quarterly consulting rates. FIS is absorbing the FDE engagement and amortizing it across its banking customer base.”
That approach, he said, “is meaningfully better economics than direct Anthropic engagements where each bank funds its own embedded engineering team to redesign the same context boundaries, shadow autonomy controls, and the jailbreak resistance testing in isolation.”
Mahapatra said much of this problem stems from how generative and agentic AI have been marketed. The original ROI thesis, he said, was that AI enables enterprises to do more with fewer people, but that was “a marketing pitch that was never going to survive contact with regulated banking workflows.”
Nik Kale, a member of the Coalition for Secure AI (CoSAI) and of ACM’s AI Security (AISec) program committee, said that he sees FIS’s presentation of its work with Anthropic as “a concession that frontier AI isn’t a product yet. CIOs thought they were buying software. They’re actually buying a professional services engagement. That changes the cost model, the dependency model and the governance model for every enterprise AI deployment.”
Kale said the statement’s wording gives a clue about the agentic strategy.
“The FIS release says every agent decision is traceable and auditable. True statement, wrong sentence. The harder question isn’t auditing what the agent decided. It’s deciding which decisions are the agent’s to make in the first place. Banks have decades of decision-rights frameworks. They don’t translate cleanly to agent harnesses built by someone else’s engineers,” Kale said. “The CIO test is simple: after the forward-deployed team leaves, can your organization still operate, monitor, challenge, and safely modify the agentic workflow? If the answer is no, it’s not mature yet. It may be a successful implementation project, but it’s not yet an enterprise capability.”
Justin Greis, CEO of consulting firm Acceligence and former head of the North American cybersecurity practice at McKinsey, agreed with Kale.
Human judgment pretending to be process
“The bigger risk isn’t the cost of these engagements. It’s the dependency they can create. Spending a few hundred thousand dollars to get something into production isn’t the issue,” Greis said. “Ending up with a system that only the vendor can operate, extend, or even fully understand is where things start to break down.”
The problem with some of these consulting arrangements is not that they hide IT deficiencies as much as they enable AI shortcuts.
Enterprises paying FDE teams “do not undermine the ROI case for agentic AI. They undermine the lazy version of the ROI case. That distinction matters,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “For the past two years, too much of the enterprise AI narrative has been sold as a tidy labor-reduction story. Buy the model. Automate the work. Reduce the people. Capture the savings. It is neat, board-friendly, and deeply incomplete. Large enterprises are not collections of clean tasks waiting to be automated. They are collections of exceptions, legacy systems, fragile integrations, access controls, undocumented workarounds, compliance obligations, and human judgement pretending to be process. Forward deployed engineers are the invoice for making AI real. That is not transformation. That is dependency with better stationery.”
Another FDE concern is the inevitable conflict of interest that can exist where the AI vendor that is being paid to fix the complexity is also the vendor that created much of that complexity in its model.
Carmi Levy, an independent technology analyst, said the business case can undermine enterprise objectives. “If AI agents are supposed to autonomously create, deploy, and manage super-capable workflows at all levels of the organization, their very capability threatens the future viability of vendors who have long attached lucrative support contracts to those very same deployments. If the FDE is going to be engaged to work alongside customers to make their AI agents come alive, where is the incentive for AI vendors to build agentic systems that are so capable that they don’t require ongoing support? The FDE business model influences up-front model design, and it’s entirely possible that AI platforms are being deliberately designed to require persistent FDE support.”
During Q1 2026, the exploit kits leveraged by threat actors to target user systems expanded once again, incorporating new exploits for the Microsoft Office platform, as well as Windows and Linux operating systems.
In this report, we dive into the statistics on published vulnerabilities and exploits, as well as the known vulnerabilities leveraged by popular C2 frameworks throughout Q1 2026.
Statistics on registered vulnerabilities
This section provides statistical data on registered vulnerabilities. The data is sourced from cve.org.
We examine the number of registered CVEs for each month starting from January 2022. The total volume of vulnerabilities continues rising and, according to current reports, the use of AI agents for discovering security issues is expected to further reinforce this upward trend.
Total published vulnerabilities per month from 2022 through 2026
Next, we analyze the number of new critical vulnerabilities (CVSS > 8.9) over the same period.
Total critical vulnerabilities published per month from 2022 through 2026
The graph indicates that while the volume of critical vulnerabilities slightly decreased compared to previous years, an upward trend remained clearly visible. At present, we attribute this to the fact that the end of last year was marked by the disclosure of several severe vulnerabilities in web frameworks. The current growth is driven by high-profile issues like React2Shell, the release of exploit frameworks for mobile platforms, and the uncovering of secondary vulnerabilities during the remediation of previously discovered ones. We will be able to test this hypothesis in the next quarter; if correct, the second quarter will show a significant decline, similar to the pattern observed in the previous year.
Exploitation statistics
This section presents statistics on vulnerability exploitation for Q1 2026. The data draws on open sources and our telemetry.
Windows and Linux vulnerability exploitation
In Q1 2026, threat actor toolsets were updated with exploits for new, recently registered vulnerabilities. However, we first examine the list of veteran vulnerabilities that consistently account for the largest share of detections:
CVE-2018-0802: a remote code execution (RCE) vulnerability in the Equation Editor component
CVE-2017-11882: another RCE vulnerability also affecting Equation Editor
CVE-2017-0199: a vulnerability in Microsoft Office and WordPad that allows an attacker to gain control over the system
CVE-2023-38831: a vulnerability resulting from the improper handling of objects contained within an archive
CVE-2025-6218: a vulnerability allowing the specification of relative paths to extract files into arbitrary directories, potentially leading to malicious command execution
CVE-2025-8088: a directory traversal bypass vulnerability during file extraction utilizing NTFS Streams
Among the newcomers, we have observed exploits targeting the Microsoft Office platform and Windows OS components. Notably, these new vulnerabilities exploit logic flaws arising from the interaction between multiple systems, making them technically difficult to isolate within a specific file or library. A list of these vulnerabilities is provided below:
CVE-2026-21509 and CVE-2026-21514: security feature bypass vulnerabilities: despite Protected View being enabled, a specially crafted file can still execute malicious code without the user’s knowledge. Malicious commands are executed on the victim’s system with the privileges of the user who opened the file.
CVE-2026-21513: a vulnerability in the Internet Explorer MSHTML engine, which is used to open websites and render HTML markup. The vulnerability involves bypassing rules that restrict the execution of files from untrusted network sources. Interestingly, the data provider for this vulnerability was an LNK file.
These three vulnerabilities were utilized together in a single chain during attacks on Windows-based user systems. While this combination is noteworthy, we believe the widespread use of the entire chain as a unified exploit will likely decline due to its instability. We anticipate that these vulnerabilities will eventually be applied individually as initial entry vectors in phishing campaigns.
Below is the trend of exploit detections on user Windows systems starting from Q1 2025.
Dynamics of the number of Windows users encountering exploits, Q1 2025 – Q1 2026. The number of users who encountered exploits in Q1 2025 is taken as 100%
The vulnerabilities listed here can be leveraged to gain initial access to a vulnerable system and for privilege escalation. This underscores the critical importance of timely software updates.
On Linux devices, exploits for the following vulnerabilities were detected most frequently:
CVE-2022-0847: a vulnerability known as Dirty Pipe, which enables privilege escalation and the hijacking of running applications
CVE-2019-13272: a vulnerability caused by improper handling of privilege inheritance, which can be exploited to achieve privilege escalation
CVE-2021-22555: a heap out-of-bounds write vulnerability in the Netfilter kernel subsystem
CVE-2023-32233: a vulnerability in the Netfilter subsystem that allows for Use-After-Free conditions and privilege escalation through the improper processing of network requests
Dynamics of the number of Linux users encountering exploits, Q1 2025 – Q1 2026. The number of users who encountered exploits in Q1 2025 is taken as 100%
In the first quarter of 2026, we observed a decrease in the number of detected exploits; however, the detection rates are on the rise relative to the same period last year. For the Linux operating system, the installation of security patches remains critical.
Most common published exploits
The distribution of published exploits by software type in Q1 2026 features an updated set of categories; once again, we see exploits targeting operating systems and Microsoft Office suites.
Distribution of published exploits by platform, Q1 2026
Vulnerability exploitation in APT attacks
We analyzed which vulnerabilities were utilized in APT attacks during Q1 2026. The ranking provided below includes data based on our telemetry, research, and open sources.
TOP 10 vulnerabilities exploited in APT attacks, Q1 2026
In Q1 2026, threat actors continued to utilize high-profile vulnerabilities registered in the previous year for APT attacks. The hypothesis we previously proposed has been confirmed: security flaws affecting web applications remain heavily exploited in real-world attacks. However, we are also observing a partial refresh of attacker toolsets. Specifically, during the first quarter of the year, APT campaigns leveraged recently discovered vulnerabilities in Microsoft Office products, edge networking device software, and remote access management systems. Although the most recent vulnerabilities are being exploited most heavily, their general characteristics continue to reinforce established trends regarding the categories of vulnerable software. Consequently, we strongly recommend applying the security patches provided by vendors.
C2 frameworks
In this section, we examine the most popular C2 frameworks used by threat actors and analyze the vulnerabilities targeted by the exploits that interacted with C2 agents in APT attacks.
The chart below shows the frequency of known C2 framework usage in attacks against users during Q1 2026, according to open sources.
TOP 10 C2 frameworks used by APTs to compromise user systems, Q1 2026
Metasploit has returned to the top of the list of the most common C2 frameworks, displacing Sliver, which now shares the second position with Havoc. These are followed by Covenant and Mythic, the latter of which previously saw greater popularity. After studying open sources and analyzing samples of malicious C2 agents that contained exploits, we determined that the following vulnerabilities were utilized in APT attacks involving the C2 frameworks mentioned above:
CVE-2023-46604: an insecure deserialization vulnerability allowing for arbitrary code execution within the server process context if the Apache ActiveMQ service is running
CVE-2024-12356 and CVE-2026-1731: command injection vulnerabilities in BeyondTrust software that allow an attacker to send malicious commands even without system authentication
CVE-2023-36884: a vulnerability in the Windows Search component that enables command execution on the system, bypassing security mechanisms built into Microsoft Office applications
CVE-2025-53770: an insecure deserialization vulnerability in Microsoft SharePoint that allows for unauthenticated command execution on the server
CVE-2025-8088 and CVE-2025-6218: similar directory traversal vulnerabilities that allow files to be extracted from an archive to a predefined path, potentially without the archiving utility displaying any alerts to the user
The nature of the described vulnerabilities indicates that they were exploited to gain initial access to the system. Notably, the majority of these security issues are aimed at bypassing authentication mechanisms. This is likely due to the fact that C2 agents are being detected effectively, prompting threat actors to reduce the probability of discovery by utilizing bypass exploits.
Notable vulnerabilities
This section highlights the most significant vulnerabilities published in Q1 2026 that have publicly available descriptions.
A Desktop Window Manager (DWM) vulnerability
At the core of this vulnerability is a Type Confusion flaw. By attempting to access a resource within the Desktop Window Manager subsystem, an attacker can achieve privilege escalation. A necessary condition for exploiting this issue is existing authorization on the system.
It is worth noting that the DWM subsystem has been under close scrutiny by threat actors for quite some time. Historically, the primary attack vector involves interacting with the NtDComposition* function set.
RegPwn (CVE-2026-21533): a system settings access control vulnerability
CVE-2026-21533 is essentially a logic vulnerability that enables privilege escalation. It stems from the improper handling of privileges within Remote Desktop Services (RDS) components. By modifying service parameters in the registry and replacing the configuration with a custom key, an attacker can elevate privileges to the SYSTEM level. This vulnerability is likely to remain a fixture in threat actor toolsets as a method for establishing persistence and gaining high-level privileges.
CVE-2026-21514: a Microsoft Office vulnerability
This vulnerability was discovered in the wild during attacks on user systems. Notably, an LNK file is used to initiate the exploitation process. CVE-2026-21514 is also a logic issue that allows for bypassing OLE technology restrictions on malicious code execution and the transmission of NetNTLM authentication requests when processing untrusted input.
Clawdbot (CVE-2026-25253): an OpenClaw vulnerability
This vulnerability in the AI agent leaks credentials (authentication tokens) when queried via the WebSocket protocol. It can lead to the compromise of the infrastructure where the agent is installed: researchers have confirmed the ability to access local system data and execute commands with elevated privileges. The danger of CVE-2026-25253 is further compounded by the fact that its exploitation has generated numerous attack scenarios, including the use of prompt injections and ClickFix techniques to install stealers on vulnerable systems.
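A test against such an agent might look like the sketch below; the gateway URL, port, and message format are illustrative assumptions, not values taken from the advisory. A patched agent should refuse the request outright rather than answer an unauthenticated socket.

```python
import asyncio
import json
import websockets  # pip install websockets

async def probe(url: str = "ws://127.0.0.1:18789/ws") -> None:
    async with websockets.connect(url) as ws:
        # Ask the agent for its configuration without presenting any token
        await ws.send(json.dumps({"type": "config.get"}))
        reply = json.loads(await ws.recv())
        # A vulnerable agent may echo credential-like fields back over
        # the unauthenticated socket
        leaked = [k for k in reply if "token" in k.lower() or "key" in k.lower()]
        if leaked:
            print("credential-like fields returned without auth:", leaked)

asyncio.run(probe())
```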
CVE-2026-34070: LangChain framework vulnerability
LangChain is an open-source framework designed for building applications powered by large language models (LLMs). A directory traversal vulnerability allowed attackers to access arbitrary files within the infrastructure where the framework was deployed. The core of CVE-2026-34070 lies in the fact that certain functions within langchain_core/prompts/loading.py handled configuration files insecurely. This could potentially lead to the processing of files containing malicious data, which could be leveraged to execute commands and expose critical system information or other sensitive files.
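The flaw class is easy to illustrate. The snippet below is not LangChain's actual loader code but a minimal reconstruction of the pattern: a loader that joins caller-supplied template names onto a base directory, shown here with the containment check the vulnerable code reportedly lacked.

```python
from pathlib import Path

def load_prompt_config(base_dir: str, template_name: str) -> str:
    base = Path(base_dir).resolve()
    candidate = (base / template_name).resolve()
    # Without this check, a name like "../../../etc/passwd" reads
    # arbitrary files from the host, as CVE-2026-34070 describes
    if not candidate.is_relative_to(base):
        raise ValueError(f"template escapes base directory: {template_name}")
    return candidate.read_text()
```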
CVE-2026-22812: an OpenCode vulnerability
CVE-2026-22812 is another vulnerability identified in AI-assisted coding software. By default, the OpenCode agent provided local access for launching authorized applications via an HTTP server that did not require authentication. Consequently, attackers could execute malicious commands on a vulnerable device with the privileges of the current user.
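Any local process could reach such an endpoint, which is what makes the exposure class dangerous. Below is a minimal local check sketch; the port and route are assumptions for illustration, not documented OpenCode values.

```python
import urllib.request

def check_local_agent(port: int = 4096) -> None:
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/", timeout=2) as resp:
            # A 2xx answer with no credentials supplied is the
            # unauthenticated exposure pattern CVE-2026-22812 describes
            print(f"port {port} answered HTTP {resp.status} without authentication")
    except OSError:
        print(f"no unauthenticated listener on port {port}")

check_local_agent()
```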
Conclusion and advice
Vulnerability registration continued to gain momentum in Q1 2026, a trend driven by the widespread adoption of AI tools designed to identify security flaws across various software types. This trajectory is likely to produce not only a higher volume of registered vulnerabilities but also an increase in exploit-driven attacks, reinforcing the critical need for timely security patch deployment. Organizations should also prioritize vulnerability management and implement effective defensive technologies to mitigate the risks associated with potential exploitation.
To ensure the rapid detection of threats involving exploit utilization and to prevent their escalation, it is essential to deploy a reliable security solution. Key features of such a tool include continuous infrastructure monitoring, proactive protection, and vulnerability prioritization based on real-world relevance. These mechanisms are integrated into Kaspersky Next, which also provides endpoint security and protection against cyberattacks of any complexity.
Google Chrome has been quietly downloading a 4GB AI model onto users’ devices without asking first.
Security researcher Alexander Hanff, aka ThatPrivacyGuy, reports that Chrome has been silently installing Gemini Nano, Google’s on-device AI model, as a file called weights.bin stored in the OptGuideOnDeviceModel directory within users’ Chrome profiles. This 4GB download happens automatically when Chrome determines your device meets the hardware requirements. It does not ask for consent, and sends no notification—not even one of those annoying cookie banners you’ve learned to dismiss without reading.
The Gemini Nano model powers features like “Help me write” text composition assistance, on-device scam detection, and a Summarizer API that websites can call directly. These features are enabled by default in some recent Chrome versions. And here’s the kicker: if you discover the file and delete it, Chrome simply downloads it again.
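If you want to see whether your own machine already has the model, a small sketch along these lines can locate it. The Chrome profile root varies by OS; the Linux default below is an assumption to adjust for your platform.

```python
from pathlib import Path

def find_gemini_nano(profile_root: Path = Path.home() / ".config/google-chrome") -> None:
    # Search the profile tree for the directory name Hanff identified
    for hit in profile_root.rglob("OptGuideOnDeviceModel"):
        size_gb = sum(f.stat().st_size for f in hit.rglob("*") if f.is_file()) / 1e9
        print(f"found {hit} ({size_gb:.1f} GB on disk)")

find_gemini_nano()
```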
Why this matters
Let’s start with the obvious problem: a 4GB download isn’t trivial for everyone. If you’re lucky enough to have unlimited fiber internet, you might not notice. But for users on metered connections, mobile hotspots, or in developing countries where data is expensive, Google just cost them real money without permission. For rural users or those with bandwidth caps, this kind of silent transfer can blow through monthly limits in minutes.
Hanff focuses on the environmental angle. He calculated that if this model were pushed to just 1 billion Chrome users (roughly 30% of Chrome’s user base), the distribution alone would consume 240 gigawatt-hours of energy and generate 60,000 tons of CO2 equivalent. That’s not including actually using the model, just the downloads.
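For readers who want to sanity-check those figures, the per-download numbers fall out directly from the totals quoted above:

```python
# Reproducing the back-of-the-envelope arithmetic from Hanff's estimate
users = 1_000_000_000      # assumed rollout population (per the article)
energy_gwh = 240           # distribution energy, per Hanff
co2_tons = 60_000          # CO2-equivalent, per Hanff

print(f"{energy_gwh * 1e9 / users:,.0f} Wh per 4 GB download")   # 240 Wh
print(f"{co2_tons * 1e6 / users:,.0f} g CO2e per download")      # 60 g
```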
But to us, the most troubling aspect is the broader pattern this represents. Just a few weeks ago, we reported another unsolicited AI invasion of our personal computers, also discovered by Hanff: Anthropic’s Claude Desktop app silently installed browser integration files across multiple Chromium browsers, including five browsers he didn’t even have installed. The integration reinstalled itself if removed, and it too happened without any meaningful user disclosure.
Hanff argues that both cases likely violate EU privacy law, specifically the ePrivacy Directive’s rules about storing data on user devices and the GDPR’s requirements around transparency and lawful processing. While these claims haven’t been tested in court, they highlight a fundamental tension: can companies just install whatever they want on your computer as long as they say it’s a feature of an app you installed?
Google might argue that having an AI model on your device provides better privacy than cloud-based alternatives. That’s generally true, but it doesn’t apply here: Chrome’s most prominent AI feature—the “AI Mode” pill in the address bar—doesn’t even use the local model. According to Hanff’s analysis, it routes queries to Google’s cloud servers anyway.
All in all, users see a 4GB local AI model and reasonably assume their data stays private, when in reality, the most visible AI feature sends everything to Google’s servers.
Tech companies need to stop treating silent deployment as acceptable practice. We see no valid excuse for this. Your device is yours. The storage is yours. The bandwidth is yours. And the electricity bill is yours.
What happened to asking for permission? And when I remove something, I want it gone permanently, not reinstalled automatically.
When will the tech giants learn that we don’t want to discover after the fact that our devices have become deployment targets for features we never asked for?
Hackers have abused commercial Claude AI models to help compromise a Mexican water and drainage utility’s IT network and probe systems connected to critical infrastructure. The attackers used Claude as an operational “copilot” to discover industrial systems, build custom tools, and plan attacks against an internal SCADA/IIoT platform managing water and drainage processes. The investigation […]
The post Hackers Weaponize Claude AI in Attacks on Water and Drainage Utilities appeared first on GBHackers.
Anthropic has officially announced a massive strategic partnership with SpaceX to expand its computing capabilities significantly. This collaboration aims to provide the necessary infrastructure to scale up the Claude artificial intelligence ecosystem. By securing dedicated computing power, Anthropic is immediately increasing usage limits for its dedicated customers and laying the groundwork for unprecedented future technological […]
The post Claude and SpaceX Join Forces to Enhance Large-Scale