
Hypersonic Supply Chain Attacks: One Solution That Didn’t Need to Know the Payload

In 2026, the question for security leaders is not whether a supply chain attack is coming. Every serious organization should assume it is. The question is whether their defense architecture can stop a payload it has never seen before. It’s a question that takes on even more critical implications at a time when trusted agentic automation increasingly becomes the norm.

In three weeks this spring, three threat actors each ran a tier-1 supply chain attack against widely deployed software: LiteLLM, a core AI infrastructure package; Axios, the most downloaded HTTP client in the JavaScript ecosystem; and CPU-Z, a trusted system diagnostic tool. Different vectors, different actors, different techniques. SentinelOne® stopped all three on the same day each attack launched, with no prior knowledge of any payload.

The more important story is the how. Each attack arrived as a zero-day at the moment of execution. Each exploited a trusted delivery channel: an AI coding agent running with unrestricted permissions, a phantom dependency staged eighteen hours before detonation, a properly signed binary from an official vendor domain. No signature existed for any of them. No IOA matched.

SentinelOne stopped all three. That outcome is a direct answer to the question every security leader now faces: What does your defense do when the attack arrives through a channel you explicitly trust, carrying a payload you have never seen before?

The AI Arms Race in Security is Underway

Adversaries are no longer running manual campaigns at human speed. In September 2025, Anthropic disclosed a Chinese state-sponsored group that jailbroke an AI coding assistant and ran a full espionage campaign against approximately 30 organizations. The AI handled 80–90% of tactical operations autonomously (reconnaissance, vulnerability discovery, exploit development, credential harvesting, lateral movement, exfiltration) with minimal human direction. Anthropic noted only 4–6 human decision points per campaign. The attack achieved limited success across those targets, but the trajectory is clear: AI is compressing the human bottleneck in offensive operations. Security programs designed around manual-speed adversaries are now calibrated to a threat that moves faster than they do.

The LiteLLM attack is the clearest recent example of what this looks like inside an AI development workflow. On March 24, 2026, threat actor TeamPCP compromised the LiteLLM Python package by obtaining PyPI credentials through a prior supply chain compromise of Trivy, a widely used open-source security scanner. Two malicious versions (1.82.7 and 1.82.8) were published. Any system that installed those versions during the exposure window executed the embedded credential theft payload automatically. In one confirmed detection, an AI coding agent running with unrestricted permissions (claude --dangerously-skip-permissions) auto-updated to the infected version without human review: no approval, no alert, no visible action before the payload ran. SentinelOne detected and blocked the malicious Python execution on the same day across multiple environments. Most organizations running AI development workflows didn’t know they were exposed until after the fact. The gap where human review processes don’t reach is wide, and it grows with every AI agent added to a pipeline.

Security programs were built for a different adversary. Vulnerability management, triage queues, patch cadences: all of it assumes an attacker who moves at a pace where human response can still close the window. This year’s SentinelOne Annual Threat Report documented what happens when that assumption breaks: adversaries are shifting left, embedding malicious logic in the build process before software ever reaches production. Likewise, the Verizon 2025 Data Breach Investigations Report found that edge device vulnerabilities are now being mass-exploited at or before the day of CVE publication, while organizations take a median of 32 days to patch them. The old model worked when it was designed. Attackers just weren’t running AI yet.

Three Attacks, One Common Failure Mode

Each attack ran through the same gap. Authorization was treated as a sufficient security boundary, and when authorization is automated, that assumption has no floor.

An AI agent with install permissions doesn’t stop to ask whether a package looks right. It installs. Trusted source, valid credentials, done. Supply chain attacks have always exploited trusted delivery channels, but a human at the keyboard introduces at least one friction point: Someone might notice something off, slow down, ask a question. Agents don’t do that. They execute at the speed of the next API call. When you give an agent install permissions, you’ve extended your trust model to cover everything it will ever run. Authorized agents execute exactly what their permissions allow. That’s the design. Treating permission as a proxy for safety is what turns a compromised supply chain hypersonic.

LiteLLM was compromised via credentials stolen through Trivy, a security scanner. The Axios attacker bypassed every npm security control the project had in place by exploiting a legacy access token the maintainers had forgotten to revoke. The CPUID attackers went after the vendor’s distribution infrastructure directly, so anyone who downloaded from the official website got a properly signed binary with a payload inside. In all three cases, the identity was legitimate. The intent wasn’t.

SentinelOne’s Annual Threat Report named the failure precisely: “The identity is verified, but the intent has been subverted, rendering traditional access controls ineffective against the resulting supply chain contamination.” Signature libraries, IOA rule sets, reputation lookups: All of them check authorization. None check intent. These attacks were designed to exploit exactly that. When the authorization model runs automatically, so does the exposure.

What Actually Stopped Them

In each incident, SentinelOne’s on-device behavioral AI flagged the execution pattern, not a known signature or hash for that specific attack.

The LiteLLM detection flagged a Python interpreter executing Base64-decoded code in a spawned subprocess. SentinelOne killed the process preemptively, terminating 424 related events in under 44 seconds, before any human was in a position to observe it. The Axios detection, via the Lunar behavioral engine, caught PowerShell executing under a renamed binary from a non-standard path. The engine flagged the technique regardless of what the payload contained. The first infection occurred 89 seconds after the malicious package went live; the behavioral detection fired on the day of publication. The CPU-Z detection flagged cpuz_x64.exe building an anomalous process chain: spawning PowerShell, which spawned csc.exe, which spawned cvtres.exe. CPU-Z does not do that. The platform terminated the execution chain mid-attack during a 19-hour active distribution window.
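The common thread across the three detections is that each keyed on a parent-child process relationship the legitimate software never produces. A toy version of that one idea can be written as a per-parent allowlist of expected children; the allowlist entries below are illustrative assumptions, not SentinelOne's actual behavioral model, which scores many more signals than process lineage:

```python
# Toy behavioral check: flag any child process a given parent is not
# expected to spawn. Baseline entries are illustrative assumptions.

EXPECTED_CHILDREN = {
    "cpuz_x64.exe": set(),             # a diagnostic tool spawns nothing
    "powershell.exe": {"conhost.exe"}  # example baseline, not exhaustive
}

def chain_anomalies(chain: list[str]) -> list[tuple[str, str]]:
    """Return (parent, child) pairs that deviate from the baseline."""
    anomalies = []
    for parent, child in zip(chain, chain[1:]):
        allowed = EXPECTED_CHILDREN.get(parent.lower())
        if allowed is not None and child.lower() not in allowed:
            anomalies.append((parent, child))
    return anomalies
```

Run against the observed CPU-Z chain (cpuz_x64.exe → powershell.exe → csc.exe → cvtres.exe), even this crude baseline flags the very first link, which is why the behavior fires regardless of what the payload contains.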

This is the operational output of Autonomous Security Intelligence (ASI), the intelligence fabric built into the Singularity™ Platform. ASI runs on-device at the edge as part of the core architecture. It is already running when the attack starts, killing the process before the threat can escalate.

Where customers had SentinelOne fully deployed with the right policies enabled, they were covered. Where they did not, they were exposed, and with average ransomware recovery costs exceeding $4M per incident, that exposure has a real price. If you are not certain your deployment matches the configuration that stopped these three attacks, that certainty is worth getting.

AI to Fight AI

This is the product reality behind the thesis SentinelOne brought to RSAC: AI to fight AI. A machine-speed adversary requires a machine-speed defense. That is an architectural requirement, not a positioning statement. ASI monitors behavioral patterns at the point of execution and kills the process when something deviates, at machine speed, without waiting for a human to write a query or approve a kill.

According to an IDC study, organizations using SentinelOne’s AI platform identify threats 63% faster and remediate 55% faster than legacy solutions, neutralizing 99% of threats without a single manual step. For organizations in regulated industries (healthcare, financial services, manufacturing, critical infrastructure), the stakes compound beyond breach cost. An exposure window that stays open through manual investigation is a potential regulatory notification event, an audit finding, and a conversation the CISO has with the board under circumstances no one wants. The difference between a stopped attack and an active breach is whether the architecture acts before the attacker establishes persistence. By the time a human analyst approves the kill, redundant persistence mechanisms may already be installed. The CPU-Z attack deployed three of them specifically because partial cleanup leaves the payload operational.

Human-driven workflows, manual validation, and legacy tooling cannot keep pace with that attack cadence. When defense relies on investigation before action, the advantage shifts to the adversary. The gap is in the architecture. You cannot tune your way out of it.

Conclusion | The Only Question That Matters

SentinelOne’s latest Annual Threat Report documented the pattern these three attacks confirm: Adversaries are “shifting left” by integrating malicious logic into the build process itself, compromising software before it reaches production. It is the current operating model of advanced threat actors, and it is accelerating.

Three attacks. Three detections. Three outcomes, all in a matter of weeks. The architecture that survived them is real-time, AI-native, and built into the edge.

The question every security leader should be able to answer: Could your current solution have stopped LiteLLM, Axios, and CPU-Z autonomously, on the day of each attack, with no prior knowledge of any payload?

If the answer depends on a signature update, a cloud verdict, a manual investigation step, or a policy that wasn’t enabled, that is your answer.

Read the full technical breakdown of each incident:

Third-Party Trademark Disclaimer:

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.

Securing the Software Supply Chain: How SentinelOne’s AI EDR Autonomously Blocked the CPU-Z Watering Hole Cyber Attack

On April 9, 2026, cpuid.com was actively serving malware through its own official download button. Threat actors had compromised the CPUID domain at the API level and were silently redirecting legitimate download requests to attacker-controlled infrastructure. The attack ran for approximately 19 hours. Users who navigated directly to the official site received a legitimate, properly signed binary with a malicious payload bundled inside it.

That morning, SentinelOne’s behavioral detection flagged an anomaly inside cpuz_x64.exe. The binary was genuine. The digital signature was valid. The download had arrived from the vendor’s own infrastructure. The process chain cpuz_x64.exe began constructing was the tell: it spawned PowerShell, which spawned csc.exe, which spawned cvtres.exe. CPU-Z does not do that.

CPU-Z, HWMonitor, HWMonitor Pro, and PerfMonitor are staples in IT toolkits. The users who downloaded them followed every instruction they’d been given. The trust chain broke above them. The next attack will work the same way.

SentinelOne’s Annual Threat Report identifies exactly this pattern as a systemic shift: “This [shift] extends deeply into the software supply chain, where the identity of a trusted developer becomes the vector of attack.” In late 2025, we observed the GhostAction campaign, where a compromised GitHub maintainer account pushed malicious workflows to extract secrets. A concurrent phishing attack against a maintainer of popular NPM packages deployed malicious code capable of intercepting cryptocurrency transactions. In each case, the commit logs and push events appeared legitimate because they originated from accounts with valid write access. The identity was verified. The intent had been subverted. The CPUID incident extends this pattern to software distribution itself: the supplier’s download infrastructure became the delivery channel.

What the Agent Saw

The SentinelOne agent triggered the alert “Penetration framework or shellcode was detected” within the first seconds of execution. The detection came from what the process was doing, with five specific behavioral indicators converging:

  • Anomalous API resolution: The process located system functions through non-standard discovery methods, bypassing the OS loader entirely.
  • Reflective code loading: Executable code was running in memory regions with no corresponding file on disk.
  • Suspicious memory allocation: Read-Write-Execute (RWX) memory permissions were requested, a staging pattern for malicious payloads.
  • Process injection patterns: Execution flow consistent with code being redirected into a secondary process to mask its origin.
  • Heuristic shellcode signatures: Sequential operations characteristic of automated exploitation toolkits preparing an environment for command execution.
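Convergence is the operative idea in that list: no single indicator is conclusive on its own, but several firing in the same process within seconds is. A simplified sketch of indicator convergence follows; the weights and threshold are invented for illustration and are not SentinelOne's scoring model:

```python
# Simplified convergence scoring: weak indicators observed on the same
# process add up to a verdict. Weights and threshold are illustrative.

INDICATOR_WEIGHTS = {
    "anomalous_api_resolution": 2,
    "reflective_code_loading": 3,
    "rwx_memory_allocation": 2,
    "process_injection": 3,
    "shellcode_heuristic": 2,
}
ALERT_THRESHOLD = 6

def verdict(observed: set[str]) -> bool:
    """True when the combined weight of observed indicators crosses the bar."""
    score = sum(INDICATOR_WEIGHTS.get(name, 0) for name in observed)
    return score >= ALERT_THRESHOLD
```

In this toy model a lone RWX allocation (a pattern some legitimate JIT compilers also produce) stays below the bar, while reflective loading plus injection crosses it, which is the shape of the CPU-Z detection described above.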

The agent autonomously terminated and quarantined the involved processes before the attack advanced further. The malicious CRYPTBASE.dll, placed in the same directory as the legitimate CPU-Z binary, was loaded by Windows before the real system DLL could be reached, and it never completed its job.

Alert Page

The agent was watching for what the software was trying to do. Behavioral detection is the layer that holds when authorization cannot be trusted, because the behavior reveals intent regardless of what signed the package.

Behavioral Indicator
Process Tree
Event Table

What Was Actually Inside

The trojanized packages were designed to leave no trace. A reflective PE loader decrypted and injected a second-stage DLL using XXTEA encryption and DEFLATE decompression: no disk writes, no file artifacts. Three redundant persistence mechanisms were then installed: a registry Run key, a 68-minute scheduled task with a 20-year duration, and MSBuild project files in AppData\Local engineered to survive reboots and partial remediation.
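The scheduled-task foothold is detectable from its timing parameters alone: a short repetition interval paired with a multi-decade lifetime is not how legitimate maintenance tasks are configured. A hedged sketch of that heuristic, with thresholds of my own choosing rather than values from the incident analysis:

```python
from datetime import timedelta

# Heuristic: a task that repeats frequently but is scheduled to live for
# decades looks like persistence, not maintenance. Thresholds illustrative.

def suspicious_task(interval: timedelta, duration: timedelta) -> bool:
    """Flag tasks with a short repeat interval and an implausibly long life."""
    return interval <= timedelta(hours=2) and duration >= timedelta(days=365 * 10)
```

The STX RAT task (68-minute interval, 20-year duration) trips this check; a nightly backup job scheduled for a month does not.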

The 2026 Annual Threat Report describes this persistence design as “masquerading as maintenance”: adversaries blend into the environment by mimicking legitimate system updates and background processes. To a busy defender, a scheduled task with a generic name and a timed execution interval appears entirely routine until you examine what it is executing. STX RAT’s 68-minute task with a 20-year duration operates on exactly this logic.

The process chain visible in EDR logs made the intent clear: cpuz_x64.exe spawned powershell.exe, which spawned csc.exe, then cvtres.exe. CPU-Z does not do that.

The final payload, STX RAT, delivered hidden VNC (an attacker-controlled desktop session invisible to the user), keyboard and mouse injection, browser credential theft across Chrome, Firefox, Edge, and Brave, Windows Vault extraction, cryptocurrency wallet access, and a reverse proxy for follow-on payload delivery. C2 communication ran over a custom encrypted protocol using DNS-over-HTTPS to 1.1.1.1 to bypass DNS monitoring.
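The DoH channel bypasses DNS monitoring, but it is still visible at the proxy layer: traffic to a public resolver's /dns-query endpoint from an endpoint fleet stands out. A rough log-filtering sketch, assuming a simple "client_ip method url" record format (adjust the field positions to your actual log schema):

```python
# Rough sketch: flag proxy-log lines showing DNS-over-HTTPS traffic to a
# public resolver. Assumes "client_ip method url" records; field positions
# and the endpoint list are assumptions, not a product feature.

DOH_ENDPOINTS = ("1.1.1.1/dns-query", "cloudflare-dns.com/dns-query")

def doh_clients(log_lines: list[str]) -> set[str]:
    """Return client IPs seen talking to a known DoH endpoint."""
    hits = set()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and any(ep in parts[2] for ep in DOH_ENDPOINTS):
            hits.add(parts[0])
    return hits
```

Endpoints that legitimately use DoH (some browsers do) will also appear, so treat hits as a triage queue rather than a verdict.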

A reflective payload executing entirely in memory, inside a signed process, with no disk writes, compresses the detection window to milliseconds. Autonomous response is the only response fast enough.

The Attacker’s Critical Mistake

Within hours, Kaspersky’s analysis linked the CPUID samples to a March 2026 campaign targeting FileZilla users, and the connection required no advanced forensics: the attacker reused identical C2 infrastructure and deployed the unmodified STX RAT payload, the same one eSentire’s Threat Response Unit had already fingerprinted and published YARA rules for after the FileZilla campaign.

Those rules detected the CPUID variant without modification.

The actor invested time compromising CPUID’s download API and did nothing to retool after being publicly fingerprinted. The C2 domain, the backend server, the payload: all identical across campaigns. The same backend server had been operating since at least July 2025. Per Kaspersky’s own assessment, the C2 reuse was the gravest mistake of the operation. A more disciplined actor burns infrastructure between campaigns. This one did not, and defenders had working detection before most victims knew an attack had occurred.

What the Attack Was Really For

The 150+ confirmed victims span retail, manufacturing, consulting, telecommunications, and agriculture. The count is almost certainly low: CPUID’s tools have tens of millions of users globally, and the portable ZIP variant of CPU-Z commonly runs on production systems in environments that block installer-based software.

Victim count is secondary to victim profile. CPU-Z users skew toward IT professionals: system administrators, developers, security engineers, the people with domain admin rights, production access, and infrastructure keys. One compromised sysadmin carries a fundamentally different blast radius than one compromised user.

The operational pattern points to an initial access broker. The goal was to sell persistent, hidden access. Someone else would do the extracting.

For organizations where an infection occurred, two questions need answers. What did the attacker do during the window they had access, especially if that machine belonged to a privileged user? And what happens over the next 60–90 days, when whoever purchased that access decides to activate it? Ransomware affiliates who buy IAB access typically move within that window. Cleaning the machine closes one exposure. Monitoring for lateral movement, credential reuse, and unusual authentication in the weeks following remediation closes the other.

What Defenders Should Do Now

For practitioners

The indicators are specific and actionable.

  • Check your fleet for CRYPTBASE.dll in any directory other than C:\Windows\System32.
  • Look for the process chain cpuz_x64.exe or any CPUID application spawning PowerShell.
  • Block supp0v3[.]com and 147.45.178.61 at DNS and firewall layers.
  • At the network layer, watch for DNS-over-HTTPS queries to 1.1.1.1/dns-query resolving welcome[.]supp0v3[.]com; STX RAT specifically uses DoH to bypass DNS monitoring, and any endpoint generating this pattern is a high-confidence indicator.

If you find an infected machine, remediate all four persistence mechanisms explicitly: the registry Run key, the scheduled task, any MSBuild .proj files in AppData\Local, and PowerShell profile autoruns. The malware installs redundant footholds specifically because partial cleanup leaves it alive.
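The file-based indicators above can be swept with a short script. This sketch only classifies paths against the published file-based IOCs (the hijacked CRYPTBASE.dll and the MSBuild persistence files); it reports findings and deliberately does not remediate, and it makes no attempt at the registry or scheduled-task checks:

```python
from pathlib import PureWindowsPath

# Sketch: classify a file path against the published file-based IOCs.
# Reporting only; remediation of live hosts needs proper tooling.

def ioc_matches(path: str) -> list[str]:
    """Return human-readable findings for a single file path."""
    p = PureWindowsPath(path)
    findings = []
    if p.name.lower() == "cryptbase.dll" and \
            str(p.parent).lower() != r"c:\windows\system32":
        findings.append("CRYPTBASE.dll outside System32 (DLL search-order hijack)")
    if p.suffix.lower() == ".proj" and "appdata\\local" in str(p).lower():
        findings.append("MSBuild project file in AppData\\Local (persistence)")
    return findings
```

Feeding a fleet-wide file inventory through a check like this surfaces hijack copies of CRYPTBASE.dll sitting next to the CPU-Z binary while leaving the legitimate System32 copy unflagged.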

For security leaders

The harder conversation is about supply chain trust. Your users followed every rule they were given. They downloaded from the official website. They trusted a vendor they had used for years. That vendor’s infrastructure failed them. Behavioral detection, security that watches what software does rather than where it came from, is the layer that caught this.

The business case is specific. When an initial access broker sells a foothold obtained this way, the buyer typically activates within 60–90 days. With average ransomware recovery costs exceeding $4 million per incident, even a single privileged endpoint sold through an IAB represents material, quantifiable exposure. The organizations that already had 24/7 autonomous behavioral monitoring in place closed the window before it opened. The ones that did not are still counting.

The adversary’s tooling was unsophisticated. The OPSEC was poor. The C2 reuse was a gift to defenders. And yet 150+ confirmed victims and a 19-hour window during which clean, legitimate software was being replaced by a remote access trojan demonstrate how far attacker leverage has extended into the software supply chain, and how quickly behavioral detection closes the gap when it acts autonomously, before the attack completes its first stage. The attacker’s poor OPSEC saved defenders this time. The structural failure in the trust model (the assumption that software from a trusted source is safe to run) persists regardless of attacker discipline.

The Structural Problem That Remains

SentinelOne’s latest Annual Threat Report documents GhostAction and the NPM package compromise as supply chain identity attacks through code repositories and package managers. CPUID adds a third layer: the vendor’s distribution infrastructure itself. Across all three cases, access controls validated a legitimate identity. The report frames this plainly: “The identity is verified, but the intent has been subverted, rendering traditional access controls ineffective against the resulting supply chain contamination.”

This shift means authorization, the cornerstone of traditional software trust, is no longer a sufficient security boundary. When the distribution channel becomes the failure point, verification has to move from the point of origin to the point of execution.

In the CPUID case, users followed every rule. They downloaded from the official vendor website. That vendor’s download API was the failure point, compromised at the infrastructure level for 19 hours, with no visible indication.

SentinelOne’s Behavioral AI engine detects suspicious and malicious patterns in real time, watching what the software does regardless of where it came from.

SentinelOne customers were protected through autonomous behavioral detection at the point of execution. The structural failure in the trust model (the assumption that software from a trusted source is safe to run) is a gap that better user behavior cannot close. Behavioral detection at machine speed is what closes it.

To understand how the Singularity™ Platform identifies threats across your environment, including those arriving through trusted software channels, request a demo.
