The Good | Courts Sentence Karakurt Ransomware Negotiator & Two DPRK IT Worker Scheme Facilitators
Federal authorities have successfully secured a nearly nine-year prison sentence for Deniss Zolotarjovs, a Latvian national extradited to the U.S. for his critical role in the Karakurt extortion syndicate.
Operating as a specialized “cold case” negotiator, Zolotarjovs (aka Sforza_cesarini) systematically targeted victims who had previously cut off communications with the extortion group to avoid paying the ransom. To coerce payment, he analyzed stolen personal data and information about the target companies to exert intense psychological pressure on victims. In some cases, Zolotarjovs resorted to leveraging sensitive health information, including children’s medical records, to force victims to pay.
Source: Dayton247now
The broader Karakurt operation has extorted an estimated $56 million from dozens of compromised organizations. As the first Karakurt member to face federal prosecution, Zolotarjovs’s sentencing is a hard-won milestone in ongoing efforts to dismantle international cyber-extortion rings.
In a separate victory, U.S. prosecutors sentenced two American nationals to 18 months in prison each for operating extensive laptop farms that actively facilitated North Korean cyber infiltration.
Matthew Knoot and Erick Prince were prosecuted for helping DPRK-based IT workers secure remote employment at almost 70 U.S. companies by exploiting stolen identities. The pair received company-issued laptops and deployed unauthorized remote desktop software, allowing the North Korean workers to seamlessly masquerade as legitimate domestic employees.
The FBI continues to warn about the thousands of North Korean IT workers working to infiltrate U.S. firms to steal intellectual property, implant malware, and siphon funds to the heavily sanctioned regime.
The Bad | PCPJack Worm Evicts TeamPCP, Steals Cloud Credentials at Scale
SentinelLABS researchers this week exposed PCPJack, a sophisticated credential theft framework and cloud worm that targets public infrastructure to harvest sensitive data.
Unlike other known cloud hacktools, the toolset actively hunts, evicts, and systematically deletes artifacts associated with TeamPCP, a threat group responsible for multiple high-profile supply chain intrusions earlier this year.
The multi-stage infection chain begins with a shell script called bootstrap.sh, which establishes persistence and selectively downloads specialized Python modules from an attacker-controlled Amazon S3 bucket. The malware extracts a massive array of sensitive credentials, including cloud access keys, Kubernetes service account tokens, Docker secrets, enterprise productivity application tokens, and cryptocurrency wallets. Unlike typical cloud-focused threat campaigns, PCPJack does not deploy cryptomining payloads on victims.
Beginning of bootstrap.sh, the dropper script
To achieve lateral movement, the framework exploits a number of web vulnerabilities, including severe Next.js and WordPress flaws, while aggressively scanning for poorly secured Docker, Redis, RayML, and MongoDB instances. Stolen data is then encrypted before being exfiltrated via attacker-controlled Telegram channels.
Security teams are advised to strictly enforce multi-factor authentication on service accounts, restrict Kubernetes access scopes, use an enterprise-wide vault, and thoroughly secure all exposed cloud management interfaces.
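As a starting point for that last recommendation, exposed management interfaces can be found with a simple reachability check before an attacker does. The sketch below is a minimal, illustrative audit in Python, assuming the default ports for each service; a real assessment should use a proper scanner and run only against hosts you own.

```python
import socket

# Ports commonly probed by cloud-focused worms like PCPJack
# (Docker API, Redis, MongoDB, Ray dashboard). This port list and
# helper are illustrative, not an exhaustive audit tool.
RISKY_PORTS = {
    2375: "Docker API (unauthenticated)",
    6379: "Redis",
    27017: "MongoDB",
    8265: "Ray dashboard",
}

def find_open_service_ports(host, ports=RISKY_PORTS, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return sorted(open_ports)
```

Any port this returns should either be firewalled off or require authentication before it faces a network segment you do not fully control.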
The Ugly | Palo Alto Warns of Critical Flaw in PAN-OS Enabling Remote Code Execution
Palo Alto Networks customers were issued an urgent warning this week regarding a critical-level, unpatched zero-day vulnerability currently being exploited in the wild.
Tracked as CVE-2026-0300, the buffer overflow flaw directly impacts the PAN-OS User-ID Authentication Portal (aka the Captive Portal), enabling unauthenticated attackers to execute arbitrary code with root privileges using specially-crafted packets.
With a CVSS score of 9.3, the vulnerability presents an immediate risk to enterprise networks. Threat watchdog Shadowserver has so far identified over 5,000 vulnerable firewalls exposed online, primarily concentrated across Asia and North America.
Source: Shadowserver (current as of this writing)
This actively exploited vulnerability adds to the growing pattern of targeting edge infrastructure. PAN-OS has a well-documented history of severe zero-days, and with 90% of Fortune 10 companies and many major U.S. banks depending on it, the exposure is significant. CISA has added the flaw to its Known Exploited Vulnerabilities (KEV) catalog, setting mandatory remediation deadlines for federal civilian agencies.
With a patch not expected until mid-May, Palo Alto is urging administrators to secure affected environments immediately, starting by confirming exposure via the device’s Authentication Portal Settings. To successfully mitigate the threat of remote code execution, security teams can restrict all User-ID Authentication Portal access exclusively to trusted internal IP addresses. If strict network segmentation is impossible, organizations are being advised to disable the Captive Portal service until updates can be safely applied.
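The trusted-IP restriction can also be mirrored in surrounding tooling, for example when triaging portal access logs. A minimal sketch in Python using the standard `ipaddress` module, with placeholder internal ranges that you would swap for your own management networks:

```python
import ipaddress

# Placeholder trusted ranges -- substitute your actual internal
# management networks. Illustrative only; the authoritative control
# is the allow-list on the firewall itself.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def portal_access_allowed(source_ip: str) -> bool:
    """Return True only if source_ip falls inside a trusted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)
```

Running portal log source IPs through a check like this quickly surfaces any access attempts originating outside the ranges you intended to allow.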
The Good | Authorities Dismantle State-Backed Espionage & Cybercrime Rings
This week, authorities secured the extradition of Xu Zewei, an alleged Chinese Ministry of State Security (MSS) contract hacker, from Italy to the U.S. to face federal cyberespionage charges. Operating alongside the Silk Typhoon group, Xu systematically compromised internet-facing systems during a coordinated intelligence-gathering campaign between February 2020 and June 2021. The DoJ says the attackers relentlessly targeted COVID-19 research organizations, stealing critical vaccine and treatment data by exploiting Microsoft Exchange Server zero-day vulnerabilities and deploying malicious web shells for deep network access. Xu is set to appear in federal court, where he faces multiple counts of computer intrusion and conspiracy.
Source: Italian Justice System
European law enforcement agencies have dismantled a widespread cryptocurrency investment fraud network responsible for an estimated €50 million in global losses. Operating much like a legitimate enterprise, the syndicate employed up to 450 individuals across several specialized call centers in Albania. The operators lured victims through online advertisements, then assigned “retention agents” who wore targets down with intense pressure and used remote access software to manipulate deposits. Illicit funds were then channeled into international money-laundering pipelines to evade authorities worldwide.
Evan Tangeman received a nearly six-year prison sentence for laundering $230 million stolen in a cryptocurrency heist that took place between October 2023 and May 2025. Based on court documents, attackers initially breached a Washington D.C. victim by impersonating Gemini customer support, leveraging remote desktop software to steal thousands of Bitcoin after bypassing two-factor authentication. Tangeman systematically obfuscated the stolen proceeds through a network of cryptocurrency mixers, exchanges, and virtual private networks. The ill-gotten funds financed the criminal organization’s lavish lifestyle until his eventual arrest.
The Bad | New Report Shows Scammers Stole $2.1 Billion from Social Media Users
A new warning from the U.S. Federal Trade Commission (FTC) highlights a sharp surge in social media fraud, with reported consumer losses exceeding $2.1 billion in 2025. Representing an eightfold increase since 2020, malicious actors actively leveraged platforms like Facebook, Instagram, and WhatsApp to exploit nearly 30% of all fraud victims last year. Remarkably, individuals reported losing significantly more money to Facebook-originated schemes than to traditional text and email campaigns combined, establishing the platform as the primary threat vector for almost every age demographic.
Who gets scammed more often, younger people or older adults? At the FTC we know scammers target everyone, and FTC Chairman @AFergusonFTC has a message that might surprise you: pic.twitter.com/8kveWbsM0e
Operating with a global reach and minimal overhead, threat actors systematically hijack legitimate user accounts, analyze personal posts to craft highly targeted social engineering lures, and actively purchase deceptive advertisements. These criminal syndicates utilize the exact same marketing tools legitimate businesses employ, filtering potential victims by age, precise interests, and specific shopping habits to maximize the returns.
In direct response to these findings, Meta has already removed more than 159 million scam advertisements and taken down nearly 11 million malicious accounts tied to criminal operations last year. Additionally, the tech giant has introduced advanced anti-scam protections across its product ecosystem, proactively flagging suspicious friend requests, implementing intelligent chat detection systems, and introducing critical screen sharing warnings on WhatsApp to disrupt fraudulent video calls.
To successfully navigate and mitigate social engineering tactics, federal authorities strongly urge users to strictly limit profile visibility, independently verify unfamiliar online vendors, and reject any unsolicited investment advice originating from unknown social media contacts.
The Ugly | Threat Actors Poison SAP-Related npm Packages in Supply Chain Attack
Cybersecurity researchers are tracking a highly sophisticated supply chain attack targeting SAP-related npm packages with credential-stealing malware. Dubbed “Mini Shai-Hulud”, the campaign recently compromised vital packages within SAP’s cloud application development ecosystem, including @cap-js/db-service@2.10.1, @cap-js/postgres@2.2.2, @cap-js/sqlite@2.2.2, and mbt@1.2.48. Threat actors executed the breach by exploiting an npm OIDC trusted publishing configuration gap, allowing them to exchange a token and publish poisoned package versions to the registry.
Source: Aikido
Once installed, the malicious releases deploy a preinstall script acting as a runtime bootstrapper to immediately download and execute a platform-specific Bun binary. The malware then harvests local developer credentials, GitHub and npm tokens, GitHub Actions secrets, cloud secrets from major providers, and passwords across multiple web browsers. To establish persistence, the payload targets AI coding agent configurations by injecting malicious files into Claude Code and Visual Studio Code settings. This ensures automated execution whenever an infected repository is opened. To add to this, the malware deliberately terminates on Russian-locale systems, strongly linking the entire operation to previous TeamPCP threat actors.
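Teams worried about this persistence vector can add a crude tripwire for their agent and editor settings files. The sketch below is illustrative only: it flags unexpected top-level keys in a settings JSON against an allow-list you maintain; the file path and key names in the example are assumptions, not indicators from this campaign.

```python
import json
from pathlib import Path

def find_unexpected_keys(settings_path: Path, allowed_keys: set) -> list:
    """Flag top-level keys in an editor/agent settings JSON that are
    not on an expected allow-list. A crude tripwire, not a scanner --
    the allow-list must be curated per team and per tool."""
    if not settings_path.exists():
        return []
    data = json.loads(settings_path.read_text())
    return sorted(k for k in data if k not in allowed_keys)
```

Run against a known-good baseline, any output warrants a manual diff of the settings file before the next agent session starts.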
The stolen data is securely encrypted using AES-256-GCM and exfiltrated to public GitHub repositories created on the victim’s own account. By leveraging GitHub as their primary command and control (C2) infrastructure, the attackers make tracing and blocking exfiltration exceptionally difficult for security and development teams.
The payload uses stolen tokens to aggressively self-propagate, injecting malicious workflows into newly discovered repositories and further spreading the poisoned packages across environments. Package maintainers have rapidly released updated, safe versions of the affected software to mitigate this expanding threat.
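Until updated versions are confirmed everywhere, teams can sweep lockfiles for the compromised releases named above. A minimal sketch in Python, assuming the npm v2/v3 `package-lock.json` layout with a `packages` map keyed by `node_modules` paths:

```python
import json
from pathlib import Path

# Known-bad versions reported in the Mini Shai-Hulud campaign.
COMPROMISED = {
    ("@cap-js/db-service", "2.10.1"),
    ("@cap-js/postgres", "2.2.2"),
    ("@cap-js/sqlite", "2.2.2"),
    ("mbt", "1.2.48"),
}

def scan_lockfile(lock_path: Path) -> list:
    """Return (name, version) pairs from a package-lock.json that
    match the known-bad list. Assumes the lockfile v2/v3 'packages'
    map; older v1 lockfiles use a different layout."""
    lock = json.loads(Path(lock_path).read_text())
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # "node_modules/@scope/name" -> "@scope/name"; "" is the root package
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        if (name, meta.get("version")) in COMPROMISED:
            hits.append((name, meta["version"]))
    return sorted(hits)
```

Any hit means the environment executed the malicious preinstall script and should be treated as credential-compromised, not just patched.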
The Good | Two Cybercrime Leaders Face Justice for Fraud, Identity Theft & Extortion
Tyler Robert Buchanan, a 24-year-old British national believed to be a leader of the UNC3944 cybercrime group, has pleaded guilty in the U.S. to wire fraud and aggravated identity theft. Prosecutors say Buchanan and four accomplices stole at least $8 million in cryptocurrency by targeting employees at multiple organizations with SMS phishing attacks between 2021 and 2023. Victims were tricked into entering credentials on fake company login pages, allowing attackers to hijack email accounts, conduct SIM swaps, and drain cryptocurrency wallets.
Buchanan arrested in Spain (Source: Spanish National Police Corps)
Arrested in Spain in 2024 and extradited to the U.S. last year, Buchanan now faces up to 22 years in prison at his sentencing this August. UNC3944 (aka 0ktapus, Scattered Spider) has historically been linked to major breaches at MGM Resorts International, Twilio, and Caesars Entertainment.
In a second guilty plea this week, Angelo Martino, a former ransomware negotiator at DigitalMint, has formally admitted to helping the BlackCat ransomware gang extort U.S. companies. Martino secretly shared clients’ confidential negotiation strategies and insurance policy limits with BlackCat operators, enabling them to demand larger ransoms. He also worked directly with other DigitalMint and Sygnia accomplices to launch ransomware attacks against multiple victims in 2023, targeting law firms, school districts, medical facilities, and financial firms. In one case, a victim paid over $25 million to settle the ransom.
Authorities have since seized $10 million in Martino’s assets, including cryptocurrency and luxury vehicles. He also faces up to 20 years in prison when sentenced in July on charges of conspiracy to interfere with interstate commerce by extortion and intentional damage to protected computers.
The Bad | Chinese-Linked Threat Actors Expand Botnets to Disguise Cyberattacks
The U.K.’s National Cyber Security Centre (NCSC-UK) and allied cyber agencies are warning that China-linked actors are increasingly relying on vast proxy networks of hijacked consumer devices to conceal cyberattacks and evade detection. A new joint statement details how the threat actors now route malicious traffic through compromised routers, cameras, recorders, and network-attached storage (NAS) devices instead of using rented infrastructure. This method means attacks are harder to trace since their geographic origins are masked.
Covert network typical setup (Source: NCSC-UK)
Officials say most China-nexus groups are now leveraging constantly shifting covert proxy networks, sometimes shared across multiple threat actors. These networks are mostly made up of Small Office Home Office (SOHO) routers, smart devices, and Internet of Things (IoT) devices. One example is a massive botnet called Raptor Train, which infected more than 260,000 devices in 2024 and was linked by the FBI to the state-backed Flax Typhoon and Integrity Technology Group, sanctioned back in January 2025. Another network, KV Botnet, has been tied to the PRC-backed Volt Typhoon group and targets vulnerable routers that no longer receive security updates. Though KV Botnet was disrupted by authorities in January 2024, Volt Typhoon actors began reviving it as of November that same year.
Authorities warn these botnets undermine traditional IP-blocking defenses because their infrastructure constantly changes. To reduce exposure, organizations are being urged to strengthen edge security by enforcing multi-factor authentication, maintaining updated inventories of internet-facing devices, using dynamic threat intelligence feeds, and adopting zero-trust controls. The advisory outlines the growing concern that everyday internet-connected devices are being weaponized at scale to support stealthy cyber operations targeting governments, telecom providers, defense contractors, and critical infrastructure worldwide.
The Ugly | ShadowBrokers Leak Links to Pre-Stuxnet Sabotage Framework
SentinelLABS has identified a previously undocumented cyber sabotage framework, tracked as “fast16”, with core components dating back to 2005. The operation centers on a kernel driver, fast16.sys, designed to intercept executable files in memory and subtly alter high-precision calculations to corrupt scientific and engineering outputs at scale.
The framework predates Stuxnet by at least five years and even early Flame-era tooling, making it one of the earliest known examples of a modular, Lua-based malware architecture. It was discovered alongside a companion service binary, svcmgmt.exe, which embeds a Lua virtual machine, encrypted bytecode, and system-level modules for propagation, persistence, and coordination across infected systems.
Unlike typical worms of its era, fast16 was engineered for targeted sabotage rather than indiscriminate spread. It selectively identifies compiled executables, particularly those using Intel toolchains, and injects rule-based modifications into floating-point computation routines.
SentinelLABS believes this could have introduced systematic errors into domains such as physics simulations, cryptographic research, and structural engineering models, effectively undermining high-value scientific workloads without obvious system failure. The carrier component also functions as a platform for self-propagating “wormlets” (small wormable payloads), capable of deploying across networks using native Windows 2000/XP services and weak administrative credentials.
Wormlets stored in the carrier’s internal storage
SentinelLABS linked fast16.sys to the infamous ShadowBrokers leak from 2017 via deconfliction signatures used within advanced state-level tooling ecosystems by the NSA. Although full target attribution remains incomplete, analysis of matching code patterns suggests potential alignment with high-precision simulation software used in engineering and defense research.
The fast16 framework offers a rare early glimpse into real-world operations where kernel-level tampering, modular scripting, and precision sabotage logic were already converging. Although fast16 itself was built to run on now-obsolete operating systems, SentinelLABS’ discovery pushes back the accepted timeline on modern tradecraft, showing that well-resourced actors were building long-lived implants that prefigured today’s state-backed cyber programs years earlier than previously thought.
In 2026, the question for security leaders is not whether a supply chain attack is coming. Every serious organization should assume it is. The question is whether their defense architecture can stop a payload it has never seen before. It’s a question that takes on even more critical implications at a time when trusted agentic automation increasingly becomes the norm.
In three weeks this spring, three threat actors each ran a tier-1 supply chain attack against widely deployed software: LiteLLM, a core AI infrastructure package, Axios, the most downloaded HTTP client in the JavaScript ecosystem, and CPU-Z, a trusted system diagnostic tool. Different vectors, different actors, different techniques. SentinelOne® stopped all three on the same day each attack launched, with no prior knowledge of any payload.
The more important story is the how. Each attack arrived as a zero-day at the moment of execution. Each exploited a trusted delivery channel: an AI coding agent running with unrestricted permissions, a phantom dependency staged eighteen hours before detonation, a properly signed binary from an official vendor domain. No signature existed for any of them. No IOA matched.
SentinelOne stopped all three. That outcome is a direct answer to the question every security leader is now running against: What does your defense do when the attack arrives through a channel you explicitly trust, carrying a payload you have never seen before?
The AI Arms Race in Security is Underway
Adversaries are no longer running manual campaigns at human speed. In September 2025, Anthropic disclosed a Chinese state-sponsored group that jailbroke an AI coding assistant and ran a full espionage campaign against approximately 30 organizations. The AI handled 80–90% of tactical operations autonomously (i.e., reconnaissance, vulnerability discovery, exploit development, credential harvesting, lateral movement, exfiltration) with minimal human direction. Anthropic noted only 4–6 human decision points per campaign. The attack achieved limited success across those targets, but the trajectory is clear: AI is compressing the human bottleneck in offensive operations. Security programs designed around manual-speed adversaries are chasing a threat that moves faster than they can recalibrate.
The LiteLLM attack is the clearest recent example of what this looks like inside an AI development workflow. On March 24, 2026, threat actor TeamPCP compromised the LiteLLM Python package by obtaining PyPI credentials through a prior supply chain compromise of Trivy, a widely-used open-source security scanner. Two malicious versions (1.82.7 and 1.82.8) were published. Any system with those versions during the exposure window executed the embedded credential theft payload automatically. In one confirmed detection, an AI coding agent running with unrestricted permissions (claude --dangerously-skip-permissions) auto-updated to the infected version without human review — no approval, no alert, no visible action before the payload ran. SentinelOne detected and blocked the malicious Python execution on the same day across multiple environments. Most organizations running AI development workflows didn’t know they were exposed until after the fact. The gap where human review processes don’t reach is wide, and it grows with every AI agent added to a pipeline.
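A quick hygiene check against incidents like this is to compare installed package versions with a known-bad list. The sketch below uses Python’s standard `importlib.metadata` and the LiteLLM versions reported above; the helper itself is illustrative, not a substitute for lockfile pinning, registry-level controls, or restricting what permissions an agent runs with.

```python
from importlib import metadata

# Versions of litellm reported as malicious in this campaign.
# Extend the map with other advisories as they are published.
BAD_VERSIONS = {"litellm": {"1.82.7", "1.82.8"}}

def installed_bad_packages(bad_map=BAD_VERSIONS):
    """Return names of installed packages whose version appears in
    the known-bad map; packages that aren't installed are skipped."""
    hits = []
    for name, versions in bad_map.items():
        try:
            if metadata.version(name) in versions:
                hits.append(name)
        except metadata.PackageNotFoundError:
            pass  # not installed in this environment
    return hits
```

A non-empty result means the credential-theft payload already ran, so the response is token rotation and incident handling, not merely a downgrade.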
Security programs were built for a different adversary. Vulnerability management, triage queues, patch cadences: all of it assumes an attacker who moves at a pace where human response can still close the window. This year’s SentinelOne Annual Threat Report documented what happens when that assumption breaks: adversaries are shifting left, embedding malicious logic in the build process before software ever reaches production. Likewise, the Verizon 2025 Data Breach Investigations Report found that edge device vulnerabilities are now being mass-exploited at or before the day of CVE publication, while organizations take a median of 32 days to patch them. The old model worked when it was designed. Attackers just weren’t running AI yet.
Three Attacks, One Common Failure Mode
Each attack ran through the same gap. Authorization was treated as a sufficient security boundary, and when authorization is automated, that assumption has no floor.
An AI agent with install permissions doesn’t stop to ask whether a package looks right. It installs. Trusted source, valid credentials, done. Supply chain attacks have always exploited trusted delivery channels, but a human at the keyboard introduces at least one friction point: Someone might notice something off, slow down, ask a question. Agents don’t do that. They execute at the speed of the next API call. When you give an agent install permissions, you’ve extended your trust model to cover everything it will ever run. Authorized agents execute exactly what their permissions allow. That’s the design. Treating permission as a proxy for safety is what turns a compromised supply chain hypersonic.
LiteLLM was compromised via credentials stolen through Trivy, a security scanner. The Axios attacker bypassed every npm security control the project had in place by exploiting a legacy access token the maintainers had forgotten to revoke. The CPUID attackers went after the vendor’s distribution infrastructure directly, so anyone who downloaded from the official website got a properly signed binary with a payload inside. In all three cases, the identity was legitimate. The intent wasn’t.
SentinelOne’s Annual Threat Report named the failure precisely: “The identity is verified, but the intent has been subverted, rendering traditional access controls ineffective against the resulting supply chain contamination.” Signature libraries, IOA rule sets, reputation lookups: All of them check authorization. None check intent. These attacks were designed to exploit exactly that. When the authorization model runs automatically, so does the exposure.
What Actually Stopped Them
In each incident, SentinelOne’s on-device behavioral AI flagged the execution pattern, not a known signature or hash for that specific attack.
The LiteLLM detection flagged a Python interpreter executing Base64-decoded code in a spawned subprocess. SentinelOne killed the process preemptively, terminating 424 related events in under 44 seconds, before any human was in a position to observe it. The Axios detection, via the Lunar behavioral engine, caught PowerShell executing under a renamed binary from a non-standard path. The engine flagged the technique regardless of what the payload contained. The first infection occurred 89 seconds after the malicious package went live; the behavioral detection fired on the same day of publication. The CPU-Z detection flagged cpuz_x64.exe building an anomalous process chain: spawning PowerShell, which spawned csc.exe, which spawned cvtres.exe. CPU-Z does not do that. The platform terminated the execution chain mid-attack during a 19-hour active distribution window.
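As a toy illustration of the kind of signal described here, consider a heuristic that flags command lines combining inline interpreter execution with Base64 decoding. Real behavioral engines correlate far richer telemetry (process lineage, binary paths, file and network events); this sketch only echoes the LiteLLM-style pattern, and its regexes are assumptions for illustration, not production detection logic.

```python
import re

# Keywords suggesting Base64 decoding across common interpreters.
SUSPICIOUS = re.compile(r"b64decode|base64\s+-d|frombase64string", re.IGNORECASE)

def flag_cmdline(cmdline: str) -> bool:
    """Return True if a command line both invokes an interpreter
    inline (-c / -e / -enc) and references Base64 decoding --
    a deliberately narrow, illustrative heuristic."""
    inline = re.search(r"(^|\s)-(c|e|enc)\b", cmdline, re.IGNORECASE)
    return bool(inline and SUSPICIOUS.search(cmdline))
```

A rule this narrow would miss renamed binaries and staged payloads, which is exactly why the detections above key on behavior chains rather than single command-line strings.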
This is the operational output of Autonomous Security Intelligence (ASI), the intelligence fabric built into the Singularity Platform. ASI runs on-device at the edge as part of the core architecture. It is already running when the attack starts, killing the process before the threat can escalate.
Where customers had SentinelOne fully deployed with the right policies enabled, they were covered. Where they did not, they were exposed, and with average ransomware recovery costs exceeding $4M per incident, that exposure has a real price. If you are not certain your deployment matches the configuration that stopped these three attacks, that certainty is worth getting.
AI to Fight AI
This is the product reality behind the thesis SentinelOne brought to RSAC: AI to fight AI. A machine-speed adversary requires a machine-speed defense. That is an architectural requirement, not a positioning statement. ASI monitors behavioral patterns at the point of execution and kills the process when something deviates, at machine speed, without waiting for a human to write a query or approve a kill.
According to an IDC study, organizations using SentinelOne’s AI platform identify threats 63% faster and remediate 55% faster than legacy solutions, neutralizing 99% of threats without a single manual step. For organizations in regulated industries (healthcare, financial services, manufacturing, critical infrastructure), the stakes compound beyond breach cost. An exposure window that stays open through manual investigation is a potential regulatory notification event, an audit finding, and a conversation the CISO has with the board under circumstances no one wants. The difference between a stopped attack and an active breach is whether the architecture acts before the attacker establishes persistence. By the time a human analyst approves the kill, redundant persistence mechanisms may already be installed. The CPU-Z attack deployed three of them specifically because partial cleanup leaves the payload operational.
Human-driven workflows, manual validation, and legacy tooling cannot keep pace with that attack cadence. When defense relies on investigation before action, the advantage shifts to the adversary. The gap is in the architecture. You cannot tune your way out of it.
Conclusion | The Only Question That Matters
SentinelOne’s latest Annual Threat Report documented the pattern these three attacks confirm: Adversaries are “shifting left” by integrating malicious logic into the build process itself, compromising software before it reaches production. It is the current operating model of advanced threat actors, and it is accelerating.
Three attacks. Three detections. Three outcomes, all in a matter of weeks. The architecture that survived them is real-time, AI-native, and built into the edge.
The question every security leader should be able to answer: Could your current solution have stopped LiteLLM, Axios, and CPU-Z autonomously, on the day of each attack, with no prior knowledge of any payload?
If the answer depends on a signature update, a cloud verdict, a manual investigation step, or a policy that wasn’t enabled, that is your answer.
Read the full technical breakdown of each incident:
All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.
In our previous posts, we explored the Identity Paradox and the rising risks at the enterprise edge. Together, these blogs highlighted how attackers gain initial access and leverage unmanaged devices to escalate privileges. The next phase of intrusion – execution – demonstrates how modern adversaries, aided by automation and AI, operate at a speed and scale that challenge traditional human-centered defenses. Understanding these capabilities is critical for organizations aiming to reduce attacker dwell time and maintain operational resilience.
Automation: The Real Machine Multiplier
The cybersecurity conversation today often centers on AI, with organizations experimenting with generative models, agentic systems, and predictive analytics. While these tools offer unique capabilities, the backbone of modern defense and the source of the real operational advantage is automation.
In today’s landscape of shrinking response windows, adversaries operate almost entirely at machine speed, and human operators alone cannot respond fast enough to prevent compromise. Automation enables defenders to reclaim the tempo. By integrating AI insights into hardened automated workflows, security teams can move from reactive triage to proactive intervention, closing gaps before attackers can exploit them. SentinelOne’s® own internal data demonstrates the tangible impact of this shift: well-designed automation reduced analysts’ manual workload by approximately 35% despite 63% growth in total alerts.
AI as Insight, Not Just Hype
The irony of AI innovation in the last year is that the AI tools we deploy to defend ourselves now need defending. The attack surface didn’t just grow, it folded back on itself. Automation executes tasks at speed, but AI provides context and predictive intelligence that guides those tasks. AI for security encompasses two complementary disciplines:
Security for AI: Protecting AI tools, models, and agentic systems themselves from misuse or compromise. This includes governing employee access, ensuring secure coding practices, and managing autonomous AI agents.
AI for Security: Leveraging machine learning and reasoning systems to detect and respond to threats faster than traditional rule-based approaches.
AI excels in identifying subtle behavioral patterns, predicting attacker intent, and supporting agentic workflows that can autonomously investigate alerts, recommend actions, and enforce pre-approved policies. By combining high-quality data, low-latency telemetry, and centralized visibility, AI transforms raw signals from endpoints, cloud environments, and identity systems into actionable insights.
However, AI is not a panacea. Without robust automation to operationalize these insights, organizations risk generating alerts faster than they can respond, replicating the same bottlenecks that have plagued traditional security operations.
Threats Accelerated by Automation and AI
Attackers are leveraging the same principles. Across campaigns observed in 2025 and 2026, adversaries are increasingly automating reconnaissance, exploitation, and lateral movement. Examples include:
AI-assisted phishing: Rapid generation of highly localized and convincing campaigns in minutes, bypassing traditional content filters.
Polymorphic malware: AI-generated malware that mutates faster than signature-based defenses can detect.
Automated pivoting: Integration with compromised edge devices or cloud assets to move laterally and escalate privileges at machine speed.
These behaviors compress the attack lifecycle dramatically. What once required hours or days now occurs in milliseconds, highlighting why both automation and AI must form the core of modern defensive strategies.
Transforming Enterprise Operations with Agentic AI
Defending against machine-speed attacks requires agentic AI – systems that can perform investigative and response tasks autonomously, but under human-defined guardrails. SentinelOne’s Purple AI exemplifies this approach:
Agentic auto-investigations: From alert assessment to hypothesis validation, Purple AI can perform complete investigations with minimal human intervention, documenting every step for audit and compliance.
Custom detection creation: Analysts receive agentically recommended detection rules that can be implemented immediately to stop similar attacks before they spread.
Integrated hyperautomation: Workflows, alerts, and response actions are executed automatically across endpoints, cloud services, and AI systems, enabling coordinated defense at machine speed.
These capabilities bridge the gap between insight and action, ensuring that detection is accurate and response is rapid, precise, and auditable. As organizations adopt AI for business processes, security must evolve to address the expanding attack surface. Key challenges include:
Shadow AI adoption: Employees and teams using unmonitored AI tools create unseen channels for data exfiltration or misconfiguration.
Agentic AI risks: Autonomous agents acting without sufficient oversight could unintentionally expose sensitive data or introduce vulnerabilities.
Data velocity and volume: AI systems rely on vast, real-time data streams. Ensuring integrity, context, and governance of that data is critical to maintain trust in automated defenses.
Solutions must integrate visibility, control, and governance. SentinelOne’s Prompt Security portfolio provides real-time monitoring for employee AI use, AI coding tools, and agentic AI operations. By automatically redacting secrets, blocking vulnerable code, and enforcing policy compliance, organizations can safely harness AI while reducing exposure.
Meanwhile, Observo AI and AI-native SIEM integration enable organizations to ingest, normalize, and analyze petabytes of telemetry in near real time. By pairing this high-fidelity data with Purple AI’s agentic reasoning, defenders can detect threats, trigger pre-approved responses, and maintain operational oversight across both traditional and AI-native environments.
Operational Principles for Machine-Speed Defense
Implementing an effective AI- and automation-driven security strategy requires clear guiding principles:
Intelligence Over Rules: Move beyond static signatures to behavioral and predictive detection. Threats evolve faster than predefined rules; systems must continuously learn, reason, and adapt.
Autonomy with Accountability: Automation and agentic AI should operate at machine speed, but within human-defined guardrails, ensuring actions remain traceable, auditable, and aligned with policy.
Unified Data and Context: Signals from endpoints, identities, cloud, and AI tools must be fused to create a coherent understanding. Insight without context is noise; action without context is risk.
When consistently applied, these principles reduce dwell time, enable faster response, and ensure that human expertise is focused on high-value decision-making rather than repetitive manual tasks.
Conclusion | Automation & AI as Allies
For two decades, security has been a human-speed discipline applied to a machine-speed problem. That model is over. The organizations that will lead from here aren’t the ones with more analysts or better dashboards. They’re the ones where detection, investigation, and response happen autonomously. The future will be defined by organizations where humans and AI manage the SOC together: AI reasons, automation acts, and humans govern the process. Not in sequence. In parallel. At machine speed.
Execution is no longer a phase in the kill chain. It’s the entire game. The defenders who win it won’t be the fastest responders. They’ll be the ones who made their response automatic.
The evolution of execution in cybersecurity demonstrates a broader trend: defenders must match the speed, scale, and sophistication of adversaries. Automation and AI are not just tools but partners in defense, extending human capacity while maintaining oversight, context, and control.
Organizations that invest in integrated, agentic AI systems and robust automated workflows can detect and respond to attacks in real time, reduce analyst workload while increasing coverage, and secure AI adoption itself, maintaining trust in both technology and operations. This shift marks a transition from perimeter-based and manual defense to autonomous, adaptive security, where systems and people collaborate to outpace attackers, secure critical assets, and support business innovation.
Execution is the new frontier in the cyber kill chain. By combining automation, AI-driven insight, and human oversight, organizations can operate at machine speed, defend against advanced threats, and confidently embrace AI-powered transformation.
As the cybersecurity landscape evolves, success will no longer depend solely on faster patching, deeper monitoring, or more alerts. It will depend on the intelligent orchestration of people, machines, and AI, enabling defenders to act faster, smarter, and with confidence in a world where adversaries are already moving at machine speed.
SentinelOne's Annual Threat Report
A defender’s guide to the real-world tactics adversaries are using today to abuse identity, exploit infrastructure gaps, and weaponize automation.
The Good | U.S. Authorities Seize W3LL Phishing Ring & Jail DPRK IT Worker Scheme Facilitators
The FBI has dismantled the “W3LL” phishing platform, seizing its infrastructure and arresting its alleged developer in the bureau’s first joint crackdown with Indonesian authorities on a phishing kit developer. Sold for $500 per kit, W3LL enabled criminals to clone login portals, steal credentials, bypass MFA using adversary-in-the-middle techniques, and launch business email compromise attacks.
The W3LL Store interface (Source: Group-IB)
Through the W3LL Store marketplace, more than 25,000 compromised accounts were sold, fueling over $20 million in attempted fraud. Even after the storefront shut down in 2023, the operation continued under new branding through encrypted channels, where the end-to-end phishing service was used against over 17,000 victims worldwide. Investigators say the takedown disrupted a major criminal ecosystem that helped more than 500 threat actors steal access, hijack accounts, and commit financial fraud.
In a separate case announced by the DoJ, two U.S. nationals have been sentenced for helping North Korean IT workers pose as American residents and secure remote jobs at more than 100 U.S. companies, including Fortune 500 firms. Court documents note that between 2021 and 2024, the scheme generated over $5 million for the DPRK and caused about $3 million in losses to victim companies. The defendants used stolen identities from over 80 U.S. citizens, created fake companies and financial accounts, and hosted company-issued laptops in U.S. homes so North Korean workers could secretly access corporate networks.
U.S. officials said the operation endangered national security by placing DPRK operatives inside American businesses. Kejia Wang received nine years in prison, while Zhenxing Wang was sentenced to over seven years. Authorities say the broader network remains active, with additional suspects still at large, as North Korea continues using fraudulent remote workers to fund government operations and evade sanctions.
The Bad | New “AgingFly” Malware Breaches Ukrainian Governments & Hospitals
Ukraine’s CERT-UA has uncovered a new malware campaign using a toolset called “AgingFly” to target local governments, hospitals, and possibly Ukrainian defense personnel.
The attack, attributed to the activity cluster tracked as UAC-0247, begins with phishing emails disguised as humanitarian aid offers that lure victims into downloading malicious shortcut files. These files trigger a chain of scripts and loaders that ultimately deploy AgingFly, a C# malware strain that gives attackers remote control of infected systems.
Example of the infection chain (Source: CERT-UA)
Once installed, AgingFly can execute commands, steal files, capture screenshots, log keystrokes, and deploy additional payloads. It also uses PowerShell scripts to update configurations and retrieve command and control (C2) server details through Telegram, helping the malware remain flexible and persistent.
One notable feature is that it downloads pre-built command handlers as source code from the server and compiles them directly on the infected machine, reducing its static footprint and helping it evade signature-based detection tools.
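The value of compiling on the host is that only generic loader code ever travels with the implant, so file-based signature scanners have nothing distinctive to match. As a rough illustration only, the Python sketch below mimics the pattern with the built-in `compile()`/`exec` machinery; the real malware fetches C# source over its C2 channel and uses .NET compiler services, and the handler source here is a harmless stand-in for a downloaded payload.

```python
# Illustrative analogue of in-host compilation: source text arrives at
# runtime (here a hardcoded string standing in for a C2 download), is
# compiled to bytecode in memory, and executed without touching disk.
handler_source = """
def handle(command: str) -> str:
    # Trivial stand-in for a downloaded command handler.
    return command.upper()
"""

def load_handler(source: str):
    """Compile source text and execute it in a fresh namespace.

    Nothing is written to disk, so file-signature scanners never see the
    handler; only this generic loader is present in the implant itself.
    """
    namespace = {}
    code = compile(source, "<in-memory>", "exec")  # bytecode only, no file
    exec(code, namespace)
    return namespace["handle"]

handler = load_handler(handler_source)
print(handler("whoami"))  # -> WHOAMI
```

Because the distinctive logic exists only as transient bytecode, defenders must rely on behavioral telemetry (process activity, network indicators) rather than static file scanning to catch this stage.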
Investigators found that the attackers use open-source tools such as ChromElevator to steal saved passwords and cookies from Chromium-based browsers, and ZAPiDESK to decrypt WhatsApp data. Additional tools like RustScan, Ligolo-ng, and Chisel support reconnaissance, tunneling, and lateral movement across compromised networks. CERT-UA says the campaign has impacted at least a dozen organizations and may also have targeted members of Ukraine’s defense forces.
To reduce exposure, the agency recommends blocking the execution of LNK, HTA, and JavaScript files, along with restricting trusted Windows utilities such as PowerShell and mshta.exe that are abused in the attack chain.
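CERT-UA’s recommendation can be enforced at the mail gateway or via application control, but a quick triage filter is easy to sketch. The helper below is a hypothetical illustration, not CERT-UA tooling; the extension list reflects the file types named in the advisory.

```python
import os

# File types CERT-UA recommends blocking for this attack chain.
BLOCKED_EXTENSIONS = {".lnk", ".hta", ".js"}

def is_blocked(filename: str) -> bool:
    """Flag a filename whose final extension is on the blocklist.

    Lower-casing first also catches evasions like double extensions
    ("invoice.pdf.LNK"), since only the last suffix is checked.
    """
    return os.path.splitext(filename.lower())[1] in BLOCKED_EXTENSIONS

print(is_blocked("Humanitarian_Aid_Offer.LNK"))  # -> True
print(is_blocked("report.pdf"))                  # -> False
```

In production this logic would live in mail-filtering or allowlisting policy (e.g., Windows Defender Application Control or Software Restriction Policies) rather than a standalone script.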
The Ugly | Attackers Exploit Nginx Auth Bypass Vulnerability to Hijack Servers
A critical vulnerability in Nginx UI, tracked as CVE-2026-33032, is being actively exploited in the wild to achieve full server takeover without authentication.
The flaw stems from an exposed /mcp_message endpoint in systems using Model Context Protocol (MCP) support, which fails to enforce proper authentication controls. As a result, remote attackers can invoke privileged MCP functions, including modifying configuration files, restarting services, and forcing automatic reloads to effectively gain complete control over affected Nginx servers.
An attacker-controlled page served via Nginx (Source: Pluto Security)
Security researchers have reported that exploitation requires only network access. Attackers initiate a session via Server-Sent Events, open an MCP connection, retrieve a session ID, and then use it to send unauthenticated requests to the vulnerable endpoint.
This grants access to all available MCP tools, enabling destructive actions such as injecting malicious server blocks, exfiltrating configuration data, and triggering service restarts.
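Defenders auditing their own Nginx UI deployments can triage responses from an unauthenticated probe of the endpoint. The classifier below is an illustrative sketch: it performs no network I/O itself, the endpoint name comes from the advisory, and the response shapes it keys on (a `tools` listing or `sessionId` returned without credentials) are assumptions for demonstration.

```python
import json

def classify_mcp_exposure(status: int, body: str) -> str:
    """Classify the result of an unauthenticated request to /mcp_message.

    Feed in the status code and body captured by your own probe; this
    helper only interprets them.
    """
    if status in (401, 403):
        return "protected"        # authentication is being enforced
    if status == 404:
        return "endpoint-absent"  # MCP support likely disabled
    if status == 200:
        try:
            payload = json.loads(body)
        except ValueError:
            return "indeterminate"
        # A tool listing or session handed out without credentials
        # is the red flag described in the advisory.
        if "tools" in payload or "sessionId" in payload:
            return "exposed"
    return "indeterminate"

print(classify_mcp_exposure(401, ""))                           # protected
print(classify_mcp_exposure(200, '{"tools": ["nginx_conf"]}'))  # exposed
```

Any instance classified as “exposed” should be upgraded and taken off the public internet immediately, per the guidance below.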
The vulnerability was patched in version 2.3.4 shortly after disclosure, but a later release, 2.3.6, is now recommended as more secure. Despite the fix, active exploitation in the wild has been confirmed, and proof-of-concept code is publicly available.
Nginx UI is widely used, with over 11,000 GitHub stars and hundreds of thousands of Docker pulls, and scans suggest roughly 2,600 exposed instances remain vulnerable globally. Attackers can establish MCP sessions, reuse session IDs, and chain requests to escalate privileges, enabling stealthy persistence, configuration tampering, and full administrative control over exposed systems.
Organizations are urged to update immediately, as attackers can fully compromise systems through a single unauthenticated request, bypassing traditional security controls and gaining persistent control over web infrastructure.
The latest announcements from OpenAI and Anthropic mark another important step forward for frontier AI. They also reinforce something we’ve believed at SentinelOne® for years: the future of cybersecurity will be shaped by AI-native defense.
SentinelOne has worked closely with frontier labs for years, including OpenAI, Anthropic, and Google DeepMind, and naturally continues to do so. While we cannot always share the specifics of every collaboration, these partnerships have provided, and continue to provide, meaningful insight into how advanced models are evolving and where they can create real impact across security. Many of these learnings and capabilities are already embedded in our platform, protecting customers from the most advanced attacks every day and stopping zero-day exploits that no other solution currently can.
What stands out most is not simply that frontier models are becoming more capable, but that they are accelerating the broader shift toward faster, more intelligent, and more automated security operations. On the one hand, they are improving how the cyber industry and defenders identify weaknesses, analyze complex systems, and reason about attack paths at scale. On the other, they are giving attackers the advantage of speed and scale when it comes to finding new vulnerabilities. Progress in this race matters, but it is only one part of the broader security picture.
In practice, and without discounting the severity of uncovering exponentially more bugs in software, raw vulnerability counts rarely map cleanly to real-world risk. Many vulnerabilities are not meaningfully exploitable in live environments, and many are already mitigated by architectural layers, controls, and runtime protections. The gap between theoretical exposure and operational risk is often substantial. What matters most is the ability to understand real conditions, prioritize what matters, and stop actual attacks across complex environments, even when faced with novel threats and zero days.
That has been SentinelOne’s pioneering principle and the advantage we’ve delivered to our customers from the beginning.
From day one, SentinelOne was built to operate at machine speed, using behavioral AI, automation, and autonomous protection to detect, defend, and respond across endpoint, cloud, identity, data, network, and AI attack surfaces. As frontier AI continues to advance, the value of that approach only grows. To demonstrate our commitment to these principles, consider two distinct examples.
First, in the last few weeks alone, the benefit of this approach has played out in supply chain attacks such as those involving LiteLLM, Axios, and CPU-Z, all illustrative of novel threats and of the risk posed by trusted agents and workflows in the AI era. In each case, autonomous response at machine speed was the only antidote capable of blocking these novel threats, which leverage unpatched or zero-day vulnerabilities.
Second, SentinelOne has demonstrably expanded its own ongoing efforts to secure our technology. Alongside the standard, established efforts we’ve used for years, SentinelOne has used multiple AI-driven models to constantly examine our technology and architecture, applying techniques virtually identical to those discussed in Anthropic’s technical details for researchers and practitioners, released April 7th, 2026 (Assessing Claude Mythos Preview’s cybersecurity capabilities). This activity has been ongoing for months; its findings are consistently reviewed, and the program itself is evaluated by the SentinelOne executive team. It reflects our commitment to building and delivering secure technology, and we do not see an effective future in this work without robust AI-driven methods and an inclusive, multi-model approach.
As we look at the overall AI landscape, the shift is already underway, and it plays directly to SentinelOne’s strengths. The industry is moving toward more autonomous, more adaptive, and more intelligence-driven security. That is the future we helped pioneer, and one we are uniquely positioned to lead.
Our clear advice to defenders: Invest in machine-speed defense and visibility right now. Ensure your defenses are up to date and well configured. Ground yourself in true research, not press releases and hype. For example, much of the third-party press coverage and commentary around Anthropic’s new model release lacked any substantive data; in many cases, those statements preceded any real, tangible experience with the preview models in question. By contrast, the AI Security Institute (AISI) released a detailed research evaluation of the relevant models that sheds light on the state of frontier AI, exploitation rates, and potential real-world implications. It shows that the trajectory had been apparent for a while, even from older models, and that this capability has largely been a function of compute scaling, and potentially of looser guardrails permitting more effective compute and reasoning than guardrailed models allow:
Source: AISI, Our evaluation of Claude Mythos Preview’s cyber capabilities, April 13th 2026
The AI Security Institute goes further, outlining the following implications:
“Mythos Preview’s success on one cyber range indicates that it is at least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained. However, our ranges have important differences from real-world environments that make them easier targets. They lack security features that are often present, such as active defenders and defensive tooling. There are also no penalties for the model for undertaking actions that would trigger security alerts. This means we cannot say for sure whether Mythos Preview would be able to attack well-defended systems.
In a regime where attackers can direct and provide network access to models to conduct autonomous attacks on poorly defended systems, cybersecurity evaluations must evolve. As capabilities continue to improve, evaluation environments that lack defenses will no longer be challenging enough to discriminate between the capabilities of the most cyber-capable models or assess trends. Our future work will involve evaluating capabilities using ranges simulating hardened and defended environments, including ranges with active monitoring, endpoint detection and real-time incident response. We will also be tracking how AI-enabled vulnerability discovery and penetration testing campaigns perform on real-world systems.”
Stay safe,
The SentinelOne team
Third-Party Trademark Disclaimer:
All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.
On April 9, 2026, cpuid.com was actively serving malware through its own official download button. Threat actors had compromised the CPUID domain at the API level and were silently redirecting legitimate download requests to attacker-controlled infrastructure. The attack ran for approximately 19 hours. Users who navigated directly to the official site received a legitimate, properly signed binary with a malicious payload bundled inside it.
That morning, SentinelOne’s behavioral detection flagged an anomaly inside cpuz_x64.exe. The binary was genuine. The digital signature was valid. The download had arrived from the vendor’s own infrastructure. The process chain cpuz_x64.exe began constructing was the tell: it spawned PowerShell, which spawned csc.exe, which spawned cvtres.exe. CPU-Z does not do that.
CPU-Z, HWMonitor, HWMonitor Pro, and PerfMonitor are staples in IT toolkits. The users who downloaded them followed every instruction they’d been given. The trust chain broke above them. The next attack will work the same way.
SentinelOne’s Annual Threat Report identifies exactly this pattern as a systemic shift: “This [shift] extends deeply into the software supply chain, where the identity of a trusted developer becomes the vector of attack.” In late 2025, we observed the GhostAction campaign, where a compromised GitHub maintainer account pushed malicious workflows to extract secrets. A concurrent phishing attack against a maintainer of popular NPM packages deployed malicious code capable of intercepting cryptocurrency transactions. In each case, the commit logs and push events appeared legitimate because they originated from accounts with valid write access. The identity was verified. The intent had been subverted. The CPUID incident extends this pattern to software distribution itself: the supplier’s download infrastructure became the delivery channel.
What the Agent Saw
The SentinelOne agent triggered the alert “Penetration framework or shellcode was detected” within the first seconds of execution. The detection came from what the process was doing, with five specific behavioral indicators converging:
Anomalous API resolution: The process located system functions through non-standard discovery methods, bypassing the OS loader entirely.
Reflective code loading: Executable code was running in memory regions with no corresponding file on disk.
Suspicious memory allocation: Read-Write-Execute (RWX) memory permissions were requested, a staging pattern for malicious payloads.
Process injection patterns: Execution flow consistent with code being redirected into a secondary process to mask its origin.
Heuristic shellcode signatures: Sequential operations characteristic of automated exploitation toolkits preparing an environment for command execution.
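The power of this detection is convergence: any one signal is weak on its own (RWX allocations, for instance, are routine in JIT compilers), but several co-occurring in one process chain is hard to explain benignly. The toy scoring model below illustrates that idea only; the weights, threshold, and indicator names are invented for this sketch and are not SentinelOne product settings.

```python
# Toy convergence model of the five behavioral indicators above.
# Weights and threshold are illustrative assumptions, not product values.
INDICATOR_WEIGHTS = {
    "anomalous_api_resolution": 2,
    "reflective_code_load": 3,
    "rwx_allocation": 2,
    "process_injection": 3,
    "shellcode_heuristic": 2,
}
ALERT_THRESHOLD = 6

def should_alert(observed: set) -> bool:
    """Fire only when enough weak signals converge on one process."""
    score = sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed)
    return score >= ALERT_THRESHOLD

# RWX memory alone is common in legitimate JITs and does not fire...
print(should_alert({"rwx_allocation"}))  # -> False
# ...but reflective loading plus injection plus RWX crosses the bar.
print(should_alert({"reflective_code_load", "process_injection",
                    "rwx_allocation"}))  # -> True
```

Requiring convergence is what lets a behavioral engine alert within seconds of execution while staying quiet on the JITs, installers, and debuggers that trip individual heuristics daily.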
The agent autonomously terminated and quarantined the involved processes before the attack advanced further. The malicious CRYPTBASE.dll, placed in the same directory as the legitimate CPU-Z binary, was loaded by Windows before the real system DLL could be reached, and it never completed its job.
The agent was watching for what the software was trying to do. Behavioral detection is the layer that holds when authorization cannot be trusted, because the behavior reveals intent regardless of what signed the package.
What Was Actually Inside
The trojanized packages were designed to leave no trace. A reflective PE loader decrypted and injected a second-stage DLL using XXTEA encryption and DEFLATE decompression, with no disk writes and no file artifacts. Three redundant persistence mechanisms were then installed: a registry Run key, a 68-minute scheduled task with a 20-year duration, and MSBuild project files in AppData\Local engineered to survive reboots and partial remediation.
The 2026 Annual Threat Report describes this persistence design as “masquerading as maintenance”: adversaries blend into the environment by mimicking legitimate system updates and background processes. To a busy defender, a scheduled task with a generic name and a timed execution interval appears entirely routine until you examine what it is executing. STX RAT’s 68-minute task with a 20-year duration operates on exactly this logic.
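One way to surface “maintenance masquerade” at scale is to score scheduled tasks on properties a human triager would eyeball: the repetition interval and the task lifetime. The heuristic below is a hypothetical hunting sketch; the cutoffs (non-round intervals, lifetimes beyond five years) are illustrative assumptions, tuned here only to match STX RAT’s 68-minute/20-year task.

```python
from dataclasses import dataclass

@dataclass
class ScheduledTask:
    name: str
    interval_minutes: int
    duration_days: int

def is_suspicious(task: ScheduledTask) -> bool:
    """Flag tasks with an odd interval and an implausibly long lifetime.

    Legitimate maintenance jobs tend to use round intervals (15/30/60
    minutes) and bounded durations; both cutoffs are illustrative.
    """
    odd_interval = task.interval_minutes % 15 != 0
    excessive_lifetime = task.duration_days > 5 * 365
    return odd_interval and excessive_lifetime

stx_like = ScheduledTask("SystemSync", interval_minutes=68,
                         duration_days=20 * 365)
updater = ScheduledTask("VendorUpdate", interval_minutes=60,
                        duration_days=30)
print(is_suspicious(stx_like))  # -> True
print(is_suspicious(updater))   # -> False
```

A real hunt would pull these fields from the Windows Task Scheduler (e.g., via `schtasks /query /xml`) and pair the score with what each task actually executes.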
The process chain visible in EDR logs made the intent clear: cpuz_x64.exe spawned powershell.exe, which spawned csc.exe, then cvtres.exe. CPU-Z does not do that.
The final payload, STX RAT, delivered hidden VNC providing an attacker-controlled desktop session invisible to the user, keyboard and mouse injection, browser credential theft across Chrome, Firefox, Edge, and Brave, Windows Vault extraction, cryptocurrency wallet access, and a reverse proxy for follow-on payload delivery. C2 communication ran over a custom encrypted protocol using DNS-over-HTTPS to 1.1.1.1 to bypass DNS monitoring.
A reflective payload executing entirely in memory, inside a signed process, with no disk writes, compresses the detection window to milliseconds. Autonomous response is the only response fast enough.
The Attacker’s Critical Mistake
Within hours, Kaspersky’s analysis linked the CPUID samples to a March 2026 campaign targeting FileZilla users, and the connection required no advanced forensics. The attacker reused identical C2 infrastructure and deployed the unmodified STX RAT payload, the same one eSentire’s Threat Response Unit had already fingerprinted and published YARA rules for after the FileZilla campaign.
Those rules detected the CPUID variant without modification.
The actor invested time compromising CPUID’s download API and did nothing to retool after being publicly fingerprinted. The C2 domain, the backend server, the payload: all identical across campaigns. The same backend server had been operating since at least July 2025. Per Kaspersky’s own assessment, the C2 reuse was the gravest mistake of the operation. A more disciplined actor burns infrastructure between campaigns. This one did not, and defenders had working detection before most victims knew an attack had occurred.
What the Attack Was Really For
The 150+ confirmed victims span retail, manufacturing, consulting, telecommunications, and agriculture. The count is almost certainly low: CPUID’s tools have tens of millions of users globally, and the portable ZIP variant of CPU-Z commonly runs on production systems in environments that block installer-based software.
Victim count is secondary to victim profile. CPU-Z users skew toward IT professionals: system administrators, developers, security engineers, the people with domain admin rights, production access, and infrastructure keys. One compromised sysadmin carries a fundamentally different blast radius than one compromised user.
The operational pattern points to an initial access broker. The goal was to sell persistent, hidden access. Someone else would do the extracting.
For organizations where an infection occurred, two questions need answers. What did the attacker do during the window they had access, especially if that machine belonged to a privileged user? And what happens over the next 60-90 days, when whoever purchased that access decides to activate it? Ransomware affiliates who buy IAB access typically move within that window. Cleaning the machine closes one exposure. Monitoring for lateral movement, credential reuse, and unusual authentication in the weeks following remediation closes the other.
What Defenders Should Do Now
For practitioners
The indicators are specific and actionable.
Check your fleet for CRYPTBASE.dll in any directory other than C:\Windows\System32.
Look for the process chain cpuz_x64.exe or any CPUID application spawning PowerShell.
Block supp0v3[.]com and 147.45.178.61 at DNS and firewall layers.
At the network layer, watch for DNS-over-HTTPS queries to 1.1.1.1/dns-query resolving welcome.supp0v3.com; STX RAT specifically uses DoH to bypass DNS monitoring, and any endpoint generating this pattern is a high-confidence indicator.
If you find an infected machine, remediate all four persistence mechanisms explicitly: the registry Run key, the scheduled task, any MSBuild .proj files in AppData\Local, and PowerShell profile autoruns. The malware installs redundant footholds specifically because partial cleanup leaves it alive.
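The checks above lend themselves to a quick fleet sweep. The sketch below is illustrative, not SentinelOne tooling: the DLL name, process chain, domain, and DoH pattern come from the published indicators, while the function names and structure are assumptions for demonstration. It uses pure helpers so the logic can be dropped into whatever EDR query or inventory pipeline you already run.

```python
from pathlib import PureWindowsPath

SYSTEM32 = PureWindowsPath(r"C:\Windows\System32")
DOH_IOC = "1.1.1.1/dns-query"
BAD_HOST = "welcome.supp0v3.com"

def rogue_cryptbase(path: str) -> bool:
    """CRYPTBASE.dll outside System32 indicates DLL search-order abuse."""
    p = PureWindowsPath(path)
    return p.name.lower() == "cryptbase.dll" and p.parent != SYSTEM32

def suspicious_chain(ancestry: list) -> bool:
    """cpuz_x64.exe spawning PowerShell is the published tell."""
    procs = [p.lower() for p in ancestry]
    return "cpuz_x64.exe" in procs and "powershell.exe" in procs

def doh_ioc_hit(url: str) -> bool:
    """High-confidence hit: DoH to 1.1.1.1 resolving the C2 hostname."""
    return DOH_IOC in url and BAD_HOST in url

print(rogue_cryptbase(r"C:\Tools\cpu-z\CRYPTBASE.dll"))          # -> True
print(rogue_cryptbase(r"C:\Windows\System32\cryptbase.dll"))     # -> False
print(suspicious_chain(["cpuz_x64.exe", "powershell.exe"]))      # -> True
```

Any positive hit should trigger the full four-mechanism remediation described above, not just deletion of the flagged file.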
For security leaders
The harder conversation is about supply chain trust. Your users followed every rule they were given. They downloaded from the official website. They trusted a vendor they had used for years. That vendor’s infrastructure failed them. Behavioral detection, security that watches what software does rather than where it came from, is the layer that caught this.
The business case is specific. When an initial access broker sells a foothold obtained this way, the buyer typically activates within 60-90 days. With average ransomware recovery costs exceeding $4 million per incident, even a single privileged endpoint sold through an IAB represents material, quantifiable exposure. The organizations that already had 24/7 autonomous behavioral monitoring in place closed the window before it opened. The ones that did not are still counting.
The adversary’s tooling was unsophisticated. The OPSEC was poor. The C2 reuse was a gift to defenders. And yet: 150+ confirmed victims and a 19-hour window during which clean, legitimate software was being replaced by a remote access trojan is a demonstration of how far attacker leverage has extended into the software supply chain, and how quickly behavioral detection closes the gap when it acts autonomously, before the attack completes its first stage. The attacker’s poor OPSEC saved defenders this time. The structural failure in the trust model (the assumption that software from a trusted source is safe to run) persists regardless of attacker discipline.
The Structural Problem That Remains
SentinelOne’s latest Annual Threat Report documents GhostAction and the NPM package compromise as supply chain identity attacks through code repositories and package managers. CPUID adds a third layer: the vendor’s distribution infrastructure itself. Across all three cases, access controls validated a legitimate identity. The report frames this plainly: “The identity is verified, but the intent has been subverted, rendering traditional access controls ineffective against the resulting supply chain contamination.”
This shift means authorization, the cornerstone of traditional software trust, is no longer a sufficient security boundary. When the distribution channel becomes the failure point, verification has to move from the point of origin to the point of execution.
In the CPUID case, users followed every rule. They downloaded from the official vendor website. That vendor’s download API was the failure point, compromised at the infrastructure level for 19 hours, with no visible indication.
SentinelOne’s Behavioral AI engine detects suspicious and malicious patterns in real time, watching what the software does regardless of where it came from.
SentinelOne customers were protected through autonomous behavioral detection at the point of execution. The structural failure in the trust model (the assumption that software from a trusted source is safe to run) is a gap that better user behavior cannot close. Behavioral detection at machine speed is what closes it.
To understand how the Singularity Platform identifies threats across your environment, including those arriving through trusted software channels, request a demo.
The Good | DoJ Disrupts TP-Link Router Network Run by Russian Spy Org
This week, authorities in the U.S. carried out Operation Masquerade, a court-authorized operation to disrupt a DNS hijacking network run by Russia’s GRU Unit 26165 (APT28). The network involved the compromise of thousands of TP-Link small office/home office (SOHO) routers spread across more than 23 U.S. states.
Since at least 2024, APT28 operators have been exploiting known vulnerabilities in the devices to steal credentials, gain unauthorized access to router management interfaces, and silently rewrite DNS settings so that queries were redirected to GRU-controlled resolvers instead of the users’ normal providers. The actors then applied automated filtering on the hijacked traffic to pick out DNS requests of intelligence interest.
For selected targets, the resolvers returned forged DNS records for specific domains to insert GRU-controlled infrastructure into encrypted sessions. This allowed operators to collect passwords, authentication tokens, emails, and other sensitive data from devices on the same networks as the compromised routers, including users in government, military, and critical infrastructure sectors.
Russian espionage group APT28 compromised MikroTik and TP-Link routers to redirect traffic for certain authentication operations to AitM phishing kits
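Because the hijack works by silently swapping a router’s resolvers, a simple fleet audit is to diff each device’s configured DNS servers against the set the ISP hands out via DHCP. The sketch below is illustrative only; the addresses are documentation-range examples, not real IOCs from this operation.

```python
# ISP-issued resolvers learned via DHCP (documentation-range examples).
ISP_RESOLVERS = {"203.0.113.10", "203.0.113.11"}

def hijacked_resolvers(configured: set) -> set:
    """Return any configured resolver entries that did not come from the ISP.

    A non-empty result on a SOHO router warrants a factory reset, a
    firmware update, and credential rotation for devices behind it.
    """
    return configured - ISP_RESOLVERS

print(hijacked_resolvers({"203.0.113.10", "198.51.100.7"}))  # -> {'198.51.100.7'}
print(hijacked_resolvers(set(ISP_RESOLVERS)))                # -> set()
```

This mirrors, in miniature, what the FBI’s remediation commands did at scale: detect resolver tampering and restore the ISP-provided configuration.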
Under court supervision, the FBI developed and deployed a series of commands to send to compromised routers. The operation captured evidence of GRU activity and reset the DNS configuration so the devices would obtain legitimate resolvers from their ISPs. It also blocked the original path the actors used for unauthorized access.
According to DOJ, the FBI first tested the command set on the same TP-Link router models and firmware in a controlled environment, with the goal of leaving normal routing functions intact, avoiding access to any user content, and ensuring that owners could reverse the changes via a factory reset or web management interface.
The bureau is now working with U.S. internet service providers to notify customers whose routers fell within the scope of the warrant.
The Bad | Threat Actors Turn to Script Editor to Bypass Apple’s ClickFix Mitigation
SentinelOne researchers have discovered a variant of the ClickFix social engineering trick targeting macOS users that avoids the need for victims to unwittingly copy and paste commands into the Terminal. Apple recently updated the desktop operating system to include a mitigation for Terminal-driven ClickFix attacks, but threat actors have moved quickly to sidestep Apple’s response.
SentinelOne researchers discovered a campaign in which threat actors used a lure purporting to install the popular AI assistant Claude to deliver AMOS malware. The lure leverages the applescript:// URL scheme to launch Script Editor from the user’s browser, with the editor pre-populated with malicious commands. The delivery mechanism offers threat actors a smooth, Terminal-free attack flow that simply asks the user to perform a few clicks, with no copy-paste involved.
Instructions to victims | Script Editor opens with pre-populated malicious commands
Analysis of the payloads shows the technique is being used to deliver AMOS/Atomic Stealer malware that reaches out to hardcoded C2 infrastructure and attempts to exfiltrate browser data, crypto wallets, and password stores in a single run. SentinelOne customers are protected against AMOS and similar infostealer variants.
Researchers at JAMF later described a similar campaign using a webpage themed to look like an official Apple help page with instructions on how to reclaim disk space. Taken together, these campaigns suggest that Script Editor–driven ClickFix flows are becoming a reusable pattern rather than a one-off trick.
In the recent macOS Tahoe 26.4 update, Apple added a new security feature that warns users when pasting commands into the Terminal under certain conditions. Threat actors had moved to the Terminal copy-paste method after Apple blocked an earlier, widely used Gatekeeper bypass via a Control-click override. However, the new Script Editor-based delivery mechanism entirely sidesteps these efforts and continues the long-running cat-and-mouse game between the operating system vendor and malware authors.
The Ugly | Iranian Hackers Target U.S. PLCs in Critical Infrastructure
Iran-affiliated APT actors are actively exploiting internet-facing operational technology (OT) devices, including Rockwell Automation/Allen-Bradley programmable logic controllers (PLCs), across multiple U.S. critical infrastructure sectors.
According to a joint advisory from CISA and other agencies, this activity has led to PLC disruptions, manipulation of data on HMI/SCADA displays, and in some cases operational disruption and financial loss. The authoring agencies assess that these Iranian-affiliated actors are conducting the campaign to cause disruptive effects inside the United States and note an escalation in activity since at least March 2026.
The campaign focuses on CompactLogix and Micro850 PLCs deployed in government services and facilities, water and wastewater systems, as well as the energy sector. Using leased third-party infrastructure together with configuration tools such as Rockwell’s Studio 5000 Logix Designer, the actors establish apparently legitimate connections to exposed PLCs over common OT ports including 44818, 2222, 102, and 502.
Once connected, they deploy Dropbear SSH on victim endpoints to gain remote access over port 22, extract project files such as .ACD ladder logic and configuration, and alter the process data operators see on HMI and SCADA dashboards. The same port-targeting pattern suggests the actors are also probing protocols used by other vendors, including Siemens S7 PLCs.
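Asset owners can verify whether their own controllers answer on these services from a given network position. A minimal sketch, intended strictly for scanning assets you own; the port list comes from the advisory, while the timeout value is an arbitrary choice:

```python
import socket

# Common OT/ICS service ports cited in the advisory:
# 44818 (EtherNet/IP), 2222 (EtherNet/IP I/O), 102 (Siemens S7comm),
# 502 (Modbus/TCP).
OT_PORTS = (44818, 2222, 102, 502)

def open_ot_ports(host, ports=OT_PORTS, timeout=1.0):
    """Return the subset of ports accepting TCP connections on host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(port)
    return found
```

Any controller reachable on these ports from an untrusted network segment should be moved behind a firewall or VPN, per the advisory's hardening guidance.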
Iran-affiliated cyber actors are targeting operational technology devices across US critical infrastructure, including programmable logic controllers (PLCs). These attacks have led to diminished PLC functionality, manipulation of display data and, in some cases, operational…
The advisory places this activity in the context of earlier IRGC-linked operations against U.S. industrial control systems. In late 2023, IRGC-affiliated CyberAv3ngers targeted Unitronics PLCs used across multiple water and wastewater facilities, compromising at least 75 devices. The latest wave extends that playbook to a broader set of PLC vendors and sectors, reinforcing that internet-exposed controllers with weak or missing hardening remain a priority target for disruptive state-linked operations.
In the first blog of this series, we explored the Identity Paradox and how attackers exploit valid credentials to operate undetected inside enterprise environments. However, identity compromise rarely happens in isolation.
To understand how these attacks begin, we need to look earlier in the intrusion lifecycle at the place many organizations still assume is secure: the edge.
For years, cybersecurity strategy has been built around defending the perimeter to protect the enterprise. Firewalls, VPNs, and secure gateways were designed as the outer boundary of the organization – hardened systems intended to control access and reduce risk. But that model is breaking down. What was once treated as a defensive layer is now a frequent target of modern attacks.
Rather than acting purely as protection, the perimeter increasingly introduces exposure. This shift reflects what can be described as edge decay, a gradual erosion of trust in boundary-based security as attackers focus on the infrastructure that defines it.
The Perimeter Is No Longer a Safe Boundary
The scale of this shift is hard to ignore. Zero-day vulnerabilities frequently target edge devices such as firewalls, VPN concentrators, and load balancers, none of which are fringe systems. They are foundational components of enterprise connectivity, and the infrastructure that organizations built to protect themselves has become the infrastructure attackers exploit first.
Yet, unlike endpoints or servers, many edge devices still sit outside traditional endpoint visibility and control. Because these appliances typically cannot run EDR agents, defenders are often forced to rely on logs and external monitoring instead. However, logging can be inconsistent, patch cycles are often slow, and in many environments, these devices are treated as stable infrastructure rather than active risk. This combination creates a persistent visibility gap.
Attackers have recognized this gap and are exploiting it at scale. Rather than targeting hardened endpoints, adversaries are shifting their focus to unmanaged and legacy edge infrastructure and the systems that sit at the intersection of trust and exposure.
Weaponization at Machine Speed
One of the most significant accelerators of edge-focused attacks is the rise of automation and AI-assisted exploitation.
Threat actors are no longer relying on manual discovery. Instead, they use automated tooling to scan global IP space, identify exposed devices, and operationalize vulnerabilities within hours of disclosure. In some cases, exploitation begins within days or even hours of a vulnerability becoming public.
This compression of the attack timeline has important implications for defenders. Traditional patching cycles and risk prioritization models are no longer sufficient when adversaries can move faster than organizations can respond. As a result, edge compromise is increasingly observed as an early step in broader intrusion chains, often preceding identity-based attacks.
Edge Devices as Persistent Beachheads
Adversaries are increasingly prioritizing edge infrastructure because it represents a structural blind spot. Rather than targeting well-defended endpoints, they focus on unmanaged or legacy systems that fall outside standard visibility. Once compromised, these devices become more than just entry points: they provide a stable foothold for continued operations.
Once attackers gain access to a firewall or VPN appliance, that system effectively becomes an internal pivot point rather than a boundary control. From there, adversaries can monitor traffic, capture credentials, and pivot deeper into the network.
Investigations have repeatedly shown how compromised edge devices are used to:
Intercept authentication flows and harvest credentials
Deploy web shells on internal systems
Create unauthorized accounts for persistence
Pivot directly into sensitive infrastructure such as virtualization platforms
SentinelOne’s Annual Threat Report documents a case in which attackers leveraged compromised F5 BIG-IP devices to move from the internet-facing edge directly into internal VMware vSphere environments. In another, vulnerabilities in Check Point gateway devices were exploited to gain initial access across dozens of organizations globally.
These incidents reflect a broader pattern where the edge is becoming the attacker’s preferred entry point for lateral movement and identity compromise.
Living Inside the Infrastructure
More advanced campaigns take this concept even further by embedding themselves directly into the firmware of edge devices. The ongoing ArcaneDoor campaign, as noted in the Annual Threat Report, illustrates this evolution. Targeting legacy Cisco Adaptive Security Appliance (ASA) devices, attackers chained multiple zero-day vulnerabilities to deploy a firmware-level bootkit known as RayInitiator.
This implant is particularly dangerous because it operates below the operating system, allowing it to survive reboots and software updates. Alongside it, attackers deployed LINE VIPER, an in-memory payload capable of capturing authentication traffic and suppressing logging activity to evade detection. In effect, the device itself becomes both the attack platform and the concealment mechanism. When logging is suppressed and monitoring is absent, defenders lose visibility into the intrusion entirely.
The Rise of Untraceable Relay Networks
Compromised edge devices are not just used for internal access; they are also being repurposed as part of global attack infrastructure. State-sponsored actors have begun building Operational Relay Box (ORB) networks from compromised routers and firewalls. These networks allow attackers to route malicious traffic through legitimate but hijacked infrastructure, obscuring the true origin of their operations.
Clusters such as PurpleHaze and activity linked to groups like APT15 and Hafnium demonstrate how these relay networks are used to dynamically rotate attack paths, making attribution more difficult. As a result, malicious traffic can appear to originate from trusted enterprise systems, complicating both detection and response.
This dual use of edge devices as both entry points and relay infrastructure highlights a shift in how adversaries operationalize compromised systems.
Legacy Systems and the Illusion of Patchability
A major contributor to edge decay is the persistence of legacy systems. Many organizations continue to rely on outdated appliances that lack modern security features such as Secure Boot or robust integrity verification. These systems are often considered “patchable,” but in practice, they represent long-term operational risk that is difficult to fully mitigate.
Firmware updates can be disruptive and vendor support may be inconsistent. In many cases, organizations are hesitant to modify systems that underpin critical connectivity. The result is a growing population of edge devices that remain exposed long after vulnerabilities are discovered. In some environments, this problem is compounded by visibility gaps. Devices running unsupported operating systems or incompatible software cannot host modern security tooling, leaving them effectively unmonitored. These “legacy ghosts” become ideal targets for attackers precisely because they are stable, trusted, and largely invisible.
The Identity Connection
Edge compromise does not exist in isolation. It is deeply connected to identity-based attacks. Once an attacker controls a gateway or VPN appliance, they gain access to authentication flows, session data, and credential material. This allows them to pivot directly into identity infrastructure, bypassing traditional defenses.
In many intrusions, edge compromise becomes the first step toward identity abuse. This creates a direct connection between edge exposure and the challenges described in the Identity Paradox. Attackers do not need to break authentication if they can intercept it. By observing or capturing identity data in transit, they can operate using valid artifacts without triggering traditional controls.
Conclusion | Securing Edge Infrastructure from the Vanishing Perimeter
The perimeter isn’t failing; it has already failed. Every unpatched VPN, every legacy firewall running decade-old firmware, every edge device outside your visibility is a door left open and forgotten. The question isn’t whether attackers will find it. It’s whether you’ll see them when they walk through. Once attackers establish a foothold at the edge, they move quickly to compromise identities, escalate privileges, and expand their reach across the environment. This progression from edge access to identity abuse to full-scale intrusion is becoming the dominant pattern in modern attacks.
In this context, defending the edge means both protecting infrastructure and disrupting the earliest stages of the attack lifecycle. Given how dynamic and often unmanaged edge environments have become, they can no longer be treated as a reliable line of defense on their own.
To defend against adversaries who specialize in exploiting these blind spots, the path forward requires a shift in perspective from device-level alerts to attack lifecycle visibility, and from assumed integrity to continuous validation.
SentinelOne's Annual Threat Report
A defender’s guide to the real-world tactics adversaries are using today to abuse identity, exploit infrastructure gaps, and weaponize automation.
All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.
The Good | SentinelOne AI EDR Stops LiteLLM Supply Chain Attack in Real Time
This week, SentinelOne demonstrated how autonomous, AI-driven endpoint protection can detect and stop sophisticated supply chain attacks in real time, without human intervention. On the same day the attack was launched, the Singularity Platform identified and blocked a trojanized version of LiteLLM, an increasingly popular proxy for LLM API calls, before it could execute across multiple customer environments. The compromise had occurred only hours earlier, yet the platform prevented execution instantly, without requiring analyst input, signatures, or manual triage.
Catching the Payload in the Act
The attack itself followed a multi-stage, fast-moving pattern designed to evade traditional detection and manual workflows. Originating from a compromised security tool, attackers obtained PyPI credentials to publish malicious LiteLLM versions that deployed a cross-platform payload. In one case, SentinelOne observed an AI coding assistant with unrestricted permissions unknowingly installing the infected package, highlighting a new and largely ungoverned attack surface.
Once triggered, the malware attempted to execute obfuscated Python code, deploy a data stealer, establish persistence, move laterally into Kubernetes clusters, and exfiltrate encrypted data. SentinelOne’s behavioral AI detected the malicious activity at runtime, specifically identifying suspicious execution patterns like base64-decoded payloads, and terminated the process chain in under 44 seconds while preserving full forensic visibility.
Critically, detection did not depend on knowing the compromised package. Instead, it relied on observing behavior across processes, allowing the platform to stop the attack regardless of how it entered the environment – whether via a developer, CI/CD pipeline, or autonomous agent.
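To illustrate the kind of runtime signal described above, here is a simplified heuristic that flags command lines carrying long, decodable Base64 runs. This is not SentinelOne's actual detection logic, just a sketch of one signal a behavioral engine might weigh; the 40-character threshold is an assumption:

```python
import base64
import re

# Flag command lines containing long Base64 runs that decode cleanly.
# A simplified illustration only; real behavioral engines correlate
# many signals, and the 40-char threshold here is an assumption.
B64_RUN = re.compile(r"[A-Za-z0-9+/=]{40,}")

def has_decodable_b64_blob(cmdline):
    """True if the command line contains a long, valid Base64 run."""
    for blob in B64_RUN.findall(cmdline):
        try:
            base64.b64decode(blob, validate=True)
            return True
        except Exception:
            continue
    return False
```

On its own this rule would produce false positives (hashes and tokens also look like Base64), which is why production detection correlates it with process lineage and subsequent behavior rather than firing on the string alone.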
This incident underscores a growing trend: AI-driven attacks are operating at speeds that outpace human response. Effective defense now requires autonomous, behavior-based systems capable of acting instantly, closing the gap between detection and compromise before damage can occur.
The Bad | Attackers Compromise Axios to Deliver Cross-Platform RAT via Compromised npm
The JavaScript HTTP client Axios was hit by a major supply chain attack after malicious versions of the npm package introduced a hidden dependency that deploys a cross-platform remote access trojan (RAT). Specifically, Axios versions 1.14.1 and 0.30.4 were found to include a rogue package called “plain-crypto-js@4.2.1,” inserted using stolen npm credentials belonging to a core maintainer. This allowed attackers to bypass normal CI/CD safeguards and publish poisoned releases directly to npm.
Source: Socket
The malicious dependency exists solely to execute a post-install script that downloads and runs platform-specific malware on macOS, Windows, and Linux systems. Once executed, the malware connects to a command and control (C2) server, retrieves a second-stage payload, and then deletes itself while restoring clean-looking package files to evade detection. Notably, no malicious code exists within Axios itself, making the attack harder to detect through traditional code review.
The operation was highly coordinated, with staged payloads prepared in advance and both affected Axios branches compromised within minutes. Each platform-specific variant – C++ for macOS, PowerShell for Windows, and Python for Linux – shares the same functionality, enabling system reconnaissance, command execution, and data exfiltration. While macOS and Linux variants lack persistence, the Windows version establishes ongoing access via registry modifications.
Researchers believe the attacker leveraged a long-lived npm access token to gain control of the maintainer account. There are also indications linking the malware to previously observed tooling associated with a North Korean threat group known as UNC1069.
Users are strongly advised to downgrade Axios immediately to versions 1.14.0 or 0.30.3, remove the malicious dependency, check for indicators of compromise, and rotate all credentials if exposure is suspected.
The Ugly | High-Severity Chrome Zero-Day in Dawn Component Allows Remote Code Execution
Google has issued security updates for its Chrome browser to address 21 vulnerabilities, including a high-severity zero-day flaw, tracked as CVE-2026-5281, that is actively being exploited in the wild. The vulnerability stems from a use-after-free (UAF) bug in Dawn, an open-source implementation of the WebGPU standard used by Chromium. If successfully exploited, it allows attackers who have already compromised the browser’s renderer process to execute arbitrary code via a specially crafted HTML page.
While Google has confirmed active exploitation, it has withheld technical details and attribution to limit further abuse until more users apply the patch. This zero-day is the latest in a series of actively exploited Chrome flaws addressed in 2026 so far, bringing the total to four for this year alone. Previous issues included vulnerabilities in Chrome’s CSS component, Skia graphics library, and V8 JavaScript engine.
The Dawn flaw could lead to browser crashes, memory corruption, or other erratic behavior, underscoring the risks posed by modern browser attack surfaces. To date, Google has released fixes in Chrome version 146.0.7680.177/178 for Windows and macOS, and 146.0.7680.177 for Linux, now available through the Stable Desktop channel.
To protect against the flaw, users can update Chrome immediately by navigating to the browser’s settings and relaunching after installation. Other Chromium-based browsers, including Microsoft Edge, Brave, Opera, and Vivaldi, are also expected to roll out patches and should be updated promptly. CISA has added the flaw to its KEV catalog and mandated that FCEB agencies apply the patch by April 15, 2026 to protect their networks from attack. This latest incident highlights the ongoing targeting of web browsers by threat actors and reinforces the importance of timely patching to mitigate exploitation risks.
A guide to the suspected North Korean cyber attack—and how SentinelOne defends against it at machine speed
On March 31, 2026, a North Korean state actor hijacked the npm credentials of the primary Axios maintainer and published two backdoored releases that deployed a cross-platform remote access trojan (RAT) to Windows, macOS, and Linux systems. Axios is the most widely used HTTP client in the JavaScript ecosystem, with approximately 100 million weekly downloads and a presence in roughly 80% of cloud and code environments. The malicious versions were live for approximately three hours. An estimated 600,000 downloads occurred during that window with no user interaction required beyond a routine npm install.
SentinelOne protected customers against this attack, demonstrating why autonomous, layered defense at machine speed is not optional when adversaries operate at this velocity. In this attack, the first infection was observed 89 seconds after publication. At that pace, manual workflows do not have a response window. They have a spectator seat.
For SentinelOne’s customers and partners, here’s a quick overview of the compromise, SentinelOne’s response, and steps you can take to further protect your environment.
What Happened: The Anatomy of a State-Level Supply Chain Weapon
The attacker, tracked as UNC1069 by Google Threat Intelligence and Sapphire Sleet by Microsoft, compromised maintainer credentials and published axios@1.14.1 (tagged “latest”) and axios@0.30.4 (tagged “legacy”). Each version introduced a single new dependency: plain-crypto-js@4.2.1, a purpose-built trojan. The malicious package’s postinstall hook silently deployed a cross-platform RAT communicating over HTTP to C2 infrastructure at sfrclak[.]com (142.11.206[.]73), commonly being referred to as WAVESHAPER.V2.
The operational sophistication was striking. The attacker pre-staged a clean version of plain-crypto-js 18 hours before detonation to evade novelty-based detection. Publication occurred just after midnight UTC on a Sunday to maximize the response window. The malware self-deleted after execution, swapping its malicious package.json for a clean stub, leaving forensic evidence only in lockfiles and audit logs.
Most critically, Axios had adopted OIDC Trusted Publishing, the post-Shai-Hulud hardening measure npm promoted as the solution to credential-based attacks. But the OIDC configuration coexisted with a long-lived npm access token. npm’s authentication logic prioritizes environment variable tokens over OIDC when both are present. The attacker stole the legacy token and bypassed every modern control the project had in place.
The issue is architectural: security controls that coexist with the mechanisms they are meant to replace provide a false sense of protection. Axios had Trusted Publishing, SLSA provenance, and GitHub Actions workflows. None of it mattered because the old key was still under the mat.
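Teams can audit for this failure mode by checking whether legacy token material still exists alongside newer auth. A rough sketch that looks for commonly used token environment variables and _authToken lines in .npmrc files; the variable names are common conventions (e.g., NODE_AUTH_TOKEN as used by CI setups), not an exhaustive list:

```python
import os
import re
from pathlib import Path

# Hunt for long-lived npm token material that can silently take
# precedence over newer auth mechanisms. Env var names below are
# common conventions, not an exhaustive inventory.
TOKEN_ENV_VARS = ("NPM_TOKEN", "NODE_AUTH_TOKEN")
AUTH_LINE = re.compile(r"_authToken\s*=")

def find_token_material(env=None, npmrc_paths=()):
    """Return findings for set token env vars and .npmrc files
    containing _authToken lines."""
    env = os.environ if env is None else env
    findings = [f"env var set: {name}" for name in TOKEN_ENV_VARS if env.get(name)]
    for path in npmrc_paths:
        p = Path(path)
        if p.is_file() and AUTH_LINE.search(p.read_text()):
            findings.append(f"_authToken in {p}")
    return findings

# Usage: find_token_material(npmrc_paths=[Path.home() / ".npmrc"])
```

Any hit is a candidate for revocation: if a token is only a forgotten fallback, it is pure attack surface.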
How SentinelOne Is Protecting Customers
Behavioral Detection via the Lunar Engine
SentinelOne’s Lunar behavioral engine detects the renamed binary execution technique central to the Windows attack chain, in which PowerShell is copied to %PROGRAMDATA%\wt.exe and executed under a disguised process. The RenamedBinExecution logic catches this behavior regardless of the specific payload hash, providing durable detection against variants.
Global Hash Blocklist
All known stage payloads, malicious npm package tarballs, and RAT binaries across Windows, macOS, and Linux have been added to the SentinelOne Cloud blocklist with a globally blocked reputation status. This provides immediate protection for all customers with cloud-connected agents.
Wayfinder Threat Hunting
The Wayfinder Threat Hunting team executed proactive hunts across all MDR regions and operating systems using Axios-specific IOCs, including DNS queries to sfrclak[.]com, file artifacts (com.apple.act.mond, /tmp/ld.py, wt.exe), and consolidated hash sets. All true positive findings generate console alerts, with MDR customers receiving direct analyst engagement and escalation.
Sustained Research on This Threat Actor
SentinelLABS has tracked BlueNoroff, the DPRK-linked threat cluster with significant overlap to UNC1069, across multiple campaigns targeting macOS and credential theft operations. The WAVESHAPER.V2 macOS binary recovered from the Axios compromise carries the internal project name “macWebT,” a direct lineage marker to BlueNoroff’s documented webT module. SentinelLABS published detailed analysis of this tooling family in 2023 when RustBucket first emerged as a macOS-targeted campaign, and again in 2024 when BlueNoroff shifted to fake cryptocurrency news as a delivery mechanism with novel persistence techniques.
The initial access vector matters here, too. In March 2026, Google Threat Intelligence reported that UNC1069 leverages ClickFix, a social engineering technique that weaponizes user verification fatigue, as an initial access vector for credential harvesting. SentinelLABS had already published a detailed analysis of ClickFix techniques and their use in delivering RATs and infostealers before Google’s attribution dropped.
The behavioral detections that caught the Axios compromise were built on this accumulated intelligence, not written after the fact.
Live Security Updates (LSU)
Customers with LSU enabled receive real-time detection updates without waiting for agent releases, ensuring coverage evolves as fast as the threat intelligence does. This is critical for rapidly evolving supply chain campaigns where new IOCs emerge hourly.
What You Should Do Now
Supply chain compromise exploits the inherent trust enterprises place in their software delivery infrastructure. When that trust is weaponized by a state-level actor, the response must be both immediate and structural.
Audit and contain. Search all environments for axios@1.14.1 and axios@0.30.4. Treat any system that installed either version during the exposure window as fully compromised. Rebuild from known-good images rather than attempting in-place cleanup.
Rotate every credential the endpoint could reach. npm tokens, SSH keys, CI/CD secrets, cloud provider keys, and API tokens accessible from impacted systems must be rotated immediately. The RAT was designed to harvest exactly these credential types.
Pin dependencies and enforce lockfiles. Use npm ci (not npm install) in all CI/CD pipelines. Commit and audit lockfiles. Organizations using strict lockfile discipline were protected even during the three-hour exposure window. This is the single most actionable control.
Eliminate legacy npm tokens. Inventory all long-lived tokens across the organization. Migrate to OIDC Trusted Publishing and revoke legacy tokens entirely. Do not leave them as fallbacks. The coexistence of old and new authentication is what this attack exploited.
Harden detection policy. Ensure Behavioral AI and Documents & Scripts engines are set to Protect (On Execute). Avoid broad exclusions for developer tools like node.exe or npm. Enable LSU for real-time detection updates.
Extend endpoint coverage to developer workstations and CI runners. These environments have access to production secrets, deployment credentials, and code signing infrastructure. They are typically less monitored than production servers. DPRK has recognized this asymmetry and is systematically exploiting it.
Hunt proactively. Use Deep Visibility to search for DNS queries to sfrclak[.]com, connections to 142.11.206[.]73, and the presence of plain-crypto-js in any node_modules directory. SentinelOne’s 2025 Annual Threat Report documents how supply chain attacks are part of a broader pattern where adversaries are “shifting left” to subvert the build process itself, compromising software before it ever reaches production.
Practitioner Investigative Guide
In addition to the strategic recommendations above, here are some specific queries, file paths, and commands you can execute now to protect your environment.
Determine Blast Radius
Your first job is to answer one question: did any system in my environment pull a compromised Axios version during the March 31 exposure window (00:21 – 03:25 UTC)?
In the SentinelOne Console:
Open the Wayfinder alert queue. Look for the alert name “Axios NPM Supply Chain Compromise” (Wayfinder retroactive rule). If these alerts are not visible under default filters, switch the alert type from “EDR” to “All”, as these surface as Custom/STAR alerts.
For each alert, review the Storyline and process tree. The typical chain looks like this:
Developer process (VS Code, Electron, Node, Yarn, npx) → node → setup.js under plain-crypto-js → curl download from sfrclak[.]com:8000/6202033 → OS-specific payload execution
Classify the affected asset: developer workstation, CI/CD runner, or production server. This drives urgency. Shared CI runners imply wider blast radius because multiple teams and credential sets may be exposed.
Deep Visibility / Event Search hunts to run immediately:
What You’re Looking For | Query Pattern
C2 DNS resolution | #dns contains:anycase 'sfrclak.com'
C2 IP connection | #ip contains '142.11.206.73'
Malicious dependency on disk | File path contains node_modules/plain-crypto-js/ or */plain-crypto-js/setup.js
macOS RAT binary | File path: /Library/Caches/com.apple.act.mond
Linux loader | File path: /tmp/ld.py
Windows payload | File path: %PROGRAMDATA%\wt.exe
Renamed PowerShell execution | Lunar detection: RenamedBinExecution
Run hash hunts against consolidated IOC lists even if the global blocklist is already active. Historic hits help you quantify which systems were exposed and when.
Contain and Kill
For every system with confirmed Axios-related activity:
Mark the Storyline as Threat in the SentinelOne Console. Confirm that remediation commands (Kill + Quarantine) executed successfully.
Network-isolate the endpoint if the C2 connection succeeded (outbound to sfrclak[.]com or 142.11.206[.]73). Check for any secondary tooling or persistence beyond the initial RAT.
Block at the perimeter. Add the following to your firewall, proxy, and DNS blocklists:
Domain: sfrclak[.]com
IP: 142.11.206[.]73
Port: 8000
Check for persistence mechanisms:
Windows: Registry key “Microsoft Update” (used by the RAT for persistence), presence of 6202033.vbs or 6202033.ps1
macOS: Any process spawned from /Library/Caches/com.apple.act.mond, AppleScript execution from /var/folders/.../6202033
Linux: Active python3 processes running /tmp/ld.py, nohup wrappers
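The file-based checks above can be scripted for a fleet-wide sweep. A small sketch keyed on the artifacts listed; note that the Windows paths assume the default %PROGRAMDATA% location and a ProgramData drop site for the stage scripts (both assumptions), and the randomized /var/folders temp path on macOS is omitted because it cannot be checked statically:

```python
import platform
from pathlib import Path

# Campaign file artifacts keyed by OS. Windows paths assume the
# default %PROGRAMDATA% location and that the stage scripts landed
# there (an assumption); the randomized macOS /var/folders path
# is omitted.
ARTIFACTS = {
    "Darwin": ["/Library/Caches/com.apple.act.mond"],
    "Linux": ["/tmp/ld.py"],
    "Windows": [r"C:\ProgramData\wt.exe",
                r"C:\ProgramData\6202033.vbs",
                r"C:\ProgramData\6202033.ps1"],
}

def present_artifacts(system=None, artifacts=ARTIFACTS):
    """Return artifact paths from the table that exist on this host."""
    system = system or platform.system()
    return [p for p in artifacts.get(system, []) if Path(p).exists()]
```

A non-empty result warrants the full containment flow above (mark as threat, isolate, rotate credentials), not just deletion of the files.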
Credential Rotation and Dependency Cleanup
Assume every credential accessible from a confirmed-compromised endpoint is stolen. The RAT was built to harvest them.
Credential rotation checklist:
npm access tokens (revoke and reissue)
SSH keys (regenerate keypairs, update authorized_keys on all targets)
Git signing keys and code signing certificates if accessible from the endpoint
Dependency cleanup (all environments):
Pin Axios to known-good versions: axios@1.14.0 (1.x branch) or axios@0.30.3 (legacy branch)
Delete node_modules/plain-crypto-js/ wherever it exists
Run npm cache clean --force (or equivalent for Yarn/pnpm) on all affected build environments
Reinstall cleanly using npm ci --ignore-scripts during the cleanup period to prevent any other postinstall hooks from executing
Audit your package-lock.json / yarn.lock / pnpm-lock.yaml for any reference to plain-crypto-js. Its presence in a lockfile is a forensic indicator that the compromised version was resolved, even if the malware self-deleted.
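The lockfile audit lends itself to automation. A small sketch that walks a repository tree and flags any lockfile referencing plain-crypto-js:

```python
from pathlib import Path

LOCKFILES = ("package-lock.json", "yarn.lock", "pnpm-lock.yaml")
MARKER = "plain-crypto-js"

def audit_lockfiles(root: str) -> list[Path]:
    """Return every lockfile under `root` that references plain-crypto-js.
    A hit means the compromised axios version was resolved at some point,
    even if the malware later self-deleted."""
    hits = []
    for name in LOCKFILES:
        for lock in Path(root).rglob(name):
            if MARKER in lock.read_text(encoding="utf-8", errors="replace"):
                hits.append(lock)
    return hits
```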
Harden and Validate
Policy hardening:
Confirm Behavioral AI engine is set to Protect (On Execute), not Detect-only
Confirm Documents & Scripts engine is set to Protect (On Execute)
Review and remove any broad exclusions for node.exe, npm, yarn, python3, or developer IDEs
Verify LSU (Live Security Updates) is enabled. Customers on Fed/OnPrem environments without LSU access should confirm they are on the latest Service Pack
Confirm the SentinelOne agent is deployed on all developer workstations and CI/CD runners, not just production servers
Validation sweep:
Run a full disk scan on every endpoint that was in the blast radius
Verify no new users, services, or scheduled tasks were created during the exposure window
Confirm that network blocks for C2 infrastructure are active and logging hits
Re-run the Deep Visibility hunts from Hour 0-1 to verify no new activity has appeared
Key IOC Reference Card
Keep this card accessible for your team during the response.
Malicious packages:
| Package | SHA-1 |
|---|---|
| axios@1.14.1 | 2553649f2322049666871cea80a5d0d6adc700ca |
| axios@0.30.4 | d6f3f62fd3b9f5432f5782b62d8cfd5247d5ee71 |
| plain-crypto-js@4.2.1 | 07d889e2dadce6f3910dcbc253317d28ca61c766 |
C2 infrastructure:
| Indicator | Value |
|---|---|
| Domain | sfrclak[.]com |
| IP | 142.11.206[.]73 |
| Port | 8000 |
| URL pattern | hxxp[://]sfrclak[.]com:8000/6202033 |
| RAT User-Agent | mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0) |
File artifacts by OS:
| OS | Artifact | Path |
|---|---|---|
| macOS | RAT binary | /Library/Caches/com.apple.act.mond |
| macOS | Temp script | /var/folders/.../6202033 |
| Windows | Renamed PowerShell | %PROGRAMDATA%\wt.exe |
| Windows | Stage 1 | system.bat |
| Windows | Stage 2 | 6202033.ps1 |
| Windows | VBS launcher | 6202033.vbs |
| Linux | Python loader | /tmp/ld.py |
RAT beacon behavior: HTTP POST every 60 seconds, Base64-encoded JSON, two-layer obfuscation (reversed Base64 + XOR with key OrDeR_7077, constant 333). The IE8/Windows XP User-Agent string is anachronistic and serves as a strong network-level detection indicator.
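The exact byte-level layout of the beacon obfuscation has not been published beyond the summary above, so the sketch below is one plausible reading, offered for illustration only: repeating-key XOR with the constant folded into each key byte modulo 256, then Base64, then string reversal. It is not a verified decoder for this RAT:

```python
import base64

KEY = b"OrDeR_7077"
CONSTANT = 333  # folded into each key byte modulo 256 in this sketch

def xor_layer(data: bytes) -> bytes:
    """Repeating-key XOR; applying it twice restores the original bytes."""
    return bytes(b ^ ((KEY[i % len(KEY)] + CONSTANT) & 0xFF)
                 for i, b in enumerate(data))

def obfuscate(plain: bytes) -> str:
    """XOR, Base64-encode, then reverse the string (layer order assumed)."""
    return base64.b64encode(xor_layer(plain)).decode()[::-1]

def deobfuscate(blob: str) -> bytes:
    """Undo the layers in reverse order: un-reverse, Base64-decode, XOR."""
    return xor_layer(base64.b64decode(blob[::-1]))
```

Whatever the precise scheme, the operational takeaway is unchanged: the beacon body is not plaintext, so network detection should key on the 60-second POST cadence and the anachronistic IE8 User-Agent rather than payload content.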
SentinelLABS Expanded Indicators:
| Indicator | Value | Note |
|---|---|---|
| Email | nrwise@proton[.]me | Involved in supply chain compromise. |
| Email | ifstap@proton[.]me | Involved in supply chain compromise. |
| Domain | callnrwise[.]com | Overlaps with the email scheme and infrastructure design of the confirmed C2 domain. |
| Domain | focusrecruitment[.]careers | Overlapping domain registration details and timeline. Medium confidence. |
| Domain | chickencoinwin[.]website | Overlapping domain registration details and timeline. Medium confidence. |
The Structural Problem Is Bigger Than Axios
The progression from event-stream (2018, individual actor) to Shai-Hulud (2025, self-replicating worm across 500+ packages) to Axios (2026, DPRK state actor with multi-vendor attribution from SentinelOne, Google, and Microsoft) is not a series of isolated incidents. It is a clear escalation in adversary sophistication and strategic intent. North Korean threat actors stole $2.02 billion in cryptocurrency in 2025 alone, a 51% increase year-over-year, and the Axios RAT harvests exactly the credential types that feed that revenue pipeline.
Developer environments are now a Tier 1 attack surface. The organizations that treat them as anything less are operating with a structural blind spot that state-level adversaries have already mapped.
SentinelOne’s Autonomous Security Intelligence framework delivers what this moment requires: AI-native protection that detects and contains threats at machine speed, human expertise through Wayfinder MDR that translates alerts into confident action, and a unified platform that eliminates the fragmented visibility where supply chain attacks hide. When the next three-hour window opens, the question is whether your defense moves faster than the attacker. With SentinelOne, it does.
Disclaimer: All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third party.
For decades, attackers have favored one intrusion method over all others: compromise the identity. Long before ransomware crews industrialized extortion and modern malware ecosystems matured, adversaries understood a simple truth. If you can access a legitimate account, you can bypass most security controls and operate inside a network with the same privileges as the user who owns it. That strategy has not changed. What has changed is the scale and complexity of the identity surface attackers can exploit.
Modern enterprises no longer operate around a single directory and a handful of user accounts. Instead, organizations rely on sprawling webs of identities that span SaaS platforms, cloud infrastructure, APIs, service accounts, and increasingly autonomous AI agents. A single employee account may now provide access to dozens of interconnected services, while non-human identities quietly power automation behind the scenes.
This evolution has created a fundamental security dilemma: organizations now collect more identity telemetry than ever before, yet identity-based intrusions remain some of the hardest attacks to detect. Security teams are facing what can only be described as the “Identity Paradox”.
More Identity Data, Less Clarity
The Identity Paradox reflects a growing imbalance in modern security operations. Enterprises have unprecedented visibility into authentication events, login attempts, and access logs, yet attackers continue to breach organizations using legitimate credentials. The reason is simple: an attacker using a valid identity does not look like an attacker. They look like an employee doing their job.
SentinelOne’s Steve Stone, Warwick Webb, and Matt Berry break down some of the key aspects of the “Identity Paradox”.
Under this guise, threat actors increasingly rely on techniques that inherit trusted sessions or legitimate credentials. These include stolen authentication tokens, adversary-in-the-middle (AiTM) phishing campaigns, compromised developer accounts, and even state-sponsored insiders. In each case, the attacker bypasses security by leveraging an identity that the system already trusts.
When authentication appears legitimate, traditional defenses struggle to distinguish between normal activity and malicious intent. The problem is further compounded by the wide spectrum of identity abuse methods now being observed in the wild.
When the Attacker Is an “Employee”
At one extreme of the identity threat landscape are traditional credential theft campaigns powered by phishing, infostealers, and session hijacking tools. At the other extreme are state-sponsored actors who continue to put significant effort into infiltrating organizations by applying for open roles directly.
In recent years, investigators have documented coordinated efforts by North Korean IT workers to obtain remote employment at Western technology firms. These individuals create elaborate fake personas using stolen identities and fabricated work histories to pass background checks.
In 2025 alone, SentinelLABS tracked over 1,000 job applications and roughly 360 fake personas linked to these operations. Once hired, these individuals operate as legitimate insiders with authorized access to corporate infrastructure. From a telemetry perspective, the account is valid. HR has approved the employee and login activity appears normal, yet the identity itself has been subverted.
This highlights the core challenge of identity defense: the system may validate who the user is, but it cannot easily validate their intent.
Supply Chains & Trusted Developers
The Identity Paradox also extends deeply into the software supply chain. Developers and maintainers of open-source packages often hold privileged access to repositories that are widely trusted by downstream users. When these accounts are compromised, attackers can inject malicious code into legitimate projects while appearing to operate as the original maintainer.
One example observed in late 2025 involved the “GhostAction” campaign, where attackers compromised a GitHub maintainer account and pushed malicious workflows designed to extract secrets from development pipelines. Similarly, a phishing attack against a maintainer of popular NPM packages led to the deployment of malicious code capable of intercepting cryptocurrency transactions.
In both cases, the malicious commits originated from accounts with legitimate write access. Access controls were functioning exactly as designed. While the identity was verified, the intent behind the activity had changed.
The Expanding Identity Surface
As the definition of identity expands, employees are no longer the only actors operating within enterprise environments. Service accounts, APIs, workload identities, and AI agents are now executing actions across cloud platforms and SaaS environments at machine speed.
These non-human identities (NHIs) often operate with persistent privileges and broad access to critical resources. However, they are frequently overlooked in traditional identity governance frameworks. As organizations adopt automation and agent-driven workflows, non-human identities are rapidly becoming one of the fastest-growing attack surfaces in cybersecurity.
Traditional identity security models were built around human users and authentication events. That model does not translate well to NHIs, which can be ephemeral, programmatic, and massively scaled. In many environments, these automated identities vastly outnumber human users.
The Authorization Gap
The shift toward automation exposes another structural weakness in traditional identity security: the “Authorization Gap”. Security frameworks have historically focused on the moment of authentication as a gate that determines whether a user is allowed to enter. Organizations have accordingly invested heavily in stronger authentication mechanisms, granular permissions, and zero trust access models. These controls remain essential, but authentication alone cannot determine what happens after access is granted.
A fully authenticated user may still perform reconnaissance, exfiltrate sensitive data through a browser, or upload proprietary code into generative AI tools. Likewise, a correctly provisioned service account could be abused for lateral movement across cloud infrastructure. Once inside, traditional identity systems often assume legitimacy. This assumption creates a dangerous blind spot between who is allowed into the system and what they actually do once inside it.
Shifting the Focus to Behavior
Defeating the Identity Paradox requires a fundamental shift in how organizations think about identity security. Rather than focusing narrowly on authentication, defenders can broaden their scope to the behavior that occurs after login. Post-authentication behavioral monitoring allows security teams to identify deviations from expected activity patterns, such as:
Access to sensitive repositories outside a developer’s normal workflow
Unexpected privilege changes or administrative actions
Bulk data exports from SaaS platforms
Identity-driven lateral movement across systems
These behavioral signals often reveal malicious activity long before traditional alerts trigger. Organizations should treat events such as new MFA device enrollments, OAuth permission grants, and service account privilege changes as high-risk signals that require close scrutiny. Restricting long-lived sessions, monitoring concurrent authentication activity, and auditing machine-to-machine trust relationships can significantly reduce an attacker’s ability to convert a single compromised credential into persistent access.
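One way to operationalize these high-risk signals is a simple additive risk score per session. The signal names and weights below are purely illustrative and not tied to any product’s event schema:

```python
# Hypothetical post-authentication risk scoring. Signal names and weights
# are illustrative only, not tied to any product's event schema.
HIGH_RISK_SIGNALS = {
    "mfa_device_enrolled": 40,
    "oauth_grant_added": 35,
    "service_account_privilege_change": 45,
    "bulk_saas_export": 30,
    "lateral_auth_new_host": 25,
}

def score_session(events: list[str], threshold: int = 60) -> tuple[int, bool]:
    """Sum the risk weights of the signals observed in one session and
    flag the session for review when the total crosses the threshold."""
    total = sum(HIGH_RISK_SIGNALS.get(e, 0) for e in events)
    return total, total >= threshold
```

The point of the sketch is the shape, not the numbers: each signal is weak alone, but co-occurrence within one session is what separates a credentialed attacker from an employee doing their job.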
Conclusion | Defeating the Identity Paradox
Identity is both the attacker’s preferred entry point and the defender’s most valuable signal. Organizations that succeed in defending against identity-driven threats will be those that treat identity not as a static credential, but as a continuously monitored security boundary.
That means validating not only who is acting within the system, but also how that identity behaves over time, whether it belongs to a human employee, a service account, or an autonomous AI agent. As automation accelerates and machine-driven activity expands across enterprise environments, identity security must evolve accordingly.
SentinelOne’s Autonomous Security Intelligence architecture is designed to support this expansion. It delivers comprehensive visibility and response across both human and non-human activity: Singularity Identity delivers essential context around who (or what) is taking action, Prompt Security detects misuse within browsers and AI-driven workflows, and Singularity Endpoint verifies behavior directly at the system level.
Together, all three capabilities create a continuous execution layer that correlates activity across identities, applications, and devices. SentinelOne uniquely provides immediate, end-to-end visibility into GenAI usage along with data protection at every point of employee interaction on managed devices – all without requiring SASE redesigns or API-level integrations.
As advanced threats increasingly operate behind legitimate access and automation drives more machine-led activity, enterprise resilience hinges on securing execution itself in real time. SentinelOne is evolving identity from a static checkpoint into an ongoing system of behavioral validation, ensuring the integrity of every action across the enterprise, whether performed by a user, service account, or AI agent.
SentinelOne's Annual Threat Report
A defender’s guide to the real-world tactics adversaries are using today to abuse identity, exploit infrastructure gaps, and weaponize automation.
Across organizations, AI adoption is accelerating. Tools are being deployed, workflows are being restructured, and headcount decisions are being made against the assumption that AI will absorb the analytical load. Most leaders doing this work believe they are being careful because the technology keeps reminding them it isn’t ready yet.
This is a dangerous phase in any technological transition. While we are currently struggling to get these models to behave, to integrate them into our stacks, and to verify their messy outputs, we feel safe. We mistake the current difficulty of implementation for the inherent difficulty of the task. This is not just an error in judgment. It is a cognitive trap that will cost organizations their institutional knowledge and competitive advantage.
This trap has a name. The “cognitive rust belt” is the hollowing-out of human analytic capacity when organizations hand core thinking tasks to AI and stop exercising those skills themselves. It is happening now, across industries, hidden behind a wall of implementation friction that makes the problem invisible to the people experiencing it.
If you lived through the early days of the internet or the migration to the cloud, you know this feeling. You remember the broken APIs, the architectural wars, the endless debates about whether it would ever really work at scale. But there is a fundamental difference this time that most leaders are missing because they are too busy fighting with their prompts.
The critical question is not how hard AI is to implement today. It is what your organization looks like once it isn’t. This piece names that difference, explains why the current friction is masking the problem rather than preventing it, and gives you three questions to audit your exposure before the window closes.
Infrastructure vs. Intellect | The Category Difference
The transitions to the internet and the cloud were shifts in infrastructure. They changed where data lived and how it moved. They were, fundamentally, plumbing problems. Whether you were mailing a floppy disk or uploading to an S3 bucket, a human still had to do the analytical work. The friction was in the delivery mechanism, not the cognition itself.
The AI transition is categorically different. This is a shift in agency, not architecture. We are not just changing the pipes; we are changing who (or what) processes the data. And this distinction matters more than many organizations realize.
Consider a typical analysis task in 2010 versus today. In 2010, the challenge was getting the right telemetry in front of the analyst and doing it fast enough. You pulled server logs, endpoint artifacts, maybe a PCAP or a disk image, then you manually triaged. You grepped, pivoted, correlated timestamps across sources, built a timeline, extracted IOCs, assessed scope and impact, and wrote the recommendation: contain, eradicate, harden, detect. Infrastructure limited speed and scale, but the human remained the cognitive bottleneck.
Today, the hard part is “getting the AI to behave”: stop hallucinating, follow the format, use the right context, ground in the right evidence. But that framing hides what’s actually changing.
We are not just accelerating access to data, we are delegating the synthesis. When a model reads a week of EDR events, clusters related activity, proposes likely intrusion paths, summarizes the timeline, and drafts containment steps, it is not acting as infrastructure. It is acting as the junior analyst. The human’s job shifts from doing the reasoning to auditing it.
The problem is that right now, the cognitive rust belt is hidden behind that wall of technical frustration. Your team appears engaged because they are working hard to make AI work. They are debugging prompts, building verification pipelines, implementing guardrails. This looks like skill development. It is not. They are sharpening their troubleshooting skills for a tool that will eventually be frictionless, not sharpening their domain expertise.
When the friction disappears, and it will, what muscle memory will they have built? The ability to craft better prompts for a model that no longer needs careful prompting. The ability to verify outputs from a system that has become more reliable than their own domain knowledge. The ability to troubleshoot integrations that have been standardized and commoditized.
Why Senior Staff Can’t See This Problem
Those of us currently in the workforce grew up in the grunt work era. We built mental models, smell tests, and professional intuition through years of manual, often tedious labor. We learned to spot a flawed analysis not because we ran it through a verification checklist, but because something felt wrong. That instinct came from doing it wrong ourselves, repeatedly, at 2am with a deadline looming.
We view AI through the lens of a senior professional. To us, it is an assistant that handles the boring stuff while we provide the oversight. We find it nearly impossible to imagine a world where that intuition does not exist because it is already baked into our brains. We cannot un-know what we know.
This creates a massive blind spot. When you automate the entry-level rungs of the ladder, you are not “freeing people up for higher-level work.” You are removing the very gym where the mental muscles for that higher-level work are built.
The pattern holds across any profession where expertise is built through repetitive, often tedious, hands-on work. Research[1,2] on expert performance consistently finds that professional judgment develops not through instruction, but through repeated engagement with real problems — making errors, receiving feedback, and slowly recalibrating.
Think about how a senior financial analyst develops their sense for when a valuation model is off. It is not from reading the textbook on discounted cash flow. It is from building hundreds of models, getting the assumptions wrong, seeing the absurd outputs, and slowly calibrating their instincts. They develop a sensitivity to which levers matter and which are noise.
Now imagine a junior analyst whose first three years are spent reviewing AI-generated models. They can spot when the AI has made an obvious error because the revenue growth assumption is 400%. But can they spot when the model has used the wrong cost of capital framework for an emerging market acquisition? Can they sense when the comparable set is technically correct but strategically misleading?
The answer is no. Not because they are less intelligent, but because they never developed the error-detection patterns that come from making and fixing their own errors. They are one layer removed from the problem space. They are auditors of a system they have never operated.
This is not hypothetical. In domains where automation has already removed the grunt work, we see this pattern clearly. Pilots who learned to fly on highly automated aircraft have weaker manual flying skills and slower situational awareness when automation fails. Radiologists who trained primarily on AI-assisted systems show reduced ability to interpret edge cases the AI was not trained on. The pattern is consistent: when you remove the struggle, you remove the learning.
The Settled State Trap
Technology tends to follow a predictable evolutionary arc:
Friction Phase: For engineers, this is broken integrations and prompt failures. For the knowledge worker, it is wrestling with outputs that are close but wrong in ways that require domain knowledge to catch. For both, the technology’s unreliability is a forcing function. Humans stay cognitively in the loop not out of discipline, but because they have no choice.
Standardization Phase: Tooling matures. Best practices emerge. The knowledge worker’s experience smooths out significantly. The engineer moves on to the next integration challenge. For the person entering the workforce right now, this is where they arrive — they do not experience a friction phase at all.
Invisibility Phase: The tool becomes a utility. No one thinks about it. This is where electricity, indoor plumbing, and cloud infrastructure have landed. The person who joins the workforce in three years will have no memory of AI being anything other than ambient infrastructure. The forcing function is gone. There is nothing left to push back.
When AI reaches the invisibility phase, the current friction that keeps us engaged will vanish. We will not be prompt engineering. We will not be verifying outputs with a skeptical eye. We will not be debugging integration issues. We will be passively consuming results from a system that has become as invisible and trusted as our email client.
Look at how most knowledge workers interact with Excel. How many people using pivot tables understand the underlying algorithms? How many people using VLOOKUP could implement a lookup function from scratch? The answer is almost none, and that is fine for a deterministic tool with well-understood failure modes.
But AI is not deterministic. Its failure modes are subtle, context-dependent, and often invisible to users who lack deep domain expertise. A model asked to assess a novel threat actor may confidently apply the closest historical pattern in its training data — attributing tradecraft to a known group because the techniques superficially match, while missing indicators that suggest an entirely different origin or objective. A model summarizing a week of endpoint telemetry may produce a clean, coherent timeline that happens to exclude the anomalous process behavior that doesn’t fit the pattern — not because it hallucinated, but because it normalized.
These are not hallucinations. They are plausible inferences that happen to be wrong in ways that require deep domain knowledge to detect. When AI reaches the settled state, most users will not have that knowledge. They will trust it the way they trust autocorrect: mostly correct, with occasional catastrophic failures they do not see coming.
What This Looks Like in Practice | The Alert That Closed Itself
A medium-severity alert arrives: suspicious child process spawning from a scheduled task. The execution chain shows intermittent lateral movement attempts using WMI, then silence. The AI triage system flags it as likely benign with low confidence because the process tree does not match common ransomware or C2 patterns in its training data.
Nobody escalates it. The closure rules were configured during initial deployment and have not been audited since. No one routinely samples low-confidence benign outcomes for false negatives. The response automation closes the ticket.
Two weeks later, the security team discovers lateral movement to a domain controller and evidence of credential harvesting. The adversary used a novel persistence technique (living-off-the-land binary abuse combined with registry modification, not flagged by endpoint detection). They moved laterally just enough to establish access, then went dormant. During those two weeks, they exfiltrated architectural diagrams, credential databases, and HR records.
Now the team tries to reconstruct the attack timeline and discovers the deeper problem: they cannot. They have been living inside summaries, not raw telemetry. When they pull the original endpoint logs, they realize they do not know how to correlate process trees with authentication events manually. They do not know what normal scheduled task behavior looks like in their environment because they have never needed to examine it directly.
The incident response firm they hire charges $450/hour and takes three days to produce the timeline the internal team should have been able to reconstruct in six hours.
The executive debrief asks why the alert was closed. The answer is uncomfortable: nobody questioned the automation.
The Forward-Thinking Audit
The trap is avoidable, but the window is closing. Audit your exposure now, while the friction still makes the problem visible. Once AI reaches the settled state, the expertise gap will already be locked in. Three questions:
If your senior staff retired tomorrow, could your current junior employees replicate their gut instinct decisions using only the tools provided?
This is not asking whether they could do the job adequately with training. This is asking whether they have built the foundational mental models that allow them to recognize when the tools themselves are insufficient or misleading.
If your honest answer is no, you need to map out which specific experiences and failure modes your seniors went through that your juniors are now being protected from. Those protected experiences are the foundation your organization’s future judgment is built on.
Every year you defer addressing this, that foundation gets thinner.
Are you building workflows that require human judgment, or are you building verification loops where a human clicks “approve” on a machine-generated task?
There is a critical difference. Human judgment means the person is actively constructing part of the solution, making trade-off decisions with incomplete information, and applying contextual knowledge that is not fully captured in any system. Verification means checking whether an output meets a predefined quality bar.
Verification is valuable. But if it is the only cognitive work your junior staff is doing, they are not developing judgment. They are developing quality assurance skills. When the AI gets good enough that verification becomes pro forma, what expertise will they have built?
Do you know which manual skills in your department are currently being treated as waste to be eliminated, but are actually the training data for your future leaders?
Here is a heuristic: if the task requires making multiple small judgment calls based on contextual knowledge, it is probably building expertise even if it is boring. If the task is purely procedural with no meaningful decision points, it is probably safe to automate.
The danger zone is tasks that appear procedural but actually contain implicit judgment calls that experts have internalized to the point of unconsciousness. These are the tasks where automation removes learning opportunities that are invisible to the people designing the automation.
The Uncomfortable Conclusion
The cognitive rust belt is not a future threat. It is a slow-motion oxidation happening right now, masked by the noise of implementation. If you answered “no” to any of the three questions above, you are not looking at a future risk. You are looking at a gap that is already open. What you do with that answer is the only variable still in play.
Every organization currently deploying AI is making an implicit bet: that the current friction will last long enough for them to figure out the expertise development problem. That bet is likely wrong. The technology is improving faster than organizational learning cycles. By the time most companies realize they have a problem, the expertise gap will already be unbridgeable.
The hardest part of this problem is that it runs against every instinct of modern management. You are being told to preserve inefficiency, to intentionally slow down processes that could be automated, to force junior staff to do manual work that appears wasteful. This feels wrong because in almost every other context, it is wrong.
But the difference is that in most contexts, the inefficiency is pure waste. In this context, the inefficiency is the education. The struggle is not a bug to be eliminated; it is the feature that builds expertise.
The three questions above address what your organization does next. The harder question, how we got here and what the systematic hollowing-out of analytic capacity looks like across an entire profession, is worth examining on its own terms. If you wait for the technology to settle before you address this, you will find there is nothing left to save. The time to build organizational muscle memory is while the weights are still heavy. Once they become weightless, the gym is closed.
Host-based behavioral AI detection is by far the most effective way to generically see and stop rogue or malicious activity, whether it is driven by a human or a machine-speed AI agent.
On March 24, 2026, SentinelOne’s autonomous detection caught what manual workflows never could have: a trojaned version of LiteLLM, one of the most widely used proxy layers for LLM API calls, executing malicious Python across multiple customer environments. The package had been compromised hours earlier. No analyst wrote a query. No SOC team triaged an alert. The Singularity Platform identified and blocked the payload before it could run, across every affected environment, on the same day the attack was launched.
The LiteLLM supply chain compromise is not an anomaly. It is the new pattern: multi-stage, multi-surface, designed to evade manual workflows at every turn. A compromised security tool led to a compromised AI package, which led to data theft, persistence, Kubernetes lateral movement, and encrypted exfiltration, all within a window measured in hours.
SentinelOne detected and blocked this attack autonomously, on the same day it was launched, across multiple customer environments. No manual triage. No signature update. No analyst in the loop for the initial containment. This is what autonomous, AI-native defense looks like when it meets a real-world threat at machine speed.
The gap between the velocity of this attack and the capacity of human-driven investigation is the gap where organizations get compromised. Closing that gap is not a feature request. It is an architectural decision. This is what happens when AI infrastructure gets targeted by a multi-stage supply chain campaign, and what it looks like when autonomous, AI-native defense is already in position.
Here is what we detected, how the attack was structured, and why this is the class of threat that the Singularity Platform was built to stop.
Autonomous Detection at Machine Speed
SentinelOne’s macOS agent identified and preemptively killed a malicious process chain originating from Anthropic’s Claude Code running with unrestricted permissions (claude --dangerously-skip-permissions). No human developer ran pip install; an autonomous AI coding assistant updated LiteLLM to the compromised version as part of its normal workflow.
The AI engine classified the behavior as MALICIOUS and took immediate action: KILLED (PREEMPTIVE) across 424 related events in under 44 seconds. The agent didn’t need to know the package was compromised; it watched what the process did and stopped it based on behavior, regardless of what initiated the install.
Catching the Payload in the Act
The macOS agent caught the trojaned LiteLLM package mid-execution. The process summary tells the story: python3.12 launching with a command line containing import base64; exec(base64.b64decode(...)), the exact bootstrap mechanism described in the attack’s first stage, decoding and executing the obfuscated payload in a child process.
The agent didn’t need a signature for this specific package. It recognized the behavioral pattern, a Python interpreter executing base64-decoded code in a spawned subprocess, classified it as MALICIOUS, and killed it preemptively before the stealer, persistence, or lateral movement stages could deploy.
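To make the behavioral pattern concrete, here is a deliberately simplified, hypothetical heuristic for the command-line shape described above: a Python one-liner that base64-decodes and exec()s code. The function name and regex are illustrative inventions; real behavioral engines correlate many signals beyond a single string match.

```python
import re

# Toy heuristic flagging the bootstrap shape described above: a command line
# where exec() is fed the output of base64.b64decode(). This single regex is
# only a sketch of one signal a behavioral engine might weigh.
SUSPICIOUS = re.compile(r"\bexec\s*\(\s*base64\.b64decode\b")

def looks_like_b64_bootstrap(cmdline: str) -> bool:
    """Return True when a process command line matches the decode-and-exec pattern."""
    return bool(SUSPICIOUS.search(cmdline))

print(looks_like_b64_bootstrap(
    "python3.12 -c \"import base64; exec(base64.b64decode('aW1wb3J0IG9z'))\""))  # True
print(looks_like_b64_bootstrap("python3.12 -m pip install litellm"))             # False
```

The point of the sketch is that the match keys on what the process is about to do, not on any package name or file hash, which is why this class of detection survives a brand-new trojaned release.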
The Full Process Tree: Containing the Blast Radius
Zooming out on the same detection reveals the full scope of what the autonomous AI agent was doing when the payload fired. The process tree expands from Claude Code (2.1.81) into a sprawling chain: zsh, bash, node, uv, ssh, rm, python3.12, mktemp, with hundreds of child events still loadable (304 events captured). This is what unrestricted AI agent activity looks like at the endpoint level: a single command spawning an entire dependency management workflow that pulled, installed, and attempted to execute the trojaned package.
The SentinelOne macOS agent traced every branch of this tree, correlated the events back to the root cause, and killed the malicious execution, all while preserving the full forensic record for investigation.
The Compromise Was Indirect. That’s What Makes It Dangerous.
The attacker, operating under the alias TeamPCP, never attacked LiteLLM directly. They first compromised Trivy, a widely trusted open-source security scanner, on March 19. From there, they obtained the LiteLLM maintainer’s PyPI credentials and used them to publish two malicious versions: 1.82.7 and 1.82.8.
A security tool, built to find vulnerabilities, became the vector that enabled the compromise of an AI infrastructure package used by thousands of organizations. The same actor went on to compromise Checkmarx KICS and AST on March 23, and Telnyx on March 27. This was not a smash-and-grab. It was a coordinated campaign that exploited the transitive trust woven through open-source supply chains.
For security leaders asking, “Could this have reached us?” the more pressing question is: “How fast could we have answered that?”
A New Attack Surface: AI Agents With Unrestricted Permissions
In one customer environment, SentinelOne detected the infection arriving through an unexpected vector: an AI coding assistant running with unrestricted system permissions autonomously updated LiteLLM to the trojaned version without human review. The update pulled the infected package, and the payload attempted to execute. Our agent blocked it.
This is a new class of attack surface that most organizations have not yet scoped. AI coding agents operating with full system permissions can become unwitting vectors for supply chain compromises. The speed and automation that make these tools valuable are the same properties that make them dangerous when the packages they pull have been weaponized. Organizations that have not yet established governance policies for AI assistant permissions are carrying risks they cannot see.
SentinelOne’s behavioral detection operates below the application layer. It does not matter whether a malicious package is installed by a human, a CI pipeline, or an AI agent. The platform monitors process behavior via the Endpoint Security Framework, which is why this detection fired regardless of how the infected package arrived.
Two Infection Vectors, One Designed to Run Without You
Version 1.82.7 embedded its payload in proxy_server.py, which executes every time the litellm.proxy module is imported. For anyone using LiteLLM as a proxy layer for LLM API calls, this fires constantly during normal operations.
Version 1.82.8 escalated. The attacker placed the payload in a .pth file, litellm_init.pth. Files with the .pth extension are processed by the Python interpreter at startup, regardless of which modules are imported. Any Python script running on a system with this version installed would trigger the malicious code, even if that script had nothing to do with LiteLLM.
If version 1.82.7 was a targeted shot, version 1.82.8 was a blast radius expansion. The attacker removed the requirement that the victim actually use the compromised library.
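The .pth mechanism can be demonstrated safely with the standard library alone. Python’s site module, the same machinery that runs at interpreter startup, executes any .pth line that begins with `import`, which is exactly why a planted .pth file fires for every Python process on the machine. The file name below is a benign stand-in, not the attacker’s.

```python
import os
import site
import sys
import tempfile

# Benign demonstration of why a .pth file is such a powerful implant location:
# site.addsitedir() processes .pth files the same way the interpreter does at
# startup, and any line beginning with "import" is exec()'d, not merely imported.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo_init.pth"), "w") as f:
    f.write("import sys; sys.pth_payload_ran = True\n")

site.addsitedir(tmp)  # triggers .pth processing for the directory
print(getattr(sys, "pth_payload_ran", False))  # True
```

Swap the harmless attribute assignment for a malicious bootstrap and every Python script on the host becomes a trigger, which is the blast-radius expansion version 1.82.8 achieved.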
What the Payload Did Once Inside
The attack was structured as a multi-stage delivery system, each stage decoding, decrypting, and executing the next. The first stage was a minimal bootstrap, a single line of base64-decoded Python launched in a detached subprocess with stdout and stderr suppressed. Lightweight enough to slip past signature-based tools. Quiet enough to avoid raising flags.
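The detached, output-suppressed launch described above can be sketched with the standard library. This is a minimal illustration of the technique, with a benign print statement standing in for the real stage-two payload.

```python
import subprocess
import sys

# Sketch of the stage-one launch technique: a child process detached into its
# own session with stdout/stderr discarded, so nothing reaches the parent's
# terminal or logs. The payload here is intentionally harmless.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('stage two would decode here')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # detach from the parent's process group (POSIX)
)
proc.wait()
print(proc.returncode)  # 0
```

Because nothing is written to the parent’s streams and the child outlives its process group, tooling that only watches foreground output or the launching shell sees nothing at all, which is precisely why endpoint-level process monitoring matters here.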
The second stage was a comprehensive data stealer. It harvested system and user information, cryptocurrency wallets, cloud credentials, application secrets, and system configurations. For practitioners wondering what the blast radius looks like if a developer workstation is compromised, this is the answer: the attacker collects everything needed to move from a laptop to production infrastructure.
The third stage established persistence through a systemd user service at ~/.config/systemd/user/sysmon.service, executing a script at ~/.config/sysmon/sysmon.py. The naming convention, “sysmon,” was deliberately chosen to mimic legitimate system monitoring tools. It is designed to survive casual inspection and blend into environments where dozens of services run as expected background noise. This is precisely the kind of evasion that signature-based detection misses and behavioral AI catches: the process looks normal until you observe what it actually does.
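A simple triage sweep can surface this persistence pattern. The helper below is a hypothetical illustration, not a SentinelOne detection: it takes (unit name, unit file text) pairs, so it can be run over files collected from ~/.config/systemd/user/ or any forensic image, and flags services whose ExecStart points into a dotfile directory, as the “sysmon” implant’s does.

```python
import re

# Hypothetical triage helper: flag user-level systemd units whose ExecStart
# path lives under a hidden config directory. Legitimate services rarely
# execute out of ~/.config, which is where the implant described above hides.
EXEC_RE = re.compile(r"^ExecStart=(.+)$", re.MULTILINE)

def suspicious_user_units(units):
    hits = []
    for name, text in units:
        m = EXEC_RE.search(text)
        if m and "/.config/" in m.group(1):
            hits.append(name)
    return hits

units = [
    ("sysmon.service",
     "[Service]\nExecStart=/usr/bin/python3 /home/dev/.config/sysmon/sysmon.py\n"),
    ("syncthing.service",
     "[Service]\nExecStart=/usr/bin/syncthing serve\n"),
]
print(suspicious_user_units(units))  # ['sysmon.service']
```

The check deliberately ignores the service name, since the whole point of the “sysmon” naming is to look legitimate; what it executes and from where is the harder thing to fake.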
The persistence mechanism included a 5-minute initial delay before any network activity, a technique specifically designed to outlast automated sandbox analysis. After that, the script contacted its C2 server every 50 minutes, fetching dynamic payload URLs. This sparse communication pattern makes behavioral detection through network monitoring significantly harder, and gives the attacker the ability to push new tooling without ever re-compromising the target.
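Even sparse beaconing leaves a statistical fingerprint: connections to one destination at near-constant intervals. The toy detector below illustrates the idea on connection timestamps; the function name and thresholds are my own assumptions, and production detections use far richer statistics than a single standard deviation.

```python
from statistics import pstdev

# Toy illustration of spotting low-and-slow beaconing: repeated contacts with
# nearly identical gaps between them. Thresholds here are arbitrary examples.
def looks_like_beacon(timestamps, min_events=4, max_jitter=60.0):
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter  # near-identical gaps => beacon-like

every_50_min = [i * 3000.0 for i in range(6)]            # 50-minute cadence
human_browsing = [0.0, 7.0, 95.0, 110.0, 4000.0, 4002.0]  # bursty, irregular
print(looks_like_beacon(every_50_min))    # True
print(looks_like_beacon(human_browsing))  # False
```

The 5-minute startup delay defeats short-lived sandboxes, but it does nothing against this kind of long-window analysis, which only needs a handful of check-ins to see the cadence.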
It Moved Laterally Through Kubernetes
The attack did not stop at the workstation. It created privileged pods across Kubernetes cluster nodes in the kube-system namespace, using standard container images like alpine:latest, with hostPID, hostNetwork, and a privileged security context. By mounting the host filesystem directly, these pods gained root-level access to underlying nodes.
Each pod deployed persistent backdoors as systemd services on the host system. The pods operated in legitimate namespaces, used standard images, and ran with privileges that many production workloads legitimately require. For SOC practitioners asking whether their admission control and runtime detection would have caught this: the attack was designed specifically so they might not. Detecting this requires runtime visibility into container behavior after deployment, not just policy enforcement at the admission gate. This is the difference between cloud security that checks configuration and cloud security that watches execution.
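For teams that want to sweep for pods with this risk profile, a check over plain pod-spec dictionaries is a reasonable starting point. This is a hypothetical sketch keyed to the traits described above (hostPID, hostNetwork, privileged containers, host filesystem mounts); it operates on the JSON shape returned by the Kubernetes API, so it can be pointed at `kubectl get pods -o json` output without any cluster libraries.

```python
# Hypothetical triage check over pod specs mirroring the traits described
# above. A hit means "review this pod", not "this pod is malicious": some
# production workloads legitimately need these privileges.
def is_high_risk_pod(spec):
    if spec.get("hostPID") or spec.get("hostNetwork"):
        return True
    for c in spec.get("containers", []):
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            return True
        for m in c.get("volumeMounts", []):
            if m.get("mountPath") == "/host":  # host filesystem mounted in
                return True
    return False

backdoor = {
    "hostPID": True,
    "hostNetwork": True,
    "containers": [{"image": "alpine:latest",
                    "securityContext": {"privileged": True},
                    "volumeMounts": [{"mountPath": "/host"}]}],
}
print(is_high_risk_pod(backdoor))                              # True
print(is_high_risk_pod({"containers": [{"image": "nginx"}]}))  # False
```

Static checks like this only flag candidates for review; as the text notes, distinguishing abuse from legitimate privileged workloads still requires runtime visibility into what the container actually does.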
Exfiltration Was Encrypted and Camouflaged
Stolen data was encrypted using a hybrid RSA and AES-256-CBC scheme. A random 32-byte AES session key encrypted the data, then the session key itself was protected with RSA encryption using a hardcoded public key. This meant the malware could encrypt and exfiltrate without first communicating with the C2 server. The encrypted payload was packaged as tpcp.tar.gz and transmitted via a single HTTP POST to models.litellm.cloud, a domain chosen to blend with legitimate LiteLLM API traffic and slip past network monitoring that whitelists expected destinations.
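The hybrid scheme can be sketched with the pyca/cryptography library. This is an illustrative reconstruction under stated assumptions, not the malware’s code: the report specifies RSA-wrapped AES-256-CBC session keys, but the RSA padding mode (OAEP below) and all function names are my own choices.

```python
import os

from cryptography.hazmat.primitives import hashes, padding as sym_padding
from cryptography.hazmat.primitives.asymmetric import padding as asym_padding, rsa
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Sketch of the hybrid scheme described above (requires pyca/cryptography).
# A fresh AES-256 session key encrypts the data, then the session key is
# wrapped with a hardcoded RSA public key, so encryption needs no prior C2
# contact. OAEP is an assumption, not a detail from the report.
def hybrid_encrypt(data: bytes, public_key):
    session_key = os.urandom(32)  # AES-256 session key
    iv = os.urandom(16)
    padder = sym_padding.PKCS7(128).padder()
    padded = padder.update(data) + padder.finalize()
    enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    wrapped_key = public_key.encrypt(
        session_key,
        asym_padding.OAEP(mgf=asym_padding.MGF1(hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, iv, ciphertext

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, iv, ct = hybrid_encrypt(b"stolen-data-placeholder", key.public_key())
print(len(wrapped), len(iv))  # 256 16
```

Because only the attacker holds the matching private key, defenders who capture the tpcp.tar.gz archive in transit cannot recover its contents, which is exactly why detection has to happen before this stage.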
What This Attack Proves
The LiteLLM supply chain compromise is not an anomaly. It is the new pattern: multi-stage, multi-surface, designed to evade manual workflows at every turn. A compromised security tool led to a compromised AI package, which led to data theft, persistence, Kubernetes lateral movement, and encrypted exfiltration, all within a window measured in hours.
SentinelOne detected and blocked this attack autonomously, on the same day it was launched, across multiple customer environments. No manual triage. No signature update. No analyst in the loop for the initial containment. This is what autonomous, AI-native defense looks like when it meets a real-world threat at machine speed.
The gap between the velocity of this attack and the capacity of human-driven investigation is the gap where organizations get compromised. Closing that gap is not a feature request. It is an architectural decision.
Why This Detection Worked: Architecture, Not Luck
The LiteLLM detection wasn’t a one-off. It’s what happens when autonomous, behavioral AI is built into the foundation, not bolted on after the fact. The Singularity Platform’s visibility across endpoint, cloud, identity, and AI workloads is why the agent saw this regardless of whether the install came from a human, a CI pipeline, or an AI coding assistant.
For teams that need the human expertise layer on top, Wayfinder MDR extends that autonomous detection with 24/7 investigation and response, closing the gap between detection and resolution.
This is the Autonomous Security Intelligence (ASI) framework in practice: AI that acts at machine speed, backed by human expertise when it matters, across every surface the attack can reach. See how the Singularity Platform protects AI infrastructure and request a demo today.
Protect Your Endpoint
See how AI-powered endpoint security from SentinelOne can help you prevent, detect, and respond to cyber threats in real time.
The Good | U.S. Jails Ransomware Actors, Extradites Alleged RedLine Operator
The DoJ has sentenced Russian national Aleksey Volkov to almost seven years in prison and ordered him to pay full restitution for acting as an initial access broker in Yanluowang ransomware attacks. Between 2021 and 2022, he breached multiple U.S. organizations and sold network access to affiliates who deployed ransomware and demanded payments up to $15 million. Arrested in Italy in 2024 and later extradited, Volkov pleaded guilty in 2025. Investigators have since tied him to over $9 million in losses using digital evidence, including chat logs and iCloud data.
For Ilya Angelov, a fellow Russian citizen, U.S. courts have doled out two years in prison for co-managing a phishing botnet used to enable BitPaymer ransomware attacks against 72 major companies across the States. From 2017 to 2021, the crime group known as TA551 distributed malware via massive spam campaigns, infecting thousands of systems daily and selling access to other cybercriminals. These operations generated over $14 million in ransom payments. Angelov later traveled to the U.S. to plead guilty following the Russian invasion of Ukraine in 2022 and has been fined $100,000 on top of his sentence.
Law enforcement have also extradited Hambardzum Minasyan to the United States to face charges for allegedly helping to operate the RedLine infostealer malware service. According to the prosecution, the Armenian national managed RedLine’s infrastructure, including servers, domains, and cryptocurrency accounts used to support affiliates and distribute malware, and laundered the illicit proceeds. The operations enabled large-scale data theft from infected systems, targeting corporations and individuals. He now faces multiple cybercrime charges and could receive up to 30 years in prison if convicted.
Source: FBI Instagram
The Bad | Hackers Deploy FAUX#ELEVATE Malware via Phishing Résumés
Cyberattackers have set their sights on French-speaking professionals, luring victims with fake résumé attachments in an active phishing campaign designed to deploy credential stealers and cryptocurrency miners. The activity, now tracked as FAUX#ELEVATE, relies on heavily obfuscated VBScript files disguised as CV documents, which execute silently while displaying fake error messages. The malware uses sandbox evasion, persistence techniques, and a domain-check mechanism to ensure only enterprise systems are infected.
Source: Securonix
Once the attackers gain elevated privileges, the malware disables security defenses, modifies system settings, and downloads additional payloads from legitimate platforms and infrastructure like Dropbox, Moroccan WordPress sites, and mail[.]ru. This abuse of valid services allows the attackers to stage the payloads, host a command and control (C2) configuration, and exfiltrate browser credentials and desktop files.
The campaign stands out for its “living-off-the-land” approach, which is defined by blending malicious activity with trusted services to evade detection. It also uses advanced techniques to bypass browser encryption and maximize system resource exploitation. After execution, most artifacts are removed to limit forensic visibility, leaving only persistent mining and backdoor components.
Notably, the entire infection chain executes in under 30 seconds, enabling rapid compromise and data theft. By selectively targeting domain-joined systems, attackers ensure high-value corporate credentials are harvested, making the campaign particularly dangerous for enterprise environments.
Campaigns like FAUX#ELEVATE show that even heavily obfuscated malware still presents multiple choke points for detection, from malicious scripting chains and abuse of legitimate services to anomalous outbound traffic. A modern, capable EDR with strong behavioral detection and endpoint visibility can detect and stop activity like this despite the obfuscation.
The Ugly | TeamPCP Hijacks Trivy, npm, and LiteLLM to Steal Credentials Worldwide
Over the past week, a cloud-focused threat actor called TeamPCP orchestrated a multi-stage, global supply chain campaign, beginning with a compromise of the widely-used Trivy vulnerability scanner. By injecting malicious code into Trivy v0.69.4 and associated GitHub Actions, TeamPCP harvested credentials, SSH keys, cloud tokens, CI/CD secrets, and cryptocurrency wallets. The malware persisted via systemd services and exfiltrated stolen data to typosquatted or attacker-controlled domains.
Source: Phoenix Security
Following the Trivy breach, TeamPCP deployed CanisterWorm, a self-propagating npm malware that leveraged compromised developer tokens to infect additional packages. CanisterWorm used a decentralized ICP canister as a resilient dead-drop C2, enabling automated payload updates and credential theft without direct attacker interaction.
The group then expanded to Aqua Security’s broader GitHub ecosystem, tampering with private repositories and Docker images, and to Checkmarx workflows and VS Code extensions, using the same credential-stealing payload to cascade compromises across CI/CD pipelines. Kubernetes clusters have also been targeted with scripts that wiped machines in Iranian locales while installing persistent backdoors elsewhere, demonstrating both selective destruction and lateral movement.
In the most recent leg of the offensive, TeamPCP compromised the popular “LiteLLM” Python package on PyPI, embedding the same cloud stealer and persistence mechanisms into versions 1.82.7 and 1.82.8. The attack harvested credentials, accessed Kubernetes secrets, and installed persistent systemd services while exfiltrating data to infrastructure controlled by the attackers.
Across this cluster of linked incidents, TeamPCP’s operations highlight the danger of credential reuse, incomplete secret rotation, and weak CI/CD hygiene, pointing to how a single supply chain compromise can cascade into a multi-platform, multi-stage attack that spans open-source software, cloud services, and developer ecosystems.
The Good | Operation Synergia III Disrupts Malicious Networks & the EU Sanctions State-Sponsored Attackers
Operation Synergia III, an Interpol-led crackdown spanning July 2025 to January 2026, has disrupted cybercrime infrastructure across the globe. Authorities across 72 countries sinkholed 45,000 malicious IP addresses and seized 212 devices and servers, resulting in 94 arrests and 110 ongoing investigations.
The operation focused on taking down servers used in connection to extensive phishing, ransomware, malware, and fraud networks. Regional actions highlighted the breadth of the cyber activity: Bangladesh police arrested 40 suspects tied to scams and identity theft, while law enforcement in Togo dismantled a fraud ring engaged in social engineering, including romance scams and sextortion.
In Macau, investigators uncovered over 33,000 phishing sites impersonating casinos, banks, and government services, all poised to steal financial data. Building on earlier phases of the operation and complementary operations like Red Card 2.0, Serengeti, and Africa Cyber Surge, these joint efforts point to the growing sophistication of cybercrime and the critical role that coordinated international action plays in stemming its reach.
To further hinder threat actors, the Council of the European Union has sanctioned three companies and two individuals tied to major cyberattacks on critical infrastructure.
China-linked Integrity Technology Group supported operations that compromised over 65,000 devices across six EU countries, while Anxun Information Technology (aka i-SOON) provided hacker-for-hire services targeting governments. Two of its co-founders have also been sanctioned for their part in executing the cyberattacks.
Iran-based company Emennet Pasargad has also been sanctioned for multiple influence campaigns and breaches, including phishing and disinformation efforts.
The Bad | Researchers Uncover ‘DarkSword’ iOS Exploit Stealing Sensitive Personal Data
A new iOS exploit chain and payload dubbed ‘DarkSword’ is stealing sensitive personal information from iPhones running iOS 18.4 to 18.7. The toolkit is linked to multiple threat actors, including Russian-aligned UNC6353, who previously leveraged a similar exploit chain called Coruna. DarkSword was subsequently uncovered while various researchers analyzed Coruna’s infrastructure.
In early November 2025, UNC6748 used DarkSword against Saudi Arabian users via a Snapchat-themed website. Subsequently, other attackers linked to PARS Defense, a Turkish commercial surveillance firm, began running the exploit kit against Apple devices. Early this year, cases involving DarkSword were spotted across Malaysia and, most recently, it has been leveraged to target Ukrainian users.
The snapshare[.]chat decoy page (Source: GTIG)
DarkSword exploits six documented vulnerabilities (CVE-2025-31277, CVE-2025-43529, CVE-2026-20700, CVE-2025-14174, CVE-2025-43510, CVE-2025-43520), which Apple has since patched. Threat actors have used them to deliver at least three malware families: GHOSTBLADE (a data miner collecting crypto, messages, photos, and locations), GHOSTKNIFE (a backdoor exfiltrating accounts and communications), and GHOSTSABER (a JavaScript backdoor enumerating devices and executing code).
The delivery chain begins via Safari exploits, gaining kernel access and executing a main orchestrator (pe_main.js) that injects modules into privileged iOS services, including App Access, Wi-Fi, Keychain, and iCloud. Collected data spans passwords, messages, contacts, call history, location, browser history, Apple Health, and cryptocurrency wallets. The malware removes traces after exfiltration, indicating a focus on rapid theft rather than persistent surveillance.
Experts note that both DarkSword and Coruna exhibit signs of large language model (LLM)-assisted code expansion, showing professional design with maintainability and modularity in mind. Users are advised to update to iOS 26.3.1 and enable Lockdown Mode if at high risk.
The Ugly | Interlock Ransomware Exploits Cisco FMC Zero-Day to Breach Enterprise Firewalls
The Interlock ransomware group has been actively exploiting a critical remote code execution (RCE) zero-day in Cisco’s Secure Firewall Management Center (FMC) software since late January 2026. The vulnerability, tracked as CVE-2026-20131 (CVSS: 10.0), allows unauthenticated attackers to execute arbitrary code with root privileges on unpatched devices due to insecure deserialization of a user-supplied Java byte stream. Cisco has since issued a patch, urging customers to update immediately.
Interlock ransomware group is now exploiting a Cisco firewall bug patched on March 4
The bug is a CVSSv3 10/10 RCE in the Cisco Secure Firewall Management Center (FMC) Software: sec.cloudapps.cisco.com/security/cen…
Interlock, first seen in September 2024, has a history of high-profile attacks, including deploying the NodeSnake remote access trojan (RAT) against U.K. universities. The group has claimed responsibility for incidents affecting organizations such as DaVita, Kettering Health, the Texas Tech University System, and the city of Saint Paul, Minnesota. IBM X-Force researchers recently noted Interlock’s deployment of a new AI-assisted malware strain called Slopoly, highlighting the group’s evolving capabilities.
Latest reports explain that Interlock exploited the FMC flaw 36 days before its public disclosure, beginning on January 26, giving operators a head start to compromise firewalls before defenders were aware. This early access allowed attackers to operate undetected, underlining the danger of zero-day vulnerabilities.
Cisco has faced a series of zero-day exploits in 2026 so far. Earlier this year, maximum-severity flaws in Cisco AsyncOS email appliances, Unified Communications, and Catalyst SD-WAN were patched after being actively exploited, allowing attackers to bypass authentication, compromise controllers, and insert malicious peers.
The most recent incidents affecting FMC demonstrate both Interlock’s aggressive targeting of enterprise networks and the importance of rapid patch management and coordinated vulnerability disclosure. Organizations using Cisco FMC are strongly urged to apply the latest updates to mitigate ongoing risk.
The Good | Authorities Disrupt Proxy Network and Charge BlackCat Insider, Vendors Patch Critical RCE Bugs
U.S. and European law enforcement have dismantled the SocksEscort cybercrime proxy network, which relied on Linux edge devices infected with AVRecon malware. New research found that the service maintained roughly 20,000 compromised devices weekly and offered criminals access to ‘clean’ residential IP addresses from major internet service providers to evade blocklists. Since 2020, the platform has advertised access to hundreds of thousands of IPs. Now, authorities have seized dozens of servers and domains, frozen $3.5 million in cryptocurrency, and disconnected infected routers, all previously linked to significant fraud and cryptocurrency theft.
Former DigitalMint employee Angelo Martino has been charged with conspiring with the BlackCat (aka ALPHV) ransomware group while serving as a ransomware negotiator. Prosecutors say Martino shared confidential negotiation details and participated in attacks with various accomplices between 2023 and 2025, operating as BlackCat affiliates. Victims included multiple U.S. organizations, with ransom payments exceeding $26 million, of which BlackCat operators took a 20% cut. Since the group emerged in 2021, the FBI has attributed thousands of victims and over $300 million in ransom payments to it.
Microsoft’s Patch Tuesday for the month delivers security updates for 79 vulnerabilities, including two publicly disclosed zero-day flaws. The release also addresses three critical vulnerabilities, including two remote code execution (RCE) bugs and one information disclosure issue.
The two zero-days, an SQL Server elevation-of-privilege flaw (CVE-2026-21262) and a .NET denial-of-service bug (CVE-2026-26127), are not known to be actively exploited. The RCE bugs in Microsoft Office, however, are exploitable via the preview pane, as is an Excel information disclosure flaw (CVE-2026-26144) that could leak data through Copilot.
Users are urged to prioritize updates to secure Office, Excel, SQL Server, and .NET environments.
The Bad | Attackers Exploit FortiGate Next-Gen Firewalls to Breach Networks
Threat actors are exploiting FortiGate Next-Generation Firewall (NGFW) appliances to gain access to targeted networks. A new post from SentinelOne outlines a consistent theme across these attacks: targeted victims did not retain appliance logs, preventing understanding of how and when the intruders gained access.
What happens when the FortiGate next-generation firewall protecting your network becomes the backdoor?
Our DFIR team has been tracking a wave of FortiGate NGFW compromises. Attackers are exploiting vulnerabilities to extract config files, steal service account credentials,… pic.twitter.com/Q9egoLwfN2
To date, attackers have leveraged known vulnerabilities (CVE-2025-59718, CVE-2025-59719, and CVE-2026-24858) and weak credentials to extract configuration files containing service account credentials and network topology information. These accounts, often linked to Active Directory (AD) and Lightweight Directory Access Protocol (LDAP), allowed attackers to map roles, escalate privileges, and move laterally within environments.
In one case, an attacker compromised a FortiGate appliance in November 2025, creating a local administrator account named support and adding unrestricted firewall policies. The attacker later decrypted the configuration file to extract LDAP service account credentials, which were used to enroll rogue workstations into AD, enabling deeper access. Network scanning triggered alerts, stopping further lateral movement.
In another incident, attackers rapidly deployed legitimate Remote Monitoring and Management (RMM) tools, Pulseway and MeshAgent, and downloaded malware from AWS and Google Cloud storage. The Java payload, executed via DLL side-loading, exfiltrated the NTDS.dit file and SYSTEM registry hive to an external server, potentially enabling credential harvesting, though no subsequent misuse was observed.
These incidents highlight the high value of NGFW appliances, which threat actors are exploiting for cyber espionage or ransomware attacks. SentinelOne emphasizes enforcing strong administrative access controls, maintaining up-to-date patches, and retaining detailed FortiGate logs for a minimum of 14 days, ideally forwarded to a Security Information and Event Management (SIEM) platform, to detect configuration exports and unauthorized account creation. Proper monitoring, combined with automated defenses, can significantly reduce attacker dwell time and prevent full-scale network compromise.
The Ugly | Iran-Linked Hacktivist ‘Handala’ Wipes Stryker MedTech Systems Worldwide
Medical technology giant Stryker has suffered a major cyberattack involving wiper malware claimed by Handala, a pro-Palestinian hacktivist group linked to Iran.
Handala says it stole 50 terabytes of data and wiped over 200,000 systems, servers, and mobile devices, forcing office shutdowns in 79 countries. Employees in the U.S., Ireland, Costa Rica, and Australia reported that corporate and personal devices enrolled for work were wiped, disrupting access to Microsoft systems, Teams, VPNs, and other applications, with some locations reverting to manual workflows.
Login screens taken over by the Handala logo (Source: WWMT.com)
At the time of the incident, staff were instructed to remove corporate management and applications from personal devices. Stryker later confirmed the incident in a Form 8-K filing with the SEC, describing a global disruption affecting its Microsoft environment. The company activated its cybersecurity response plan and is working with internal teams and external experts. The incident appears contained and involved no ransomware, though full restoration timelines remain unknown.
Handala, active since December 2023, is known to target Israeli organizations with destructive malware that wipes Windows and Linux systems, often publishing stolen sensitive data. This attack marks a major disruption for Stryker, which employs over 53,000 people and reported $22.6 billion in global sales in 2024.
Cybersecurity experts warn that Iranian state-aligned actors, including APT groups and proxy hacktivists, frequently use cyber operations for retaliation and disruptive campaigns during geopolitical escalations. They are likely to increase attacks against U.S. organizations, critical infrastructure, and allied sectors. Organizations are urged to strengthen security controls and prepare for potential follow-on campaigns targeting networks and operations.
Throughout early 2026, SentinelOne’s® Digital Forensics & Incident Response (DFIR) team has responded to several incidents where FortiGate Next-Generation Firewall (NGFW) appliances have been compromised to establish a foothold into the targeted environment. Each incident was detected and stopped during the lateral movement phase of the attack.
Fortinet has disclosed and issued patches for several high-severity vulnerabilities allowing unauthorized access during the activity period of our investigations. Successful exploitation of these flaws allows an attacker to extract the configuration file from the FortiGate appliance, which frequently contains service account credentials and valuable network topology information for the targeted environment.
We observed a consistent theme: targeted organizations fail to retain sufficient logs on these appliances, which prevents understanding exactly how and when attackers gained access. The dwell time from initial perimeter device compromise to network compromise differed drastically across the two incidents we investigated, ranging from two months to near-instantaneous follow-on activity.
This post explores the actions that an attacker or attackers conducted following likely exploitation of two of these FortiGate appliances in different environments. It also provides defenders with guidance to investigate compromise of these appliances and subsequent infiltration activities.
FortiGate Appliance Compromise
FortiGate network appliances have considerable access to the environments they are installed to protect. In many configurations, this includes service accounts connected to the authentication infrastructure, such as Active Directory (AD) and Lightweight Directory Access Protocol (LDAP). This setup can enable the appliance to map roles to specific users by fetching attributes about the connection being analyzed and correlating them with directory information, which is useful where role-based policies are set or to speed response to network security alerts detected by the device.
However, such access is abused by actors who compromise FortiGate devices, as we have seen in these recent incidents.
Between December 2025 and February 2026, Fortinet products have reportedly been exploited via CVE-2025-59718 and CVE-2025-59719, two vulnerabilities in which the Single Sign-On (SSO) mechanisms of Fortinet products fail to validate cryptographic signatures. In effect, an attacker who sends a crafted SSO token can achieve unauthenticated administrative access because the signature is never verified.
Another vulnerability, CVE-2026-24858, was patched by Fortinet in late January: this vulnerability permitted attackers to log into FortiGate devices where FortiCloud SSO was enabled. Attackers exploited this flaw by logging into the victim’s device using the attacker’s FortiCloud account.
Once an attacker gains access in this way, they can run the command show full-configuration to extract the FortiGate device configuration file. Fortinet’s FortiOS devices, including FortiGate appliances, use a reversible form of encryption on the configuration files, meaning an attacker can then identify embedded service accounts and extract them.
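Because the configuration file can be decrypted offline, any suspected download should trigger credential rotation for every secret embedded in it. The sketch below is a minimal illustration, assuming the standard `set password ENC <blob>` / `set passwd ENC <blob>` syntax of FortiOS configuration dumps; the section and entry names in the sample are hypothetical.

```python
import re

# Inventory encrypted credentials embedded in a FortiGate configuration
# dump so defenders know which accounts to rotate after config theft.
EDIT_RE = re.compile(r'^\s*edit\s+"([^"]+)"')
ENC_RE = re.compile(r'^\s*set\s+(?:password|passwd)\s+ENC\s+(\S+)')

def find_embedded_credentials(config_text: str) -> list[dict]:
    findings, current = [], None
    for line in config_text.splitlines():
        m = EDIT_RE.match(line)
        if m:
            current = m.group(1)  # track which config entry we are inside
            continue
        m = ENC_RE.match(line)
        if m:
            # Record only a prefix of the blob; the point is the inventory,
            # not the secret itself.
            findings.append({"entry": current, "enc_blob": m.group(1)[:12] + "..."})
    return findings
```

Each finding maps a named config entry (an LDAP server, admin user, VPN peer, and so on) to an encrypted secret that must be considered exposed.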
However, other recent reports have detailed that actors are scanning for open instances and then attempting to log into FortiGate devices using common weak credentials, which means actors can access such devices without a weaponized exploit.
FortiGate Configs Abused to Enroll Rogue Domain Workstations
In one incident (IOCs: Incident 1), the compromise likely began in late November 2025 and remained undetected through February 2026. After accessing the appliance, the actor created a new local administrator account on the FortiGate device named support and used it to create four new firewall policies that allowed the account to traverse all zones (source: all; destination: all).
Activity then dropped to low volumes of traffic through some of these policies, suggesting the actor was periodically checking that access was still available before later shifting to noisier network activity.
This pattern is consistent with an initial access broker (IAB) establishing a foothold and then selling it on to another actor. Insufficient FortiGate log retention meant we could only reconstruct the activity window rather than identify the precise initial access vector.
In February 2026, the attacker likely extracted the configuration file, which contained encrypted LDAP credentials for a service account. The evidence shows the attacker authenticated to AD using cleartext credentials for the fortidcagent service account, suggesting the attacker decrypted the configuration file and extracted the service account credentials.
The service account was then used to authenticate to the victim’s environment from IP address 193.24.211[.]61. The attacker used the mS-DS-MachineAccountQuota attribute to join two rogue workstations to the AD; by default, this setting permits a standard account to join up to 10 workstations to the domain.
Joining attacker-controlled workstations to AD gave the attacker broader access to the environment from machines free of the organization’s security controls. The rogue workstation names are:
WIN-X8WRBOSK0OF
WIN-YRSXLEONJY2
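The default quota abused here is easy to audit. The sketch below assumes the third-party ldap3 package and hypothetical connection parameters; it reads ms-DS-MachineAccountQuota from the domain object and flags whether ordinary accounts can still join machines (setting the quota to 0 closes this avenue).

```python
def quota_allows_rogue_joins(quota: int) -> bool:
    # By default ms-DS-MachineAccountQuota is 10, letting any authenticated
    # account (including a stolen FortiGate service account) join computers.
    return quota > 0

def fetch_machine_account_quota(server_uri: str, user: str,
                                password: str, base_dn: str) -> int:
    # Hypothetical helper; requires the third-party ldap3 package
    # (pip install ldap3), imported lazily so the audit logic above
    # stays usable without it.
    from ldap3 import Server, Connection, ALL
    conn = Connection(Server(server_uri, get_info=ALL), user=user,
                      password=password, auto_bind=True)
    conn.search(base_dn, "(objectClass=domain)",
                attributes=["ms-DS-MachineAccountQuota"])
    return int(conn.entries[0]["ms-DS-MachineAccountQuota"].value)
```

A quota of 0, combined with a dedicated delegated join group, removes this particular abuse path without affecting legitimate provisioning.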
Per Validin’s records, IP address 193.24.211[.]61 has consistently exposed an open RDP port on a Windows system with the hostname WIN-1J7L3SQSTMS. We did not observe this hostname during our incident, but given how consistently the system has been present on this IP address, the hostname should be considered suspect.
The actor then performed network scanning across the environment, which generated security alerts that prevented further lateral movement. Identity logs showed massive volumes of failed logins originating from the FortiGate appliance IP address, indicating password-spraying attempts. Multiple delete.me file artifacts suggest the actor used the SoftPerfect Network Scanner for enumeration.
There were multiple failed login attempts during the period of heavy activity in February, including from IP addresses 185.156.73[.]62 and 185.242.246[.]127, which are registered to networks in Ukraine and Kazakhstan, respectively.
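The spraying behavior seen in the identity logs can be surfaced with a simple aggregation: count the distinct accounts failing from a single source inside a short window. This is a minimal sketch; the normalized event shape and thresholds are assumptions, not values from the incident.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def detect_spray(events, window=timedelta(minutes=10), min_accounts=20):
    """Flag source IPs whose failed logins span many distinct accounts
    within `window`. `events` are (timestamp, source_ip, account, success)
    tuples already normalized from identity logs."""
    by_ip = defaultdict(list)
    for ts, src, account, success in events:
        if not success:
            by_ip[src].append((ts, account))
    flagged = set()
    for src, fails in by_ip.items():
        fails.sort()
        for i, (ts, _) in enumerate(fails):
            # Distinct accounts failing within the sliding window
            accounts = {a for t, a in fails[i:] if t - ts <= window}
            if len(accounts) >= min_accounts:
                flagged.add(src)
                break
    return flagged
```

Keying on distinct accounts rather than raw failure counts separates spraying (many accounts, few attempts each) from a single user mistyping a password.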
FortiGate Access Exploited to Deploy RMM Tools and Steal NTDS
In another case we investigated in late January (IOCs: Incident 2), a threat actor accessed the organization’s FortiGate appliance and created a local administrator account named ssl-admin. As in the previous investigation, the actor likely extracted the configuration from the FortiGate appliance and decrypted it to harvest the AD administrator credentials.
Within 10 minutes of creating a local account on the FortiGate appliance, the actor logged into several servers in the victim’s environment with the built-in Domain Administrator account. Server authentication logs confirmed a series of successful Network (Type 3) and Remote Interactive (Type 10/RDP) logins originating from the FortiGate VPN-assigned IP range.
On one of the servers, the attacker launched SQL Server Management Studio (SSMS) but did not connect to any databases, possibly searching for stored connection details or credentials in the application.
The actor began staging files in the system’s C:\ProgramData\USOShared directory, a technique we have observed across multiple incidents. The attacker downloaded two Remote Monitoring and Management (RMM) tools: Pulseway and MeshAgent, which are legitimate system administration tools that are frequently abused by threat actors to achieve a deeper foothold in the target environment.
The actor abused legitimate cloud storage functionality, hosting the Pulseway installer at a likely attacker-controlled Google Cloud Storage URL: hxxps://storage.googleapis[.]com/apply-main/windows_agent_x64[.]msi. The MeshAgent RMM was installed on the domain controller and a file share. The actor set the Windows Registry value SystemComponent=1 to hide MeshAgent from the “Programs and Features” list.
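Defenders can hunt the SystemComponent=1 hiding trick generically. The sketch below operates on uninstall entries already collected from HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall (on a live Windows host these would come from winreg enumeration); the input format and sample names are illustrative.

```python
def find_hidden_programs(uninstall_entries):
    """Return DisplayNames of installed programs flagged SystemComponent=1,
    which suppresses them from the "Programs and Features" list.
    `uninstall_entries` is a list of dicts of registry values collected
    from each per-program Uninstall subkey."""
    return [
        e["DisplayName"]
        for e in uninstall_entries
        if e.get("SystemComponent") == 1 and "DisplayName" in e
    ]
```

Legitimate components do set this flag, so results are leads for review rather than automatic verdicts; an RMM agent appearing here is a strong signal.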
These tools were used to create Windows Scheduled Tasks named JavaMainUpdate and MeshUserTask for Pulseway and MeshAgent, respectively. The actor also downloaded malware via PowerShell from an Amazon Web Services (AWS) Simple Storage Service (S3) bucket at hostname fastdlvrss[.]s3[.]us-east-1[.]amazonaws[.]com. Like the Google Cloud Storage URL, this is a legitimate Amazon resource that was likely registered by the attacker for unauthorized purposes.
The attacker gave the malicious DLLs the same names as legitimate Java files, causing the application to load the malware via DLL side-loading instead of the real components. The Java-loaded payload beaconed to two domains: ndibstersoft[.]com and neremedysoft[.]com. These payloads were executed against other servers on the network using PsExec, including the primary and secondary domain controllers.
The actor then created a Volume Shadow Copy backup of the primary domain controller via the Windows Management Instrumentation command-line utility (WMIC), extracted the NTDS.dit file and SYSTEM registry hive from the backup, and then used the makecab command to compress each file.
The malicious Java application then established a connection on port 443 to IP address 172.67.196[.]232, a Cloudflare-owned IP address with thousands of domain records. The connection was terminated after 8 minutes, and the compressed NTDS.dit and registry hive files were deleted. This sequence of events suggests the attacker uploaded the data to their infrastructure during this session.
Following this activity, we observed no evidence of the threat actor leveraging new or additional user accounts. While the actor may have attempted to crack passwords from the data, no such credential usage was identified between the time of credential harvesting and incident containment.
Conclusion
NGFW appliances have become ubiquitous because they provide strong network monitoring capabilities by integrating the security controls of a firewall with other management features, such as AD integration. However, these devices are high-value targets for actors with a variety of motivations and skill levels, from state-aligned actors conducting espionage to financially motivated attackers deploying ransomware.
As Amazon Security recently wrote, lower-skilled threat actors have been boosted by integrating large language models (LLMs) into their workflows, making attacks easier and more automated even when the actors demonstrate limited post-exploitation knowledge. SentinelOne does not see indications that the campaigns described in these cases are associated with the threat actor tracked by Amazon Security, particularly given the relatively long dwell time between initial access and follow-on activity in the first incident.
However, organizations should prepare for increased attack volume against network edge devices as attackers find novel ways to bypass LLM safeguards: these appliances are valuable targets and are often exposed to the open internet. LLMs are often trained on material covering these products and can readily supply information that helps actors gain access and navigate from network appliances deeper into a targeted environment, without the knowledge uplift previously required of threat actor crews.
Organizations should consider that FortiGate and other edge devices typically do not permit security software, such as endpoint detection and response (EDR) tools, to be installed on the appliance. The best defense for these appliances is to apply strong administrative access controls and keep the software patched to prevent exploitation. Further, both of these investigations were hindered by insufficient FortiGate log retention. Organizations should ensure they retain at least 14 days of logs on NGFW appliances like FortiGate; 60–90 days is much better when possible.
SentinelOne recommends sending all logs to a Security Information and Event Management (SIEM) system or similar log aggregation platform, as attackers may delete logs from local systems after establishing access, but they cannot delete logs that have already been forwarded. A SIEM can help at each stage of an attack:
UEBA and Identity Intelligence: User and entity behavior analytics (UEBA) within a SIEM baselines “normal” behavior for every administrator, allowing the SIEM to immediately flag a login that originates from an unrecognized device or “impossible travel” location. By identifying these anomalies at the moment of entry, the SIEM can alert defenders even if the attacker is using perfectly valid, stolen credentials.
Detection of Configuration Access and Credential Extraction: A SIEM monitors the FortiGate audit logs for sensitive CLI commands like show full-configuration or backup exports occurring outside of maintenance windows. By correlating these actions with an unusual admin login location, the SIEM can alert security teams that a configuration file and the credentials within it may have been compromised.
Spotting Unauthorized Account Creation: When a threat actor creates a local “backdoor” admin account, the SIEM immediately flags the user-creation event as a high-priority anomaly. It compares this new account against a “whitelist” of authorized admins, ensuring that any account created without a linked Change Management ticket is treated as an active breach.
Monitoring Malware Downloads & C2 Traffic: SIEMs analyze network flow data to identify internal systems reaching out to known malicious IPs or suspicious domains via PowerShell. By detecting the “heartbeat” of a C2 channel, the SIEM allows defenders to kill the connection before the attacker can begin exfiltrating data.
Preserving Evidence Against Log Deletion: Because the FortiGate appliance streams its logs to the SIEM in real-time, the attacker cannot hide their tracks by clearing the local logs. The SIEM maintains an immutable record of every command the attacker typed, providing the forensic evidence needed to understand the full scope of the intrusion.
Neutralizing the Threat with Automation: Modern SIEMs come with automation built in, allowing the SIEM to trigger automated playbooks the moment an attack is detected, such as instantly disabling the compromised service account or “shunning” the attacker’s IP at the network perimeter. By removing the need for human intervention in the initial response, automation slashes the attacker’s dwell time, helping neutralize the breach before malware can spread.
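As a concrete example of the correlation ideas above, a rule can pair a FortiGate config-file download (Log ID 0100032095) with a local account creation (Log ID 0100044547) on the same appliance inside a short window, the exact sequence seen in both incidents. This is a minimal sketch; the event shape and window are assumptions.

```python
from datetime import datetime, timedelta

def correlate_config_theft(events, window=timedelta(minutes=60)):
    """Alert when a config download and a new local account appear on the
    same FortiGate within `window`. `events` are dicts with at least
    'logid', 'device', and 'ts' (a datetime) fields."""
    downloads = [e for e in events if e["logid"] == "0100032095"]
    creations = [e for e in events if e["logid"] == "0100044547"]
    alerts = []
    for d in downloads:
        for c in creations:
            if d["device"] == c["device"] and abs(c["ts"] - d["ts"]) <= window:
                alerts.append((d["device"], d["ts"], c["ts"]))
    return alerts
```

Either event alone may be benign maintenance; the pairing, especially outside a change window, is what warrants an immediate response.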
Below, we share guidance for organizations and other incident responders on how to investigate a suspected FortiGate intrusion of this nature.
Forensic Investigation Guidance
FortiGate
Malicious SSO/Unexpected Logins:
Search system logs for Log ID 0100032001 (Admin login successful, method sso).
Check for usernames matching public IOCs (e.g., cloud-init@mail.io, cloud-noc@mail.io).
Configuration Download:
Search for Log ID 0100032095 (System config file has been downloaded).
The timestamp confirms when the attacker exported the configuration file.
Malicious Local Admin Account Creation:
Identify the exact creation time and source IP using Log ID 0100044547 (Object attribute configured) with cfgpath="user.local" or cfgpath="system.admin".
VPN Sessions:
Identify the attacker’s source IP by analyzing the remip field in VPN tunnel logs.
Look for Log ID 0101039424 (SSL VPN tunnel up) or 0101037138 (IPsec tunnel up).
Note the internal IP addresses for later correlation with evidence from the Domain Controller(s).
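FortiGate logs are key=value formatted, so the triage steps above reduce to filtering on the listed log IDs and pulling out the identity fields. A rough sketch follows; the sample log line in the test is illustrative, not from the incidents.

```python
import re

# Matches key=value pairs, where values may be double-quoted
PAIR_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

# Log IDs called out in the guidance: SSO admin login, config download,
# object configuration, SSL VPN tunnel up, IPsec tunnel up
INTEREST = {"0100032001", "0100032095", "0100044547",
            "0101039424", "0101037138"}

def triage(lines):
    """Return parsed field dicts for log lines whose logid is of interest."""
    hits = []
    for line in lines:
        fields = {k: v.strip('"') for k, v in PAIR_RE.findall(line)}
        if fields.get("logid") in INTEREST:
            hits.append(fields)
    return hits
```

From the resulting dicts, the `user`, `ui`, and `remip` fields give the account names and source IPs to correlate with domain controller evidence.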
Domain Controllers
Rogue Computer Account Creation (Domain Join):
Windows Event ID 4741: Confirms the creation of a computer account.
Verify the Subject: Security ID matches FortiGate’s stolen LDAP bind account.
Use the SubjectLogonId from 4741 to find the corresponding Event ID 4624 (Logon Type 3) on the same domain controller to extract the Source Network Address (should match the attacker’s internal VPN IP identified via FortiGate logs).
Directory Service Changes (Advanced Audit):
If enabled, Event ID 5136 shows the exact attributes set during the join (e.g., missing SPNs, modified userAccountControl values), indicating the use of automated tools like Impacket.
DNS Record Creation:
DNS Server Audit log (Event ID 515 – Record Create): Records the creation of the host (A) record, including workstation name and timestamp.
Microsoft-Windows-DNSServer/Analytical log: Provides the Client IP (internal VPN IP) and the QNAME (workstation name being registered).
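The 4741-to-4624 pivot described above can be scripted over pre-parsed events: the 4624 logon session (TargetLogonId) matching the 4741 actor session (SubjectLogonId) carries the source IP of the joining host. Field names below are simplified; in raw 4741 XML the new computer account appears in the TargetUserName field.

```python
def source_ip_for_joins(events):
    """Map each newly created computer account (Event ID 4741) to the
    source IP of the network logon (Event ID 4624, Logon Type 3) that
    performed the join, linked via the shared logon session ID.
    `events` are dicts of already-parsed Security log fields."""
    logons = {
        e["TargetLogonId"]: e.get("IpAddress")
        for e in events
        if e["EventID"] == 4624 and e.get("LogonType") == 3
    }
    return {
        e["NewComputerName"]: logons.get(e["SubjectLogonId"])
        for e in events
        if e["EventID"] == 4741
    }
```

The recovered IP should match the attacker's internal VPN IP identified in the FortiGate tunnel logs, closing the loop between appliance and domain evidence.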
Active Directory
Find Malicious Computer Objects:
Check mS-DS-CreatorSID: If multiple rogue computers share the same SID, and it belongs to the Fortinet LDAP service account, compromise is confirmed.
Check for defined SPNs: The lack of SPNs is a red flag and a likely malicious indicator.
Note whenCreated: This attribute stores the exact time the object was added.
Originating Domain Controller and Time:
Check the replication metadata for the sAMAccountName attribute (e.g., via repadmin /showobjmeta) to identify the Originating DSA (the domain controller that originally created the object) and the Originating Time.
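The object-level checks above combine into a simple triage filter: computer objects whose creator SID is the FortiGate LDAP service account and which carry no SPNs are strong rogue-join candidates. A sketch over already-fetched attribute dicts (names and SIDs below are illustrative):

```python
def rogue_computer_candidates(computers, service_account_sid):
    """Return names of computer objects created by the compromised
    service account's SID (mS-DS-CreatorSID) that have no defined SPNs,
    a combination legitimate domain-joined hosts rarely exhibit.
    `computers` are dicts of already-fetched AD attributes."""
    return [
        c["name"]
        for c in computers
        if c.get("mS-DS-CreatorSID") == service_account_sid
        and not c.get("servicePrincipalName")
    ]
```

Each candidate's whenCreated attribute then anchors the timeline against the FortiGate VPN session and 4741 evidence gathered earlier.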
FortiGate Local Administrative Account Names
ssl-admin – Incident 2, FortiGate local administrative account
support – Incident 1, FortiGate local administrative account
Windows Workstation Names
WIN-1J7L3SQSTMS – Incident 1, Windows hostname of RDP service hosted on attacker IP 193.24.211[.]61
WIN-X8WRBOSK0OF – Incident 1, rogue workstation ID
WIN-YRSXLEONJY2 – Incident 1, rogue workstation ID
Third-Party Trademark Disclaimer:
All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.
The Good | Global Authorities Disrupt Tycoon2FA, LeakBase & Phobos Ransomware
Europol has successfully disrupted Tycoon2FA in an international operation, taking down the phishing-as-a-service (PhaaS) platform responsible for sending tens of millions of phishing emails each month. Authorities seized 330 domains used to host phishing pages and control infrastructure.
Active since 2023, Tycoon2FA enabled attackers to bypass multi-factor authentication (MFA) using adversary-in-the-middle (AitM) techniques that captured credentials and session cookies. Sold through Telegram for about $120, the service allowed low-skill criminals to launch large-scale phishing attacks against organizations worldwide.
In another seizure, LeakBase, a major cybercrime forum used to trade stolen data and hacking tools, was taken down as part of Operation Leak, a joint effort by the FBI, Europol, and law enforcement in 14 countries. Police seized two domains, posted seizure banners, executed search warrants, and made arrests worldwide.
LeakBase had amassed 142,000 members since 2021 and offered leaked databases, exploits, and cybercrime services. All forum data, including accounts, messages, and IP logs, have been preserved for evidence, with the seizure now entering a prevention phase to deter further cybercrime.
A Russian national, Evgenii Ptitsyn, has pleaded guilty to wire fraud conspiracy for his role running the Phobos ransomware operation. Since 2020, Phobos has targeted over 1,000 organizations worldwide, including schools, hospitals, and government agencies, collecting more than $39 million in ransom payments. Phobos affiliates were responsible for infiltrating victim networks, encrypting data, exfiltrating sensitive files, and paying Ptitsyn a per-deployment fee in exchange for the corresponding decryption keys.
Ptitsyn himself managed ransomware sales, distributed decryption keys, and took a cut of all affiliate payments. His sentencing is scheduled for July 15; he faces up to 20 years in prison.
The Bad | Researchers Uncover ‘Coruna’ Exploit Kit Mass Targeting iOS Devices
Multiple threat actors have deployed Coruna, a previously unknown iOS exploit kit containing 23 exploits and five complete exploit chains capable of targeting Apple devices running iOS 13 through iOS 17.2.1.
Researchers first observed parts of the Coruna framework in February 2025 while investigating activity linked to a commercial surveillance vendor. The exploit kit uses a sophisticated JavaScript delivery framework that fingerprints a victim’s device and operating system before selecting the most effective exploit chain.
Several of the exploits rely on advanced techniques such as WebKit remote code execution (RCE), pointer authentication code (PAC) bypasses, sandbox escapes, kernel privilege escalation, and Page Protection Layer (PPL) bypasses. Some vulnerabilities included in the kit were previously associated with Operation Triangulation, a high-profile iOS espionage campaign uncovered in June 2023.
Coruna exploit chain delivered on iOS 15.8.5 (Source: GTIG)
Over time, Coruna has spread across different threat ecosystems. In mid-2025, a suspected Russian espionage group, UNC6353, used the framework in watering hole attacks targeting visitors to compromised Ukrainian websites. Later that year, the exploit kit appeared on fake Chinese cryptocurrency and gambling websites linked to a financially motivated threat actor.
Once exploitation succeeds, attackers deploy a loader known as PlasmaLoader, which downloads additional modules designed primarily to steal cryptocurrency wallet data and sensitive information. Targeted data includes wallet recovery phrases, financial information, and other stored text. Stolen data is encrypted before being transmitted to attacker-controlled infrastructure.
Coruna demonstrates how advanced spyware-grade exploit frameworks can spread from surveillance vendors to nation-state actors and eventually cybercriminal groups, highlighting the growing commercialization and reuse of sophisticated zero-day capabilities in the mobile threat landscape.
The Ugly | Hacktivists Launch Retaliatory Cyberattacks After U.S.–Israel Strikes on Iran
Following the U.S.-Israel military operations against Iran, cybersecurity researchers are flagging a spike in retaliatory hacktivist activity codenamed ‘Epic Fury’ and ‘Roaring Lion’. The surge has primarily taken the form of distributed denial-of-service (DDoS) attacks, data leaks, and online disruption targeting both government and critical infrastructure organizations.
A new report describes how three main hacktivist groups, Keymous+, DieNet, and NoName057(16), have been responsible for nearly 70% of observed attack activity between February 28 and March 2, 2026. The first recorded attack during this period was launched by Hider Nex (aka Tunisian Maskers Cyber Force), a pro-Palestinian hacktivist collective that combines DDoS attacks with data breaches to support geopolitical messaging.
Hider Nex claiming the first DDoS attack on Telegram (source: Radware)
In total, researchers recorded 149 DDoS attacks targeting 110 organizations across 16 countries, carried out by 12 hacktivist groups. The majority of attacks focused on the Middle East, with 107 incidents targeting regional organizations. Government entities were the most affected sector, accounting for nearly 48% of the victims, followed by organizations in financial services and telecommunications.
Several other cyber threats have emerged alongside the hacktivist campaigns. Pro-Russian groups are claiming breaches of Israeli military networks, while threat actors are running an active SMS phishing campaign that distributes malware disguised as an Israeli civil defense alert app. Iranian state-linked actors associated with the Islamic Revolutionary Guard Corps (IRGC) have reportedly targeted regional energy and digital infrastructure, striking major oil refineries and data centers in the U.A.E.
Iranian-aligned cyber actors have historically blended espionage, disruption, and influence operations during geopolitical crises, suggesting the potential for broader targeting of government, infrastructure, financial, and technology sectors on a global scale.