The Cyber Express Weekly Roundup: EU AI Act Updates, Malware Expansion, Critical Vulnerabilities, and Rising Cybercrime Trends




One interim mitigation is to blacklist and unload the affected esp4, esp6, and rxrpc kernel modules:

sudo sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true"

Security experts also warned that Dirty Frag differs from CVE-2026-31431 in an important way: unlike Copy Fail, Dirty Frag can still be exploited even if the Linux kernel’s algif_aead module has been disabled. Kim stated: “Note that Dirty Frag can be triggered regardless of whether the algif_aead module is available.” He further cautioned: “In other words, even on systems where the publicly known Copy Fail mitigation (algif_aead blacklist) is applied, your Linux is still vulnerable to Dirty Frag.” With no patches currently available and exploit code already circulating publicly, the newly disclosed Dirty Frag LPE vulnerability presents a significant risk to Linux distributions worldwide.
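As a quick sanity check, administrators can verify that the blacklist file is in place and that none of the three modules remain loaded. Below is a minimal Python sketch under that assumption; the file path mirrors the command above, the script only reads /proc, and it is no substitute for an eventual vendor patch.

#!/usr/bin/env python3
# Minimal sanity check for the Dirty Frag interim mitigation: confirm the
# modprobe blacklist file exists and that esp4, esp6 and rxrpc are no
# longer loaded. Read-only; safe to run as an unprivileged user.
import os

BLACKLIST_FILE = "/etc/modprobe.d/dirtyfrag.conf"  # path from the command above
MODULES = {"esp4", "esp6", "rxrpc"}

def loaded_modules():
    """Return the names of all currently loaded kernel modules."""
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

def main():
    status = "present" if os.path.exists(BLACKLIST_FILE) else "MISSING"
    print(f"blacklist file {BLACKLIST_FILE}: {status}")
    still_loaded = MODULES & loaded_modules()
    if still_loaded:
        print("modules still loaded:", ", ".join(sorted(still_loaded)))
    else:
        print("esp4, esp6 and rxrpc are not loaded")

if __name__ == "__main__":
    main()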







According to an official statement, UIDAI and NFSU have established a structured collaboration designed to address emerging challenges in cybersecurity and digital forensics.


In the span of four days, the U.S. government announced two parallel sets of agreements with frontier AI companies that together define the two tracks Washington wants to run simultaneously—test AI for national security risks before the public ever sees it, and deploy AI directly on the military's most classified networks.
The Center for AI Standards and Innovation — CAISI, the entity under the Department of Commerce's National Institute of Standards and Technology that inherited the remit of the former AI Safety Institute — announced new agreements with Google DeepMind, Microsoft, and Elon Musk's xAI. These build on renegotiated agreements with Anthropic and OpenAI that date to 2024, updated to reflect directives from Commerce Secretary Howard Lutnick and America's AI Action Plan.
Under the CAISI agreements, the three companies will hand over their frontier AI models to government evaluators before those models are publicly released. The evaluations probe for national security-relevant capabilities and risks.
To conduct a thorough assessment, developers frequently provide CAISI with models that have reduced or removed safety guardrails — a design choice that allows evaluators to probe what a model can do at its ceiling, not what it will do under commercial safety controls. Evaluators from across the federal government participate, coordinated through the CAISI-convened TRAINS Taskforce, an interagency body focused specifically on AI national security concerns.
CAISI said it has completed more than 40 such evaluations to date. The agreements explicitly support testing in classified environments and were drafted with the flexibility to adapt rapidly as AI capabilities continue advancing.
"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said CAISI Director Chris Fall. "These expanded industry collaborations help us scale our work in the public interest at a critical moment."
Fall was appointed to lead CAISI after Collin Burns — a former Anthropic researcher — was reportedly removed from the director role after just four days. The personnel transition at CAISI's top reflects a broader institutional pivot. Under the Biden administration, the AI Safety Institute focused on safety standards, definitions, and voluntary guardrails. Under Trump, CAISI has shifted its emphasis toward AI acceleration and national security capability assessment. The substance of what the evaluators do — probe powerful models before release — has not changed. The framing of why they do it has.
The latest announcement comes four days after the Department of War (formerly Department of Defense) announced agreements with eight frontier AI companies to deploy their models directly on the military's classified networks for operational use.
The companies cleared are SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle. The networks in question are classified at Impact Level 6, covering secret-level data, and Impact Level 7, which refers to the most highly restricted national-security systems. The stated objectives are data synthesis, situational awareness enhancement, and warfighter decision support.
The Department of War announcement carries one conspicuous absence that has dominated coverage of what it actually means: Anthropic is not on the list. The company that first deployed AI models on Pentagon classified systems — via a Palantir integration under the Maven Smart System contract — is excluded after a dispute over the guardrails governing military and surveillance use of its AI.
The Pentagon had previously branded Anthropic a "supply chain risk," a designation typically reserved for foreign entities posing national security concerns. A March 2026 federal injunction reversed that designation, but it did not restore Anthropic's position as a Pentagon AI vendor. Palantir has pulled its Claude models from its DoD platforms accordingly.
The exclusion has strategic implications that extend beyond one company's contract status. Anthropic's recently released Mythos model — described by Treasury Secretary Scott Bessent as representing a step change in large language model capability — has generated significant attention from U.S. officials and financial sector executives about its potential to supercharge adversarial cyber operations.
Mythos is not among the models being assessed for classified military use, yet senior officials simultaneously cite it as a capability milestone that warrants concern. That gap in the government's stated AI security posture is difficult to characterize as anything other than a policy contradiction.

Attackers have found a way to intercept SMS-based one-time passwords from a victim's mobile device without deploying a single line of malware on the phone itself. Instead, they go through the Windows PC the phone is already connected to.
Researchers documented an intrusion campaign, active since at least January 2026, that combines a remote access trojan called "CloudZ" with a previously undocumented plugin named "Pheno." Together, the two tools are designed to steal credentials and harvest authentication codes that arrive on a victim's phone by abusing Microsoft Phone Link, a legitimate Windows application built into every Windows 10 and 11 system.
Microsoft Phone Link, formerly "Your Phone," is a synchronization tool that bridges a user's Android or iOS device to their Windows PC, mirroring calls, messages, and app notifications directly onto the desktop.
Pheno exploits that bridge. It continuously scans running processes for keywords including "YourPhone," "PhoneExperienceHost," and "Link to Windows" to detect an active phone connection. When one is found, the plugin writes "Maybe connected" to a local staging file and gains access to the Phone Link application's local SQLite database, a file that can contain SMS messages and authenticator app notification content, including OTP codes.
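To illustrate how low the technical bar is for Pheno's first step, and what defenders should therefore watch for, here is a hypothetical Python sketch of the process-scan logic. It uses only the keywords reported by researchers and the standard library; it is not Pheno's actual code.

# Illustrative sketch of Phone Link session detection, the same first step
# Pheno performs. Windows only; relies on the built-in tasklist utility.
import subprocess

# Process-name keywords reported for Pheno (lowercased for matching)
PHONE_LINK_MARKERS = ("yourphone", "phoneexperiencehost")

def phone_link_running() -> bool:
    """Return True if a Phone Link-related process appears in the task list."""
    out = subprocess.run(
        ["tasklist", "/fo", "csv", "/nh"],
        capture_output=True, text=True, check=True,
    ).stdout.lower()
    return any(marker in out for marker in PHONE_LINK_MARKERS)

if __name__ == "__main__":
    print("Phone Link session detected" if phone_link_running()
          else "no Phone Link session found")

For defenders, the hunting signal is the inverse: an unexpected binary enumerating processes and then reading files under Phone Link's local application data is behavior worth alerting on.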
The attack never targets the mobile device directly. It targets the enterprise-managed Windows endpoint the device trusts, bypassing security controls focused on securing smartphones rather than the desktop layer they sync with.
CloudZ is a modular .NET RAT compiled on January 13 and obfuscated with ConfuserEx. Beyond loading Pheno, it supports credential harvesting from web browsers, file operations, remote command execution, and host profiling.
It establishes an encrypted TCP connection to its command-and-control server and rotates between three hardcoded user-agent strings to make its traffic blend with legitimate browser requests. To evade analysis, CloudZ detects .NET debuggers and profilers via environment variable queries and generates its executable functions dynamically in memory — meaning the most sensitive code never sits as a static binary on disk.
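The profiler checks are possible because the .NET runtime loads profilers through documented environment variables, so spotting one takes almost no code. What follows is a hypothetical sketch of that kind of anti-analysis test; the variable list reflects how .NET profilers attach in general and is an assumption, not something extracted from CloudZ's binary.

# Illustrative only: env-var based detection of an attached .NET profiler.
# COR_ENABLE_PROFILING / CORECLR_ENABLE_PROFILING are the documented switches
# the .NET runtime reads to load a profiler; their presence suggests an
# analysis rig rather than an ordinary endpoint.
import os

PROFILER_VARS = (
    "COR_ENABLE_PROFILING", "COR_PROFILER",
    "CORECLR_ENABLE_PROFILING", "CORECLR_PROFILER",
)

def analysis_environment_suspected() -> bool:
    """Return True if any known .NET profiler hook is set in the environment."""
    return any(v in os.environ for v in PROFILER_VARS)

if __name__ == "__main__":
    print("profiler hooks present:", analysis_environment_suspected())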
The infection chain begins with a fake ScreenConnect application update. ScreenConnect is a legitimate remote support tool commonly used in enterprise environments. Executing the fake update drops a Rust-compiled loader, which in turn deploys a .NET loader that installs CloudZ and establishes persistence via a scheduled task. The .NET loader performs thorough sandbox checks, scanning for analysis tools including Wireshark, Fiddler, Procmon, and Sysmon before proceeding.
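Since the name of the scheduled task CloudZ registers has not been published, defenders can hunt for the generic trait instead: tasks whose action launches from a user-writable path such as AppData or Temp, a common hallmark of loader persistence. A rough Python sketch under that assumption, using the built-in schtasks utility:

# Hunt for scheduled tasks launching from user-writable locations. The
# path fragments below are a heuristic, not indicators from this campaign.
import csv
import io
import subprocess

SUSPICIOUS_PATH_FRAGMENTS = ("\\appdata\\", "\\temp\\", "\\downloads\\")

def suspicious_tasks():
    """Yield (task name, action) pairs whose action runs from a risky path."""
    out = subprocess.run(
        ["schtasks", "/query", "/v", "/fo", "csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.DictReader(io.StringIO(out)):
        action = (row.get("Task To Run") or "").lower()
        if any(frag in action for frag in SUSPICIOUS_PATH_FRAGMENTS):
            yield row.get("TaskName"), action

if __name__ == "__main__":
    for name, action in suspicious_tasks():
        print(f"{name}: {action}")

Expect benign hits from legitimate per-user software; the value is in triaging what remains.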
Cisco Talos researchers did not attribute the campaign to a known threat actor. The initial access vector also remains unidentified.


(Image source: X)
Such claims, if validated, could significantly expand the scope of the Canvas cybersecurity incident beyond initial disclosures. For now, the company maintains that its investigation is ongoing.

It is always a bit jarring when the "digital locksmiths" are the ones getting their locks picked. Cybersecurity firm Trellix on Saturday confirmed it suffered a breach involving its internal source code repositories, proving that even the defenders aren't immune to the threats they fight.
On May 2, Trellix released a statement confirming that unauthorized parties had gained access to sections of its internal code. Upon discovering the intrusion, the company initiated a standard response protocol: it hired external security experts to map the extent of the breach and informed relevant authorities immediately.
Trellix maintains that there is no evidence its software distribution channels were compromised or that any leaked code has been used in active attacks.
While the "all clear" on product safety is a relief, several questions remain. Trellix has yet to identify the threat actors, the duration of the unauthorized access, or the specific volume of data stolen.
A breach at a firm like Trellix—born from the merger of McAfee Enterprise and FireEye—carries more weight than a standard data leak. Because Trellix provides Endpoint Detection and Response (EDR) and XDR services to governments and global banks, its source code is a roadmap for attackers.
Vulnerability Research: Having the code allows hackers to hunt for "zero-day" flaws without having to guess how the software works.
Supply Chain Risk: If an attacker can inject malicious code into a trusted update, they can compromise thousands of customers at once.
Bypassing Defenses: Knowing how a security tool "thinks" makes it much easier for malware to stay invisible.
Trellix is far from the first titan to be targeted. It joins a list of major players like Microsoft, Okta, and LastPass, all of which have dealt with source code theft in recent years. This pattern suggests that sophisticated actors (whether cybercriminals or nation-states) are increasingly focused on the "keys to the kingdom."
For now, there isn't a "fire drill" for Trellix users. Since there is no proof of tampered software, the immediate risk remains low. Trellix has promised to be transparent as its investigation concludes. Until then, the industry is left waiting to see if this was a simple smash-and-grab or the opening move of a much larger campaign.

When governments introduced stricter online age checks under the UK’s Online Safety Act, the goal was to keep children away from harmful content. But in practice, the system is already showing cracks—and the most telling insight comes from the very users it’s meant to protect.
Children aren’t just encountering age checks; they’re actively bypassing them, often with surprising ease.
According to a new report from Internet Matters, nearly half of children (46%) believe age verification systems are easy to get around, while only 17% think they are difficult. That perception isn’t theoretical. It’s grounded in real behavior, shared knowledge, and increasingly creative workarounds.
From simply entering a fake birthdate to using someone else’s ID, children have developed a toolkit of bypass techniques. Some methods are almost trivial—changing a date of birth or borrowing a parent’s login—while others reflect a growing sophistication. Kids reported submitting altered images, using AI-generated faces, or even drawing facial hair on themselves to trick facial recognition systems.
In one striking example, a parent described catching their child using makeup to appear older—successfully fooling the system.
I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old. – Mum of boy, 12
But the problem goes deeper than perception. It’s systemic.
The report reveals that nearly one in three children (32%) admitted to bypassing age restrictions in just the past two months. Older children are even more likely to do so, which shows how digital literacy often translates into evasion capability.
The most common methods are also the simplest, such as entering a fake birthdate or borrowing someone else’s login. Despite widespread concerns about VPNs, they play a relatively minor role. Only 7% of children reported using them to bypass restrictions, suggesting that simpler, low-effort tactics remain the preferred route.
In other words, the barrier to entry is not just low—it’s practically optional.
Ironically, even when children attempt to follow the rules, the technology doesn’t always cooperate.
Some reported being incorrectly identified as older—or younger—by facial recognition systems. In cases where they were flagged as underage, enforcement was often inconsistent or temporary. One child described being blocked from going live on a platform for just 10 minutes before being allowed to try again.
This inconsistency creates a loophole where persistence pays. If at first you’re denied, simply try again.
Perhaps the most concerning finding isn’t that children can bypass age checks—it’s that adults can too.
The report raises fears that adults may exploit these same weaknesses to access spaces intended for younger users. In some cases, this involves using images or videos of children to trick verification systems. There are even reports of adults acquiring child-registered accounts to blend into youth platforms.
This flips the entire premise of age verification on its head. Instead of protecting children, flawed systems may inadvertently expose them to greater risk.
Adding another layer of complexity, parents themselves are sometimes complicit.
About 26% of parents admitted to allowing their children to bypass age checks, with 17% actively helping them do so. The reasoning is often pragmatic. Parents feel they understand the risks and trust their child’s judgment.
I have helped my son get around them. It was to play a game, and I knew the game, and I was happy and confident that I was fine with him playing it. – Mum of non-binary child, 13
But this undermines the consistency of enforcement. If rules vary from household to household, platform-level protections lose their impact.
Interestingly, the data also suggests that communication matters. Children who regularly discuss their online activity with parents are less likely to bypass restrictions than those who don’t.
The motivations aren’t always malicious. In many cases, children are simply trying to access social media (34%), gaming communities (30%), or messaging apps (29%) that their peers are already using.
What this reveals is a fundamental tension: age verification systems are trying to enforce boundaries in environments where social participation is the norm.
Age verification is often positioned as a cornerstone of online safety. But in practice, it’s proving to be more of a speed bump than a safeguard.
Children understand the systems. They share methods. They adapt quickly. And until the technology—and its enforcement—becomes significantly more robust, age checks may offer more reassurance than real protection.


