
If a fake moustache can fool age checks, is the Online Safety Act working?

A report based on a survey by the UK’s Internet Matters shows that much of the responsibility for managing the online safety of children still falls on families.

The Online Safety Act came into effect in July 2025, and the report explores what has changed in the online lives of UK families since then.

We discussed in December 2025 whether the privacy risks of age verification outweighed the enhanced child protection. While the report shows some progress, it mostly provides “an early view of how the online landscape is changing, and crucially, where it is not.”

Around half of children say they now see more age-appropriate content, and roughly four in ten parents and children feel the online world has become somewhat safer.

The online world is as much a part of a child’s environment as the physical world is. And blocking the view to parts of that world is not taken lightly. Almost half of children think age checks are easy to bypass. About a third admit to doing so recently, using tactics from fake birthdates and borrowed logins to spoofed faces and, less commonly, VPNs.

“I did catch my son [12] using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.”

Yet 90% of children who noticed improved blocking and reporting saw this as a good thing. Their support for these safety features is pragmatic. They point to:

  • clearer rules
  • restricted contact with strangers
  • limits on high-risk functions

They also rate these features as helpful in reducing exposure to harmful content and interactions.

But the system is not perfect. In the month after the child protection codes came into force, almost half of children reported some online harm, including violent, hateful, and body image-related content that should be covered by the Act’s protections.

The survey also revealed that age checks are now commonplace. Over half of children said they were asked to verify their age within a recent two-month window, often on major platforms like TikTok, YouTube/Google, and Roblox, on both new and existing accounts.

The technology is improving. Platforms use facial age estimation, government ID, and third-party age assurance apps, and these are usually easy for children to complete.

However, gains in protection come with unresolved and, in some cases, growing concerns around privacy and data use, especially around age verification and AI.

Parents are worried not just about what data is collected for age checks, but whether it will be stored or reused by government or industry. This has fueled calls for central, privacy-protective solutions rather than fragmented data collection across platforms.

Because age assurance systems are both intrusive (in terms of data) and often ineffective (easy workarounds, weak enforcement), the report suggests they may not yet provide a good safety-to-privacy trade-off from a family perspective.

Obviously, the survey also didn’t capture input from adults pretending to be children to gain access to child-only spaces, a risk that parents link directly to predatory behavior.

The authors conclude that the Online Safety Act has started to reshape children’s online environments, making safety features more visible and enabling more age‑appropriate experiences in some areas.

However, the Act has not yet produced a “step change.” Harmful content remains widespread, age‑assurance is patchy and easy to circumvent, and key concerns such as time spent online, AI risks, and persuasive design remain under‑regulated.


Browse like no one’s watching. 

Malwarebytes Privacy VPN encrypts your connection and never logs what you do, so the next story you read doesn’t have to feel personal. Try it free → 

Meta & YouTube Found Negligent: A Turning Point for Big Tech?

A landmark jury verdict has found Meta and YouTube negligent in a social media addiction case, raising major questions about platform accountability and legal protections under Section 230. This episode covers the details of the case, why the ruling is significant, and what it could mean for the future of social media, privacy, and cybersecurity. […]

The post Meta & YouTube Found Negligent: A Turning Point for Big Tech? appeared first on Shared Security Podcast.



Blocking children from social media is a badly executed good idea

While we can probably all agree that there is more than enough proof that social media is bad for the mental health of our children, the methods we are using to block or ban them from it seem to do more harm than good.

Across the world, lawmakers are tripping over each other to be seen “doing something” about kids and social media. Europe is slowly turning into a patchwork of age limits, curfews, and partial bans, with each country testing its own flavor of restriction while platforms try to update their systems just fast enough to stay compliant. Australia has gone even further with a nationwide ban for children under 16 that regulators now struggle to enforce at scale. The political message seems to be: social media is dangerous, and the state will step in where parents supposedly fail.

On paper, that sounds decisive. In practice, it is messy, easy to bypass, and it risks shifting the problem rather than solving it. Most of these measures depend on age‑verification systems that were never designed to handle this kind of pressure. Research looking at sign‑up flows for major platforms shows what every teenager already knows: it is not hard to lie about your date of birth, borrow an older friend’s details, or hop to a service that is just outside the current regulatory crosshairs. The result is a lot of political noise, a lot of extra friction for everyone, and only a marginal effect on the very group these rules are aimed at.

Worse, by treating all social media use by minors as equally harmful, bans erase important nuances. There is a world of difference between doom‑scrolling through algorithmically boosted gore reels at 2 AM and using a group chat to do homework, laugh at memes, or stay in touch with cousins abroad. Studies and expert reviews echo this. Social media can contribute to anxiety, depression, and poor sleep, but it can also provide support, connection, and a sense of belonging, especially for teens who feel isolated offline. A blunt ban cuts off both the toxic and the helpful parts in one sweep, which is not necessarily an improvement.

The tools we build to make bans enforceable come with their own side‑effects. Age‑verification schemes based on IDs, biometric analysis, or third‑party brokers may reduce some underage sign‑ups, but they also normalize handing over sensitive data just to speak or listen online. Legal and technical analysts warn that these systems introduce new privacy risks, expand surveillance, and can disproportionately impact vulnerable communities who rely on pseudonyms and anonymity for their safety. For children, the takeaway is that if they want to participate, they must accept invasive checks they barely understand, or learn how to bypass them.

Which children easily do.

When you close one door without addressing the underlying behavior, kids will find another, as they have done throughout history. From chat rooms to instant messaging to early social networks, every attempt to lock children out has produced a mix of circumvention and secrecy. That secrecy is a problem in itself, because it pushes online life into hidden accounts, borrowed devices, or unregulated platforms where adults have even less visibility into what is going on. The more online activity that moves into that grey area of illegality, the harder it becomes to have honest conversations about the risks.

That, ultimately, is the core weakness of “ban first, ask questions later” policies. They are optimized for sending a strong signal to voters, not for building resilient habits in families. Politicians and platforms both have roles to play to make the online environment safer. Platforms can offer better design, safer defaults, more transparency, and proper enforcement against clear abuse. But none of that will replace what actually makes a difference for a child: an adult who understands the risks well enough to talk about them, set reasonable boundaries, and is trusted enough that the child will come to them when something goes wrong. No child suddenly matures enough on their 13th or even 16th birthday to be able to fight off the pitfalls of extremely fine-tuned algorithms.

We should be honest about this. No regulator, filter, or age‑gate will ever know your child as well as you do. No law will be able to adjust itself on the fly when a teenager suddenly starts using a new app in a worrying way. Governments can and should tackle the worst excesses, and hold companies responsible so they stop pretending that maximized engagement is compatible with child safety. But in the end, the real responsibility for keeping children safe online cannot be outsourced to apps or regulation. It lies, unavoidably, with the people in their daily lives.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Apple Introduces Age Checks for iPhone Users in the UK


Apple has introduced age verification measures in the UK that will require iPhone and iPad users to confirm they are adults before accessing certain services, including 18-plus apps. The change comes with the iOS 26.4 update and is being implemented in response to legal requirements in certain regions, including the UK. According to Apple, users may be prompted to confirm that they are adults when creating a new Apple Account or while using specific services. This requirement applies to actions such as downloading apps or changing certain settings linked to their Apple Account.

How Users Confirm Their Age

As part of the UK rollout, users can confirm their age through multiple methods. Apple may use existing account information, such as whether a credit card is already linked to the account or how long the account has been active, to help determine whether a user is an adult. Users also have the option to add a credit card to confirm their age or to scan a government-issued ID, such as a driver’s license or national ID. Apple has stated that credit card details and ID documents are not stored unless users choose to save them for other purposes, such as adding a payment method.

To complete the process, users must update their device to the latest software version and follow the prompts in the Settings app. If they choose not to confirm immediately, they will continue to see a notification in Settings prompting them to complete the process later.

If verification cannot be completed on the device, Apple requires users to use approved methods such as a driver’s license, national ID, or a credit card. Debit cards, gift cards, and passports are not supported, although a Digital ID in Apple Wallet created using a U.S. passport may be accepted in some cases.

Impact on Child Online Accounts

The changes also affect how minors use Apple services. In the UK, children under 13 cannot create an Apple Account without parental consent and must be part of a Family Sharing group. In such cases, a parent or guardian who has confirmed their age may be required to approve certain actions, including app downloads or changes to safety settings. Depending on the region, some features may not be available to users until they turn 18. Apple has also noted that age requirements for child accounts vary across countries, with thresholds ranging from under 13 in most regions to higher limits in others.

Regulatory Push on Child Online Safety

The rollout comes as UK regulators increase scrutiny of how platforms enforce age restrictions. The Information Commissioner’s Office (ICO) and Ofcom have asked major platforms to outline how they plan to strengthen child safety protections, particularly in preventing children under 13 from accessing services meant for older users. The UK government is also considering additional measures, including potential restrictions on social media use for younger users and pilot programs to test new regulatory approaches. Several European countries have announced or are considering similar steps.

Ofcom has stated that many platforms are not effectively enforcing minimum age requirements, with children continuing to access services despite age restrictions. The regulator has called on companies to implement stronger measures, including effective age checks, improved protections against grooming, safer content feeds, and proper assessment of new product features before they are introduced.

Dame Melanie Dawes, Ofcom Chief Executive, said: “These online services are household names, but they’re failing to put children’s safety at the heart of their products. There is a gap between what tech companies promise in private, and what they’re doing publicly to keep children safe on their platforms. Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid. That must now change quickly, or Ofcom will act.”

Growing Focus on Enforcement

The Apple age verification measures align with broader enforcement efforts under the UK’s online safety framework. Ofcom has written to major platforms, including Facebook, Instagram, Roblox, Snapchat, TikTok, and YouTube, requiring them to demonstrate how they will enforce minimum age rules and improve child safety protections. Platforms have been given deadlines to respond, after which Ofcom will assess their actions and determine whether further regulatory steps are necessary. The regulator has also indicated it is prepared to take enforcement action if companies fail to meet expectations. The introduction of age verification at the device and account level reflects increasing emphasis on ensuring that age restrictions are applied more consistently across digital services, particularly where children may be exposed to adult content or features.

Reddit, porn sites fined by UK regulators over children’s safety and privacy

The UK’s online safety and privacy regulators are targeting companies that violate new age verification laws at both ends: porn sites that did not keep children out, and mainstream platforms that profited from children coming in.

On February 23, media regulator Ofcom fined porn operators that failed to put “highly effective” age checks in place. Within the same 24-hour time frame, the Information Commissioner’s Office (ICO)—which is a separate, independent regulatory office—hit Reddit with an $18.2 million fine for unlawfully using children’s personal data for targeted advertising and recommendation systems.​​

Together, the cases show how quickly the UK is moving from guidance and “codes” to very real enforcement for services that don’t take children’s rights seriously.

Porn sites punished for weak age checks

Under the UK’s Online Safety Act 2023, services that publish or host pornography must use “highly effective” age assurance to stop children from accessing pornographic content. That means the classic splash page warning or an “I am over 18” tick box is no longer enough.

Porn companies have reacted in different ways as the rules began to take hold: Some big players embraced more intrusive checks; others, like Pornhub, geo‑restricted or partially withdrew from the UK, and a minority effectively called the regulator’s bluff and carried on with token measures. Ofcom is now going after that last group.​

On Monday, Ofcom fined US‑based porn operator 8579 LLC around $1.8 million for failing to implement proper age verification on its sites and for dragging its heels on compliance. The company has also been ordered to hand over a complete list of all websites it operates, with an extra daily penalty if it fails to comply.​

Ofcom said it opened investigations into dozens of adult sites and will impose fines and daily penalties until proper age checks are in place, and reports that more than 6,000 porn sites have now moved to “highly effective” age checks. It can also employ certain business‑disruption and blocking powers for stubborn operators that continue to violate the law.​

Reddit fined for using children’s data

Meanwhile, the Information Commissioner’s Office (ICO) took aim at something different, but very much related: How mainstream online platforms treat the children they already have, rather than how they keep children out.

The Office’s latest decision imposes a substantial fine on Reddit for “children’s privacy failures,” after the regulator concluded the platform unlawfully used UK children’s personal data for targeted advertising and profiling. The decision follows a multi‑year push by the ICO to enforce its Age Appropriate Design Code (also known as the Children’s code) and to crack down on platforms that treat under‑18s like just another audience segment.​

The ICO said its investigation found that Reddit:

  • Failed to effectively identify and protect under‑18s on the platform, despite knowing that children were present in large numbers.
  • Used children’s personal information in recommender systems and targeted advertising, without adequate safeguards or a lawful basis under UK data protection law.
  • Did not ensure that the “best interests of the child” were a primary consideration in its design and data‑use decisions, as required by the Children’s code.

The regulator’s concerns about Reddit are not new. In 2025 it announced investigations into TikTok, Reddit, and Imgur that focused on how the companies use UK children’s data and what age‑assurance tools they rely on. By late 2025, the ICO had issued a notice of intent to fine Reddit, signaling that it believed serious breaches had occurred. The new $18.2 million fine is the outcome of that process.

The message from Information Commissioner John Edwards is blunt: Social media and video‑sharing platforms are welcome in the UK, but “this cannot be at the expense of children’s privacy,” and the responsibility to keep children safe “lies firmly at the door of the companies offering these services.”

Two regulators, one child‑safety push

Although Ofcom and the ICO have different remits, their actions line up neatly.

  • Ofcom enforces the Online Safety Act, which focuses on harmful content and requires robust age assurance for porn and other high‑risk services.​​
  • The ICO enforces UK GDPR and the Children’s code, which focus on how children’s personal data is collected, used, and shared.

The regulators have explicitly said they will work closely together to coordinate their efforts on children’s safety. In practical terms, that means:

  • A porn site that implements “highly effective” age checks to satisfy Ofcom may also find itself under ICO scrutiny if its identity checks or data sharing do not respect data protection law.
  • A social platform that complies with the Children’s code by turning off profiling for children and tightening privacy defaults may still need Ofcom‑compliant age assurance if it hosts adult or otherwise high‑risk content.

The overlap is already visible. The ICO investigated how Reddit and other platforms use age assurance and recommender systems, while Ofcom set out specific guidance on acceptable age‑verification methods under the Online Safety Act.

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.​

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.​
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.​
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.​
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”​
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”​
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.​
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.​

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.​
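The double-blind idea can be illustrated with a minimal sketch: a verifier issues a short-lived, signed “18+” claim, and a relying site checks the signature without ever learning who the user is. This is a simplified illustration, not any vendor’s real implementation; the function names, the shared signing key, and the HMAC construction are assumptions for the example (a production scheme would use blind signatures or zero-knowledge proofs so the verifier also cannot link tokens to the sites where they are redeemed).

```python
import base64
import hashlib
import hmac
import json
import time

# Assumption for this sketch: the verifier and relying sites share a
# signing key provisioned out of band. Real double-blind schemes use
# blind signatures instead, so the verifier cannot link token to site.
SECRET = b"verifier-signing-key"

def issue_token(is_over_18: bool, ttl_seconds: int = 300) -> str:
    """Verifier side: issue a short-lived '18+' claim.

    The claim carries no identity, only the age assertion and expiry.
    """
    claim = {"over18": is_over_18, "exp": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_token(token: str) -> bool:
    """Relying-site side: accept only a valid, unexpired 18+ claim.

    The site learns 'over 18: yes/no' and nothing about the user.
    """
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claim = json.loads(payload)
    return claim["over18"] and claim["exp"] > time.time()

token = issue_token(True)
print(check_token(token))  # True
```

The key property the sketch shows is separation of knowledge: the issuer sees the user’s age evidence but not the destination site, and the site sees only a yes/no claim, which is exactly the trade-off most of the methods listed above fail to achieve.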

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that sensitive data ends up in the wrong hands.​


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Roblox gives predators “powerful tools” to target children, says LA County

Los Angeles County has sued online gaming company Roblox, adding to a series of suits that accuse the virtual worlds platform of misleading parents into thinking it’s safe while leaving children exposed to predators and sexually explicit content. The February 19 filing makes LA County the first California government body to take the company to court over child safety.

Roblox claims over 151 million daily users, most of whom are kids. The company said it disputes the claims and will defend itself vigorously.

What the suit tells us about how predators operate

According to the complaint, Roblox violated California’s Unfair Competition Law and False Advertising Law. County Counsel Dawyn R. Harrison, who filed the lawsuit, said that the gaming platform has repeatedly exposed kids to sexually explicit material, grooming, and exploitation because it has chosen profit over safety.

“This is not about a minor lapse in safety,” Harrison said in a prepared press release. “It is about a company that gives pedophiles powerful tools to prey on innocent and unsuspecting children.”

Until November 2024, anyone could friend and message a child on the platform, the suit said. When Roblox changed those rules it was allegedly still possible for accounts registered with ages over 13 to message each other without having previously been connected, meaning that adults could still message teens who didn’t know them.

The suit also alleged that it’s easy for predators to masquerade as children on the site, because age has historically been self-reported with no enforcement of parental approval when kids sign up.

But Roblox’s approach to age verification changed last September, when the company announced plans to use age estimation on all users who wanted to use the platform’s communication features. It then introduced the third-party Persona system, which requires a facial age check to use chat features. But Persona itself has become a problem.

Researchers recently discovered an exposed frontend revealing the tool does far more than check ages, including running facial recognition against watchlists. It can also hold on to personal data including government IDs, device fingerprints, and biometric information for up to three years. Discord has already walked away from Persona, but Roblox hasn’t.

Even setting the vendor aside, the safeguards aren’t working as advertised. When Malwarebytes researchers created an account for a child under 13 on Roblox in December 2025, they found that the child account could find communities linked to cybercrime and fraud-related keywords.

The complaint contains many allegations about the type of behavior that has occurred on Roblox, including:

  • The simulated rape of a seven-year-old’s avatar in a digital playground environment
  • “Diddy” games that recreated some events from the imprisoned rap star’s parties
  • The creation of Jeffrey Epstein-themed accounts, and the operation of a game called “Escape to Epstein Island”
  • Virtual strip clubs where avatars can disrobe and give lap dances

The LA County complaint also mentioned a report from financial forensic research company Hindenburg Research, published in October 2024. Hindenburg, a short seller that profits when the share price of the companies it investigates falls, said that it had found multiple groups on the site trading child sexual abuse material and soliciting sexual favors. The report also alleged that Roblox was cutting safety spending even as problems mounted.

A former senior product designer allegedly told Hindenburg the trade-off was deliberate. “If you’re limiting users’ engagement, it’s hurting your metrics…in a lot of cases, the leadership doesn’t want that,” the product designer allegedly said, according to the lawsuit.

A cacophony of cases

This is far from the only case Roblox has had to defend. In 2022, the Social Media Victims Law Center filed suit against the company for allegedly touting child safety while allowing the exploitation of a young girl. The following year, multiple families filed suit against the gaming company for allegedly misleading them about content harmful to children. Last year, the mother of a 15-year-old boy from Texas sued Roblox after he died by suicide. The complaint alleged that he was groomed and subsequently blackmailed over nude pictures he’d been persuaded to send a predator on the site.

Another lawsuit filed against the company in San Mateo in February 2025 claimed that a 27-year-old predator reached a 13-year-old boy through the platform’s “whisper” messaging system. That case described the platform as “a digital and real-life nightmare for children.”

The California suit joins an expanding pile of government cases against Roblox. Louisiana sued the company in August 2025, followed by Kentucky (October 2025), Texas (November 2025), and Florida (December 2025). Georgia’s Attorney General is also investigating the company. And a collection of separate private suits against the company have been consolidated into a single multi-district litigation.

What parents can do

So, what can parents do? Interestingly, one potential answer came last year when the company’s CEO Dave Baszucki spoke with the BBC:

“My first message would be, if you’re not comfortable, don’t let your kids be on Roblox.”

If you do want to let your children use Roblox (or any other site), then close monitoring is important. Restrict friend requests and disable open chat to the extent that the platform allows. Anonymize your children’s profiles to potentially avoid what one family claimed happened to them in an earlier lawsuit, in which they had to move across the country after the predator reportedly tracked down their child’s address via Roblox.

Child education is key. Tell your children not to reveal personal information and not to take conversations off-platform, because that’s where exploitation escalates. And keep the conversation going, not as a one-time lecture, but as a regular part of talking about their day.

For more information about child safety, check out Malwarebytes’ research on the topic, which also offers useful advice.

LA County is seeking civil penalties of up to $2,500 per violation per day, plus injunctive relief that could force structural changes to how the platform operates.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Age verification vendor Persona left frontend exposed, researchers say

Researchers investigating Discord’s age-verification checks say they discovered an exposed frontend belonging to Persona, the identity-verification vendor used by Discord. It revealed a far more expansive surveillance and financial intelligence stack than a simple “teen safety” tool.

A short while ago we reported that Discord will limit profiles to teen-appropriate mode until you verify your age. That means anyone who wants to continue using Discord as before would have to let it scan their face—and the internet was far from happy.

To analyze these scans, Discord uses biometric identity verification start-up Persona Identities, Inc., a venture that offers Know Your Customer (KYC) and Anti-Money Laundering (AML) solutions that rely on biometric identity checks to estimate a user’s age.

To demonstrate the privacy implications, researchers took a closer look and found a publicly exposed Persona frontend on a US government–authorized server, with 2,456 accessible files.

You read that right. According to researcher “Celeste,” the exposed code, which has now been removed, sat at a US government-authorized endpoint that appears to have been isolated from its regular work environment.

In those files, the researchers found details about the extensive surveillance Persona software performs on its users. Beyond checking their age, the software performs 269 distinct verification checks, runs facial recognition against watchlists and politically exposed persons, screens “adverse media” across 14 categories (including terrorism and espionage), and assigns risk and similarity scores.

Persona collects—and can retain for up to three years—IP addresses, browser and device fingerprints, government ID numbers, phone numbers, names, faces, plus a battery of “selfie” analytics like suspicious-entity detection, pose repeat detection, and age inconsistency checks.


See if your personal data has been exposed.


At a time when age verification is very much a hot topic, this is not the kind of news to persuade privacy advocates that age verification is in our best interest. Sending data obtained during age verification checks to data brokers and foreign governments—reportedly Persona was tested by Discord in the UK—will not instill the level of trust needed for users to feel comfortable submitting to this kind of scrutiny.

This comes amid broader questions about whether age verification is actually doing what it’s supposed to do. Euronews looked at the effect of Australia’s world-leading ban on social media for under-16s. Australia’s new rules have only been in force for six weeks, but while the country’s internet regulator says it has shut down about 4.7 million accounts held by under‑16s on platforms like TikTok, Instagram, Snapchat, YouTube, X, Twitch, Reddit, and Threads, children and parents describe a very different reality. Interviews with teenagers, parents and researchers indicate that many children are still accessing banned apps through simple workarounds.

According to The Rage, Discord has stated it will not continue to use Persona for age verification. However, other platforms reported to use Persona include:

  • Roblox: Uses Persona’s facial age estimation and ID verification as the core of its “age checks to chat” system.
  • OpenAI / ChatGPT: OpenAI’s help center explains that if you need to verify being 18+, “Persona is a trusted third-party company we use to help verify age,” and that Persona may ask for a live selfie and/or government ID.
  • Lime: The shared e-scooter and bike service deploys custom age verification flows with Persona to meet each region’s unique requirements.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Discord will limit profiles to teen-appropriate mode until you verify your age

Discord announced it will put all existing and new profiles in teen-appropriate mode by default in early March.

The teen-appropriate profile mode will remain in place until users prove they are adults. To change a profile to “full access” will require verification by Discord’s age inference model—a new system that runs in the background to help determine whether an account belongs to an adult, without always requiring users to verify their age.

Savannah Badalich, Head of Product Policy at Discord, explained the reasoning:

“Rolling out teen-by-default settings globally builds on Discord’s existing safety architecture, giving teens strong protections while allowing verified adults flexibility. We design our products with teen safety principles at the core and will continue working with safety experts, policymakers, and Discord users to support meaningful, long term wellbeing for teens on the platform.”

Platforms have been facing growing regulatory pressure—particularly in the UK, EU, and parts of the US—to introduce stronger age-verification measures. The announcement also comes as concerns about children’s safety on social media continue to surface. In research we published today, parents highlighted issues such as exposure to inappropriate content, unwanted contact, and safeguards that are easy to bypass. Discord was one of the platforms we researched.

The problem in Discord’s case lies in the age-verification methods it’s made available, which require either a facial scan or a government-issued ID. Discord says that video selfies used for facial age estimation never leave a user’s device, but this method is known not to work reliably for everyone.

Identity documents submitted to Discord’s vendor partners are also deleted quickly—often immediately after age confirmation, according to Discord. But, as we all know, computers are very bad at “forgetting” things and criminals are very good at finding things that were supposed to be gone.

Besides all that, the effectiveness of this kind of measure remains an issue. Minors often find ways around systems—using borrowed IDs, VPNs, or false information—so strict verification can create a sense of safety without fully eliminating risk. In some cases, it may even push activity into less regulated or more opaque spaces.

As someone who isn’t an avid Discord user, I can’t help but wonder why keeping my profile teen-appropriate would be a bad thing. Let us know in the comments what your objections to this scenario would be.

I wouldn’t have to provide identification and what I’d “miss” doesn’t sound terrible at all:

  • Mature and graphic images would be permanently blocked.
  • Age-restricted channels and servers would be inaccessible.
  • DMs from unknown users would be rerouted to a separate inbox.
  • Friend requests from unknown users would always trigger a warning pop-up.
  • No speaking on server stages.

Given the amount of backlash this news received, I’m probably missing something—and I don’t mind being corrected. So let’s hear it.



In 2025, age checks started locking people out of the internet

If 2024 was the year lawmakers talked about online age verification, 2025 was the year they actually flipped the switch.

In 2025, across parts of Europe and the US, age checks for certain websites (especially pornography) turned long‑running child‑protection debates into real‑world access controls. Overnight, users found entire categories of sites locked behind ID checks, platforms geo‑blocking whole countries, and VPN traffic surging as people tried to get around the new walls.

From France’s hardline stance on adult sites to the UK’s Online Safety Act, to a patchwork of new rules across multiple US states, these “show me your ID before you browse” systems are reshaping the web. The stated goal is to “protect the children,” but in practice the outcome is frequently a blunt national block, followed by users voting with their VPN buttons.

The core tension: safety vs privacy

The fundamental challenge for websites and services is not checking age in principle, but how to do it without turning everyday browsing into an identity check. Almost every viable method asks users to hand over sensitive data, raising the stakes if (or more likely when) that data leaks in a breach.

For ordinary users, the result is a confusing mess of blocks, prompts, and workarounds. On paper, countries want better protection for minors. In practice, adults discover that entire platforms are unavailable unless they are prepared to disclose personal information or disguise where they connect from. No website wants to be the one blamed after an age‑verification database is compromised, yet regulators continue to push for stronger identity links.

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.
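To make the double‑blind idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: it assumes a trusted verifier whose signing key is shared out of band with relying sites, and uses a symmetric HMAC for simplicity (a real system would use asymmetric or blinded signatures so sites cannot impersonate the verifier).

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared key; a real deployment would use asymmetric keys
# so that relying sites can verify tokens but not mint them.
VERIFIER_KEY = b"demo-secret-key"


def issue_token(now=None):
    """Verifier side: after checking the user's age, sign a minimal claim.

    The claim carries only "18+" and an expiry: no user identity, and
    nothing telling the verifier which site the token will be shown to.
    """
    claim = {"claim": "18+", "exp": (now or int(time.time())) + 600}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    sig = hmac.new(VERIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig


def check_token(token, now=None):
    """Site side: accept only if the signature is valid and not expired."""
    payload, _, sig = token.partition(".")
    expected = hmac.new(VERIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim.get("claim") == "18+" and claim.get("exp", 0) > (now or int(time.time()))
```

The key property is what the token does not contain: the site learns only “this person was verified as 18+ recently,” and the verifier never sees where the token is spent.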

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that identity data ends up in the wrong hands. This turns age gates into long‑lived security liabilities rather than temporary access checks.

Geoblocking, VPNs, and cross‑border frictions

Right now, most platforms comply by detecting user location via IP address and then either demanding age checks or denying access entirely to users in specific regions. France’s enforcement actions, for example, led several major adult sites to geo-block the entire country in 2025, while the UK’s Online Safety Act coincided with a sharp rise in VPN use rather than widespread cross-border blocking.
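The compliance logic described above can be sketched in a few lines. This is an illustrative toy, not any platform’s real implementation: the geo lookup is faked with a prefix table (production systems use databases such as MaxMind GeoIP), and the policy map simply mirrors the France/UK examples in the text.

```python
# Toy IP-prefix-to-country table standing in for a real GeoIP database.
GEO_BY_PREFIX = {"81.2.": "GB", "90.0.": "FR", "8.8.": "US"}

# Hypothetical per-country compliance choices, mirroring the article:
# withdraw from France entirely, require an age check in the UK.
POLICY = {
    "FR": "geoblock",
    "GB": "age_check",
}


def route_request(ip):
    """Return the action a platform might take for a visitor's IP."""
    country = next((c for p, c in GEO_BY_PREFIX.items() if ip.startswith(p)), "??")
    return POLICY.get(country, "allow")
```

The crude part is exactly what the article points out: the decision hangs entirely on the apparent source IP, so a VPN exit node in a permissive country changes the answer.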

European regulators generally focus on domestic ISPs, Digital Services Act reporting, and large platform fines rather than on filtering traffic from other countries, partly because broad traffic blocking raises net‑neutrality and technical complexity concerns. In the US, some state proposals have explicitly targeted VPN circumvention, signalling a willingness to attack the workarounds rather than the underlying incentives.

Meanwhile, network‑level filtering vendors advertise “cross‑border” controls and VPN detection for governments, hinting at future scenarios where unregulated inbound flows or anonymity tools are aggressively throttled. If enforcement pressure grows, these capabilities could evolve from niche offerings into standard state infrastructure.

A future of less anonymity?

A common argument is that eroding online anonymity will also curb toxic behavior and abuse on social media, since people act differently when their real‑world identity is at stake. But tying everyday browsing to identity checks risks chilling legitimate speech and exploration long before it delivers any proven civility benefits.

A world where every connection requires ID is unlikely to arrive overnight. Still, the direction of travel is clear: more countries are normalizing age gates that double as identity checks, and more users are learning to route around them. Unless privacy‑preserving systems like robust double‑blind verification become the norm, age‑verification policies intended to protect children may end up undermining both privacy and open access to information.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
