
If a fake moustache can fool age checks, is the Online Safety Act working?

May 7, 2026, 07:21

A report based on a survey by the UK’s Internet Matters shows that much of the responsibility for managing the online safety of children still falls on families.

The Online Safety Act came into effect in July 2025, and the report explores what has changed in the online lives of UK families since then.

We discussed in December 2025 whether the privacy risks of age verification outweighed the enhanced child protection. While the report shows some progress, it mostly provides “an early view of how the online landscape is changing, and crucially, where it is not.”

Around half of children say they now see more age-appropriate content, and roughly four in ten parents and children feel the online world has become somewhat safer.

The online world is as much a part of a child’s environment as the physical world, and blocking access to parts of it is not taken lightly. Almost half of children think age checks are easy to bypass. About a third admit to doing so recently, using tactics from fake birthdates and borrowed logins to spoofed faces and, less commonly, VPNs.

“I did catch my son [12] using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.”

Yet 90% of children who noticed improved blocking and reporting saw this as a good thing. Their support for these safety features is pragmatic. They point to:

  • clearer rules
  • restricted contact with strangers
  • limits on high-risk functions

 They also rate these features as helpful in reducing exposure to harmful content and interactions.

But the system is not perfect. In the month after the child protection codes came into force, almost half of children reported some online harm, including violent, hateful, and body image-related content that should be covered by the Act’s protections.

The survey also revealed that age checks are now commonplace. Over half of children said they were asked to verify their age within a recent two-month window, often on major platforms like TikTok, YouTube/Google, and Roblox, on both new and existing accounts.

The technology is improving. Platforms use facial age estimation, government ID, and third-party age assurance apps, and these are usually easy for children to complete.

However, gains in protection come with unresolved and, in some cases, growing concerns around privacy and data use, especially around age verification and AI.

Parents are worried not just about what data is collected for age checks, but whether it will be stored or reused by government or industry. This has fueled calls for central, privacy-protective solutions rather than fragmented data collection across platforms.

Because age assurance systems are both intrusive (in terms of data) and often ineffective (easy workarounds, weak enforcement), the report suggests they may not yet provide a good safety-to-privacy trade-off from a family perspective.

Obviously, the survey also didn’t capture input from adults pretending to be children to gain access to child-only spaces, a risk that parents link directly to predatory behavior.

The authors conclude that the Online Safety Act has started to reshape children’s online environments, making safety features more visible and enabling more age‑appropriate experiences in some areas.

However, the Act has not yet produced a “step change.” Harmful content remains widespread, age‑assurance is patchy and easy to circumvent, and key concerns such as time spent online, AI risks, and persuasive design remain under‑regulated.




Millions of students’ personal data stolen in major education breach

May 6, 2026, 09:45

Instructure, the company behind the Canvas learning management system (LMS), confirmed a cyber incident and subsequent data breach affecting its cloud‑hosted environment.

The ShinyHunters ransomware group claims it is behind the attack and says it stole roughly 275 million records tied to students, teachers, and staff.

ShinyHunters leak site
Image courtesy of BleepingComputer

The criminals shared with BleepingComputer a list of 8,809 school districts, universities, and online education platforms whose Canvas instances they claim were impacted, with per‑institution record counts ranging from tens of thousands to several million.




What to do if your child’s Instructure/Canvas data was exposed

If you’ve been told that your child was affected by the Instructure breach, you may be wondering what you can do to protect them. Here are some practical steps you can take right away.

1. Check what the school and Instructure are saying

Start with the notification from the school or district and Instructure’s own updates to understand what data about your child was involved (for example: name, email address, student ID, or course information). Follow any specific steps they recommend for student accounts and keep an eye on follow‑up messages in case new information comes to light.

Before anything else, make sure the notification is real. If anything in the message looks suspicious, such as odd links, pressure to act immediately, or requests for extra data, don’t respond to it directly. Instead, go to the district’s or Instructure’s site and use the contact details listed there to verify.

2. Lock down your child’s school and learning accounts

If your child has a Canvas or related account, change that password immediately, especially if your school lets students or parents log in with a username and password instead of single sign‑on. If your child tends to reuse passwords (for example, using the same one for Canvas, email, and gaming accounts), change those other passwords as well.

Give every account its own strong, unique password and consider using a family password manager so you can create and store these without relying on memory. For younger children, you may want to manage these credentials yourself and keep a list of which education platforms they use.
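For readers who like to see the mechanics, the sketch below shows roughly what a password manager does when it generates a credential. It is a minimal, illustrative Python example (the account names are made up), not a suggestion to roll your own manager:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from a cryptographically secure random source,
    # unlike the predictable random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account, never reused across services.
for account in ("canvas", "school-email", "gaming"):  # hypothetical accounts
    print(account, generate_password())
```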

3. Turn on multi‑factor authentication where possible

Multi‑factor authentication (MFA) makes it much harder for someone to log into an account with just a password. If your school or district allows it on parent or student accounts (for example, a code sent by SMS, email, or generated in an authenticator app), turn it on and, ideally, have the codes go to a device or app you control.

Remind your child that security codes are like short‑term passwords. They should never share them with friends, teachers, or anyone claiming to be “IT support,” even if a message looks urgent or uses school branding.
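To see why a code really is a short-term password, here is a minimal sketch using the third-party pyotp library, which implements the TOTP standard most authenticator apps use. Your school’s MFA may work differently; this is purely illustrative:

```python
# pip install pyotp
import pyotp

secret = pyotp.random_base32()  # shared once between the service and the authenticator app
totp = pyotp.TOTP(secret)       # 6-digit code that rotates every 30 seconds by default

code = totp.now()
print("Current code:", code)
print("Valid now?", totp.verify(code))  # True within the current 30-second window

# Once the window passes, verify() returns False for the same code,
# which is why a leaked code is only briefly useful, and why it should
# never be shared with anyone claiming to be "IT support".
```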

4. Consider extra identity protection for minors

If the breach included very sensitive identifiers (such as national ID or Social Security numbers in some regions), ask both the school and the breached provider what protection is being offered for minors, such as credit monitoring or identity restoration services. In some countries, you can also place a credit freeze or similar block on a minor’s file to prevent new accounts being opened in their name.

Even if your child is too young to have a credit file today, it’s worth keeping a note of this incident so you remember to check their records once they are old enough.

5. Stay alert for follow‑on scams

Attackers like to reuse stolen data from education platforms to make phishing and scam messages more convincing, mentioning real school names, teachers, or courses. Be especially wary of emails and texts that claim to be from the school, district, or Instructure and that ask you to “confirm” login details, open unexpected attachments (like “new assignments”), or pay fees via unusual methods.

As a rule of thumb, avoid clicking links in unsolicited messages about the breach. Instead, open a new browser window and go to the official site or app as you normally would, then log in from there to check for messages.
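For the technically curious, that “check where the link really points” habit can be expressed in a few lines of Python. This is a simplified sketch: the OFFICIAL_HOSTS allow-list is hypothetical, and real phishing defenses involve far more than a domain check:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains you trust for school notifications.
OFFICIAL_HOSTS = {"instructure.com", "yourdistrict.edu"}

def looks_official(url: str) -> bool:
    """True only if the link's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in OFFICIAL_HOSTS)

print(looks_official("https://canvas.instructure.com/login"))            # True
print(looks_official("https://instructure.com.account-verify.example"))  # False: lookalike domain
```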





Roblox clamps down on chats and age checks as legal pressure builds

April 23, 2026, 04:57

Roblox has long faced criticism over child safety on its platform. Now it has started settling with state attorneys over the issue, and the total is climbing fast.

On April 21, Alabama Attorney General Steve Marshall announced a $12.2 million settlement with the child-focused online gaming platform. The State of West Virginia also settled for $11 million the same day. Those came a week after Nevada Attorney General Aaron Ford got the company to hand over $12 million.

Their problem with Roblox is clear from the settlement documents: they believe it hasn’t been adequately protecting children from predators on its platform.

What Roblox has to change

As part of Alabama’s settlement, Roblox must now run age checks on everyone via facial age estimation or a government ID starting May 1. That applies to both new and existing accounts. The company must now also monitor account behavior to catch users who lied about their age.

Adults and under-16s won’t be able to talk with each other at all unless they’re on a “trusted friend” list, added via QR code or a phone-contact import, and users who don’t undergo age verification can’t chat with anyone.

Communication involving any minor cannot be encrypted, so law enforcement can read it during investigations. West Virginia’s settlement also insists that Roblox alert minors the first time they enter a private chat, so children understand how to communicate safely.

Roblox already stopped people from chatting without age verification as of January this year, but under the new measures it will start restricting access to games for those who don’t undergo the process. Starting in June, the platform will split into three tiers: Roblox Kids, for ages 5–8, will forbid chat entirely and allow access only to games labeled ‘minimal’ or ‘mild’ on its maturity scale. Accounts that skip age verification will face the same restrictions. The other two tiers are Roblox Select for 9–15-year-olds and standard accounts for those 16 and up.

Plenty more lawsuits to come

Three settlements in eight days totaling more than $35 million must hurt, but it’s just the beginning. Texas, Florida, Louisiana, Iowa, Nebraska, Kentucky, and Tennessee are all pursuing similar claims: that Roblox exposed children to risk and then misled parents about its safeguards.

In February, LA County sued Roblox, accusing the platform of choosing profit over safety and leaving kids exposed to grooming and explicit content.

Roblox is also separately dealing with nearly 80 federal lawsuits filed by families in California alone. Australia’s eSafety Commissioner has issued legally enforceable transparency notices to Roblox and other tech companies, forcing them to detail what they’re doing to protect children. Those notices are backed by fines of A$825,000 a day (about US$590,783) for non-compliance.

Where the money will go

The $12.2 million from Alabama’s settlement funds school resource officers through the state’s Safe School Initiative. Nevada’s is earmarked for the Boys & Girls Club and “nondigital activities,” plus a law-enforcement liaison and an online-safety awareness campaign. West Virginia will invest $500,000 in safety education workshops for parents and children, create a $1.5 million three-year public safety campaign, and spend $2.4 million on a dedicated internet safety specialist for six years.

Stay alert

There’s a predictable rhythm to how big tech companies deal with state attorneys general. First comes pushback, then rhetoric about shared values, and then they start handing over cash.

It is a step forward that Roblox is agreeing to new safeguards, but questions remain.

In its own lawsuit against Roblox launched last month, Nebraska complained that the company’s existing age-check technology was inadequate. From the complaint:

“Rather than meaningfully protecting children, the system has repeatedly misclassified users’ ages, placing adults in child chat groups and minors in adult categories, while age-verified accounts for young children have already been traded on third-party marketplaces, undermining any purported safety benefits.”

What happens when the age-estimation AI guesses wrong on a 14-year-old who looks 17, or when a “trusted friend” QR code gets passed around a group chat somewhere it shouldn’t?

The company’s Persona age-check tool has also turned out to do more than check ages: researchers say they found an exposed frontend showing the system was also running facial recognition against watchlists.

Settlements address past concerns, but they don’t guarantee future safety. Parents must still do the work to ensure that they know what their kids are signing up for and who else they might be playing with.

For more information about the safety of Roblox and other services, check out our research: How Safe are Kids Using Social Media?




Rental platform unnecessarily collected the data of millions of Australians, privacy commissioner finds

2Apply’s over-collection of personal information adds to the power of the real estate industry in the competitive rental market, Carly Kind says

An online rental platform has been urged to stop collecting users’ personal information after the Australian privacy commissioner found the gathering of “excessive” data compounded the vulnerability of tenants amid the housing crisis.

RentTech platforms are increasingly used by real estate agents in Australia for people applying for rental properties to submit applications and supporting documentation. The Australian Housing and Urban Research Institute has identified 57 different rent platforms operating in Australia.

© Photograph: Cavan Images/Alamy


Blocking children from social media is a badly executed good idea

April 3, 2026, 11:37

While we can probably all agree that there is more than enough proof that social media is bad for the mental health of our children, the methods we are using to block or ban them from it seem to do more harm than good.

Across the world, lawmakers are tripping over each other to be seen “doing something” about kids and social media. Europe is slowly turning into a patchwork of age limits, curfews, and partial bans, with each country testing its own flavor of restriction while platforms try to update their systems just fast enough to stay compliant. Australia has gone even further with a nationwide ban for children under 16 that regulators now struggle to enforce at scale. The political message seems to be: social media is dangerous, and the state will step in where parents supposedly fail.

On paper, that sounds decisive. In practice, it is messy, easy to bypass, and it risks shifting the problem rather than solving it. Most of these measures depend on age‑verification systems that were never designed to handle this kind of pressure. Research looking at sign‑up flows for major platforms shows what every teenager already knows: it is not hard to lie about your date of birth, borrow an older friend’s details, or hop to a service that is just outside the current regulatory crosshairs. The result is a lot of political noise, a lot of extra friction for everyone, and only a marginal effect on the very group these rules are aimed at.

Worse, by treating all social media use by minors as equally harmful, bans erase important nuances. There is a world of difference between doom‑scrolling through algorithmically boosted gore reels at 2 AM and using a group chat to do homework, laugh at memes, or stay in touch with cousins abroad. Studies and expert reviews echo this. Social media can contribute to anxiety, depression, and poor sleep, but it can also provide support, connection, and a sense of belonging, especially for teens who feel isolated offline. A blunt ban cuts off both the toxic and the helpful parts in one sweep, which is not necessarily an improvement.

The tools we build to make bans enforceable come with their own side‑effects. Age‑verification schemes based on IDs, biometric analysis, or third‑party brokers may reduce some underage sign‑ups, but they also normalize handing over sensitive data just to speak or listen online. Legal and technical analysts warn that these systems introduce new privacy risks, expand surveillance, and can disproportionately impact vulnerable communities who rely on pseudonyms and anonymity for their safety. For children, the takeaway is that if they want to participate, they must accept invasive checks they barely understand or learn how to bypass them.

Which children easily do.

When you close one door without addressing the underlying behavior, kids will find another, as they have done throughout history. From chat rooms to instant messaging to early social networks, every attempt to lock children out has produced a mix of circumvention and secrecy. That secrecy is a problem in itself, because it pushes online life into hidden accounts, borrowed devices, or unregulated platforms where adults have even less visibility into what is going on. The more online activity that moves into that grey area of illegality, the harder it becomes to have honest conversations about the risks.

That, ultimately, is the core weakness of “ban first, ask questions later” policies. They are optimized for sending a strong signal to voters, not for building resilient habits in families. Politicians and platforms both have roles to play in making the online environment safer. Platforms can offer better design, safer defaults, more transparency, and proper enforcement against clear abuse. But none of that will replace what actually makes a difference for a child: an adult who understands the risks well enough to talk about them and set reasonable boundaries, and who is trusted enough that the child will come to them when something goes wrong. No child suddenly matures enough on their 13th or even 16th birthday to be able to fight off the pitfalls of extremely fine-tuned algorithms.

We should be honest about this. No regulator, filter, or age‑gate will ever know your child as well as you do. No law will be able to adjust itself on the fly when a teenager suddenly starts using a new app in a worrying way. Governments can and should tackle the worst excesses, and hold companies responsible so they stop pretending that maximized engagement is compatible with child safety. But in the end, the real responsibility for keeping children safe online cannot be outsourced to apps or regulation. It lies, unavoidably and daily, with the people in their lives.




Landmark verdicts put Meta’s “addiction machine” platforms on trial

March 26, 2026, 07:43

Meta faced two major legal setbacks this week as courts in New Mexico and California both found the company liable for harm to children.

A New Mexico jury just ordered Meta to pay $375 million for misleading parents about child safety on Instagram and Facebook. Jurors found the company violated consumer protection laws by claiming its platforms were safe while knowing they exposed children to danger.

A day later, a Los Angeles jury found Meta and Google liable in a landmark case over platform design. The case, brought by a young woman known as Kaley, accused both companies of getting her addicted to their products as a child, calling their platforms “addiction machines.”

New Mexico wins three-year lawsuit

New Mexico sued Meta in 2023 for violating its Unfair Practices Act, claiming the company’s algorithms were pushing sexual content to kids.

Prosecutors said this wasn’t random. They argued that Meta’s algorithms steered kids towards explicit content. The complaint said that Meta had:

“Proactively served and directed them to a stream of egregious, sexually explicit images through recommended users and posts—even where the child has expressed no interest in this content.”

The lawsuit also alleged that the platform made it easier for adults to contact and exploit minors, including grooming and solicitation.

During the seven-week trial, jurors saw internal memos and heard from several witnesses including Arturo Béjar, a software engineer who quit the company in 2021. He said a stranger propositioned his young daughter on Instagram.

Meta’s internal research presented in court showed that 16% of Instagram users saw unwanted nudity or sexual content in a single week. Documents said Meta knew about the harm.

When announcing the legal win, New Mexico Attorney General Raúl Torrez said:

“Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough.”

New Mexico prosecutors also found employee messages discussing how Mark Zuckerberg’s 2019 announcement of end-to-end encryption on Facebook Messenger would hamper the company’s ability to catch predators. Meta has said it plans to remove end-to-end encryption from Instagram private messages, a move linked to ongoing concerns about detecting abuse on the platform.

Meta’s lawyers said that the company was protecting kids and removing harmful content. The company offers Teen Account protections and parental alerts. Still, they acknowledged that harmful content can slip through. 

The $375m figure in the New Mexico case was calculated from thousands of individual violations, each counting separately toward the penalty, and Meta is set to appeal.

In the LA case, jurors recommended $3m in compensatory damages to Kaley, along with $3m in punitive damages. Both Meta and Google “acted with malice, oppression, or fraud” in their platform operations, the jury found.

Further legal challenges to come

On its own, even the $375m penalty is not especially financially damaging to Meta, which made just over $60bn in net income last year. But these two cases are the first of many forthcoming legal challenges the company will face.

The Kaley case was the first of several “bellwether” cases, which are trials that could set the pace for hundreds or thousands of similar cases. Over 2,400 cases making similar claims against Meta have been consolidated in California. The next bellwether case, RKC vs Meta, will begin in the summer.

Dozens of state attorneys general have also sued Meta, accusing it of deliberately designing its platform with addictive properties that harm young people.

The scrutiny of the algorithm in both cases might also make it more difficult in future for big tech companies to rely on Section 230. The 30-year-old legislation has long shielded tech platforms from liability for the actions of their users, but it didn’t protect Meta from claims over how it engineered its own platforms.

Beyond the potential for much larger penalties, these cases are important because state prosecutors have shown in court that Meta knew about harm to children on its platforms while telling parents everything was fine. That could be especially problematic for a company focused on growing (or at least maintaining) its engagement numbers.




Instagram flagged explicit messages to minors in 2018. Image-blurring arrived six years later

February 26, 2026, 07:34

Meta took six years to blur explicit images on Instagram, even though internal emails show executives were aware in 2018 that minors were receiving them, according to newly unsealed court documents.

In a deposition given last year, Adam Mosseri (now the head of Instagram) discusses an email thread with Guy Rosen, Meta’s VP and chief information security officer at the time. Rosen explained in the thread that adults could find and message minors on the platform. The messages could contain what Rosen called:

“tier 2 sexual harassment, like dudes sending dick pics to everyone”

up to…

“tier 1 cases where they end up doing horrible damage.”

The tool Meta now uses to address the problem is a client-side classifier that automatically blurs explicit images sent to teens in direct messages. But it wasn’t rolled out until roughly six years after that email exchange, in September 2024.

The deposition was unsealed last week and filed on February 20, 2026, in MDL No. 3047 (Case No. 4:22-md-03047-YGR), a multidistrict litigation case in Northern California in which hundreds of families allege that platforms including Instagram were designed to maximize screen time at the expense of young users’ well-being. The filing is available through the court’s PACER docket.

Internal records reveal teen safety concerns at Meta

The filing also surfaces internal survey data that Instagram had kept confidential. Nearly one in five respondents aged 13 to 15 reported encountering unwanted nudity or sexual imagery on the platform. A further 8.4% of them said they had seen someone harm themselves or threaten to do so on Instagram within the past week.

Instagram’s own Transparency Center didn’t disclose this at the time. Its child-endangerment section stated simply that the company was still working on the numbers. Mosseri also confirmed he had never publicly shared an internal estimate of around 200,000 daily child users experiencing inappropriate interactions, a figure referenced during questioning.

His defence, and Meta’s, rests on the claim that the company was not idle during those six years. Mosseri told the court that other protections were introduced in the interim, including restrictions on adults messaging teens they are not connected to, and systems designed to flag potentially risky accounts.

He pushed back on the idea that parents should have been explicitly warned about unmonitored direct messages, arguing that the risk exists on many messaging platforms. Meta spokesperson Liza Crenshaw pointed to Teen Accounts and parental controls, saying the company has been working on the problem for years.

Other allegations against Meta

The nudity filter is not the only safety measure under scrutiny. Court filings in related proceedings allege Meta explored making teen accounts private by default as early as 2019, then dropped the idea over concerns it would damage engagement metrics. That default-private switch did not arrive until September 2024.

Whistleblower Arturo Béjar, a former Meta engineering director, told the US Senate in 2023 that he had raised teen safety concerns directly with Mosseri and other executives. He acknowledged that the company researched these harms extensively, but questioned whether it acted with sufficient urgency.

An independent audit published in September 2025 found that of 47 teen safety features Instagram publicly promoted, fewer than one in five functioned as described.

Mosseri’s 2023 performance self-review, entered as an exhibit in the case, celebrated revenue at all-time highs and boasted about delivering results despite cutting his team by 13%. Teen well-being did not appear as a criterion in that review. He explained that well-being sat with a centralized Meta team, outside his direct remit.

In a courtroom asking whether Instagram’s leadership prioritised growth over safety, that distinction may not land the way he hopes.




Roblox gives predators “powerful tools” to target children, says LA County

February 24, 2026, 12:22

Los Angeles County has sued online gaming company Roblox, adding to a series of suits that accuse the virtual worlds platform of misleading parents into thinking it’s safe while leaving children exposed to predators and sexually explicit content. The February 19 filing makes LA County the first California government body to take the company to court over child safety.

Roblox claims over 151 million daily users, most of whom are kids. The company said it disputes the claims and will defend itself vigorously.

What the suit tells us about how predators operate

According to the complaint, Roblox violated California’s Unfair Competition Law and False Advertising Law. County Counsel Dawyn R. Harrison, who filed the lawsuit, said that the gaming platform has repeatedly exposed kids to sexually explicit material, grooming, and exploitation because it has chosen profit over safety.

“This is not about a minor lapse in safety,” Harrison said in a prepared press release. “It is about a company that gives pedophiles powerful tools to prey on innocent and unsuspecting children.”

Until November 2024, anyone could friend and message a child on the platform, the suit said. When Roblox changed those rules it was allegedly still possible for accounts registered with ages over 13 to message each other without having previously been connected, meaning that adults could still message teens who didn’t know them.

The suit also alleged that it’s easy for predators to masquerade as children on the site, because age has historically been self-reported with no enforcement of parental approval when kids sign up.

But Roblox’s approach to age verification changed last September, when the company announced plans to use age estimation on all users who wanted to use the platform’s communication features. It then introduced the third-party Persona system, which requires a facial age check to use chat features. But Persona itself has become a problem.

Researchers recently discovered an exposed frontend revealing the tool does far more than check ages, including running facial recognition against watchlists. It can also hold on to personal data including government IDs, device fingerprints, and biometric information for up to three years. Discord has already walked away from Persona, but Roblox hasn’t.

Even setting the vendor aside, the safeguards aren’t working as advertised. When Malwarebytes researchers created an account for a child under 13 on Roblox in December 2025, they found that the account could surface communities linked to cybercrime and fraud-related keywords.

The complaint contains many allegations about the type of behavior that has occurred on Roblox, including:

  • The simulated rape of a seven-year-old’s avatar in a digital playground environment
  • “Diddy” games that recreated some events from the imprisoned rap star’s parties
  • The creation of Jeffrey Epstein-themed accounts, and the operation of a game called “Escape to Epstein Island”
  • Virtual strip clubs where avatars can disrobe and give lap dances

The LA County complaint also cited a report from financial forensic research firm Hindenburg Research, published in October 2024. The short seller, which profits by betting against companies it considers vulnerable, said it had found multiple groups on the site trading child sexual abuse material and soliciting sexual favors. The report also alleged that Roblox was cutting safety spending even as problems mounted.

A former senior product designer allegedly told Hindenburg the trade-off was deliberate: “If you’re limiting users’ engagement, it’s hurting your metrics…in a lot of cases, the leadership doesn’t want that,” according to the lawsuit.

A cacophony of cases

This is far from the only case Roblox has had to defend. In 2022, the Social Media Victims Law Center filed suit against the company for allegedly touting child safety while allowing the exploitation of a young girl. The following year, multiple families filed suit against the gaming company for allegedly misleading them about content harmful to children. Last year, the mother of a 15-year-old boy from Texas sued Roblox after he died by suicide. The complaint alleged that he was groomed and subsequently blackmailed over nude pictures he’d been persuaded to send a predator on the site.

Another lawsuit filed against the company in San Mateo in February 2025 claimed that a 27-year-old predator reached a 13-year-old boy through the platform’s “whisper” messaging system. That case described the platform as “a digital and real-life nightmare for children.”

The California suit joins an expanding pile of government cases against Roblox. Louisiana sued the company in August 2025, followed by Kentucky (October 2025), Texas (November 2025), and Florida (December 2025). Georgia’s Attorney General is also investigating the company. And a collection of separate private suits against the company have been consolidated into a single multi-district litigation.

What parents can do

So, what can parents do? Interestingly, one potential answer came last year when the company’s CEO Dave Baszucki spoke with the BBC:

“My first message would be, if you’re not comfortable, don’t let your kids be on Roblox.”

If you do want to let your children use Roblox (or any other site), then close monitoring is important. Restrict friend requests and disable open chat to the extent that the platform allows. Anonymize your children’s profiles to avoid what one family claimed happened to them in an earlier lawsuit, in which they had to move across the country after a predator reportedly tracked down their child’s address via Roblox.

Child education is key. Tell your children not to reveal personal information and not to take conversations off-platform, because that’s where exploitation escalates. And keep the conversation going, not as a one-time lecture, but as a regular part of talking about their day.

For more information about child safety, check out Malwarebytes’ research on the topic, which also offers useful advice.

LA County is seeking civil penalties of up to $2,500 per violation per day, plus injunctive relief that could force structural changes to how the platform operates.




Child exploitation, grooming, and social media addiction claims put Meta on trial

February 12, 2026, 09:35

Meta is facing two trials over child safety allegations in California and New Mexico. The lawsuits are landmark cases, marking the first time that any such accusations have reached a jury. Although over 40 state attorneys general have filed suits about child safety issues with social media, none had gone to trial until now.

The New Mexico case, filed by Attorney General Raúl Torrez in December 2023, centers on child sexual exploitation. Torrez’s team built their evidence by posing as children online and documenting what happened next, in the form of sexual solicitations. The team brought the suit under New Mexico’s Unfair Trade Practices Act, a consumer protection statute that prosecutors argue sidesteps Section 230 protections.

The most damaging material in the trial, which is expected to run seven weeks, may be Meta’s own paperwork. Newly unsealed internal documents revealed that a company safety researcher had warned about the sheer scale of the problem, claiming that around half a million cases of child exploitation are happening daily. Torrez did not mince words about what he believes the platform has become, calling it an online marketplace for human trafficking. From the complaint:

“Meta’s platforms Facebook and Instagram are a breeding ground for predators who target children for human trafficking, the distribution of sexual images, grooming, and solicitation.”

The complaint’s emphasis on weak age verification touches on a broader issue regulators around the world are now grappling with: how platforms verify the age of their youngest users—and how easily those systems can be bypassed.

In our own research into children’s social media accounts, we found that creating underage profiles can be surprisingly straightforward. In some cases, minimal checks or self-declared birthdates were enough to access full accounts. We also identified loopholes that could allow children to encounter content they shouldn’t or make it easier for adults with bad intentions to find them.

The social media and VR giant has pushed back hard, calling the state’s investigation ethically compromised and accusing prosecutors of cherry-picking data. Defence attorney Kevin Huff argued that the company disclosed its risks rather than concealing them.

Yesterday, Stanford psychiatrist Dr. Anna Lembke told the court she believes Meta’s design features are addictive and that the company has been using the term “Problematic Internet Use” internally to avoid acknowledging addiction.

Meanwhile in Los Angeles, a separate bellwether case against Meta and Google opened on Monday. A 20-year-old woman identified only as KGM is at the center of the case. She alleges that YouTube and Instagram hooked her from childhood. She testified that she was watching YouTube at six, on Instagram by nine, and suffered from worsening depression and body dysmorphia. Her case, which TikTok and Snap settled before trial, is the first of more than 2,400 personal injury filings consolidated in the proceeding. Plaintiffs’ attorney Mark Lanier called it a case about:

“two of the richest corporations in history, who have engineered addiction in children’s brains.”

A litany of allegations

None of this appeared from nowhere. In 2021, whistleblower Frances Haugen leaked internal Facebook documents showing the company knew its platforms damaged teenage mental health. In 2023, Meta whistleblower Arturo Béjar testified before the Senate that the company ignored sexual endangerment of children.

Unredacted documents unsealed in the New Mexico case in early 2024 suggested something uglier still: that the company had actively marketed messaging platforms to children while suppressing safety features that weren’t considered profitable. Internal employees sounded alarms for years but executives reportedly chose growth, according to New Mexico AG Raúl Torrez. Last September, whistleblowers said that the company had ignored child sexual abuse in virtual reality environments.

Outside the courtroom, governments around the world are moving faster than the US Congress. Australia banned under 16s from social media in December 2025, becoming the first country to do so. France’s National Assembly followed, approving a ban on social media for under 15s in January by 130 votes to 21. Spain announced its own under 16 ban this month. By last count, at least 15 European governments were considering similar measures. Whether any of these bans will actually work is uncertain, particularly as young users openly discuss ways to bypass controls.

The United States, by contrast, has passed exactly one major federal child online safety law: the Children’s Online Privacy Protection Act (COPPA), in 1998. The Kids Online Safety Act (KOSA), introduced in 2022, passed the Senate 91-3 in mid-2024, then stalled in the House. It was reintroduced last May and has yet to reach a floor vote. States have tried to fill the gap, with 18 proposing similar legislation in 2025, but only one of those bills was enacted (in Nebraska). A comprehensive federal framework remains nowhere in sight.

On its most recent earnings call, Meta acknowledged it could face material financial losses this year. The pressure is no longer theoretical. The juries in Santa Fe and Los Angeles will now weigh whether the company’s design choices and safety measures crossed legal lines.

If you want to understand how social media platforms can expose children to harmful content—and what parents can realistically do about it—check out our research project on social media safety.




How safe are kids using social media? We did the groundwork

February 10, 2026, 10:50

When researchers created an account for a child under 13 on Roblox, they expected heavy guardrails. Instead, they found that the platform’s search features still allowed kids to discover communities linked to fraud and other illicit activity.

The discoveries spotlight the question that lawmakers around the world are circling: how do you keep kids safe online?

Australia has already acted, while the UK, France, and Canada are actively debating tighter rules around children’s use of social media. This month, US Senator Ted Cruz reintroduced a bill to restrict children’s social media use while also chairing a Congressional hearing on kids’ online safety.

Lawmakers have said these efforts are to keep kids safe online. But as the regulatory tide rises, we wanted to understand what digital safety for children actually looks like in practice.

So, we asked a specialist research team to explore how well a dozen mainstream tech providers are protecting children aged under 13 online.

We found that most services work well when kids use the accounts and settings designed for them. But when children are curious, use the wrong account type, or step outside those boundaries, things can go sideways quickly.

Over several weeks in December, the research team explored how platforms from Discord to YouTube handled children’s online use. They relied on standard user behavior rather than exploits or technical tricks to reflect what a child could realistically encounter.

The researchers focused on how platforms catered to kids through specific account types, how age restrictions were enforced in practice, and whether sensitive content was discoverable through normal browsing or search.

What emerged was a consistent pattern: curious kids who poke around a little, or who end up using the wrong account type, can run into inappropriate content with surprisingly little effort.

A detailed breakdown of the platforms tested, account types used, and where sensitive content was discovered appears in the research scope and methodology section at the end of this article.

When kids’ accounts are opt-in

One thing the team tried was to simply access the generic public version of a site rather than the kid-protected area.

This was a particular problem with YouTube. The company runs a kid-specific service called YouTube Kids, which the researchers said is effectively sanitized of inappropriate content (it sounds like things have changed since 2022).

The issue is that YouTube’s regular public site isn’t sanitized, and even though the company says you must be at least 13 to use the service unless ‘enabled’ by a parent, in reality anyone can access it. From the report:

“Some of the content will require signing in (for age verification) prior the viewing, but the minor can access the streaming service as a ‘Guest’ user without logging in, bypassing any filtering that would otherwise apply to a registered child account.”

That opens up a range of inappropriate material, from “how-to” fraud channels through to scenes of semi-nudity and sexually suggestive material, the researchers said. Horrifically, they even found scenes of human execution on the public site. The researchers concluded:

“The absence of a registration barrier on the public platform renders the ‘YouTube Kids’ protection opt-in rather than mandatory.”

When adult accounts are easy to fake

Another worry is that even when accounts are age-gated, enterprising minors can easily get around them. While most platforms require users to be 13+, a self-declaration is often enough. All that remains is for the child to register an email address with a service that doesn’t require age verification.

This “double blind” vulnerability is a big problem. Kids are good at creating accounts. The tech industry has taught them to be, because they need them for most things they touch online, from streaming to school.

When they do get past the age gates, curious kids can quickly get to inappropriate material. Researchers found unmoderated nudity and explicit material on the social network Discord, along with TikTok content providing credit card fraud and identity theft tutorials. A little searching on the streaming site Twitch surfaced ads for escort services.

This points to a trade-off between privacy and age verification. While stricter age verification could close some of these gaps, it requires collecting more personal data, including IDs or biometric information. That creates privacy risks of its own, especially for children. That’s why most platforms rely on self-declared age, but the research shows how easily that can be bypassed.

When kids’ accounts let toxic content through

Cracks in the moderation foundations let risky content through. Roblox, the website and app where users build their own content, filters chats for child accounts. However, it also features “Communities,” which are groups designed for socializing and discovery.

These groups are easily searchable, and some use names and terminology commonly linked to criminal activities, including fraud and identity theft. One, called “Fullz,” uses a term widely understood to refer to stolen personal information, and “new clothes” is often used to refer to a new batch of stolen payment card data. The visible community may serve as a gateway, while the actual coordination of illicit activity or data trading occurs via “inner chatter” between the community members.

This kind of search wasn’t just an issue for Roblox, warned the team. It found Instagram profiles promoting financial fraud and crypto schemes, even from a restricted teen account.

Some sites passed the team’s tests admirably, though. The researchers simulated underage users who’d bypassed age verification but were unable to find any harmful content on Minecraft, Snapchat, Spotify, or Fortnite. Fortnite’s approach is especially strict, disabling chat and purchases on accounts for kids under 13 until a parent verifies via email, with additional verification options using a Social Security number or credit card. Kids can still play, but they’re muted.

What parents can do

There is no platform that can catch everything, especially when kids are curious. That makes parental involvement the most important layer of protection.

One reason this matters is a related risk worth acknowledging: adults attempting to reach children through social platforms. Even after Instagram took steps to limit contact between adult and child accounts, parents still discovered loopholes. This isn’t a failure of one platform so much as a reminder that no set of controls can replace awareness and involvement.

Mark Beare, GM of Consumer at Malwarebytes, says:

“Parents are navigating a fast-moving digital world where offline consequences are quickly felt, be it spoofed accounts, deepfake content or lost funds. Safeguards exist and are encouraged, but children can still be exposed to harmful content.”

This doesn’t mean banning children from the internet. As the EFF points out, many minors use online services productively with the support and supervision of their parents. But it does mean being intentional about how accounts are set up, how children interact with others online, and how comfortable they feel asking for help.

Accounts and settings

  • Use child or teen accounts where available, and avoid defaulting to adult accounts.
  • Keep friends and followers lists set to private.
  • Avoid using real names, birthdays, or other identifying details unless they are strictly required.
  • Avoid facial recognition features for children’s accounts.
  • For teens, be aware of “spam” or secondary accounts they’ve set up that may have looser settings.

Social behavior

  • Talk to your child about who they interact with online and what kinds of conversations are appropriate.
  • Warn them about strangers in comments, group chats, and direct messages.
  • Encourage them to leave spaces that make them uncomfortable, even if they didn’t do anything wrong.
  • Remind them that not everyone online is who they claim to be.

Trust and communication

  • Keep conversations about online activity open and ongoing, not one-off warnings.
  • Make it clear that your child can come to you if something goes wrong without fear of punishment or blame.
  • Involve other trusted adults, such as parents, teachers, or caregivers, so kids aren’t navigating online spaces alone.

This kind of long-term involvement helps children make better decisions over time. It also reduces the risk that mistakes made today can follow them into the future, when personal information, images, or conversations could be reused in ways they never intended.


Research findings, scope and methodology 

This research examined how children under the age of 13 may be exposed to sensitive content when browsing mainstream media and gaming services. 

For this study, a “kid” was defined as an individual under 13, in line with the Children’s Online Privacy Protection Act (COPPA). Research was conducted between December 1 and December 17, 2025, using US-based accounts. 

The research relied exclusively on standard user behavior and passive observation. No exploits, hacks, or manipulative techniques were used to force access to data or content. 

Researchers tested a range of account types depending on what each platform offered, including dedicated child accounts, teen or restricted accounts, adult accounts created through age self-declaration, and, where applicable, public or guest access without registration. 

The study assessed how platforms enforced age requirements, how easy it was to misrepresent age during onboarding, and whether sensitive or illicit content could be discovered through normal browsing, searching, or exploration. 

Across all platforms tested, default algorithmic content and advertisements were initially benign and policy-compliant. Where sensitive content was found, it was accessed through intentional, curiosity-driven behavior rather than passive recommendations. No proactive outreach from other users was observed during the research period. 

The table below summarizes the platforms tested, the account types used, and whether sensitive content was discoverable during testing. 

| Platform | Account type tested | Dedicated kid/teen account | Age gate easy to bypass | Illicit content discovered | Notes |
|---|---|---|---|---|---|
| YouTube (public) | No registration (guest) | Yes (YouTube Kids) | N/A | Yes | Public YouTube allowed access to scam/fraud content and violent footage without sign-in. Age-restricted videos required login, but much content did not. |
| YouTube Kids | Kid account | Yes | N/A | No | Separate app with its own algorithmic wall. No harmful content surfaced. |
| Roblox | All-age account (13+) | No | Not required | Yes | Child accounts could search for and find communities linked to cybercrime and fraud-related keywords. |
| Instagram | Teen account (13–17) | No | Not required | Yes | Restricted accounts still surfaced profiles promoting fraud and cryptocurrency schemes via search. |
| TikTok | Younger user account (13+) | Yes | Not required | No | View-only experience with no free search. No harmful content surfaced. |
| TikTok | Adult account | No | Yes | Yes | Search surfaced credit card fraud–related profiles and tutorials after age gate bypass. |
| Discord | Adult account | No | Yes | Yes | Public servers surfaced explicit adult content when searched directly. No proactive contact observed. |
| Twitch | Adult account | No | Yes | Yes | Discovered escort service promotions and adult content, some behind paywalls. |
| Fortnite | Cabined (restricted) account (13+) | Yes | Hard to bypass | No | Chat and purchases disabled until parent verification. No harmful content found. |
| Snapchat | Adult account | No | Yes | No | No sensitive content surfaced during testing. |
| Spotify | Adult account | Yes | Yes | No | Explicit lyrics labeled. No harmful content found. |
| Messenger Kids | Kid account | Yes | Not required | No | Fully parent-controlled environment. No search or external contacts. |

Screenshots from the research

  • List of Roblox communities with cybercrime-oriented keywords
  • Roblox community that offers chat without verification
  • Roblox community with cybercrime-oriented keywords
  • Graphic content on publicly accessible YouTube
  • Credit card fraud content on publicly accessible YouTube
  • Active escort page on Twitch
  • Stolen credit cards for sale on an Instagram teen account
  • Carding for beginners content on an Instagram teen account
  • Crypto investment scheme on an Instagram teen account
  • Carding for beginners content on a TikTok adult account, accessed by kids with a fake date of birth


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A Victorian schoolteacher was applying for ‘heaps of rentals’ online – then someone accessed his bank account

Michael suspects personal information he submitted to rent application platforms was leaked online. And analysis shows millions of documents may also be at risk

Michael* has spent the past two months trying to get his digital identity back.

The 47-year-old Victorian schoolteacher was in the process of moving to a new town and applying for rental properties online. Around this time – and unbeknown to him – his mobile phone number was transferred to someone else.



Google will pay $8.25m to settle child data-tracking allegations

20 January 2026, 08:40

Google has settled yet another class-action lawsuit accusing it of collecting children’s data and using it to target them with advertising. The tech giant will pay $8.25 million to address allegations that it tracked data on apps specifically designated for kids.

AdMob’s mobile data collection

This settlement stems from accusations that apps provided under Google’s “Designed for Families” programme, which was meant to help parents find safe apps, tracked children. Under the terms of this programme, developers were supposed to self-certify COPPA compliance and use advertising SDKs that disabled behavioural tracking. However, some did not, instead embedding software created by AdMob, a Google-owned mobile advertising company.
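
For contrast, the Google Mobile Ads SDK does expose a switch that disables behavioural tracking for child audiences. Here is a minimal sketch of what a compliant Android integration might look like, assuming the current Kotlin SDK; this is illustrative, not the code from the apps named in the lawsuit.

```kotlin
import com.google.android.gms.ads.MobileAds
import com.google.android.gms.ads.RequestConfiguration

// Tags every subsequent ad request as child-directed, which tells
// AdMob to disable behavioural tracking for this app, as COPPA
// expects for under-13 audiences.
fun tagAppAsChildDirected() {
    val config = MobileAds.getRequestConfiguration()
        .toBuilder()
        .setTagForChildDirectedTreatment(
            RequestConfiguration.TAG_FOR_CHILD_DIRECTED_TREATMENT_TRUE
        )
        .build()
    MobileAds.setRequestConfiguration(config)
}
```

In other words, the complaint describes “Designed for Families” apps that shipped without protections like this in effect.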

When kids used these apps, which included games, AdMob collected IP addresses, device identifiers, usage data, and the child’s location to within five metres, transmitting it all to Google without parental consent, according to the class action lawsuit. The AdMob software could then use that information to display targeted ads to users.

This kind of activity is exactly what the Children’s Online Privacy Protection Act (COPPA) was created to stop. The law requires operators of child-directed services to obtain verifiable parental consent before collecting personal information from children under 13. That includes cookies and other identifiers, which are the core tools advertisers use to track and target people.

The families filing the lawsuit alleged that Google knew this was going on:

“Google and AdMob knew at the time that their actions were resulting in the exfiltration of data from millions of children under thirteen but engaged in this illicit conduct to earn billions of dollars in advertising revenue.”

Security researchers had alerted Google to the issue in 2018, according to the filing.

YouTube settlement approved

What’s most disappointing is that these privacy issues keep happening. This news arrives at the same time that a judge approved a settlement on another child privacy case involving Google’s use of children’s data on YouTube. This case dates back to October 2019, the same year that Google and YouTube paid a whopping $170m fine for violating COPPA.

Families in this class action suit alleged that YouTube used cookies and persistent identifiers on child-directed channels, collecting data including IP addresses, geolocation data, and device serial numbers. This is the same thing that it does for adults across the web, but COPPA protects kids under 13 from such activities, as do some state laws.

According to the complaint, YouTube collected this information between 2013 and 2020 and used it for behavioural advertising. This form of advertising infers people’s interests from their identifiers, and it is more lucrative than contextual advertising, which focuses only on a channel’s content.
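
The distinction is easy to picture in code. A purely illustrative Kotlin sketch (real ad-serving logic is nothing like this simple): contextual selection needs only the channel’s topic, while behavioural selection needs a profile inferred from persistent identifiers, and building that profile for under-13s is exactly what COPPA restricts.

```kotlin
// Contextual: keyed off what the channel is about. No user data needed.
fun pickContextualAd(channelTopic: String): String =
    "ad related to $channelTopic"

// Behavioural: keyed off interests inferred from a persistent
// identifier. Assembling this profile is the regulated step.
fun pickBehaviouralAd(interestScores: Map<String, Double>): String {
    val topInterest = interestScores.maxByOrNull { it.value }?.key ?: "generic"
    return "ad targeted at $topInterest"
}
```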

The case said that various channel owners opted into behavioural advertising, prompting Google to collect this personal information. No parental consent was obtained, the plaintiffs alleged. Channel owners named in the suit included Cartoon Network, Hasbro, Mattel, and DreamWorks Animation.

Under the YouTube settlement (which was agreed in August and recently approved by a judge), families can file claims through YouTubePrivacySettlement.com, although the deadline is this Wednesday. Eligible families are likely to receive $20–$30 each after attorneys’ fees and administration costs, assuming 1–2% of them submit claims.

COPPA is evolving

Last year, the FTC amended its COPPA Rule to introduce mandatory opt-in consent for targeted advertising to children, separate from general data-collection consent.

The amendments expand the definition of personal information to include biometric data and government-issued ID information. They also let the FTC use a site operator’s marketing materials to determine whether a site targets children.

Site owners must also now tell parents who they’ll share information with, and the amendments stop operators from keeping children’s personal information forever. If this all sounds like measures that should have been in place to protect children online from the get-go, we agree with you. In any case, companies have until this April to comply with the new rules.
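
One way to picture the new consent structure is as two independent flags rather than one. A minimal, hypothetical Kotlin model (the names are ours, not the FTC’s):

```kotlin
// Hypothetical model of the amended rule: consent to targeted
// advertising is a separate opt-in, not bundled into general consent.
data class ParentalConsent(
    val generalDataCollection: Boolean = false, // service data only
    val targetedAdvertising: Boolean = false    // requires its own opt-in
)

fun mayServeBehaviouralAds(consent: ParentalConsent): Boolean =
    // General consent alone is no longer enough.
    consent.generalDataCollection && consent.targetedAdvertising
```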

Will the COPPA rules make a difference? It’s difficult to say, given the stream of privacy cases involving Google LLC (which owns YouTube and AdMob, among others). When viewed against Alphabet’s overall earnings, an $8.25m penalty risks being seen as a routine business expense rather than a meaningful deterrent.




Lego’s Smart Bricks explained: what they do, and what they don’t

8 January 2026, 10:35

Lego just made what it claims is its most important product release since it introduced minifigures in 1978. No, it’s not yet another brand franchise. It’s a computer in a brick.

Called the Smart Brick, it’s part of a broader system called Smart Play that Lego hopes will revolutionize your child’s interaction with Lego.

These aren’t your grandma’s Lego bricks. The 2×4 techno-brick houses a custom ASIC chip that Lego says is smaller than a single Lego stud, measuring about 4.1mm. Inside are accelerometers, light and sound sensors, an LED array, and a miniature speaker with an onboard synthesizer that generates sound effects in real time, rather than just playing pre-recorded clips.

How the pieces talk to each other

The bricks charge wirelessly on a dedicated pad and contain batteries that Lego says can last for years. They also communicate with each other to trigger actions, such as interactive sound effects.

This is where the other Smart Play components come in: Smart Tags and Smart Minifigures. The 2×2 stud-less Smart Tags contain unique digital IDs that tell bricks how to behave. A helicopter tag, for example, might trigger propeller sounds.

There’s also a Neighbor Position Measurement system that detects brick proximity and orientation. So a brick might do different things as it gets closer to a Smart Tag or Smart Minifigure, for example.
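
Lego hasn’t published BrickNet’s internals, but the behaviour it describes maps onto a simple lookup-plus-proximity model. A purely illustrative Kotlin sketch (every name here is invented, not Lego’s API):

```kotlin
// Invented model: a tag's ID selects a sound theme, and measured
// distance scales how the effect plays, echoing the proximity-driven
// behaviour of the Neighbor Position Measurement system.
enum class TagTheme { HELICOPTER, RACE_CAR, SPACESHIP }

data class SmartTag(val id: Int, val theme: TagTheme)

fun soundFor(tag: SmartTag, distanceCm: Double): String {
    val intensity = if (distanceCm < 10.0) "loud" else "soft"
    return when (tag.theme) {
        TagTheme.HELICOPTER -> "$intensity propeller whirr"
        TagTheme.RACE_CAR -> "$intensity engine rev"
        TagTheme.SPACESHIP -> "$intensity thruster hum"
    }
}
```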

The privacy implications of Smart Bricks

Any time parents hear about toys communicating with other devices, they’re right to be nervous. They’ve had to contend with toys that give up kids’ sensitive personal data and allegedly have the potential to become listening devices for surveillance.

However, Lego says its proprietary Bluetooth-based protocol, called BrickNet, comes with encryption and built-in privacy controls.

One clear upside is that the system doesn’t need an internet connection for these devices to work, and there are no screens or companion apps involved either. For parents weary of reading about children’s apps quietly harvesting data, that alone will come as a relief.

Lego also makes specific privacy assurances. Yes, there’s a microphone in the Smart Brick, but no, it doesn’t record sound (it’s just a sensor), the company says. There are no cameras either.

Perhaps the biggest relief of all, though, is that there’s no AI in this brick.

At a time when “AI-powered” is being sprinkled over everything from washing machines to toilets, skipping AI may be the smartest design decision here. AI-driven toys come with their own risks, especially when children don’t get a meaningful choice about how that technology behaves once it’s out of the box.

In the past, children have been subjected to sexual content from AI-powered teddy bears. Against that backdrop, Lego’s restraint feels deliberate, and welcome.

Are these the bricks you’re looking for?

Will the world take to Smart Bricks? Probably.

Should it? The best response comes from my seven-year-old, scoffing,

“Kids can make enough annoying noises themselves.”

We won’t have long to wait to find out. Lego announced Lucasfilm as its first Smart Play partner when it unveiled the system at CES 2026 in Las Vegas this week, and pre-orders open on January 9. The initial lineup includes three kits: TIE Fighters, X-Wings, and A-Wings, complete with associated scenery.

Expect lots of engine, laser, and light sabre sounds from those rigs—and perhaps a lack of adorable sound effects from your kids when the blocks start doing the work. That makes us a little sad.

More optimistically, perhaps there are opportunities for creative play, such as devices that spin, flip, and light up based on their communications with other bricks. That could turn this into more of an experiment in basic circuitry and interaction than a simple noise-making device. One of the best things about watching kids play is how far outside the box they think.

Whatever your view on Lego’s latest development, it doesn’t seem like it’ll let people tailor advertising to your kids, whisper atrocities at them from afar, or hack your home network. That, at the very least, is a win.


