A Kid With a Fake Mustache Tricked an Online Age-Verification Tool
Meta patched two WhatsApp flaws affecting iOS, Android, and Windows users, including bugs tied to risky files, links, and Reels previews.
The post New WhatsApp Flaws Could Affect Billions of Users After Meta Security Patch appeared first on TechRepublic.
Hackers abused Google AppSheet to send Meta phishing emails, compromising 30,000 Facebook business accounts across 50 countries.
The post Google AppSheet Abuse Helped Phish 30,000 Facebook Accounts appeared first on TechRepublic.
The challenges of conducting open-source research in China are well-documented. Consistently named one of the most digitally oppressive countries in the world, China blocks some of the world’s largest social media platforms, such as Facebook, Google, and YouTube. Those that are still accessible are mostly Chinese-owned, strictly regulated and monitored in real time by AI systems as well as tens of thousands of “internet police”.
But despite these strict controls, Chinese apps – which boast more than a billion estimated users – remain an information goldmine for investigative journalists covering stories both within and outside China.

Since most foreign sites are banned, Chinese platforms are the largest resource available to journalists and researchers interested in what’s going on in the world’s second-most populous country. Even when a topic is being censored, patterns in the censorship can themselves serve as investigative leads: a 2020 BuzzFeed News investigation, for example, mapped out detention camps in Xinjiang by examining areas that had been blanked out on China’s Baidu Maps.
With millions of Chinese people living overseas, social media activity by members of the diaspora can also turn into global stories.
Serial rapist Zou Zhenhao, a Chinese PhD student, was jailed in London last year after one of his victims posted a warning on Xiaohongshu, also known as Little Red Book or Rednote, an app popular with young Chinese women living abroad. Another woman Zou had raped reached out to the original poster, who put her in touch with the police – leading to the conviction of a man described by police as possibly one of the worst sexual predators in British history.
Founded in 2013 as a Hong Kong shopping guide, Xiaohongshu has evolved into a lifestyle and e-commerce platform that has been compared with Instagram, Pinterest and Amazon. Last year, it reported about 300 million monthly active users, rivalling some of China’s largest social media platforms.

The app’s 600 million daily searches by the end of 2024 also accounted for half of market leader Baidu’s search volume, demonstrating that it is emerging as a critical search and discovery engine, not just a social platform.
Although primarily a Chinese-language app, Xiaohongshu gained attention in the English-speaking world last year, when millions of American TikTok users flocked to the platform in anticipation of a TikTok ban under US President Donald Trump.
Responding to the surge of international users – sparked by the #TikTokRefugees trend – Xiaohongshu rolled out an AI-powered translation feature, making the app more accessible to non-Chinese audiences. This also means that journalists without Chinese language skills can more easily communicate on and navigate the platform.
Despite its growing popularity both within and outside China, the app is relatively new and underexplored compared to more well-established platforms such as Weibo.
This guide aims to provide a starting point for those looking to explore Xiaohongshu for open-source investigations, including an overview of its main user demographics, potential topics to explore and strategic search methods specific to the app.
According to Xiaohongshu’s official data, the platform’s demographic profile is mainly young, female and urban. As of 2024, 70 percent of its users were women, with half of all users belonging to Gen Z and living in China’s largest cities.
As previously mentioned, the app has also gained popularity with the Chinese diaspora. Many Chinese nationals living abroad use it as a search engine for local information, posting and searching for content related to their daily lives, from restaurant recommendations and apartment hunting to navigating foreign bureaucracies and finding community resources.
This demographic profile makes Xiaohongshu particularly well-suited for investigating stories about consumer fraud and urban livability issues. For example, Chinese outlets like Jiemian have used Xiaohongshu posts to expose the grey-market ecosystem of paid reviews and fake endorsements tied to the platform’s e-commerce model, while in 2022, International Financial News traced a mother-and-baby store scam that defrauded over 400 parents back to product recommendation posts on the platform.
Given its predominantly female user base, Xiaohongshu has also evolved into one of China’s most important spaces for feminist discourse and women’s issues. Academic researchers have used content on the platform to analyse local discussions on menstrual shaming, sexual harassment, and the controversial “divorce cooling-off period” introduced in 2021. As Rest of World reported, women have increasingly congregated on Xiaohongshu, where they outnumber male users and have found ways to trick the app’s recommendation algorithm so their posts are shown mostly to other women.
Political content and current affairs about China are largely absent from the app – a result of both active censorship and platform design.
All Chinese social media platforms, including Xiaohongshu, operate under strict content moderation requirements from the Cyberspace Administration of China. A leaked 143-page internal document published by China Digital Times in 2022 revealed how Xiaohongshu censors respond to government directives in “real-time”, blocking content related to politically sensitive topics such as criticism of the Chinese Communist Party, labour strikes and student suicides. Xiaohongshu’s commercial focus also makes it less likely that these topics would be discussed on the platform: as Rest of World reported, the platform functions less like Weibo – a public square for current events – and more like “a giant mall, where shoppers tell each other what to buy”.
Coverage of international affairs is also tightly controlled: only state-owned or state-controlled news organisations can obtain licences to publish original news content. However, content about life abroad, particularly stories about the cost of living, healthcare, or social problems in Western countries, circulates more freely on platforms including Xiaohongshu, providing journalists with insight into how Chinese diaspora communities engage with local political systems.
For example, when the 2025 Miss Finland was accused of making anti-Asian gestures, searching for “芬兰小姐” (Miss Finland) and “投诉” (complaint) on Xiaohongshu revealed a trove of collective action: users shared different complaint pathways, posted templates for filing reports, and documented various outcomes from their complaints.
For such large-scale public events, Xiaohongshu can be both an organising platform and a rich source for tracking how diaspora communities coordinate responses to discrimination, providing journalists with insight into grassroots activism and transnational advocacy networks.
Xiaohongshu is available for download on both Apple’s App Store and Google Play worldwide, or can be accessed via a web browser. In international app stores, the app appears under the name “RedNote,” but this is the same application as Xiaohongshu – content and accounts are shared across both. The key difference is that RedNote users who register with overseas phone numbers are automatically tagged as international users, which affects the content the algorithm surfaces to them.
For users who download the app outside mainland China, Xiaohongshu automatically detects the device language and location. Upon first login, international users are prompted with an option to automatically translate all content into English (or their device language). If enabled, posts and comments will display with translations by default, and the algorithm will prioritise English-language content and posts created by or for international users, such as expat influencers.
For researchers and journalists seeking to observe the platform as Chinese users experience it, consider disabling automatic translation. This allows you to see content as it natively appears and helps you distinguish between posts created for international audiences versus those created for domestic users – a distinction that matters when assessing how representative your sample is for the relevant topic.
The default home feed, or the “Explore” tab, is where the algorithm surfaces content based on your engagement history, location and user profile. The feed uses a grid layout displaying post thumbnails with titles and like counts.
On the top right corner of the screen, the search bar also allows keyword searches across posts, users and topics. Results can be filtered by content type (e.g. notes, videos, users or products) and sorted by relevance or recency.

Xiaohongshu’s search function is relatively basic. You can search by keywords and filter by time and location, but the options are general: time filters include “past day,” “past week,” or “past six months,” while location filters offer “same city” or “nearby”.
For example, searching “Canada” returns posts tagged with that keyword, which you can then sort by recency or proximity.

For breaking news events, try searching location names or names of individuals involved in the incident, filtering for the most recent posts to capture real-time reactions and on-the-ground accounts before they’re censored or deleted.
Xiaohongshu primarily uses algorithms to curate and push content through personalised feeds. For journalists using Xiaohongshu for investigative purposes, it can be useful to actively search for topics of interest to train your algorithm – the more you search and engage with specific content, the more relevant posts the algorithm will surface to you.
However, if you are researching the platform itself – studying what content Xiaohongshu promotes, how censorship operates, or what narratives dominate – you may want to start from a clean slate. In that case, consider periodically turning off personalised recommendations (Settings → Privacy Settings → Personalisation Options), clearing your browsing history, clearing cached data, or using a fresh account to observe what the platform shows to a “neutral” user.
During the influx of “TikTok refugees” in January 2025, Xiaohongshu launched a translation feature for users outside mainland China, enabling the automatic translation of comments and posts.
However, this does not translate search queries. The platform’s search engine is still optimised for Chinese, though there is a “prioritise English” filter for overseas users, and searching in English will return some results.

But the language you search in shapes far more than just your results – it determines which version of the platform you see. When you search in English or use an international account, the algorithm treats you as a foreign user and surfaces content accordingly: influencers explaining why they love living in China, comparisons showing Chinese life favourably against the West.
This isn’t a neutral cross-section of the platform – it is a curated bubble. To access what Chinese users actually discuss among themselves, it would be more effective to search in simplified Chinese and, ideally, use a China-registered account if you have access to one. If you don’t read Chinese, you can also consider using a translation tool (Google Translate, DeepL, or an AI assistant) to convert your search terms into simplified Chinese before entering them.
Despite such tools and the in-app translation feature, it is always useful when researching using Chinese platforms to work with a native speaker familiar with the local context. They can flag when an innocuous-seeming term actually carries hidden meaning, and help identify coded conversations about a censored topic.
On Xiaohongshu specifically, this coded language extends beyond political topics to include anything the platform’s algorithm might flag as “vulgar” or promotional. For example, users substitute fruits and neutral terms for body parts or sexual content to avoid being flagged as inappropriate – the peach emoji for buttocks, or 炒菜 (“cooking”) for explicit material. They may also use abbreviations and emojis for commercial terms to evade anti-marketing filters, such as “vx” (the abbreviation of how WeChat is pronounced in Chinese) or “➕绿” (“plus green”, apparently referring to WeChat’s green logo) for WeChat, or “米” (rice) or the moneybag emoji for money.
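As an illustration, a researcher could keep a small lexicon of these euphemisms and use it to expand search queries so that coded variants are searched alongside the literal term. The sketch below is minimal and hypothetical: it uses only the examples given above, and any real lexicon would be incomplete and would shift over time as platform moderation evolves.

```python
# A minimal sketch: mapping coded Xiaohongshu terms (drawn from the
# examples above) to their plain meanings, then expanding a search
# query so euphemistic variants are searched alongside the literal term.

CODED_TERMS = {
    # abbreviations/emoji used to dodge anti-marketing filters
    "WeChat": ["vx", "➕绿"],
    # "rice" and the moneybag emoji stand in for money
    "money": ["米", "💰"],
}

def expand_query(term: str) -> list[str]:
    """Return the literal term plus any known coded variants."""
    return [term] + CODED_TERMS.get(term, [])

print(expand_query("WeChat"))  # ['WeChat', 'vx', '➕绿']
```

In practice a list like this would be built and maintained with a native speaker, since the substitutions are deliberately opaque to outsiders.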
For more sophisticated searching, consider using third-party marketing analytics tools like Xinhong and Qiangu, which can show trending topics, popular posts and engagement metrics, as well as identify key content creators posting about specific subjects.
For example, when you search for “Canada” in Chinese on Xinhong, it also shows trending related searches such as “加拿大总理” (Canadian Prime Minister). Clicking through these suggestions leads to recent posts – for example, posts about Mark Carney’s latest statements at Davos – along with user comments and reactions.

While these tools are designed for marketers, they provide journalists with valuable capabilities: tracking how topics evolve, identifying influential voices in specific communities, and discovering related hashtags or discussions that might not surface through basic platform search. These tools often require paid subscriptions but can significantly enhance research efficiency for long-term investigations.
Another valuable feature is Xiaohongshu’s group chat function, where users gather around shared keywords and topics—from city-specific communities to niche interests. These groups are often highly active and provide access to candid community discussions that don’t appear in public posts. To find relevant groups, go to Messages → Group Square, where you can browse categories or search by keyword and request to join.
Monitoring active group chats related to relevant topics, whether that’s a specific city, industry, or issue, can help journalists and researchers stay updated on emerging issues and detect potential story leads before they become widely visible on public feeds.
Chinese social media content can disappear quickly and without warning due to censorship, making immediate preservation critical.
Always take two preservation steps immediately upon discovering relevant content:
First, screenshot the entire post, including the URL, timestamp, username, like/comment counts, and location tags. These metrics establish context and authenticity. Use tools that capture full-page screenshots rather than just visible portions, as posts can be long and comments extensive. Second, archive the web page using services like archive.today or Wayback Machine. Note that these services capture only static content – comments and engagement metrics may not be fully preserved and should be screenshotted separately.
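For the second step, the Wayback Machine exposes a public “Save Page Now” path at https://web.archive.org/save/ followed by the page URL; opening that URL in a browser or with an HTTP client asks the archive to capture the page (archive.today works similarly via its submission form). A minimal sketch, with a hypothetical post URL:

```python
# A minimal sketch of the archiving step: constructing a Wayback
# Machine "Save Page Now" request URL for a post. Requesting the
# resulting URL asks the archive to capture the page.

WAYBACK_SAVE = "https://web.archive.org/save/"

def wayback_save_url(post_url: str) -> str:
    """Build the Save Page Now URL for a given post URL."""
    return WAYBACK_SAVE + post_url

# Hypothetical post URL for illustration
print(wayback_save_url("https://www.xiaohongshu.com/explore/abc123"))
```

Note that archive requests can be slow and rate-limited, so for time-sensitive content the screenshot should always come first.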
For Xiaohongshu specifically, always preserve the user’s unique ID found in their profile URL when viewed on a browser, which follows the format “user/profile/[unique ID]”. Users can change their display names, but this unique identifier remains constant, allowing you to track accounts over time even after name changes. This is critical for long-term investigations or when monitoring specific sources.

Xiaohongshu operates under the same legal and censorship constraints as all Chinese social media platforms, and researchers should approach it with appropriate caution. Content moderation is extensive: users who post about sensitive subjects risk having their content removed or their accounts suspended, and the platform is required to comply with government data requests. For researchers, this means the information you find represents only what has survived the censorship process.
That said, Xiaohongshu remains a remarkably rich resource for open-source research. Its strength lies precisely in its apolitical, lifestyle-oriented identity: while political discussion is suppressed, candid conversations about everyday life flourish. For journalists willing to invest in learning the platform’s rhythms, building Chinese-language search skills, and understanding its coded vocabularies, Xiaohongshu offers a window into how ordinary Chinese people talk among themselves – an area that remains largely untapped by international media.
The post Mining China’s ‘Little Red Book’ for Open Source Gold appeared first on bellingcat.
Bluesky’s DDoS attack caused outages for a second day, disrupting feeds, notifications, and search across the platform.
The post Bluesky Outage: Coordinated Traffic Attack Causes Widespread Errors appeared first on TechRepublic.
WhatsApp is testing usernames that could let users chat without sharing phone numbers, adding a new privacy layer now rolling out to some beta users.
The post WhatsApp New Update Lets You Chat Without Sharing Your Phone Number appeared first on TechRepublic.
Mike Masnick points out that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:
If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.
One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.
The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”
Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.
End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.
But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors - choices made by people, not by the platform’s design.
The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?
And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.
In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.
The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.
The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go read it.
A landmark jury verdict has found Meta and YouTube negligent in a social media addiction case, raising major questions about platform accountability and legal protections under Section 230. This episode covers the details of the case, why the ruling is significant, and what it could mean for the future of social media, privacy, and cybersecurity. […]
The post Meta & YouTube Found Negligent: A Turning Point for Big Tech? appeared first on Shared Security Podcast.
While we can probably all agree that there is more than enough proof that social media is bad for the mental health of our children, the methods we are using to block or ban them from it seem to do more harm than good.
Across the world, lawmakers are tripping over each other to be seen “doing something” about kids and social media. Europe is slowly turning into a patchwork of age limits, curfews, and partial bans, with each country testing its own flavor of restriction while platforms try to update their systems just fast enough to stay compliant. Australia has gone even further with a nationwide ban for children under 16 that regulators now struggle to enforce at scale. The political message seems to be: social media is dangerous, and the state will step in where parents supposedly fail.
On paper, that sounds decisive. In practice, it is messy, easy to bypass, and it risks shifting the problem rather than solving it. Most of these measures depend on age‑verification systems that were never designed to handle this kind of pressure. Research looking at sign‑up flows for major platforms shows what every teenager already knows: it is not hard to lie about your date of birth, borrow an older friend’s details, or hop to a service that is just outside the current regulatory crosshairs. The result is a lot of political noise, a lot of extra friction for everyone, and only a marginal effect on the very group these rules are aimed at.
Worse, by treating all social media use by minors as equally harmful, bans erase important nuances. There is a world of difference between doom‑scrolling through algorithmically-boosted gore reels at 2 AM and using a group chat to do homework, laugh at memes, or stay in touch with cousins abroad. Studies and expert reviews echo this. Social media can contribute to anxiety, depression, and poor sleep, but it can also provide support, connection, and a sense of belonging, especially for teens who feel isolated offline. A blunt ban cuts off both the toxic and the helpful parts in one sweep, which is not necessarily an improvement.
The tools we build to make bans enforceable come with their own side‑effects. Age‑verification schemes based on IDs, biometric analysis, or third‑party brokers may reduce some underage sign‑ups, but they also normalize handing over sensitive data just to speak or listen online. Legal and technical analysts warn that these systems introduce new privacy risks, expand surveillance, and can disproportionately impact vulnerable communities who rely on pseudonyms and anonymity for their safety. For children, the takeaway is that if they want to participate, they must accept invasive checks they barely understand or learn how to bypass them.
Which children easily do.
When you close one door without addressing the underlying behavior, kids will find another, as they have done throughout history. From chat rooms to instant messaging to early social networks, every attempt to lock children out has produced a mix of circumvention and secrecy. That secrecy is a problem in itself, because it pushes online life into hidden accounts, borrowed devices, or unregulated platforms where adults have even less visibility into what is going on. The more online activity that moves into that grey area of illegality, the harder it becomes to have honest conversations about the risks.
That, ultimately, is the core weakness of “ban first, ask questions later” policies. They are optimized for sending a strong signal to voters, not for building resilient habits in families. Politicians and platforms both have roles to play in making the online environment safer. Platforms can adopt better design, safer defaults, more transparency, and proper enforcement against clear abuse. But none of that will replace what actually makes a difference for a child: an adult who understands the risks well enough to talk about them, set reasonable boundaries, and is trusted enough that the child will come to them when something goes wrong. No child suddenly matures enough on their 13th or even 16th birthday to be able to fight off the pitfalls of extremely fine-tuned algorithms.
We should be honest about this. No regulator, filter, or age‑gate will ever know your child as well as you do. No law will be able to adjust itself on the fly when a teenager suddenly starts using a new app in a worrying way. Governments can and should tackle the worst excesses, and hold companies responsible so they stop pretending that maximized engagement is compatible with child safety. But in the end, the real responsibility for keeping children safe online cannot be outsourced to apps or regulation. It lies, unavoidably and daily, with the people in their lives.
Meta will soon end Instagram’s end-to-end encrypted chats, citing low adoption and directing users to export affected messages.
The post Instagram Users Urged to Save Encrypted DMs Before Feature Disappears appeared first on TechRepublic.
In a move that bucks the entire industry trend, TikTok has confirmed it will not implement end-to-end encryption (E2EE) for direct messages on its platform — arguing that E2EE would make users less safe. We break down what’s really going on: the child safety argument, the privacy counterargument, the geopolitical questions surrounding ByteDance, and what […]
The post TikTok Says No to End-to-End Encryption: Here’s Why That’s a Big Deal appeared first on Shared Security Podcast.

Explore how advancements in surveillance infrastructure and the democratization of intelligence have transformed espionage.
The post The Worm Turns – When the Hunter Becomes the Hunted Mass Surveillance and the Weaponization of the Data We Voluntarily Create appeared first on Security Boulevard.
The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:
Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.
“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”
Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do.
I think this take has it mostly right:
What happened on Moltbook is a preview of what researcher Juergen Nittner II calls “The LOL WUT Theory.” The point where AI-generated content becomes so easy to produce and so hard to detect that the average person’s only rational response to anything online is bewildered disbelief.
We’re not there yet. But we’re close.
The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can’t reliably tell what’s fake. Third, and this is the crisis point, regular people realize there’s nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.
Discord will soon roll out global age verification, using age inference plus video selfie or ID checks to limit access to sensitive content. Find out when.
The post Coming Soon: Discord to Roll Out Global Age Verification Using Facial Scans, ID appeared first on TechRepublic.
Flickr disclosed a data exposure tied to a third-party email provider, highlighting how external service vulnerabilities can put millions of users at risk.
The post Flickr’s 35M Users Affected by Third-Party Data Exposure appeared first on TechRepublic.