UK’s Online Age Checks Are Failing—Kids Are Beating Them with AI, Fake Beards


When governments introduced stricter online age checks under the UK’s Online Safety Act, the goal was to keep children away from harmful content. But in practice, the system is already showing cracks—and the most telling insight comes from the very users it’s meant to protect.

Children aren’t just countering age checks, they’re actively bypassing them—and often with surprising ease.

According to a new report from the Internet Matters foundation, nearly half of children (46%) believe age verification systems are easy to get around, while only 17% think they are difficult. That perception isn’t theoretical. It’s grounded in real behavior, shared knowledge, and increasingly creative workarounds.

From simply entering a fake birthdate to using someone else’s ID, children have developed a toolkit of bypass techniques. Some methods are almost trivial—changing a date of birth or borrowing a parent’s login—while others reflect a growing sophistication. Kids reported submitting altered images, using AI-generated faces, or even drawing facial hair on themselves to trick facial recognition systems.

In one striking example, a parent described catching their child using makeup to appear older—successfully fooling the system.

I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old. – Mum of boy, 12

But the problem goes deeper than perception. It’s systemic.

Also read: UK Regulator Ofcom Launches Probe into Telegram, Teen Chat Platforms

Bypassing Is the Norm, Not the Exception

The report reveals that nearly one in three children (32%) admitted to bypassing age restrictions in just the past two months. Older children are even more likely to do so, which shows how digital literacy often translates into evasion capability.

The most common methods?

  • Entering a fake birthdate (13%)
  • Using someone else’s login credentials (9%)
  • Accessing platforms via another person’s device (8%)

Despite widespread concerns about VPNs, they play a relatively minor role. Only 7% of children reported using them to bypass restrictions, suggesting that simpler, low-effort tactics remain the preferred route.

In other words, the barrier to entry is not just low—it’s practically optional.


Even When It Works, It Doesn’t Work

Ironically, even when children attempt to follow the rules, the technology doesn’t always cooperate.

Some reported being incorrectly identified as older—or younger—by facial recognition systems. In cases where they were flagged as underage, enforcement was often inconsistent or temporary. One child described being blocked from going live on a platform for just 10 minutes before being allowed to try again.

This inconsistency creates a loophole where persistence pays. If at first you’re denied, simply try again.

A Risky Side Effect

Perhaps the most concerning finding isn’t that children can bypass age checks—it’s that adults can too.

The report raises fears that adults may exploit these same weaknesses to access spaces intended for younger users. In some cases, this involves using images or videos of children to trick verification systems. There are even reports of adults acquiring child-registered accounts to blend into youth platforms.

This flips the entire premise of age verification on its head. Instead of protecting children, flawed systems may inadvertently expose them to greater risk.

Parents, Part of the Problem—or the Solution?

Adding another layer of complexity, parents themselves are sometimes complicit.

About 26% of parents admitted to allowing their children to bypass age checks, with 17% actively helping them do so. The reasoning is often pragmatic. Parents feel they understand the risks and trust their child’s judgment.

I have helped my son get around them. It was to play a game, and I knew the game, and I was happy and confident that I was fine with him playing it. – Mum of non-binary child, 13

But this undermines the consistency of enforcement. If rules vary from household to household, platform-level protections lose their impact.

Interestingly, the data also suggests that communication matters. Children who regularly discuss their online activity with parents are less likely to bypass restrictions than those who don’t.

Why Kids Are Bypassing in the First Place

The motivations aren’t always malicious. In many cases, children are simply trying to access social media (34%), gaming communities (30%), or messaging apps (29%) that their peers are already using.

What this reveals is a fundamental tension: age verification systems are trying to enforce boundaries in environments where social participation is the norm.

Age verification is often positioned as a cornerstone of online safety. But in practice, it’s proving to be more of a speed bump than a safeguard.

Children understand the systems. They share methods. They adapt quickly. And until the technology—and its enforcement—becomes significantly more robust, age checks may offer more reassurance than real protection.


NCSC Warns Organisations to Act Fast as Hidden Software Flaws Surface


Organisations worldwide are being urged to prepare for a vulnerability patch wave, as security experts warn that advances in artificial intelligence (AI) could rapidly expose long-standing weaknesses across software systems. The warning comes from the UK’s National Cyber Security Centre (NCSC), which says businesses must act now to strengthen their environments before a surge of critical updates arrives.

In a blog, Chief Technology Officer Ollie Whitehouse highlighted that years of accumulated technical debt are now becoming a major cybersecurity risk. Technical debt refers to unresolved flaws and compromises in software that arise when organisations prioritise speed or short-term delivery over long-term resilience.

According to Whitehouse, artificial intelligence is accelerating the problem. Skilled attackers are increasingly able to use AI tools to identify and exploit vulnerabilities at scale, forcing what the NCSC describes as a “correction” across the technology ecosystem. This is expected to trigger a vulnerability patch wave, with a high volume of security updates affecting open source, commercial, proprietary, and software-as-a-service platforms.

Prioritising External Attack Surfaces

As part of preparing for the vulnerability patch wave, the NCSC advises organisations to first focus on their external attack surfaces. Internet-facing systems, cloud services, and exposed infrastructure present the highest risk when new vulnerabilities are disclosed.

The guidance recommends a perimeter-first approach. Organisations should secure outward-facing technologies before moving deeper into internal systems. This reduces the likelihood that attackers can exploit newly discovered weaknesses during the vulnerability patch wave. Where resources are limited, priority should be given to patching systems that are directly exposed to the internet. Critical security infrastructure should follow next.

However, the NCSC cautions that patching alone will not solve every issue. Legacy and end-of-life systems remain a major concern. Many of these technologies no longer receive security updates, leaving organisations vulnerable even during a vulnerability patch wave. In such cases, businesses may need to replace outdated systems or bring them back into supported environments, especially if they are externally accessible.

Preparing for Faster and Large-scale Patching

The expected vulnerability patch wave will require organisations to rethink how they manage updates. The NCSC is urging businesses to prepare for faster, more frequent, and large-scale deployment of security patches, including across supply chains. Several key measures have been recommended:
  • Enable automatic updates wherever possible to reduce operational burden
  • Adopt secure “hot patching” to apply fixes without service disruption
  • Ensure internal processes support rapid and large-scale updates
  • Use risk-based prioritisation models such as Stakeholder Specific Vulnerability Categorisation (SSVC)
Whitehouse noted that organisations must be ready to accelerate patching timelines when critical vulnerabilities are actively exploited, particularly those affecting internet-facing systems.

At the core of this approach is an “update by default” policy. This means applying software updates as quickly as possible, ideally through automated processes. While this may not always be feasible for safety-critical or operational technology systems, the NCSC says it should form the foundation of modern vulnerability management strategies.
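The risk-based prioritisation the NCSC points to can be sketched as a simple decision rule. The sketch below is illustrative only: the field names, categories, and thresholds are assumptions for the example, not the official SSVC decision trees.

```python
from dataclasses import dataclass

# Hypothetical sketch of SSVC-style, risk-based patch prioritisation.
# The fields and the decision table are illustrative assumptions.

@dataclass
class Vulnerability:
    cve_id: str
    exploited_in_wild: bool   # active exploitation observed
    internet_facing: bool     # asset exposed to the internet
    mission_critical: bool    # outage would disrupt core services

def patch_priority(v: Vulnerability) -> str:
    """Map a vulnerability to a patching decision."""
    if v.exploited_in_wild and v.internet_facing:
        return "act-now"        # accelerate timelines, patch immediately
    if v.exploited_in_wild or (v.internet_facing and v.mission_critical):
        return "out-of-cycle"   # patch ahead of the normal schedule
    if v.internet_facing or v.mission_critical:
        return "scheduled"      # next regular maintenance window
    return "defer"              # track, revisit if conditions change

# Example triage run (CVE identifiers are placeholders):
for v in [
    Vulnerability("CVE-0000-0001", True, True, False),
    Vulnerability("CVE-0000-0002", False, True, True),
    Vulnerability("CVE-0000-0003", False, False, False),
]:
    print(v.cve_id, patch_priority(v))
```

The point of such a table is that "actively exploited and internet-facing" always outranks everything else, matching the NCSC's advice to accelerate timelines for exactly that combination.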

Beyond Vulnerability Patch Wave: Addressing Systemic Risks

The NCSC emphasises that the vulnerability patch wave is only part of a broader cybersecurity challenge. Patching addresses immediate risks, but it does not eliminate the underlying causes of technical debt. Technology vendors are being encouraged to build more secure systems from the outset. This includes adopting memory safety and containment technologies such as CHERI, which can reduce the likelihood of exploitable vulnerabilities. For organisations operating critical services, strengthening cybersecurity fundamentals is equally important. Frameworks such as Cyber Essentials and sector-specific resilience models can help reduce the impact of breaches and improve overall security posture. Additional guidance has also been issued for high-risk environments, covering areas such as privileged access workstations, cross-domain security architecture, and threat detection through observability and proactive hunting.

Organisations Urged to Act Now

The NCSC has made it clear that preparation cannot be delayed. The anticipated vulnerability patch wave is expected to impact organisations of all sizes and sectors. Businesses are advised to review their vulnerability management processes, assess their exposure, and ensure their supply chains are also ready to respond. Larger organisations, in particular, are encouraged to seek assurance from both commercial and open-source partners. As Whitehouse concluded, readiness for the vulnerability patch wave will depend on proactive planning, strong fundamentals, and the ability to respond quickly at scale.

Norway to Introduce Social Media Age Limit of 16, Platforms to Enforce Verification


The Norway social media age limit is moving closer to becoming law, with the government confirming it will introduce legislation this year to restrict access for children under 16. The proposal, expected to be presented to Parliament (Stortinget), aims to reshape how young users interact with digital platforms and place greater responsibility on technology companies for enforcing age restrictions. Prime Minister Jonas Gahr Støre said the move is designed to protect childhood experiences from being dominated by screens and algorithms. He emphasized that children should have space for play, friendships, and offline development, positioning the Norway social media age limit as a safeguard rather than a restriction.

How the Norway Social Media Age Limit Will Work

Under the proposed law, the Norway social media age limit will apply from January 1 of the year a child turns 16. This means access will be granted based on birth year rather than exact birthdate, ensuring that entire school cohorts are treated equally. In practice, most children will be at least 15 years old when they gain access.

Minister for Children and Families Lene Vågslid explained that this approach addresses concerns raised during public consultations. Many respondents argued that differences based on birthdates could create social divides among peers. By aligning access with school cohorts, the government aims to balance protection with inclusion.

“For me, it is important both to give better protection for children in the digital world and to listen to what young people are saying. I understand that social media can be an important social arena. We want to ensure inclusion and a sense of community. That is why we are proposing that the cutoff be based on the year of birth rather than the exact birth date, so that cohorts are given equal opportunities, regardless of when each person is born,” said Vågslid (Labour).

At the same time, officials acknowledge that social media plays a role in young people’s social lives. The policy attempts to maintain that balance while reducing early exposure to potential harms linked to excessive screen time and online interactions.
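The cohort-based cutoff described above reduces to a one-line rule: access opens on January 1 of the year a child turns 16. The helper below is an illustrative sketch of that arithmetic, not any official implementation:

```python
from datetime import date

AGE_LIMIT = 16  # proposed Norwegian social media age limit

def access_allowed(birth_year: int, today: date) -> bool:
    """True once the calendar year reaches birth_year + AGE_LIMIT,
    i.e. from January 1 of the year the child turns 16."""
    return today.year >= birth_year + AGE_LIMIT

# A child born in December 2011 gains access on 1 January 2027,
# while still 15 — the same day as a classmate born in January 2011.
print(access_allowed(2011, date(2026, 12, 31)))  # False
print(access_allowed(2011, date(2027, 1, 1)))    # True
```

This is why the article notes that most children will be at least 15 when they gain access: the whole birth-year cohort is unlocked together, regardless of individual birthdates.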

Tech Companies to Enforce the Norway Social Media Age Limit

A key feature of the Norway social media age limit is the shift in responsibility to technology companies. Platforms will be required to implement effective age verification systems at login, ensuring that underage users cannot bypass restrictions.

Minister of Digitalisation and Public Governance Karianne Tung made it clear that enforcement will not rely on children or parents alone. She stated that companies must take full responsibility for compliance and ensure that safeguards are operational from the first day the law takes effect.

“I expect technology companies to ensure that the age limit is respected. Children cannot be left with the responsibility for staying away from platforms they are not allowed to use. That responsibility rests with the companies providing these services. They must implement effective age verification and comply with the law from day one,” said Tung (Labour).

This approach aligns with broader European regulatory trends, particularly the Digital Services Act, which is expected to require platforms to take stronger accountability for user safety, including age verification measures.

Part of a Wider European Push

Norway is among the first countries in Europe to move forward with a nationwide social media restriction of this kind. However, it is not acting in isolation. Several European governments are exploring or advancing similar policies. In France, lawmakers have already backed a proposal to restrict social media use for children under 15, with strong support from President Emmanuel Macron. Spain has also announced plans to block access for users aged 15 and under, while the Netherlands is considering a minimum age of 15. In the United Kingdom, Prime Minister Keir Starmer has supported tighter controls, with pilot programs underway to assess the impact of limiting social media use among teenagers. These developments suggest that the Norway social media age limit is part of a broader shift across Europe toward stricter regulation of digital platforms and greater protection for minors.

Implementation Timeline and Next Steps

The Norwegian government plans to send the proposed legislation for consultation within the European Economic Area before the summer. This process typically lasts around three months. Full enforcement of the Norway social media age limit is expected once the Digital Services Act is incorporated into Norwegian law. Officials say recent trends support the move. Data indicates a decline in the number of children owning smartphones and using social media, partly due to national screen-time guidelines and initiatives such as mobile-free schools. The government intends to implement the policy in stages, but it has made clear that service providers are expected to begin compliance preparations immediately.

A Shift in Digital Policy

The Norway social media age limit reflects growing concern among policymakers about the impact of digital platforms on children’s mental health, privacy, and development. By placing legal responsibility on technology companies and aligning with European regulation, Norway is positioning itself at the forefront of this policy shift. As similar measures gain traction across Europe, the effectiveness of age verification and enforcement will be closely watched. The Norwegian model could become a reference point for other countries seeking to balance digital access with child protection.

High Court Backs UK Police Use of Live Facial Recognition Technology


A Live Facial Recognition Policy used by the Metropolitan Police Service has been upheld by the High Court of Justice, marking a significant legal development in the use of surveillance technology in the UK. The ruling, delivered on April 21, 2026, dismissed a legal challenge that questioned whether the policy allows excessive discretion in how facial recognition is deployed. The case, brought by civil liberties campaigners, focused on whether the Live Facial Recognition Policy complies with protections under the European Convention on Human Rights, particularly rights related to privacy, expression, and assembly.

Challenge to Live Facial Recognition Policy and Legal Grounds

The judicial review was filed by Shaun Thompson and Silkie Carlo, director of Big Brother Watch. The claimants argued that the Live Facial Recognition Policy gives police officers too much freedom to decide where and how the technology is used, potentially leading to arbitrary surveillance. Their case relied on Articles 8, 10, and 11 of the ECHR, which protect the right to privacy and freedom of expression and assembly. They argued that the policy lacked sufficient clarity and safeguards, making it incompatible with legal standards that require laws to be foreseeable and constrained. However, the court clarified that the case was not about whether facial recognition technology itself is appropriate, but whether the policy governing its use meets legal requirements.

Court Finds Safeguards and Structure in Live Facial Recognition Policy

In its judgment, the court ruled that the Live Facial Recognition Policy contains clear rules and does not grant unchecked powers to police officers. Judges highlighted that the policy limits deployment to three defined scenarios: crime hotspots, protective security operations, and cases involving specific intelligence about a suspect’s presence. The court noted that each deployment must undergo a proportionality assessment, ensuring that potential impacts on privacy and civil liberties are considered. It also emphasized that decisions are subject to oversight and follow a structured chain of command. According to the ruling, these safeguards distinguish the current policy from earlier concerns raised in previous cases. The judges concluded that the Live Facial Recognition Policy meets the legal requirement of being “in accordance with the law.”

Evidence and Concerns Around Misuse Rejected

The claimants pointed to concerns about wrongful identification and potential misuse of facial recognition technology. One claimant described being mistakenly stopped after being incorrectly matched to a suspect. Despite these concerns, the court found that much of the supporting evidence did not directly address the legality of the policy. Some submissions were dismissed as opinion rather than factual or expert evidence relevant to the legal issues being considered. The court also rejected arguments that the policy enables widespread surveillance in crowded areas. It clarified that deployment decisions are based on crime data and intelligence, not simply on the number of people in a location.

Discrimination Concerns and Broader Debate

Concerns about bias in facial recognition systems were raised during the proceedings, particularly following earlier findings by the National Physical Laboratory. However, the court stated that no substantial legal challenge on discrimination grounds had been properly presented. As a result, it did not find evidence that the Live Facial Recognition Policy is unlawful on those grounds. Separately, the UK government has signaled plans to expand the use of facial recognition technology. The Home Office has proposed increasing its deployment and is consulting on a stronger legal framework to support wider use.

Operational Impact and Future of Facial Recognition

The Metropolitan Police has defended the use of facial recognition, stating that the technology has supported thousands of arrests and helped identify suspects in serious crimes, including violent and sexual offenses. Officials also highlighted improvements in accuracy and safeguards, including the immediate deletion of non-matching data and human review of alerts. Commissioner Mark Rowley described the ruling as a major step forward for public safety, emphasizing that the technology is carefully controlled and effective. With the court confirming that the Live Facial Recognition Policy meets legal standards, the decision is likely to influence how surveillance tools are used and regulated in the UK. It also sets a precedent for future legal challenges as governments and law enforcement agencies continue to expand the use of biometric technologies.

U.S. Treasury Rolls Out Cybersecurity Information Sharing Initiative as Crypto Attacks Rise


The U.S. Department of the Treasury has unveiled a new digital asset cybersecurity initiative, aimed at strengthening defenses across the rapidly growing digital asset ecosystem. The initiative, announced by the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection (OCCIP), seeks to provide timely and actionable cyber threat intelligence to eligible U.S.-based digital asset firms. The move comes amid escalating cyberattacks targeting cryptocurrency platforms and follows recommendations outlined in the federal report “Strengthening American Leadership in Digital Financial Technology.”

Understanding the Digital Asset Cybersecurity Initiative

At its core, the digital asset cybersecurity initiative will extend high-quality threat intelligence, previously reserved for traditional financial institutions, to digital asset companies and industry organizations. This includes insights that help firms detect, prevent, and respond to cyber threats affecting their platforms, customers, and infrastructure.

“Digital asset firms are an increasingly important part of the U.S. financial sector, and their resilience is critical to the health of the broader system,” said Luke Pettit, Assistant Secretary for Financial Institutions. “By extending access to the same high-quality cybersecurity information used by traditional financial institutions, Treasury is helping promote a more secure and responsible digital asset ecosystem,” he added.

Eligible firms that meet Treasury criteria will receive this information at no cost, signaling a broader push to align cybersecurity standards across financial sectors.

Rising Threats Drive Urgency for Digital Asset Cybersecurity

The digital asset cybersecurity initiative comes at a time when cyber threats against cryptocurrency platforms are intensifying in both scale and complexity. Treasury officials emphasized that the initiative directly responds to this evolving threat landscape.

“Cyber threats targeting digital asset platforms are growing in frequency and sophistication,” said Cory Wilson, Deputy Assistant Secretary for Cybersecurity. “This initiative expands access to actionable threat information that helps firms strengthen defenses, reduce risk, and respond more effectively to incidents.”

Recent incidents underscore the urgency. Alleged North Korean hackers reportedly stole $280 million from crypto platform Drift using a complex attack. Industry-wide losses exceeded $3.4 billion last year, with billions more lost annually over the past five years. In another case, Bitcoin ATM operator Bitcoin Depot disclosed a cyberattack on March 23 that resulted in losses exceeding $3.6 million. Additional breaches this year have reported losses of $26 million and $40 million, highlighting persistent vulnerabilities across the sector.

Government Push Amid Ongoing Crypto Crime

Despite increased enforcement efforts, cybercriminals and nation-state actors continue to exploit weaknesses in the digital asset ecosystem. U.S. authorities, including the Justice Department, have ramped up prosecutions and issued repeated warnings about infiltration attempts, particularly by North Korean threat groups. However, these measures have had limited success in curbing attacks. Threat actors continue to exploit coding flaws, social engineering tactics, and employee vulnerabilities to gain access to crypto platforms. The digital asset cybersecurity initiative is designed to complement these efforts by shifting focus toward proactive defense and real-time intelligence sharing rather than reactive enforcement alone.

Strengthening the Future of Digital Finance

Treasury officials also framed the digital asset cybersecurity initiative as a foundational step for the future of digital finance. As digital assets become more integrated into mainstream financial systems, cybersecurity is emerging as a critical pillar for sustainable growth. “This initiative reflects the principles of the GENIUS Act by promoting responsible innovation grounded in strong cybersecurity and operational resilience,” said Tyler Williams, Counselor to the Secretary for Digital Assets. “As digital assets become more integrated into the financial system, access to timely and actionable cyber threat information is essential to protecting consumers and safeguarding the stability of U.S. financial markets,” Williams added. The broader federal strategy emphasizes balancing innovation with security. The Treasury’s report highlights the need for regulatory clarity, risk mitigation, and public-private collaboration to support the long-term growth of digital assets while addressing illicit finance and cyber risks.

A Step Toward Industry-Wide Cyber Resilience

With cyberattacks continuing to disrupt the crypto ecosystem, the digital asset cybersecurity initiative represents a significant step toward improving industry-wide resilience. By bridging the gap between traditional financial cybersecurity frameworks and emerging digital asset platforms, the initiative aims to create a more secure and stable environment for innovation. As digital assets evolve from niche technology to a core component of global finance, initiatives like this may play a key role in shaping how the industry manages risk, and whether it can keep pace with increasing cyber threats.

Snapchat Faces EU Child Safety Probe Under Digital Services Act

March 27, 2026, 03:13


The European Commission has launched a formal DSA child protection investigation into Snapchat, examining whether the platform is meeting its obligations to ensure a high level of safety, privacy, and security for minors. The move comes under the framework of the Digital Services Act (DSA), which sets strict standards for online platforms operating in the European Union and can impose fines of up to 6% of global annual turnover for non-compliance.

Age Assurance Under Digital Services Act Scrutiny

At the center of the DSA child protection investigation is Snapchat’s approach to age assurance. According to its terms, users must be at least 13 years old to access the platform. However, the Commission suspects that Snapchat’s reliance on self-declaration is insufficient. It raises concerns that this method neither prevents children under 13 from accessing the service nor adequately verifies whether users are under 17, which is necessary to ensure age-appropriate experiences. There are also concerns that tools to report underage users may not be easily accessible within the app.

The investigation also focuses on the risk of minors being exposed to grooming attempts and recruitment for criminal purposes. The Commission suspects that Snapchat may not be doing enough to prevent users with harmful intent from contacting children, particularly in cases where individuals misrepresent their age or manipulate their profiles. This includes concerns around exposure to harmful content, conduct, and contact that could place minors at risk.

Default Settings And Privacy Concerns

Another key area under the DSA child protection investigation is Snapchat’s default account settings. The Commission believes that the platform may not provide sufficient privacy, safety, and security protections for minors by default. Features such as the “Find Friends” system, which recommends users, and push notifications that remain enabled by default are under scrutiny. The Commission also notes that users may not receive adequate guidance during account creation on how to manage privacy and safety settings, or how to adjust them effectively.

Illegal Content And Reporting Mechanisms Under Review

The investigation further examines whether Snapchat is effectively preventing the dissemination of illegal content, including information related to the sale of drugs and age-restricted products such as alcohol and vapes. Under the DSA, platforms are required to mitigate systemic risks arising from their services. The Commission suspects that current content moderation measures may not be sufficient to block or limit access to such content, especially for younger users.

Reporting mechanisms for illegal content are also part of the Digital Services Act child protection investigation. The Commission raises concerns that these systems may not be easy to access or user-friendly and could involve design practices that make reporting less straightforward. There are also concerns that users may not be properly informed about complaint procedures or available redress options within the platform.

Next Steps in DSA Child Protection Investigation

The European Commission will now conduct an in-depth investigation by gathering further evidence, including requesting information from Snapchat and conducting interviews or inspections. The opening of formal proceedings allows the Commission to take further enforcement actions, including adopting interim measures or issuing a non-compliance decision. It can also accept commitments from Snapchat to address the issues identified during the investigation. The action against Snapchat builds on broader regulatory efforts under the Digital Services Act to strengthen online child protection across platforms. The Commission has used its 2025 DSA Guidelines on the protection of minors as a benchmark for evaluating compliance, emphasizing that self-declaration alone should not be considered a reliable age assurance method and that default settings should offer the highest level of protection for minors.
“From grooming and exposure to illegal products to account settings that undermine minors’ safety, Snapchat appears to have overlooked that the Digital Services Act demands high safety standards for all users. With this investigation, we will closely look into their compliance with our legislation,” said Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy.

Age Verification Under Question

In a related development, the European Commission has also taken preliminary action against adult content platforms including Pornhub, Stripchat, XNXX, and XVideos under the Digital Services Act. The Commission found that these platforms may have failed to adequately protect minors from accessing pornographic content. It noted that their risk assessments did not sufficiently identify or evaluate risks to children, and in some cases, placed more emphasis on business considerations than on child safety.
“In the EU, online platforms have a responsibility. Children are accessing adult content at increasingly younger ages and these platforms must put in place robust, privacy-preserving and effective measures to keep minors off their services. Today, we are taking another action to enforce the DSA - ensuring that children are properly protected online, as they have the right to be,” said Virkkunen.
The findings also indicate that these platforms rely heavily on self-declaration for age verification, which the Commission considers ineffective. Additional measures such as content warnings, page blurring, or “restricted to adults” labels were also deemed insufficient to prevent minors from accessing harmful material. The Commission has suggested that more robust, privacy-preserving age verification methods are required to address these risks. As part of ongoing proceedings, these platforms will have the opportunity to respond to the Commission’s findings and take corrective measures. If the breaches are confirmed, the Commission may issue a non-compliance decision, which could result in significant financial penalties or enforcement actions to ensure compliance. The broader enforcement push reflects a clear regulatory direction under the Digital Services Act, with authorities focusing on ensuring that platforms, regardless of size, take stronger responsibility for protecting minors online.

The FCC Just Blocked Every New Foreign-Made Router from the U.S. Market

March 25, 2026, 08:01

Foreign-Made Router, FCC Ban, FCC

The router sitting in your home — the one connecting every phone, laptop, and smart device on your network to the internet — is almost certainly made overseas. As of March 23, no new model of that device can receive U.S. market authorization unless it clears a security review by the Department of War or the Department of Homeland Security first.

The Federal Communications Commission updated its Covered List to include all routers produced in a foreign country, following a National Security Determination received on March 20 from a White House-convened Executive Branch interagency body.

The determination concluded that foreign-produced routers introduce a supply chain vulnerability threatening the U.S. economy, critical infrastructure, and national defense, and pose a severe cybersecurity risk that could be leveraged to immediately and severely disrupt those systems and directly harm U.S. persons.

The FCC's Covered List — established under the Secure and Trusted Communications Networks Act — carries real enforcement teeth. Equipment on the list cannot receive new FCC equipment authorizations, and most electronic devices require such authorization before they can be imported, marketed, or sold in the U.S. In practice, that bars new covered devices from entering the U.S. market.

The national security determination cited three Chinese state-sponsored cyber campaigns by name. Routers produced abroad were directly implicated in the Volt, Flax, and Salt Typhoon cyberattacks, which targeted critical American communications, energy, transportation, and water infrastructure.

Salt Typhoon penetrated multiple U.S. telecommunications carriers and persisted inside their networks for months; Volt Typhoon pre-positioned itself inside U.S. critical infrastructure for potential future disruption; and Flax Typhoon operated a 260,000-device botnet largely built from compromised consumer routers.

Unlike prior Covered List entries that targeted specific entities such as Huawei and ZTE, this update applies categorically based on place of production, not manufacturer identity. That distinction matters enormously for the industry.

Virtually all routers are made outside the United States, including those produced by U.S.-based companies like TP-Link, which manufactures its products in Vietnam. The entire router industry therefore appears to be affected by the FCC's announcement, at least for new devices not previously authorized by the Commission. Netgear, Amazon Eero, Google Nest WiFi, Asus, Linksys, and D-Link all manufacture in Asia. The one apparent exception is the newer Starlink Wi-Fi router, which the company says is manufactured in Texas.

The action does not strand existing users. Consumers can continue using any router they have already purchased, and retailers can continue selling previously authorized models already in their supply chains. Firmware updates for covered devices remain permitted at least through March 1, 2027.

The disruption falls entirely on new product cycles — which in a fast-moving consumer networking market means the freeze begins almost immediately.

A rule that bans new foreign router models while leaving millions of existing foreign-made devices completely untouched does not make U.S. networks measurably more secure today. Security researchers have noted that the Volt Typhoon attacks the FCC cited as justification primarily targeted Cisco and Netgear hardware — U.S.-designed products — pointing to software patching failures, rather than manufacturing origin, as the operational vulnerability.

A Conditional Approval pathway exists for manufacturers willing to pursue it, but it requires companies to commit to establishing or expanding U.S. manufacturing for the products they want to bring to market. That is a significant industrial policy commitment on top of any security review, and one that smaller router vendors may find prohibitive.

The December 2025 drone ban used an identical framework — and as of publication, it had cleared exactly four non-Chinese drone systems while leaving major Chinese manufacturers fully blocked.

Also read: FCC Set to Reverse Course on Telecom Cybersecurity Mandate

New York Water Systems Get New Cybersecurity Standards and $2.5M Funding

March 17, 2026, 06:42

water infrastructure cybersecurity

Cyber threats targeting critical infrastructure are no longer limited to energy grids or financial networks. Increasingly, water infrastructure cybersecurity has become a major concern for governments worldwide as drinking water and wastewater systems rely more heavily on digital technologies. In response to these growing risks, Kathy Hochul, Governor of New York, announced this week a set of new cybersecurity regulations and a $2.5 million grant program aimed at helping communities protect drinking water and wastewater systems from cyber attacks.

The initiative represents what state officials describe as a whole-of-government approach to water infrastructure cybersecurity, combining regulatory standards, financial support, and technical assistance to strengthen the security of essential services used by millions of New Yorkers.

“Cyber attacks on our water infrastructure can disrupt services and threaten public health and safety,” Governor Hochul said. “My administration is protecting New Yorkers by modernizing regulations and providing resources to adopt these important safeguards. There is nothing more important than keeping New Yorkers safe.”

Why Water Infrastructure Cybersecurity Matters

Water infrastructure has traditionally been seen as a physical utility issue. But as treatment plants and distribution systems increasingly rely on internet-connected controls and digital monitoring systems, water infrastructure cybersecurity has become a frontline concern. Modern wastewater facilities and drinking water plants use digital systems to monitor chemical balances, control pumps, manage filtration processes, and coordinate distribution networks. While these technologies improve efficiency, they also introduce potential cyber risks. State officials warn that cyber attacks targeting water infrastructure could disrupt essential services or interfere with systems that protect public health. As digital infrastructure expands across critical utilities, improving water infrastructure cybersecurity is becoming just as important as maintaining physical infrastructure.

New Cybersecurity Standards for Water Systems

To address these challenges, the New York State Department of Environmental Conservation and the New York State Department of Health jointly developed new cybersecurity standards for water utilities across the state. The regulations establish minimum security requirements designed to strengthen water infrastructure cybersecurity while remaining practical for local operators. Key measures include:
  • Mandatory cybersecurity training for certified operators
  • Cybersecurity incident reporting requirements
  • Risk-based, tiered standards to protect critical operations and sensitive information
  • Designation of a cybersecurity lead role at larger drinking water systems
These measures aim to move water utilities toward a more structured approach to water infrastructure cybersecurity, ensuring that operators have both the knowledge and accountability required to respond to emerging threats.

$2.5 Million SECURE Grant Program to Support Local Utilities

Alongside the regulatory framework, the state is introducing financial support to help communities implement cybersecurity improvements. The Strengthening Essential Cybersecurity for Utilities and Resiliency Enhancements (SECURE) grant program, administered by the New York State Environmental Facilities Corporation, will provide $2.5 million in funding to support cybersecurity projects at local water and wastewater facilities. The program includes:
  • Up to $50,000 for cybersecurity assessments
  • Up to $100,000 for cybersecurity upgrades
In addition to funding, the Environmental Facilities Corporation will provide no-cost technical assistance through Community Assistance Teams, helping utilities implement cybersecurity best practices and navigate grant applications. State officials believe that combining regulations with funding will make water infrastructure cybersecurity improvements more realistic for smaller communities with limited resources.

A Coordinated Approach to Critical Infrastructure Security

Officials involved in the initiative emphasize that cybersecurity challenges cannot be solved by individual agencies alone. Instead, the state’s strategy relies on coordination across multiple departments and levels of government.

New York State Director of Security and Intelligence Colin Ahern highlighted the need for proactive defense. “In today’s threat environment, the security of our digital infrastructure is just as critical as the physical security of our reservoirs. Under Governor Hochul’s leadership, we are moving beyond reactive defense. By pairing nation-leading standards with the SECURE grant program, we are providing New York’s water sectors with the intelligence-driven framework and the muscle they need to preemptively harden our most vital systems against sophisticated global adversaries.”

Similarly, Acting Chief Cyber Officer Michaela Lee stressed that cybersecurity requires long-term cooperation between state agencies and local operators. “Effective cybersecurity is not a one-time fix; it is a sustained partnership between the State and our local operators. Following the successful implementation of new standards for our financial and healthcare sectors, Governor Hochul is continuing her steady, sector-by-sector plan to fortify New York’s most critical infrastructure. By providing both the regulatory roadmap and the $2.5 million SECURE grant, we are ensuring that water and wastewater utilities have the guidance and resources they need to remain resilient in an increasingly digital world.”

A Broader Push to Secure Water Infrastructure

The initiative also reflects a broader investment strategy. New York State has significantly expanded funding for water infrastructure projects in recent years, including $3.8 billion in financial assistance for local projects in State Fiscal Year 2025. State officials argue that modern infrastructure investments must now include cybersecurity protections. As water systems continue to digitize, ignoring cyber risks could expose essential services to disruption. From an infrastructure security perspective, the new regulations and grant program signal a shift in how governments think about public utilities. Protecting water systems is no longer just about pipes, pumps, and treatment facilities — it is increasingly about strengthening water infrastructure cybersecurity to safeguard essential services in a connected world.

India Outlines Legal Framework to Protect Children from AI and Online Harm

March 12, 2026, 04:04

AI child safety in India

As artificial intelligence (AI) continues to reshape how people interact with technology, the conversation around AI child safety in India is becoming increasingly important. From AI-powered toys to social media algorithms, digital technologies are now deeply embedded in the lives of children. While these tools can support learning and innovation, they also raise serious concerns around privacy, exploitation, and online harm. The Indian government says it is aware of these risks. In a recent statement in Indian Parliament, Union Minister for Electronics and IT Ashwini Vaishnaw listed a series of legal and regulatory safeguards designed to strengthen AI child safety in India and reduce potential risks from emerging technologies. The focus, officials say, is on ensuring that the growth of artificial intelligence does not come at the expense of children's online safety.

AI Child Safety in India Backed by Existing IT Laws

One of the strongest pillars supporting AI child safety in India is the long-standing Information Technology Act, 2000. The law requires online platforms to prevent the hosting or sharing of harmful content involving children, including sexually explicit material or content that promotes violence. Under the law and its associated rules, social media platforms must remove unlawful content quickly after receiving government or court notifications. In some sensitive cases, such as non-consensual intimate content, platforms are required to act within two hours. These provisions are particularly relevant in the AI era, where harmful content can spread rapidly across platforms or be generated using advanced technologies. Authorities say the law also requires platforms to report certain offences to authorities under legislation such as the Protection of Children from Sexual Offences Act, 2012, reinforcing the broader legal framework designed to protect minors online.

Data Protection Rules Strengthen AI Governance in India

Another key element supporting AI child safety in India is the Digital Personal Data Protection Act, 2023. The law introduces strict rules around how children’s personal data can be collected and used, including data gathered through emerging technologies such as AI-powered toys or apps. The law requires companies to obtain verifiable parental consent before processing a child’s personal data. It also places strong limits on practices such as behavioural tracking, targeted advertising, or monitoring directed at children. In practical terms, these rules are meant to ensure that AI systems interacting with children cannot quietly collect or exploit personal data without parental oversight.

Responsible AI Development Remains a Policy Priority

Beyond existing laws, the government has also issued India AI Governance Guidelines to encourage ethical and responsible AI development. These guidelines specifically recognize children as a vulnerable group that could face long-term harm from poorly designed AI systems. They recommend risk assessment frameworks and monitoring mechanisms to help policymakers identify potential AI-related harms early. The emphasis on responsible development reflects India’s broader AI strategy—one that aims to expand innovation while keeping citizens protected. As officials often emphasize, the country’s AI roadmap is closely aligned with Indian Prime Minister Narendra Modi’s vision of democratizing technology and ensuring that digital transformation benefits society as a whole.

Cybercrime Reporting and Enforcement Measures

Protecting children online is not just about policy. Enforcement tools also play a critical role in strengthening AI child safety in India. The government operates the Indian Cyber Crime Coordination Centre and the National Cyber Crime Reporting Portal, allowing citizens to report cybercrimes, including crimes targeting children. Authorities have also worked with internet service providers to block websites hosting child sexual abuse material using global databases maintained by organizations such as the Internet Watch Foundation. In addition, law enforcement agencies receive support through training programs and cyber forensic infrastructure funded under national cybercrime prevention initiatives.

Awareness and Education Remain Essential

Legal frameworks alone cannot guarantee AI child safety in India. Public awareness remains just as important. Government-backed programs such as Information Security Education and Awareness (ISEA) have conducted thousands of workshops across India, reaching students, teachers, police personnel, and members of the public. Research and guidance from bodies like the National Commission for Protection of Child Rights have also helped shape cyber safety guidelines for schools, parents, and educators.

A Strong Framework, but Implementation Matters

India now has a growing set of laws, policies, and awareness programs aimed at strengthening AI child safety in India. Taken together, these measures signal a clear attempt to build guardrails around emerging technologies. But regulations alone cannot solve the problem. As AI systems become more advanced, experts argue that enforcement, platform accountability, and digital literacy will be just as critical as legislation. Without strong implementation, even well-designed safeguards risk falling short. The challenge for India moving forward is to ensure that its ambition to lead in AI innovation does not outpace the protections needed for its youngest digital citizens.

Vietnam Announces National Cybersecurity Firewall Plan Under New Digital Governance Law

cybersecurity firewall

Vietnam has announced plans to focus on building a cybersecurity firewall. The statement was delivered by Public Security Minister Lương Tam Quang on Feb. 7, following the closing session of the Communist Party of Vietnam’s 14th National Congress.  It was the first time a senior official explicitly used the term “cybersecurity firewall” to describe the country’s direction in digital governance. While Vietnam has long been regarded internationally as operating one of the most tightly controlled online environments, authorities had not previously declared an intention to construct what they now describe as a national cybersecurity firewall.  The announcement coincides with sweeping reforms to the country’s cybersecurity law framework. 

A New Cybersecurity Law Anchors the Digital Governance Strategy 

On Dec. 10, 2025, the 15th National Assembly passed a new Cybersecurity Law that will take effect on July 1, 2026. Drafted by the Ministry of Public Security (MPS), the legislation replaces both the 2018 Cybersecurity Law and the 2015 Law on Information Security.  The 2025 cybersecurity law introduces new language into Vietnam’s digital governance architecture. Notably, Point d, Clause 2, Article 10 states that authorities will “study the development of a national firewall system.” This is the first time such terminology has appeared in Vietnamese legislation, formally embedding the concept of a cybersecurity firewall within statutory law.  The inclusion of this provision represents a structural shift in how cybersecurity law is framed in the country, elevating technical filtering and monitoring infrastructure to the level of national policy objectives, as reported by The Vietnamese Magazine.

Draft Technical Standards Outline Cybersecurity Firewall Requirements 

Approximately two months after the law’s passage, the Ministry of Public Security released a draft regulation for public comment titled “National Technical Standard on Cybersecurity—Firewall—Basic Technical Requirements.” The document provides insight into the proposed technical architecture of the cybersecurity firewall.  According to the draft, firewall systems meeting national standards would be mandatory infrastructure for monitoring and filtering internet activity. These devices would be capable of filtering traffic and conducting deep packet inspection (DPI).  The proposal also includes SSL/TLS inspection capabilities. SSL/TLS protocols—indicated by the “https” prefix in web addresses—are commonly used to encrypt communications between users and websites. Under the draft framework, firewall systems would be able to decrypt encrypted communications, inspect their contents, and then re-encrypt them before forwarding the data.   In addition, the draft calls for integrating user identity data into individualized control policies. Web-filtering mechanisms would rely on blacklists containing at least 100,000 domain names. These blacklists are defined as collections of IP addresses, domains, and URLs subject to restriction under information security policies, aimed at blocking content or activity considered “undesirable.” 
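The draft's web-filtering mechanism, a blacklist of domains and URLs checked against user traffic, follows a well-known pattern. The sketch below is a hypothetical illustration of that pattern only, using made-up domain names; it is not code from, or an implementation of, the Vietnamese standard.

```python
from urllib.parse import urlparse

# Illustrative blacklist; the draft standard calls for lists of 100,000+ entries.
BLACKLIST = {"blocked.example", "undesirable.example"}

def is_blocked(url: str, blacklist: set[str]) -> bool:
    """Return True if the URL's host, or any parent domain, is blacklisted."""
    host = (urlparse(url).hostname or "").lower()
    # Walk up the domain hierarchy: a.b.example -> b.example -> example
    while host:
        if host in blacklist:
            return True
        _, _, host = host.partition(".")
    return False

print(is_blocked("https://sub.blocked.example/page", BLACKLIST))  # True
print(is_blocked("https://allowed.example/", BLACKLIST))          # False
```

Real deployments pair a lookup like this with deep packet inspection so the filter can see hostnames inside otherwise opaque traffic, which is why the draft also specifies DPI and SSL/TLS inspection capability.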

Data Logging, Risk Assessment, and Centralized Oversight 

Beyond filtering capabilities, the proposed cybersecurity firewall would require network devices to log detailed information for every user session. Logged data would include time stamps, source and destination addresses, protocols used, and system responses.  User activity would then be assessed and assigned a “risk level.” If defined thresholds are exceeded, automated controls or alerts would be triggered and transmitted to cybersecurity authorities. This risk-based monitoring model adds another layer to the country’s digital governance structure, combining surveillance mechanisms with automated enforcement tools.  Separate draft regulations implementing the 2025 cybersecurity law would further obligate telecommunications and internet service providers to retain IP address identification data linked to subscriber information for a minimum of 12 months. Companies would also be required to establish direct technical connections enabling the transfer of IP data to the Ministry’s specialized cybersecurity force.  Under the proposed rules, user information must be provided within 24 hours upon request, or within three hours in urgent cases. All user data would be stored domestically at the MPS’s National Data Center. 
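The logging-and-scoring loop described above (log each session, score it, alert when a threshold is crossed) can be sketched as follows. All field names, the scoring rule, and the threshold are invented for illustration; the draft regulations do not publish these details.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SessionLog:
    """One logged session: timestamp, endpoints, protocol (fields assumed)."""
    timestamp: datetime
    src: str
    dst: str
    protocol: str
    blacklisted_hits: int  # connections to restricted destinations

ALERT_THRESHOLD = 10  # assumed value; real thresholds are not public

def risk_score(log: SessionLog) -> int:
    """Toy rule: one point per hit on a restricted destination."""
    return log.blacklisted_hits

def should_alert(log: SessionLog) -> bool:
    """Trigger an alert when the session's risk score crosses the threshold."""
    return risk_score(log) >= ALERT_THRESHOLD

session = SessionLog(datetime.now(timezone.utc), "10.0.0.5", "203.0.113.7",
                     "https", blacklisted_hits=12)
print(should_alert(session))  # True
```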

UK Tightens Government Cyber Security After Cutting Critical Vulnerabilities by 75%

February 27, 2026, 07:03

government cyber security

The UK government is tightening its government cyber security posture with a dual strategy, faster vulnerability remediation and a long-term workforce pipeline. With cyberattacks increasingly targeting public services, the launch of a new vulnerability monitoring service (VMS) alongside the creation of a dedicated cyber profession signals a structural shift in how the state plans to defend its digital infrastructure. Public-facing systems used by millions—from the National Health Service to the Legal Aid Agency—have become prime targets for cybercriminals. The government’s latest move acknowledges a simple reality: improving government cyber security is no longer just about tools; it is about speed, coordination, and skilled people.

Vulnerability Monitoring Service Accelerates Government Cyber Security Response

At the center of the announcement is the new vulnerability monitoring service, designed to detect and fix cyber weaknesses significantly faster across public sector systems. According to government data, critical vulnerabilities are now being resolved six times faster than before, reducing the average remediation window from nearly 50 days to just eight. The service focuses heavily on Domain Name System (DNS) risks, which are often overlooked but highly dangerous. DNS weaknesses can allow attackers to redirect users to malicious websites or disrupt essential services entirely. In the context of government cyber security, even small misconfigurations can have widespread consequences. The VMS continuously scans approximately 6,000 public sector organizations and detects around 1,000 different types of vulnerabilities. By automating detection and providing actionable remediation guidance, the government has also cut the backlog of critical unresolved vulnerabilities by 75%. This shift highlights a growing trend in public sector cyber security: automation is becoming essential as threat volumes continue to rise.
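The headline figures in this section are mutually consistent: cutting the average fix time from roughly 50 days to 8 is about a six-fold speed-up, which is the same improvement as the 84% cut in fix times the government has also cited. A quick cross-check:

```python
# Cross-check the reported remediation figures: ~50 days down to 8 days.
before_days, after_days = 50, 8

speedup = before_days / after_days                    # 6.25 -> "six times faster"
reduction = (before_days - after_days) / before_days  # 0.84 -> "84% cut in fix times"

print(f"{speedup:.2f}x faster, {reduction:.0%} reduction")  # 6.25x faster, 84% reduction
```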

Cyber Risks Now Directly Impact Public Services

Speaking at the Government Cyber Security and Digital Resilience conference, Ian Murray emphasized the real-world consequences of cyber incidents: “Cyber-attacks aren’t abstract threats — they delay NHS appointments, disrupt essential services, and put people’s most sensitive data at risk. When public services struggle it’s families, patients and frontline workers that feel it. The vulnerability monitoring service has transformed how quickly we can spot and fix weaknesses before they’re exploited so we can protect against that." He added, "We’ve cut cyber-attack fix times by 84% and reduced the backlog of critical issues by three quarters. And as the service expands to cover more types of cyber threats, fix times are falling there too. But technology alone isn’t enough. Today I’m launching a new government Cyber Profession to attract and develop the talented people we need to stay ahead of increasingly sophisticated threats - making government a destination of choice for cyber professionals who want to protect the services that matter most to people’s lives.” His remarks underline a key insight shaping modern government cyber security strategy: technical fixes must be matched with workforce capability.

Building Long-Term Cyber Resilience Through Talent

Alongside technical improvements, the government has launched its first dedicated cyber profession program in collaboration with the Department for Science Innovation and Technology and the National Cyber Security Centre. The initiative includes a cyber academy, apprenticeship pathways, and a structured career framework aligned with national professional standards. Manchester is expected to become a central hub, reinforcing the region’s growing digital ecosystem. Richard Horne, CEO of the NCSC, highlighted the broader impact of strengthening UK cyber resilience: “Cyber security is more consequential than ever today with attacks in the headlines showing the profound impacts they can have on people’s everyday lives and livelihoods. As our public services continue to innovate, it is vital that they remain resilient to evolving threats and vulnerabilities are being effectively managed to reduce the chances of disruption. The government Cyber Action Plan is a crucial step in building stronger cyber defences across our public services and the launch of the government Cyber Profession today will help attract and retain the most talented professionals with the top-tier skills needed to keep the UK safe online.”

Why Government Cyber Security Is Becoming a Workforce Challenge

While the new vulnerability monitoring service improves detection and response speed, the creation of a cyber profession reflects a deeper structural issue—skills shortages remain one of the biggest risks to government cyber security. Recent assessments have consistently warned that public sector organizations struggle to compete with private industry for cyber talent. By formalizing cyber career pathways, the government is attempting to make public service roles more competitive and sustainable. Ultimately, the announcement shows that cyber resilience is no longer treated as an IT function but as a national capability. Faster patching reduces immediate risk, but long-term government cyber security will depend on whether the public sector can successfully attract and retain the people needed to defend increasingly complex digital systems.
India Strengthens Space Cyber Security with New CERT-In and SIA-India Framework

27 February 2026, 04:22


India’s rapidly expanding space sector has received a major policy push with the release of new space cyber security guidelines aimed at strengthening protection across satellite and ground infrastructure. The framework, jointly developed by the Indian Computer Emergency Response Team (CERT-In) and SatCom Industry Association India (SIA-India), signals a growing recognition that cyber resilience is now as critical to space missions as launch capability itself. The guidelines were unveiled during the DefSat Conference & Expo 2026 held in New Delhi, India, at a time when satellite communication systems are increasingly becoming the backbone of connectivity, navigation, defense operations, and disaster management across the country.

Space Cyber Security Moves from Technical Layer to Strategic Priority

India’s space ecosystem is no longer limited to government-led missions. The rapid rise of private satellite operators, ground station providers, and space-tech startups has significantly expanded the attack surface. As satellite communication networks support everything from banking connectivity in remote regions to military operations, the importance of space cyber security has moved beyond technical discussions into national strategic planning.

The new framework acknowledges this shift by outlining security controls across the entire satellite lifecycle, from space assets and ground stations to supply chains and user terminals. It also highlights emerging risks such as signal spoofing, unauthorized command uplinks, firmware manipulation, and ground infrastructure compromise.

[Image: space cyber security guidelines. Image Source: PIB]

These space cyber security guidelines are advisory in nature but provide a structured baseline for organizations to assess and improve their cyber posture. Importantly, the document pushes stakeholders to adopt risk-based governance rather than reactive compliance.

A Collaborative Model for Space Sector Cyber Resilience

According to Sanjay Bahl, Director General of CERT-In, “CERT-In remains steadfast in strengthening the cyber resilience of all sectors across Bharat. Recognizing the strategic importance of space systems, including satellite communication networks, to India’s technological sovereignty and future growth, these comprehensive guidelines establish a unified and forward-looking framework by considering defense in depth, breadth and height to safeguard satellite networks, ground infrastructure, space related supply chains and space assets against the rapidly evolving and increasingly sophisticated cyber threat landscape.”

The emphasis on layered defense reflects a broader industry realization—traditional IT security models are insufficient for space systems, where physical assets in orbit cannot be easily patched or replaced.

Subba Rao Pavuluri, President of SIA-India, highlighted the importance of public-private collaboration: “Public Private Partnership and the considered views of industry are fundamental to strengthening cyber resilience across any sector. This joint guideline document issued by CERT-In and SIA India reflect a holistic and collaborative approach, integrating industry perspectives with the deep cyber security expertise of CERT-In. Together, they mark a significant step forward in advancing the cyber security posture of India’s space sector and reinforcing its preparedness against emerging digital threats.”

The collaborative approach is particularly relevant as private players now design, launch, and operate critical satellite services.

Rising Threat Landscape Forces a Shift in Security Thinking

The urgency behind strengthening space cyber security becomes clearer when viewed against recent threat activity. Anil Prakash, Director General, SIA-India, highlighted the scale of the challenge, emphasizing that India’s expanding space ecosystem can no longer treat cybersecurity as a technical afterthought. “India’s expanding space ecosystem now requires cybersecurity to evolve from a technical afterthought into a core pillar of mission assurance. The joint framework developed with CERT-In institutionalizes resilience across satellites, ground infrastructure, and supply chains—particularly significant at a time when over 1.5 million cyberattack attempts were recorded during Operation Sindoor and attacks on government networks surged nearly sevenfold,” he said.

He further explained, “In this evolving threat landscape, critical infrastructure and industry are equally vulnerable. Importantly, these cyber guidelines are based on an adaptive model and will be periodically refined through structured industry consultation to remain responsive to emerging threats and technological advancements.”

Concluding with a call to action for the industry, Prakash noted, “For industry, this is a clear call to adopt secure-by-design architectures and align innovation with national security imperatives.”

Why the Space Cyber Security Framework Matters Now

The release of these space cyber security guidelines marks an important shift in how India approaches digital risk in space. Instead of reacting to incidents, the framework promotes proactive controls such as threat intelligence sharing, supply chain security validation, and governance mechanisms including the appointment of CISOs for satellite operations.

More importantly, the framework positions space cyber security as a continuous process rather than a one-time compliance exercise. As satellite constellations grow and commercial launches accelerate, cyber resilience will increasingly determine operational reliability. India’s space ambitions are expanding rapidly—but without secure communication layers, innovation alone cannot sustain trust. The CERT-In and SIA-India framework is a timely reminder that the future of space is not just about reaching orbit—it is about securing it.

FTC Clarifies COPPA Stance, Backs Age Verification Technologies for Platforms

26 February 2026, 03:20


The Federal Trade Commission (FTC) has clarified its stance on age verification technologies and children’s online privacy. In a new policy statement released Wednesday, the agency stated that it will not bring enforcement actions under the Children’s Online Privacy Protection Rule (COPPA Rule) against website and online service providers that collect and use personal data solely for age verification, provided strict safeguards are followed. This move signals a practical shift in how regulators are approaching the complex balance between privacy compliance and real-world child safety online.

FTC Encourages Adoption of Age Verification Technologies

The new FTC policy statement aims to remove regulatory uncertainty that has long discouraged platforms from implementing age verification technologies. Under the COPPA Rule, operators must obtain verifiable parental consent before collecting personal information from children under 13. However, determining whether a user is a child often requires collecting some form of personal data—creating a compliance dilemma for companies. By clarifying its enforcement stance, the FTC is effectively encouraging platforms to adopt stronger age verification technologies rather than relying on outdated self-reported age gates that are easy for children to bypass.

“Age verification technologies are some of the most child-protective technologies to emerge in decades,” said Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection. “Our statement incentivizes operators to use these innovative tools, empowering parents to protect their children online.”

The policy reflects the reality that children’s internet usage has dramatically expanded since COPPA was first enacted in 1998. Today’s digital ecosystem includes social platforms, gaming environments, streaming services, and AI-driven applications—many of which were unimaginable when the law was originally written.

Why Age Verification Technologies Are Becoming Essential

The FTC’s position comes at a time when policymakers globally are questioning whether existing frameworks are sufficient to protect minors online. Several U.S. states have already begun introducing regulations requiring platforms to implement age verification technologies.

The core issue is simple: platforms cannot protect children if they cannot reliably identify them. Traditional age-gating methods—such as asking users to enter their date of birth—have proven ineffective. More advanced age verification technologies now use biometric estimation, identity verification tools, or secure third-party validation systems to improve accuracy. However, these tools often require temporary collection of personal data, which previously raised concerns about COPPA violations. The FTC’s updated enforcement approach attempts to resolve this contradiction.

Conditions Platforms Must Follow Under the FTC Policy

While the FTC is offering flexibility, the policy is far from a free pass. Platforms must comply with several strict conditions when using age verification technologies, including:
  • Using collected data strictly for age verification purposes
  • Deleting the information promptly after verification
  • Implementing strong security safeguards
  • Providing clear transparency to parents and children
  • Sharing data only with trusted third-party providers capable of maintaining confidentiality
  • Ensuring the verification method produces reasonably accurate results
Importantly, the FTC emphasized that operators must still comply with all other COPPA requirements when handling children’s data. This structured approach suggests the agency is trying to promote responsible innovation rather than loosen privacy protections.

A Practical but Transitional Regulatory Shift

The FTC also confirmed that it plans to review the COPPA Rule to formally address age verification technologies, indicating that this policy statement may be a transitional step toward broader regulatory updates.

From an industry perspective, the decision removes a key barrier that has slowed adoption of modern child-safety controls. Many platforms have hesitated to deploy stronger verification tools due to fears of enforcement risk. At the same time, privacy advocates are likely to closely monitor how companies implement these technologies—particularly around biometric data and third-party verification vendors.

Ultimately, the FTC’s message is clear: identifying children online is becoming a regulatory expectation, not just a technical option. As digital environments grow more complex, age verification technologies are increasingly positioned as a foundational layer of online safety. The challenge ahead will be ensuring these tools protect children without creating new privacy risks, a balance regulators and technology providers will need to navigate carefully in the coming years.