
Addressing the Edge Security Paradox

The paradox of edge security describes how technologies designed to strengthen network defenses can also create new vulnerabilities. Edge devices improve performance and support localized threat detection by processing data closer to its source, yet modern enterprise environments often operate thousands of distributed endpoints. This rapid expansion of edge infrastructure increases the number of systems…

The post Addressing the Edge Security Paradox appeared first on Security Boulevard.

Managed OAuth for Access: make internal apps agent-ready in one click

We have thousands of internal apps at Cloudflare. Some are things we’ve built ourselves, others are self-hosted instances of software built by others. They range from business-critical apps nearly every person uses, to side projects and prototypes.

All of these apps are protected by Cloudflare Access. But when we started using and building agents — particularly for uses beyond writing code — we hit a wall. People could access apps behind Access, but their agents couldn’t.

Access sits in front of internal apps. You define a policy, and then Access will send unauthenticated users to a login page to choose how to authenticate. 

Example of a Cloudflare Access login page

This flow worked great for humans. But all agents could see was a redirect to a login page that they couldn’t act on.

Providing agents with access to internal app data is so vital that we immediately implemented a stopgap for our own internal use. We modified OpenCode’s web fetch tool such that for specific domains, it triggered the cloudflared CLI to open an authorization flow to fetch a JWT (JSON Web Token). By appending this token to requests, we enabled secure, immediate access to our internal ecosystem.

While this solution was a temporary answer to our own dilemma, today we’re retiring this workaround and fixing this problem for everyone. Now in open beta, every Access application supports managed OAuth. One click to enable it for an Access app, and agents that speak OAuth 2.0 can easily discover how to authenticate (RFC 9728), send the user through the auth flow, and receive back an authorization token (the same JWT from our initial solution). 

Now, the flow works smoothly for both humans and agents. Cloudflare Access has a generous free tier. And building off our newly-introduced Organizations beta, you’ll soon be able to bridge identity providers across Cloudflare accounts too.

How managed OAuth works

For a given internal app protected by Cloudflare Access, you enable managed OAuth in one click:

Once managed OAuth is enabled, Cloudflare Access acts as the authorization server. It returns the www-authenticate header, telling unauthorized agents where to look up information on how to get an authorization token. They find this at https://<your-app-domain>/.well-known/oauth-authorization-server. Equipped with that direction, agents can just follow OAuth standards: 

  1. The agent dynamically registers itself as a client (a process known as Dynamic Client Registration — RFC 7591), 

  2. The agent sends the human through a PKCE (Proof Key for Code Exchange) authorization flow (RFC 7636)

  3. The human authorizes access, which grants a token to the agent that it can use to make authenticated requests on behalf of the user
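As a rough sketch, the discovery half of this flow (recognizing the challenge and locating the metadata document) could look like the following in an agent's tooling. The helper names and the simplified header check are illustrative, not a Cloudflare API:

```typescript
// Derive the metadata URL for a protected resource, per the location
// described above. (Illustrative; a production client should also honor
// RFC 9728's resource_metadata parameter when the server provides one.)
function metadataUrl(resourceUrl: string): string {
  const { origin } = new URL(resourceUrl);
  return `${origin}/.well-known/oauth-authorization-server`;
}

// Decide whether a response is an OAuth challenge the agent can act on.
function needsOAuth(status: number, wwwAuthenticate: string | null): boolean {
  return (
    (status === 401 || status === 403) &&
    wwwAuthenticate !== null &&
    wwwAuthenticate.toLowerCase().startsWith("bearer")
  );
}
```

From there, the metadata document tells the agent where to register (step 1) and where to send the user for the PKCE flow (step 2).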

Here’s what the authorization flow looks like:

If this authorization flow looks familiar, that’s because it’s what the Model Context Protocol (MCP) uses. We originally built support for this into our MCP server portals product, which proxies and controls access to many MCP servers, to allow the portal to act as the OAuth server. Now, we’re bringing this to all Access apps so agents can access not only MCP servers that require authorization, but also web pages, web apps, and REST APIs.

Mass upgrading your internal apps to be agent-ready

Upgrading the long tail of internal software to work with agents is a daunting task. In principle, to be agent-ready, every internal and external app would have discoverable APIs, a CLI, a well-crafted MCP server, and support for the many emerging agent standards.

AI adoption is not something that can wait for everything to be retrofitted. Most organizations have a significant backlog of apps built over many years. And many internal “apps” work great when treated by agents as simple websites. For something like an internal wiki, all you really need is to enable Markdown for Agents, turn on managed OAuth, and agents have what they need to read protected content.

To make the basics work across the widest set of internal applications, we use managed OAuth. By putting Access in front of your legacy internal apps, you make them agent-ready instantly. No code changes and no retrofitting: just immediate compatibility.

It’s the user’s agent. No service accounts and tokens needed

Agents need to act on behalf of users inside organizations. One of the biggest anti-patterns we’ve seen is people provisioning service accounts for their agents and MCP servers, authenticated using static credentials. These have their place in simple use cases and quick prototypes, and Cloudflare Access supports service tokens for this purpose.

But the service account approach quickly shows its limits when fine-grained access controls and audit logs are required. We believe that every action an agent performs must be easily attributable to the human who initiated it, and that an agent must only be able to perform actions that its human operator is likewise authorized to do. Service accounts and static credentials become points at which attribution is lost. Agents that launder all of their actions through a service account are susceptible to confused deputy problems and result in audit logs that appear to originate from the agent itself.

For security and accountability, agents must use security primitives capable of expressing this user–agent relationship. OAuth is the industry standard protocol for requesting and delegating access to third parties. It gives agents a way to talk to your APIs on behalf of the user, with a token scoped to the user’s identity, so that access controls correctly apply and audit logs correctly attribute actions to the end user.
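In practice, attribution comes down to the claims inside that token. A minimal sketch of reading them is below; it performs no signature verification, and the claim names used in the test (`sub`, `email`) are examples whose presence depends entirely on the authorization server:

```typescript
// Decode a JWT's payload to inspect its claims. Illustrative only: a real
// consumer must verify the signature before trusting any claim.
function decodeJwtPayload(jwt: string): Record<string, unknown> {
  const [, payload] = jwt.split(".");
  if (!payload) throw new Error("not a JWT");
  // base64url -> base64, then decode the JSON payload
  const b64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}
```

With a user-scoped token, those claims are what lets audit logs name the human rather than a shared service account.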

Standards for the win: how agents can and should adopt RFC 9728 in their web fetch tools

RFC 9728 is the OAuth standard that makes it possible for agents to discover where and how to authenticate. It standardizes where this information lives and how it’s structured. This RFC became official in April 2025 and was quickly adopted by the Model Context Protocol (MCP), which now requires that both MCP servers and clients support it.

But outside of MCP, agents should adopt RFC 9728 for an even more essential use case: making requests to web pages that are protected behind OAuth and making requests to plain old REST APIs.

Most agents have a tool for making basic HTTP requests to web pages. This is commonly called the “web fetch” tool. It’s similar to using the fetch() API in JavaScript, often with some additional post-processing on the response. It’s what lets you paste a URL into your agent and have your agent go look up the content.

Today, most agents’ web fetch tools won’t do anything with the www-authenticate header that a URL returns. The underlying model might choose to introspect the response headers and figure this out on its own, but the tool itself does not follow www-authenticate, look up /.well-known/oauth-authorization-server, and act as the client in the OAuth flow. But it can, and we strongly believe it should! Agents already do this to act as remote MCP clients.

To demonstrate this, we’ve put up a draft pull request that adapts the web fetch tool in OpenCode to show this in action. Before making a request, the adapted tool first checks whether it already has credentials; if it does, it uses them to make the initial request. If the tool gets back a 401 or a 403 with a www-authenticate header, it asks the user for consent to be sent through the server’s OAuth flow.
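The retry logic described above can be sketched roughly like this; the function shape and callback names are hypothetical, not OpenCode's actual API:

```typescript
type FetchResult = { status: number; headers: Map<string, string>; body: string };
type FetchLike = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<FetchResult>;

// Try cached credentials first; on a 401/403 carrying a www-authenticate
// header, run the OAuth flow (after user consent) and retry once.
async function fetchWithOAuth(
  url: string,
  fetchImpl: FetchLike,
  getCachedToken: (url: string) => string | undefined,
  runOAuthFlow: (url: string) => Promise<string>, // prompts the user, returns a token
): Promise<string> {
  const cached = getCachedToken(url);
  const first = await fetchImpl(url, {
    headers: cached ? { Authorization: `Bearer ${cached}` } : {},
  });
  if (first.status !== 401 && first.status !== 403) return first.body;
  if (!first.headers.get("www-authenticate")) {
    throw new Error(`Request failed with status ${first.status}`);
  }
  const token = await runOAuthFlow(url);
  const retry = await fetchImpl(url, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return retry.body;
}
```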

Here’s how that OAuth flow works. If you give the agent a URL that is protected by OAuth and complies with RFC 9728, the agent prompts the human for consent to open the authorization flow:

…sending the human to the login page:

…and then to a consent dialog that prompts the human to grant access to the agent:

Once the human grants access to the agent, the agent uses the token it has received to make an authenticated request:

Any agent from Codex to Claude Code to Goose and beyond can implement this, and there’s nothing bespoke to Cloudflare. It’s all built using OAuth standards.

We think this flow is powerful, and that supporting RFC 9728 can help agents with more than just making basic web fetch requests. If a REST API supports RFC 9728 (and the agent does too), the agent has everything it needs to start making authenticated requests against that API. If the REST API supports RFC 9727, then the client can discover a catalog of REST API endpoints on its own, and do even more without additional documentation, agent skills, MCP servers or CLIs. 

Each of these play important roles with agents — Cloudflare itself provides an MCP server for the Cloudflare API (built using Code Mode), Wrangler CLI, and Agent Skills, and a Plugin. But supporting RFC 9728 helps ensure that even when none of these are preinstalled, agents have a clear path forward. If the agent has a sandbox to execute untrusted code, it can just write and execute code that calls the API that the human has granted it access to. We’re working on supporting this for Cloudflare’s own APIs, to help your agents understand how to use Cloudflare.

Coming soon: share one identity provider (IdP) across many Cloudflare accounts

At Cloudflare, our own internal apps are deployed across dozens of different Cloudflare accounts, which are all part of an Organization — a newly introduced way for administrators to manage users and configurations, and view analytics, across many Cloudflare accounts. We have had the same challenge as many of our customers: each Cloudflare account has to configure an IdP separately so that Cloudflare Access uses our identity provider. Consistency across an organization is critical: you don’t want one Cloudflare account to inadvertently allow people to sign in with just a one-time PIN rather than requiring that they authenticate via single sign-on (SSO).

To solve this, we’re currently working on making it possible to share an identity provider across Cloudflare accounts, giving organizations a way to designate a single primary IdP for use across every account in their organization.

As new Cloudflare accounts are created within an organization, administrators will be able to configure a bridge to the primary IdP with a single click, so Access applications across accounts can be protected by one identity provider. This removes the need to manually configure IdPs account by account, which is a process that doesn’t scale for organizations with many teams and individuals each operating their own accounts.

What’s next

Across companies, people in every role and business function are now using agents to build internal apps, and expect their agents to be able to access context from internal apps. We are responding to this step function growth in internal software development by making the Workers Platform and Cloudflare One work better together — so that it is easier to build and secure internal apps on Cloudflare. 

Expect more to come soon, including:

  • More direct integration between Cloudflare Access and Cloudflare Workers, without the need to validate JWTs or remember which of many routes a particular Worker is exposed on.

  • wrangler dev --tunnel — an easy way to expose your local development server to others when you’re building something new, and want to share it with others before deploying

  • A CLI for Cloudflare Access and the entire Cloudflare API

  • More announcements to come during Agents Week 2026

Enable Managed OAuth for your internal apps behind Cloudflare Access

Managed OAuth is now available, in open beta, to all Cloudflare customers. Head over to the Cloudflare dashboard to enable it for your Access applications. You can use it for any internal app, whether it’s one built on Cloudflare Workers, or hosted elsewhere. And if you haven’t built internal apps on the Workers Platform yet — it’s the fastest way for your team to go from zero to deployed (and protected) in production.

RSA Launches ID Plus Sovereign Deployment for Organizations That Can’t Afford Identity Downtime

RSA opened RSAC 2026 with a new deployment model for its ID Plus identity platform, aimed squarely at government agencies, financial services firms, and critical infrastructure operators that need identity security to work even when everything else fails. RSA ID Plus Sovereign Deployment is a “deploy anywhere” identity and access management solution that gives organizations…

The post RSA Launches ID Plus Sovereign Deployment for Organizations That Can’t Afford Identity Downtime appeared first on Security Boulevard.

From reactive to proactive: closing the phishing gap with LLMs

Email security has always been defined by impermanence. It is a perpetual call-and-response arms race, where defenses are only as strong as the last bypass discovered and attackers iterate relentlessly for even marginal gains. Every control we deploy eventually becomes yesterday’s solution.

What makes this challenge especially difficult is that our biggest weaknesses are, by definition, invisible.

This problem is best illustrated by a classic example from World War II. Mathematician Abraham Wald was tasked with helping Allied engineers decide where to reinforce bomber aircraft. Engineers initially focused on the bullet holes visible on planes returning from missions. Wald pointed out the flaw: they were reinforcing the areas where planes could already take damage and survive. The true vulnerabilities were on the planes that never came back.

Email security faces an identical hurdle: our detection gaps are unseen. By integrating LLMs, we advance email phishing protection and move from reactive to proactive detection improvement.

The limits of reactive defense

Traditional email security systems improve primarily through user-reported misses. For example, if we marked a spam message as clean, customers can send us the original EML, which flows through our pipelines so our analysts can review it and update our models. This feedback loop is necessary and valuable, but it is inherently reactive. It depends on someone noticing a failure after the fact and taking the time to report it.

That means detection improvements are often driven by what attackers already succeeded at, rather than by what they are about to exploit next.

To close this gap, we need a way to systematically observe the “planes that didn’t make it back.”

Mapping the threat landscape with LLMs

Large Language Models (LLMs) hit the mainstream market in late 2022 and early 2023, fundamentally changing how we process unstructured data. At their core, LLMs use deep learning and massive datasets to predict the next token in a sequence, allowing them to understand context and nuance. They are particularly well-suited for email security because they can read natural language and characterize complex concepts (like intent, urgency, and deception) across millions of messages.

Every day, Cloudflare processes millions of unwanted emails. Historically, it was not feasible to deeply characterize each message beyond coarse classifications. Manually mapping emails to nuanced threat vectors simply did not scale. 

Now, Cloudflare has integrated LLMs into our email security tools to identify threats before they strike. By using the power of LLMs, as we’ll describe below, we can finally see a clear and comprehensive picture of the evolving threat landscape.

Our LLM-driven categorization shows clear spikes and persistent trends across several distinct categories, including "PrizeNotification" and "SalesOutreach".

These LLM-generated tags provide Cloudflare analysts with high-fidelity signals in near real time. Tasks that previously required hours of manual investigation and complex querying can now be surfaced automatically, with relevant context attached. This directly increases the velocity at which we can build new targeted Machine Learning models or retrain existing ones to address emerging behaviors.

Because Cloudflare operates at global Internet scale, we can gather these insights earlier than ever before, often before a new technique becomes widely visible through customer-reported misses.

The Sales Outreach threat

One of the clearest patterns we’ve identified using this new intelligence is the continued persistence of malicious messages structured to look like Sales Outreach-style phishing. These emails are designed to mimic legitimate B2B communication, often presenting opportunities to purchase or receive "special deals" on unique items or services, to lure targets into clicking malicious links or providing credentials.

Once LLM categorization surfaced Sales Outreach as a dominant vector, we moved from broad visibility to targeted data collection. 

Using LLM-generated tags, we began systematically isolating messages that exhibited Sales Outreach characteristics across our global dataset. This produced a continuously growing, high-precision corpus of real-world examples, including confirmed malicious messages as well as borderline cases that traditional systems struggled to classify. From this corpus, we built a dedicated training pipeline.

First, we curated training data by grouping messages based on shared linguistic and structural traits identified by the LLMs. These traits included persuasive framing, manufactured urgency, transactional language, and subtle forms of social proof.

Next, we focused feature extraction on sentiment and intent rather than static indicators. The model learns how requests are phrased, how credibility is established, and how calls to action are embedded within otherwise normal business conversations.

Finally, we trained a purpose-built sentiment analysis model optimized specifically for Sales Outreach behavior. This avoided overloading a general phishing classifier and allowed us to tune precision and recall for this threat class.
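As a purely hypothetical sketch of the curation step, bucketing messages by their LLM-assigned trait tags so each bucket can feed training might look like the following (the types and function are invented for illustration):

```typescript
interface TaggedMessage {
  id: string;
  tags: string[]; // trait tags assigned by the LLM, e.g. "SalesOutreach"
}

// Group messages by trait tag; a message with several tags lands in
// several buckets, so each trait's corpus stays complete.
function groupByTrait(messages: TaggedMessage[]): Map<string, TaggedMessage[]> {
  const groups = new Map<string, TaggedMessage[]>();
  for (const msg of messages) {
    for (const tag of msg.tags) {
      const bucket = groups.get(tag) ?? [];
      bucket.push(msg);
      groups.set(tag, bucket);
    }
  }
  return groups;
}
```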

Turning language into enforcement

The output of this model is a risk score that reflects how closely a message aligns with known Sales Outreach attack patterns. That score is evaluated alongside existing signals such as sender reputation, link behavior, and historical context to determine whether a message should be blocked, quarantined, or allowed.
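To make the idea concrete, here is a deliberately simplified sketch of folding a model score and other signals into a verdict. The weights, thresholds, and signal set are invented for illustration and are not Cloudflare's actual logic:

```typescript
interface MessageSignals {
  salesOutreachScore: number; // sentiment-model output, 0..1
  senderReputation: number; // 0 (bad) .. 1 (good)
  suspiciousLink: boolean;
}

type Verdict = "allow" | "quarantine" | "block";

// Combine signals into a single risk value, then map it to an action.
// All numbers here are made up for the sake of the example.
function verdict(s: MessageSignals): Verdict {
  const risk =
    s.salesOutreachScore * 0.6 +
    (1 - s.senderReputation) * 0.3 +
    (s.suspiciousLink ? 0.1 : 0);
  if (risk >= 0.8) return "block";
  if (risk >= 0.5) return "quarantine";
  return "allow";
}
```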

This process is continuous. As attackers adapt their language, newly observed messages are fed back into the pipeline and used to refine the model without waiting for large volumes of user-reported misses. LLMs act as the discovery layer by surfacing new linguistic variants, while the specialized model performs fast and scalable enforcement.

This is what an all-out offensive looks like in practice. It is a feedback loop where large-scale language understanding drives focused, high-precision detection. The result is earlier intervention against a threat class that thrives on subtlety, and fewer malicious sales emails reaching the inbox.

Results of the undertaking

The visibility unlocked by LLM-driven mapping fundamentally changed how we improve detections. Instead of waiting for attackers to succeed and relying on downstream user reports, we gained the ability to identify systemic gaps earlier and address them at the source. This shift from reactive remediation to proactive reinforcement translated directly into measurable customer impact.

The most immediate signal of success was a marked reduction in customer friction. Sales Outreach–related phishing has historically generated a high volume of user-reported misses, largely because these messages closely resemble legitimate business communication and often evade traditional rule-based or reputation-driven systems. As our targeted models came online and were continuously refined using LLM-derived insights, fewer of these messages reached end users in the first place.

The data reflects this change clearly. Average daily Sales Outreach submissions — messages that we labeled as clean but were in fact Sales Outreach phishing emails, flagged by end users — dropped from 965 in Q3 2025 to 769 in Q4 2025, representing a 20.4% reduction in reported misses in a single quarter.

This reduction is not just a metric improvement; it represents hundreds fewer disruptive moments per day for security teams and end users alike. Each avoided submission is a phishing attempt that was stopped before it could erode trust, consume analyst time, or force a user to make a security judgment mid-workflow. We have seen this trend continue in Q1 of 2026, with average daily submissions decreasing by two-thirds.

In effect, LLMs allowed us to “see” the planes that never made it back. By illuminating previously invisible failure modes, we were able to reinforce defenses precisely where attackers were concentrating their efforts. The result is a system that improves not only detection rates, but also the day-to-day experience of the people relying on it.

The next front in the arms race

Our work with LLMs is just beginning. 

To stay ahead of the next evolution of attacks, we are moving toward a model of total environmental awareness by refining LLM specificity to extract forensic-level detail from every interaction. This granular mapping allows us to identify specific tactical signatures rather than relying on broad labels. 

Simultaneously, we are deploying specialized machine learning models purpose-built to hunt for emerging, high-obfuscation vectors at the "fringes" that traditional defenses miss. By leveraging this real-time LLM data as a strategic compass, we can shift our human expertise away from known noise and toward the critical gaps where the next strike is likely to land.

By illuminating the "planes that didn't make it back," we are doing more than just reacting to missed email; we are systematically narrowing the battlefield. In the email arms race, the advantage belongs to the side that can see the invisible first.

Ready to enhance your email security?

We provide all organizations (whether a Cloudflare customer or not) with free access to our Retro Scan tool, allowing them to use our predictive AI models to scan existing inbox messages in Microsoft 365. 

Retro Scan will detect and highlight any threats found, enabling organizations to remediate them directly in their email accounts. With these insights, organizations can implement further controls, either using Cloudflare Email Security or their preferred solution, to prevent similar threats from reaching their inboxes in the future.

If you are interested in how Cloudflare can help secure your inboxes, sign up for a phishing risk assessment here.

Prompt Control is the New Front Door of Application Security 


Discover how AI-driven systems are redefining application security. Research highlights the importance of focusing on inference layers, prompt control, and token management to effectively secure AI inference services and minimize risks associated with cost, latency, and data leakage.

The post Prompt Control is the New Front Door of Application Security  appeared first on Security Boulevard.

15 years of helping build a better Internet: a look back at Birthday Week 2025

Cloudflare launched fifteen years ago with a mission to help build a better Internet. Over that time the Internet has changed and so has what it needs from teams like ours. In this year’s Founder’s Letter, Matthew and Michelle discussed the role we have played in the evolution of the Internet, from helping encryption grow from 10% to 95% of Internet traffic to more recent challenges like how people consume content.

We spend Birthday Week every year releasing the products and capabilities we believe the Internet needs at this moment and around the corner. Previous Birthday Weeks saw the launch of the IPv6 gateway in 2011, Universal SSL in 2014, Cloudflare Workers and unmetered DDoS protection in 2017, Cloudflare Radar in 2020, R2 Object Storage with zero egress fees in 2021, post-quantum upgrades for Cloudflare Tunnel in 2022, and Workers AI and Encrypted Client Hello in 2023. And those are just a sample of the launches.

This year’s themes focused on helping prepare the Internet for a new model of monetization that encourages great content to be published, fostering more opportunities to build community both inside and outside of Cloudflare, and evergreen missions like making more features available to everyone and constantly improving the speed and security of what we offer.

We shipped a lot of new things this year. In case you missed the dozens of blog posts, here is a breakdown of everything we announced during Birthday Week 2025. 

Monday, September 22

| What | In a sentence … |
| --- | --- |
| Help build the future: announcing Cloudflare’s goal to hire 1,111 interns in 2026 | To invest in the next generation of builders, we announced our most ambitious intern program yet with a goal to hire 1,111 interns in 2026. |
| Supporting the future of the open web: Cloudflare is sponsoring Ladybird and Omarchy | To support a diverse and open Internet, we are now sponsoring Ladybird (an independent browser) and Omarchy (an open-source Linux distribution and developer environment). |
| Come build with us: Cloudflare’s new hubs for startups | We are opening our office doors in four major cities (San Francisco, Austin, London, and Lisbon) as free hubs for startups to collaborate and connect with the builder community. |
| Free access to Cloudflare developer services for non-profit and civil society organizations | We extended our Cloudflare for Startups program to non-profits and public-interest organizations, offering free credits for our developer tools. |
| Introducing free access to Cloudflare developer features for students | We are removing cost as a barrier for the next generation by giving students with .edu emails 12 months of free access to our paid developer platform features. |
| Cap’n Web: a new RPC system for browsers and web servers | We open-sourced Cap'n Web, a new JavaScript-native RPC protocol that simplifies powerful, schema-free communication for web applications. |
| A lookback at Workers Launchpad and a warm welcome to Cohort #6 | We announced Cohort #6 of the Workers Launchpad, our accelerator program for startups building on Cloudflare. |

Tuesday, September 23

| What | In a sentence … |
| --- | --- |
| Building unique, per-customer defenses against advanced bot threats in the AI era | New anomaly detection system that uses machine learning trained on each zone to build defenses against AI-driven bot attacks. |
| Why Cloudflare, Netlify, and Webflow are collaborating to support Open Source tools | To support the open web, we joined forces with Webflow to sponsor Astro, and with Netlify to sponsor TanStack. |
| Launching the x402 Foundation with Coinbase, and support for x402 transactions | We are partnering with Coinbase to create the x402 Foundation, encouraging the adoption of the x402 protocol to allow clients and services to exchange value on the web using a common language. |
| Helping protect journalists and local news from AI crawlers with Project Galileo | We are extending our free Bot Management and AI Crawl Control services to journalists and news organizations through Project Galileo. |
| Cloudflare Confidence Scorecards - making AI safer for the Internet | Automated evaluation of AI and SaaS tools, helping organizations to embrace AI without compromising security. |

Wednesday, September 24

| What | In a sentence … |
| --- | --- |
| Automatically Secure: how we upgraded 6,000,000 domains by default | Our Automatic SSL/TLS system has upgraded over 6 million domains to more secure encryption modes by default and will soon automatically enable post-quantum connections. |
| Giving users choice with Cloudflare’s new Content Signals Policy | The Content Signals Policy is a new standard for robots.txt that lets creators express clear preferences for how AI can use their content. |
| To build a better Internet in the age of AI, we need responsible AI bot principles | A proposed set of responsible AI bot principles to start a conversation around transparency and respect for content creators' preferences. |
| Securing data in SaaS to SaaS applications | New security tools to give companies visibility and control over data flowing between SaaS applications. |
| Securing today for the quantum future: WARP client now supports post-quantum cryptography (PQC) | Cloudflare’s WARP client now supports post-quantum cryptography, providing quantum-resistant encryption for traffic. |
| A simpler path to a safer Internet: an update to our CSAM scanning tool | We made our CSAM Scanning Tool easier to adopt by removing the need to create and provide unique credentials, helping more site owners protect their platforms. |

Thursday, September 25

| What | In a sentence … |
| --- | --- |
| Every Cloudflare feature, available to everyone | We are making every Cloudflare feature, starting with Single Sign On (SSO), available for anyone to purchase on any plan. |
| Cloudflare's developer platform keeps getting better, faster, and more powerful | Updates across Workers and beyond for a more powerful developer platform – such as support for larger and more concurrent Container images, support for external models from OpenAI and Anthropic in AI Search (previously AutoRAG), and more. |
| Partnering to make full-stack fast: deploy PlanetScale databases directly from Workers | You can now connect Cloudflare Workers to PlanetScale databases directly, with connections automatically optimized by Hyperdrive. |
| Announcing the Cloudflare Data Platform | A complete solution for ingesting, storing, and querying analytical data tables using open standards like Apache Iceberg. |
| R2 SQL: a deep dive into our new distributed query engine | A technical deep dive on R2 SQL, a serverless query engine for petabyte-scale datasets in R2. |
| Safe in the sandbox: security hardening for Cloudflare Workers | A deep-dive into how we’ve hardened the Workers runtime with new defense-in-depth security measures, including V8 sandboxes and hardware-assisted memory protection keys. |
| Choice: the path to AI sovereignty | To champion AI sovereignty, we've added locally-developed open-source models from India, Japan, and Southeast Asia to our Workers AI platform. |
| Announcing Cloudflare Email Service’s private beta | We announced the Cloudflare Email Service private beta, allowing developers to reliably send and receive transactional emails directly from Cloudflare Workers. |
| A year of improving Node.js compatibility in Cloudflare Workers | There are hundreds of new Node.js APIs now available that make it easier to run existing Node.js code on our platform. |

Friday, September 26

What In a sentence …
Cloudflare just got faster and more secure, powered by Rust We have re-engineered our core proxy with a new modular, Rust-based architecture, cutting median response time by 10ms for millions.
Introducing Observatory and Smart Shield New monitoring tools in the Cloudflare dashboard that provide actionable recommendations and one-click fixes for performance issues.
Monitoring AS-SETs and why they matter Cloudflare Radar now includes Internet Routing Registry (IRR) data, allowing network operators to monitor AS-SETs to help prevent route leaks.
An AI Index for all our customers We announced the private beta of AI Index, a new service that creates an AI-optimized search index for your domain that you control and can monetize.
Introducing new regional Internet traffic and Certificate Transparency insights on Cloudflare Radar Sub-national traffic insights and Certificate Transparency dashboards for TLS monitoring.
Eliminating Cold Starts 2: shard and conquer We have reduced Workers cold starts by 10x by implementing a new "worker sharding" system that routes requests to already-loaded Workers.
Network performance update: Birthday Week 2025 The TCP Connection Time (Trimean) graph shows that we have the fastest TCP connection time in 40% of measured ISPs, and the fastest across the top networks.
How Cloudflare uses performance data to make the world’s fastest global network even faster We are using our network's vast performance data to tune congestion control algorithms, improving speeds by an average of 10% for QUIC traffic.
Code Mode: the better way to use MCP It turns out we've all been using MCP wrong. Most agents today use MCP by exposing the "tools" directly to the LLM. We tried something different: Convert the MCP tools into a TypeScript API, and then ask an LLM to write code that calls that API. The results are striking.

Come build with us!

Helping build a better Internet has always been about more than just technology. As our announcements about interns and working together in our offices show, the community of people behind a better Internet matters to its future. This week, we rolled out our most ambitious set of initiatives ever to support the builders, founders, and students who are creating the future.

For founders and startups, we are thrilled to welcome Cohort #6 to the Workers Launchpad, our accelerator program that gives early-stage companies the resources they need to scale. But we’re not stopping there. We’re opening our doors, literally, by launching new physical hubs for startups in our San Francisco, Austin, London, and Lisbon offices. These spaces will provide access to mentorship, resources, and a community of fellow builders.

We’re also investing in the next generation of talent. We announced free access to the Cloudflare developer platform for all students, giving them the tools to learn and experiment without limits. To provide a path from the classroom to the industry, we also announced our goal to hire 1,111 interns in 2026 — our biggest commitment yet to fostering future tech leaders.

And because a better Internet is for everyone, we’re extending our support to non-profits and public-interest organizations, offering them free access to our production-grade developer tools, so they can focus on their missions.

Whether you're a founder with a big idea, a student just getting started, or a team working for a cause you believe in, we want to help you succeed.

Until next year

Thank you to our customers, our community, and the millions of developers who trust us to help them build, secure, and accelerate the Internet. Your curiosity and feedback drive our innovation.

It’s been an incredible 15 years. And as always, we’re just getting started!

(Watch the full conversation about what we launched during Birthday Week 2025 on our show, ThisWeekinNET.com.)

Everything you need to know about NIST’s new guidance in “SP 1800-35: Implementing a Zero Trust Architecture”

For decades, the United States National Institute of Standards and Technology (NIST) has been guiding industry efforts through the many publications in its Computer Security Resource Center. NIST has played an especially important role in the adoption of Zero Trust architecture, through its series of publications that began with NIST SP 800-207: Zero Trust Architecture, released in 2020.

NIST has released another Special Publication in this series, SP 1800-35, titled "Implementing a Zero Trust Architecture (ZTA)", which aims to provide practical steps and best practices for deploying ZTA across various environments. NIST’s publications about ZTA have been extremely influential across the industry, but are often lengthy and highly detailed, so this blog post provides a short, easier-to-read summary of NIST’s latest guidance on ZTA.

And so, in this blog post:

  • We summarize the key items you need to know about this new NIST publication, which presents a reference architecture for Zero Trust Architecture (ZTA) along with a series of “Builds” that demonstrate how different products from various vendors can be combined to construct a ZTA that complies with the reference architecture.

  • We show how Cloudflare’s Zero Trust product suite can be integrated with offerings from other vendors to support a Zero Trust Architecture that maps to the NIST’s reference architecture.

  • We highlight a few key features of Cloudflare’s Zero Trust platform that are especially valuable to customers seeking compliance with NIST’s ZTA reference architecture, including compliance with FedRAMP and new post-quantum cryptography standards.

Let’s dive into NIST’s special publication!

Overview of SP 1800-35

In SP 1800-35, NIST reminds us that:

A zero-trust architecture (ZTA) enables secure authorized access to assets — machines, applications and services running on them, and associated data and resources — whether located on-premises or in the cloud, for a hybrid workforce and partners based on an organization’s defined access policy.

NIST uses the term Subject to refer to entities (e.g. employees, developers, devices) that require access to Resources (e.g. computers, databases, servers, applications). SP 1800-35 focuses on developing and demonstrating various ZTA implementations that allow Subjects to access Resources. Specifically, the reference architecture in SP 1800-35 focuses mainly on EIG, or “Enhanced Identity Governance”, a specific approach to Zero Trust Architecture, which NIST defines in SP 800-207 as follows:

For [the EIG] approach, enterprise resource access policies are based on identity and assigned attributes. 

The primary requirement for [R]esource access is based on the access privileges granted to the given [S]ubject. Other factors such as device used, asset status, and environmental factors may alter the final confidence level calculation … or tailor the result in some way, such as granting only partial access to a given [Resource] based on network location.

Individual [R]esources or [policy enforcement points (PEP)] must have a way to forward requests to a policy engine service or authenticate the [S]ubject and approve the request before granting access.

While there are other approaches to ZTA mentioned in the original NIST SP 800-207, we omit those here because SP 1800-35 focuses mostly on EIG.
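A minimal sketch can make the EIG approach concrete. The Python below is purely illustrative (the Subject fields, threshold, and confidence adjustments are assumptions, not taken from NIST or any vendor): identity attributes drive the primary allow/deny decision, while other factors such as device posture only adjust the confidence level or tailor the result.

```python
from dataclasses import dataclass, field

@dataclass
class Subject:
    """An entity (user, device, service) requesting access. Fields are illustrative."""
    identity: str
    groups: set = field(default_factory=set)
    device_compliant: bool = True

def eig_decision(subject: Subject, required_group: str,
                 base_confidence: float = 1.0) -> str:
    """EIG-style decision sketch: assigned identity attributes are the
    primary requirement; environmental factors only alter the confidence
    level or tailor the result (e.g. partial access)."""
    if required_group not in subject.groups:
        return "deny"                      # primary check: access privileges granted to the Subject
    confidence = base_confidence
    if not subject.device_compliant:
        confidence -= 0.5                  # a device-posture factor lowers confidence
    return "allow" if confidence >= 0.8 else "partial"

alice = Subject("alice", groups={"engineering"})
print(eig_decision(alice, "engineering"))          # allow
print(eig_decision(Subject("bob"), "engineering")) # deny
```

The 0.8 threshold stands in for the "final confidence level calculation" NIST describes; a real policy engine would weigh many more signals.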

The ZTA reference architecture from SP 1800-35 presents the EIG approach as a set of logical components, shown in the figure below. Each component corresponds to logical functionality rather than to a specific physical (hardware or software) component or a product sold by a single vendor.

Figure 1: General ZTA Reference Architecture. Source: NIST, Special Publication 1800-35, "Implementing a Zero Trust Architecture (ZTA)”, 2025.

The logical components in the reference architecture are all related to the implementation of policy. Policy is crucial for ZTA because the whole point of a ZTA is to apply policies that determine who has access to what, when and under what conditions.

The core components of the reference architecture are as follows:

Policy Enforcement Point (PEP)

The PEP protects the “trust zones” that host enterprise Resources, and handles enabling, monitoring, and eventually terminating connections between Subjects and Resources. You can think of the PEP as the dataplane that supports the Subject’s access to the Resources.

Policy Engine (PE)

The PE handles the ultimate decision to grant, deny, or revoke access to a Resource for a given Subject, and calculates the trust scores/confidence levels and ultimate access decisions based on enterprise policy and information from supporting components.

Policy Administrator (PA)

The PA executes the PE’s policy decision by sending commands to the PEP to establish and terminate the communications path between the Subject and the Resource.

Policy Decision Point (PDP)

The PDP is where the decision as to whether or not to permit a Subject to access a Resource is made. The PDP includes the Policy Engine (PE) and the Policy Administrator (PA). You can think of the PDP as the control plane that controls the Subject’s access to the Resources.

The PDP operates on inputs from Policy Information Points (PIPs), supporting components that provide critical data and policy rules to the PDP.

Policy Information Point (PIP)

The PIPs provide various types of telemetry and other information needed for the PDP to make informed access decisions. Some PIPs include:

  • ICAM, or Identity, Credential, and Access Management, covering user authentication, single sign-on, user groups, and access control features that are typically offered by Identity Providers (IdPs) like Okta, Azure AD, or Ping Identity.
  • Endpoint security includes endpoint detection and response (EDR) or endpoint protection platform (EPP) products that protect end-user devices like laptops and mobile devices. An EPP primarily focuses on preventing known threats using features like antivirus protection, while an EDR actively detects and responds to threats that may have already breached initial defenses using forensics, behavioral analysis, and incident response tools. EDR and EPP products are offered by vendors like CrowdStrike, Microsoft, SentinelOne, and more.
  • Security Analytics and Data Security products use data collection, aggregation, and analysis to discover security threats in network traffic, user behavior, and other system data. Examples include CrowdStrike, Datadog, IBM QRadar, Microsoft Sentinel, New Relic, Splunk, and more.


NIST’s figure might suggest that supporting components in the PIP are mere plug-ins responding in real-time to the PDP.  However, for many vendors, the ICAM, EDR/EPP, security analytics, and data security PIPs often represent complex and distributed infrastructures.

Crawl or run, but don’t walk

Next, SP 1800-35 introduces two more detailed reference architectures: the “Crawl Phase” and the “Run Phase”. The “Run Phase” corresponds to the reference architecture shown in the figure above. The “Crawl Phase” is a simplified version of this reference architecture that deals only with protecting on-premise Resources, and omits cloud Resources. Both phases focus on the Enhanced Identity Governance approach to ZTA, as defined above. NIST states, "We are skipping the EIG walk phase and have proceeded directly to the run phase".

SP 1800-35 then provides a sequence of detailed instructions, called “Builds”, that show how to implement the “Crawl Phase” and “Run Phase” reference architectures using products sold by various vendors.

Since Cloudflare’s Zero Trust platform natively supports access to both cloud and on-premise resources, we will skip over the “Crawl Phase” and move directly to showing how Cloudflare’s Zero Trust platform can be used to support the “Run Phase” of the reference architecture.

A complete Zero Trust Architecture using Cloudflare and integrations

Nothing in NIST SP 1800-35 represents an endorsement of specific vendor technologies. Instead, the intent of the publication is to offer a general architecture that applies regardless of the technologies or vendors an organization chooses to deploy. It also includes a series of “Builds”, using a variety of technologies from different vendors, that allow organizations to achieve a ZTA. This section describes how Cloudflare fits in with a ZTA, enabling you to accelerate your ZTA deployment from Crawl directly to Run.

Regarding the “Builds” in SP 1800-35, this section can be viewed as an aggregation of the following three specific builds:

Now let’s see how we can map Cloudflare’s Zero Trust platform to the ZTA reference architecture:

Figure 2: General ZTA Reference Architecture Mapped to Cloudflare Zero Trust & Key Integrations. Source: NIST, Special Publication 1800-35, "Implementing a Zero Trust Architecture (ZTA)”, 2025, with modification by Cloudflare.

Cloudflare’s platform simplifies complexity by delivering the PEP via our global anycast network and the PDP via our Software-as-a-Service (SaaS) management console, which also serves as a global unified control plane. A complete ZTA involves integrating Cloudflare with PIPs provided by other vendors, as shown in the figure above.

Now let’s look at several key points in the figure.

In the bottom right corner of the figure are Resources, which may reside on-premise, in private data centers, or across multiple cloud environments.  Resources are made securely accessible through Cloudflare’s global anycast network via Cloudflare Tunnel (as shown in the figure) or Magic WAN (not shown). Resources are shielded from direct exposure to the public Internet by placing them behind Cloudflare Access and Cloudflare Gateway, which are PEPs that enforce zero-trust principles by granting access to Subjects that conform to policy requirements.

In the bottom left corner of the figure are Subjects, both human and non-human, that need access to Resources. With Cloudflare’s platform, there are multiple ways that Subjects can gain access to Resources, including:

  • Agentless approaches that allow end users to access Resources directly from their web browsers. Alternatively, Cloudflare’s Magic WAN can be used to support connections from enterprise networks directly to Cloudflare’s global anycast network via IPsec tunnels, GRE tunnels or Cloudflare Network Interconnect (CNI).

  • Agent-based approaches use Cloudflare’s lightweight WARP client, which protects corporate devices by securely and privately sending traffic to Cloudflare's global network.

Now we move on to the PEP (the Policy Enforcement Point), which is the dataplane of our ZTA. Cloudflare Access is a modern Zero Trust Network Access solution that serves as a dynamic PEP, enforcing user-specific application access policies based on identity, device posture, context, and other factors. Cloudflare Gateway is a Secure Web Gateway for filtering and inspecting traffic sent to the public Internet, serving as a dynamic PEP that provides DNS, HTTP, and network traffic filtering, DNS resolver policies, and egress IP policies.
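A PEP like Access represents a successful authentication as a signed JWT attached to subsequent requests. As a hedged illustration, the sketch below decodes the payload segment of a fabricated, unsigned token to show the structure; the claim names are illustrative, and a real PEP validates the signature against the issuer’s published public keys before trusting any claim.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment WITHOUT verifying its signature.
    A real PEP must check the signature before trusting any claim."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)   # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

# A fabricated, unsigned token purely for illustration.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"email": "alice@example.com", "aud": "app-tag"}).encode())
token = f"{header}.{payload}."

claims = decode_jwt_payload(token)
print(claims["email"])  # alice@example.com
```

The three dot-separated base64url segments (header, payload, signature) are the standard JWT wire format; the empty third segment here marks the token as unsigned and untrustworthy.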

Both Cloudflare Access and Cloudflare Gateway rely on Cloudflare’s control plane, which acts as a PDP offering a policy engine (PE) and policy administrator (PA).  This PDP takes in inputs from PIPs provided by integrations with other vendors for ICAM, endpoint security, and security analytics.  Let’s dig into some of these integrations.

  • ICAM: Cloudflare’s control plane integrates with many ICAM providers that provide Single Sign On (SSO) and Multi-Factor Authentication (MFA). The ICAM provider authenticates human Subjects and passes information about authenticated users and groups back to Cloudflare’s control plane using Security Assertion Markup Language (SAML) or OpenID Connect (OIDC) integrations.  Cloudflare’s ICAM integration also supports AI/ML powered behavior-based user risk scoring, exchange, and re-evaluation. In the figure above, we depicted Okta as the ICAM provider, but Cloudflare supports many other ICAM vendors (e.g. Microsoft Entra, Jumpcloud, GitHub SSO, PingOne).   For non-human Subjects — such as service accounts, Internet of Things (IoT) devices, or machine identities — authentication can be performed through certificates, service tokens, or other cryptographic methods.

  • Endpoint security: Cloudflare’s control plane integrates with many endpoint security providers to exchange signals, such as device posture checks and user risk levels. Cloudflare facilitates this through integrations with endpoint detection and response (EDR) and endpoint protection platform (EPP) solutions, such as CrowdStrike, Microsoft, SentinelOne, and more. When posture checks are enabled with one of these vendors, such as Microsoft, device state changes, such as 'noncompliant', can be sent to Cloudflare Zero Trust, automatically restricting access to Resources. Additionally, Cloudflare Zero Trust can synchronize the Microsoft Entra ID risky users list and apply more stringent Zero Trust policies to users at higher risk.

  • Security Analytics: Cloudflare’s control plane integrates with real-time logging and analytics for persistent monitoring. Cloudflare's own analytics and logging features monitor access requests and security events. Optionally, these events can be sent to a Security Information and Event Management (SIEM) solution, such as CrowdStrike, Datadog, IBM QRadar, Microsoft Sentinel, New Relic, Splunk, and more, using Cloudflare’s Logpush integration. Cloudflare's user risk scoring system is built on the OpenID Shared Signals Framework (SSF) specification, which allows integration with existing and future providers that support this standard. SSF focuses on the exchange of Security Event Tokens (SETs), a specialized type of JSON Web Token (JWT). By using SETs, providers can share user risk information, creating a network of real-time, shared security intelligence. In the context of NIST’s Zero Trust Architecture, this system functions as a PIP, which is responsible for gathering information about the Subject and their context, such as risk scores, device posture, or threat intelligence. This information is then provided to the PDP, which evaluates access requests and determines the appropriate policy actions. The PEP uses these decisions to allow or deny access, completing the cycle of secure, dynamic access control.

  • Data security: Cloudflare’s Zero Trust offering provides robust data security capabilities across data-in-transit, data-in-use, and data-at-rest. Its Data Loss Prevention (DLP) safeguards sensitive information in transit by inspecting and blocking unauthorized data movement. Remote Browser Isolation (RBI) protects data-in-use by preventing malware, phishing, and unauthorized exfiltration while enabling secure web access. Meanwhile, Cloud Access Security Broker (CASB) ensures data-at-rest security by enforcing granular controls over SaaS applications, preventing unauthorized access and data leakage. Together, these capabilities provide comprehensive protection for modern enterprises operating in a cloud-first environment.
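To make the SET format mentioned above concrete, here is an illustrative sketch of an unsigned SET-style payload. Per RFC 8417, the "events" claim is what distinguishes a SET from an ordinary JWT; the event-type URI and subject fields below are placeholders, not the actual identifiers defined by the Shared Signals Framework.

```python
import json
import time
import uuid

# Illustrative, unsigned Security Event Token (SET) payload. The URI and
# event fields are placeholders, not real SSF/CAEP identifiers.
set_payload = {
    "iss": "https://transmitter.example.com",  # who emitted the event
    "jti": str(uuid.uuid4()),                  # unique token identifier
    "iat": int(time.time()),                   # issued-at timestamp
    "aud": "https://receiver.example.com",     # intended recipient
    "events": {                                # the claim that makes this a SET
        "https://schemas.example.com/risk-level-changed": {
            "subject": {"format": "email", "email": "alice@example.com"},
            "risk_level": "high",
        }
    },
}

# In a real deployment this payload would be signed as a JWT and pushed to
# receivers, which treat it as PIP input to the PDP.
print(json.dumps(set_payload, indent=2))
```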

By leveraging Cloudflare's Zero Trust platform, enterprises can simplify and enhance their ZTA implementation, securing diverse environments and endpoints while ensuring scalability and ease of deployment. This approach ensures that all access requests—regardless of where the Subjects or Resources are located—adhere to robust security policies, reducing risks and improving compliance with modern security standards.

Support for agencies and enterprises running towards Zero Trust Architecture

Cloudflare works with multiple enterprises, and federal and state agencies that rely on NIST guidelines to secure their networks.  So we take a brief detour to describe some unique features of Cloudflare’s Zero Trust platform that we’ve found to be valuable to these enterprises.

  • FedRAMP data centers. Many government agencies and commercial enterprises have FedRAMP requirements, and Cloudflare is well-equipped to support them. FedRAMP requirements sometimes require organizations to self-host software and services inside their own network perimeter, which can result in higher latency, degraded performance, and increased cost. At Cloudflare, we take a different approach. Organizations can still benefit from Cloudflare’s global network and unparalleled performance while remaining FedRAMP compliant. To support FedRAMP customers, Cloudflare’s dataplane (aka our PEP, or Policy Enforcement Point) consists of data centers in over 330 cities where customers can send their encrypted traffic, and 32 FedRAMP data centers where traffic is sent when sensitive dataplane operations are required (e.g. TLS inspection). This architecture means that our customers do not need to self-host a PEP and incur the associated cost, latency, and performance degradation.

  • Post-quantum cryptography. NIST has announced that by 2030 all conventional cryptography (RSA and ECDSA) must be deprecated and upgraded to post-quantum cryptography. But upgrading cryptography is hard and takes time, so Cloudflare aims to take on the burden of managing cryptography upgrades for our customers. That’s why organizations can tunnel their corporate network traffic through Cloudflare’s Zero Trust platform, protecting it against quantum adversaries without the hassle of individually upgrading each and every corporate application, system, or network connection. End-to-end quantum safety is available for communications from end-user devices, via web browser (today) or Cloudflare’s WARP device client (mid-2025), to secure applications connected with Cloudflare Tunnel.

Run towards Zero Trust Architecture with Cloudflare 

NIST’s latest publication, SP 1800-35, provides a structured approach to implementing Zero Trust, emphasizing the importance of policy enforcement, continuous authentication, and secure access management. Cloudflare’s Zero Trust platform simplifies this complex framework by delivering a scalable, globally distributed solution that is FedRAMP-compliant and integrates with industry-leading providers like Okta, Microsoft, Ping, CrowdStrike, and SentinelOne to ensure comprehensive protection.

A key differentiator of Cloudflare’s Zero Trust solution is our global anycast network, one of the world’s largest and most interconnected networks. Spanning 330+ cities across 120+ countries, this network provides unparalleled performance, resilience, and scalability for enforcing Zero Trust policies without negatively impacting the end user experience. By leveraging Cloudflare’s network-level enforcement of security controls, organizations can ensure that access control, data protection, and security analytics operate at the speed of the Internet — without backhauling traffic through centralized choke points. This architecture enables low-latency, highly available enforcement of security policies, allowing enterprises to seamlessly protect users, devices, and applications across on-prem, cloud, and hybrid environments.

Now is the time to take action. You can start implementing Zero Trust today by leveraging Cloudflare’s platform in alignment with NIST’s reference architecture. Whether you are beginning your Zero Trust journey or enhancing an existing framework, Cloudflare provides the tools, network, and integrations to help you succeed. Sign up for Cloudflare Zero Trust, explore our integrations, and secure your organization with a modern, globally distributed approach to cybersecurity.
