
Addressing the Edge Security Paradox

The paradox of edge security describes how technologies designed to strengthen network defenses can also create new vulnerabilities. Edge devices improve performance and support localized threat detection by processing data closer to its source, yet modern enterprise environments often operate thousands of distributed endpoints. This rapid expansion of edge infrastructure increases the number of systems…

The post Addressing the Edge Security Paradox appeared first on Security Boulevard.

China-Backed Groups are Using Massive Botnets in Espionage, Intrusion Campaigns

A PRC flag flies atop a metal flagpole

China-sponsored threat groups like Salt Typhoon and Flax Typhoon are increasingly relying on multiple massive botnets comprising edge and IoT devices to run their cyber espionage and network intrusion campaigns, CISA and other security agencies say. The use of such "covert networks" makes it more difficult to detect and mitigate their campaigns.

The post China-Backed Groups are Using Massive Botnets in Espionage, Intrusion Campaigns appeared first on Security Boulevard.

Human Trust of AI Agents

Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.”

Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLMs opponents in strategic settings. We present the results of the first controlled monetarily-incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to perceived LLM’s reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into the multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and beliefs about LLM’s play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.

Managed OAuth for Access: make internal apps agent-ready in one click

We have thousands of internal apps at Cloudflare. Some are things we’ve built ourselves, others are self-hosted instances of software built by others. They range from business-critical apps nearly every person uses, to side projects and prototypes.

All of these apps are protected by Cloudflare Access. But when we started using and building agents — particularly for uses beyond writing code — we hit a wall. People could access apps behind Access, but their agents couldn’t.

Access sits in front of internal apps. You define a policy, and then Access will send unauthenticated users to a login page to choose how to authenticate. 

Example of a Cloudflare Access login page

This flow worked great for humans. But all agents could see was a redirect to a login page that they couldn’t act on.

Providing agents with access to internal app data is so vital that we immediately implemented a stopgap for our own internal use. We modified OpenCode’s web fetch tool such that for specific domains, it triggered the cloudflared CLI to open an authorization flow to fetch a JWT (JSON Web Token). By appending this token to requests, we enabled secure, immediate access to our internal ecosystem.

While this solution was a temporary answer to our own dilemma, today we’re retiring this workaround and fixing this problem for everyone. Now in open beta, every Access application supports managed OAuth. One click to enable it for an Access app, and agents that speak OAuth 2.0 can easily discover how to authenticate (RFC 9728), send the user through the auth flow, and receive back an authorization token (the same JWT from our initial solution). 

Now, the flow works smoothly for both humans and agents. Cloudflare Access has a generous free tier. And building off our newly-introduced Organizations beta, you’ll soon be able to bridge identity providers across Cloudflare accounts too.

How managed OAuth works

For a given internal app protected by Cloudflare Access, you enable managed OAuth in one click:

Once managed OAuth is enabled, Cloudflare Access acts as the authorization server. It returns the www-authenticate header, telling unauthorized agents where to look up information on how to get an authorization token. They find this at https://<your-app-domain>/.well-known/oauth-authorization-server. Equipped with that direction, agents can just follow OAuth standards: 

  1. The agent dynamically registers itself as a client (a process known as Dynamic Client Registration — RFC 7591)

  2. The agent sends the human through a PKCE (Proof Key for Code Exchange) authorization flow (RFC 7636)

  3. The human authorizes access, which grants a token to the agent that it can use to make authenticated requests on behalf of the user
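The listed steps can be sketched in a few lines of Python. This is an illustration, not Cloudflare's implementation: the endpoint, client ID, and redirect URI below are placeholders, and only the PKCE pieces (RFC 7636) and the authorization request are shown.

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode


def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge


def authorization_url(authorize_endpoint, client_id, redirect_uri, challenge, state):
    """Build the authorization request URL the agent opens for the human (step 2)."""
    params = {
        "response_type": "code",
        "client_id": client_id,          # obtained via Dynamic Client Registration
        "redirect_uri": redirect_uri,    # e.g. a localhost callback the agent listens on
        "code_challenge": challenge,
        "code_challenge_method": "S256",
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}"
```

After the human approves, the agent exchanges the returned authorization code (plus the original `code_verifier`) at the token endpoint for the JWT it will attach to subsequent requests.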

Here’s what the authorization flow looks like:

If this authorization flow looks familiar, that’s because it’s what the Model Context Protocol (MCP) uses. We originally built support for this into our MCP server portals product, which proxies and controls access to many MCP servers, to allow the portal to act as the OAuth server. Now, we’re bringing this to all Access apps so agents can access not only MCP servers that require authorization, but also web pages, web apps, and REST APIs.

Mass upgrading your internal apps to be agent-ready

Upgrading the long tail of internal software to work with agents is a daunting task. In principle, to be agent-ready, every internal and external app would need discoverable APIs, a CLI, a well-crafted MCP server, and support for the many emerging agent standards.

AI adoption is not something that can wait for everything to be retrofitted. Most organizations have a significant backlog of apps built over many years. And many internal “apps” work great when treated by agents as simple websites. For something like an internal wiki, all you really need is to enable Markdown for Agents, turn on managed OAuth, and agents have what they need to read protected content.

To make the basics work across the widest set of internal applications, we use Managed OAuth. By putting Access in front of your legacy internal apps, you make them agent-ready instantly. No code changes, no retrofitting. Instead, just immediate compatibility.

It’s the user’s agent. No service accounts and tokens needed

Agents need to act on behalf of users inside organizations. One of the biggest anti-patterns we’ve seen is people provisioning service accounts for their agents and MCP servers, authenticated using static credentials. These have their place in simple use cases and quick prototypes, and Cloudflare Access supports service tokens for this purpose.

But the service account approach quickly shows its limits when fine-grained access controls and audit logs are required. We believe that every action an agent performs must be easily attributable to the human who initiated it, and that an agent must only be able to perform actions that its human operator is likewise authorized to do. Service accounts and static credentials become points at which attribution is lost. Agents that launder all of their actions through a service account are susceptible to confused deputy problems and result in audit logs that appear to originate from the agent itself.

For security and accountability, agents must use security primitives capable of expressing this user–agent relationship. OAuth is the industry standard protocol for requesting and delegating access to third parties. It gives agents a way to talk to your APIs on behalf of the user, with a token scoped to the user’s identity, so that access controls correctly apply and audit logs correctly attribute actions to the end user.
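To make the attribution point concrete, here is a minimal sketch that reads the identity claims out of a JWT payload. The claim names (`sub`, `email`) are illustrative, and a real consumer must verify the token's signature before trusting any claim.

```python
import base64
import json


def jwt_claims(token):
    """Decode the payload segment of a JWT to inspect its claims.

    Illustration only: production code must verify the signature
    before trusting anything in the payload.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

With a user-scoped token, an identity claim like `email` is what lets access controls and audit logs tie the agent's request back to the human who authorized it, rather than to an anonymous service account.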

Standards for the win: how agents can and should adopt RFC 9728 in their web fetch tools

RFC 9728 is the OAuth standard that makes it possible for agents to discover where and how to authenticate. It standardizes where this information lives and how it’s structured. This RFC became official in April 2025 and was quickly adopted by the Model Context Protocol (MCP), which now requires that both MCP servers and clients support it.

But outside of MCP, agents should adopt RFC 9728 for an even more essential use case: making requests to web pages that are protected behind OAuth and making requests to plain old REST APIs.

Most agents have a tool for making basic HTTP requests to web pages. This is commonly called the “web fetch” tool. It’s similar to using the fetch() API in JavaScript, often with some additional post-processing on the response. It’s what lets you paste a URL into your agent and have your agent go look up the content.

Today, most agents’ web fetch tools won’t do anything with the www-authenticate header that a URL returns. The underlying model might choose to introspect the response headers and figure this out on its own, but the tool itself does not follow www-authenticate, look up /.well-known/oauth-authorization-server, and act as the client in the OAuth flow. But it can, and we strongly believe it should! Agents already do this to act as remote MCP clients.

To demonstrate this, we’ve put up a draft pull request that adapts the web fetch tool in OpenCode to show this in action. Before making a request, the adapted tool first checks whether it already has credentials; if it does, it uses them to make the initial request. If the tool gets back a 401 or a 403 with a www-authenticate header, it asks the user for consent to be sent through the server’s OAuth flow.
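That header-following step can be sketched with a small parser. This sketch assumes a single Bearer challenge with quoted parameters; real-world www-authenticate values can carry multiple challenges and unquoted tokens, so treat it as illustrative rather than spec-complete.

```python
import re
from urllib.parse import urlsplit, urlunsplit


def parse_www_authenticate(header):
    """Split a www-authenticate value into its scheme and quoted parameters."""
    scheme, _, rest = header.partition(" ")
    return scheme, dict(re.findall(r'([\w-]+)="([^"]*)"', rest))


def metadata_url(params, request_url):
    """Decide where the agent should look next for authorization metadata."""
    if "resource_metadata" in params:  # explicit pointer, per RFC 9728
        return params["resource_metadata"]
    # Otherwise fall back to the well-known path on the app's own origin.
    parts = urlsplit(request_url)
    return urlunsplit(
        (parts.scheme, parts.netloc, "/.well-known/oauth-authorization-server", "", "")
    )
```

Once the agent has the metadata URL, the rest is the standard flow described above: dynamic client registration, PKCE authorization, and an authenticated retry of the original request.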

Here’s how that OAuth flow works. If you give the agent a URL that is protected by OAuth and complies with RFC 9728, the agent prompts the human for consent to open the authorization flow:

…sending the human to the login page:

…and then to a consent dialog that prompts the human to grant access to the agent:

Once the human grants access to the agent, the agent uses the token it has received to make an authenticated request:

Any agent from Codex to Claude Code to Goose and beyond can implement this, and there’s nothing bespoke to Cloudflare. It’s all built using OAuth standards.

We think this flow is powerful, and that supporting RFC 9728 can help agents with more than just making basic web fetch requests. If a REST API supports RFC 9728 (and the agent does too), the agent has everything it needs to start making authenticated requests against that API. If the REST API supports RFC 9727, then the client can discover a catalog of REST API endpoints on its own, and do even more without additional documentation, agent skills, MCP servers or CLIs. 
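As a sketch of that second discovery step, the snippet below builds the RFC 9727 well-known URL and walks a linkset document (the JSON serialization defined by RFC 9264) to enumerate its links. The relation names and URLs used in the test data are illustrative assumptions, not a specific server's catalog.

```python
import json
from urllib.parse import urlsplit, urlunsplit


def api_catalog_url(origin_url):
    """Well-known location of an origin's API catalog (RFC 9727)."""
    parts = urlsplit(origin_url)
    return urlunsplit((parts.scheme, parts.netloc, "/.well-known/api-catalog", "", ""))


def catalog_links(linkset_json):
    """List (anchor, relation, href) triples from a JSON linkset document."""
    triples = []
    for ctx in json.loads(linkset_json).get("linkset", []):
        anchor = ctx.get("anchor")
        for rel, targets in ctx.items():
            if rel == "anchor":
                continue  # the anchor key is metadata, not a link relation
            for target in targets:
                triples.append((anchor, rel, target.get("href")))
    return triples
```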

Each of these plays an important role with agents — Cloudflare itself provides an MCP server for the Cloudflare API (built using Code Mode), Wrangler CLI, and Agent Skills, and a Plugin. But supporting RFC 9728 helps ensure that even when none of these are preinstalled, agents have a clear path forward. If the agent has a sandbox to execute untrusted code, it can simply write and execute code that calls the API the human has granted it access to. We’re working on supporting this for Cloudflare’s own APIs, to help your agents understand how to use Cloudflare.

Coming soon: share one identity provider (IdP) across many Cloudflare accounts

At Cloudflare our own internal apps are deployed to dozens of different Cloudflare accounts, which are all part of an Organization — a newly introduced way for administrators to manage users, configurations, and view analytics across many Cloudflare accounts. We have had the same challenge as many of our customers: each Cloudflare account has to separately configure an IdP so that Cloudflare Access uses our identity provider. It’s critical that this is consistent across an organization — you don’t want one Cloudflare account to inadvertently allow people to sign in just with a one-time PIN, rather than requiring that they authenticate via single sign-on (SSO).

To solve this, we’re currently working on making it possible to share an identity provider across Cloudflare accounts, giving organizations a way to designate a single primary IdP for use across every account in their organization.

As new Cloudflare accounts are created within an organization, administrators will be able to configure a bridge to the primary IdP with a single click, so Access applications across accounts can be protected by one identity provider. This removes the need to manually configure IdPs account by account, which is a process that doesn’t scale for organizations with many teams and individuals each operating their own accounts.

What’s next

Across companies, people in every role and business function are now using agents to build internal apps, and expect their agents to be able to access context from internal apps. We are responding to this step function growth in internal software development by making the Workers Platform and Cloudflare One work better together — so that it is easier to build and secure internal apps on Cloudflare. 

Expect more to come soon, including:

  • More direct integration between Cloudflare Access and Cloudflare Workers, without the need to validate JWTs or remember which of many routes a particular Worker is exposed on.

  • wrangler dev --tunnel — an easy way to expose your local development server to others when you’re building something new, and want to share it with others before deploying

  • A CLI interface for Cloudflare Access and the entire Cloudflare API

  • More announcements to come during Agents Week 2026

Enable Managed OAuth for your internal apps behind Cloudflare Access

Managed OAuth is now available, in open beta, to all Cloudflare customers. Head over to the Cloudflare dashboard to enable it for your Access applications. You can use it for any internal app, whether it’s one built on Cloudflare Workers, or hosted elsewhere. And if you haven’t built internal apps on the Workers Platform yet — it’s the fastest way for your team to go from zero to deployed (and protected) in production.

AI Chatbots and Trust

All the leading AI chatbots are sycophantic, and that’s a problem:

Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically, they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them.

One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language.

Here’s the conclusion from the research study:

AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being.

This is bad in a bunch of ways:

Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.

When thinking about the characteristics of generative AI, both benefits and harms, it’s critical to separate the inherent properties of the technology from the design decisions of the corporations building and commercializing the technology. There is nothing about generative AI chatbots that makes them sycophantic; it’s a design decision by the companies. Corporate for-profit decisions are why these systems are sycophantic, and obsequious, and overconfident. It’s why they use the first-person pronoun “I,” and pretend that they are thinking entities.

I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be much more harmful to society:

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

RSA Launches ID Plus Sovereign Deployment for Organizations That Can’t Afford Identity Downtime

RSA opened RSAC 2026 with a new deployment model for its ID Plus identity platform, aimed squarely at government agencies, financial services firms, and critical infrastructure operators that need identity security to work even when everything else fails. RSA ID Plus Sovereign Deployment is a “deploy anywhere” identity and access management solution that gives organizations…

The post RSA Launches ID Plus Sovereign Deployment for Organizations That Can’t Afford Identity Downtime appeared first on Security Boulevard.
