Column | Practical control over flashy AI: the agent strategy Google laid out

April 30, 2026, 05:10

The most notable thing Google announced at last week's annual conference, Google Cloud Next 2026, was not a new model or a TPU. Nor was it yet another way to spread Gemini across the enterprise.

Rather, it reads as an admission, and at the same time something close to a warning.

Agents need supervision

We already knew this, but as the saying goes, "to know and not to act is not truly to know," and actually acting on it is another matter. We treat agents like digital employees busily getting work done, but they are also vulnerable software systems holding credentials, budgets, memory, and access to sensitive data. On top of that, they tend to fail in ways that are expensive and hard to trace.

That is the essential message of Google Cloud Next 2026. Many read it as Google moving to capture the agentic enterprise market, but the more interesting reading is that Google showed up to bring that market under control.

Of course, Google leaned hard on the "agentic cloud," a theme no event can do without these days. It also announced the Gemini Enterprise Agent Platform, an eighth-generation TPU (Tensor Processing Unit), new Workspace Intelligence AI features, and a range of integrations meant to weave AI naturally into the enterprise. Taken purely as a celebration of the agent era, it was more than enough.

Strip away the showmanship, though, and a more important message emerges: after two years of corporate enthusiasm for AI agents, enterprises have reached the stage where they must control them so they do not damage reputations, cause financial losses, or expose sensitive information.

This is not a criticism of Google. Quite the opposite: it may have been the most practically valuable set of announcements at the event.

"Trust, but verify"

The moment AI moves beyond merely talking and starts taking real actions, essential questions pour out of the enterprise: who approved this, what data did it use, which systems did it touch, why did it act that way, how much did it cost, and how can it be stopped if necessary?

Much of what Google announced is structured as answers to those questions.

What Google chose to emphasize makes this clear. The Knowledge Catalog is designed to supply trusted business context across enterprise data and ground agents' decisions. Gemini Enterprise gained capabilities for managing and monitoring agents, including long-running ones.

Workspace added features to monitor, control, and audit agents' data access, reducing the risk of prompt injection, oversharing, and data leakage. Google Cloud also introduced agent defenses and a Wiz-based security layer to protect agents across cloud and AI development environments.

These are not tools you need when the system works perfectly. They were built for enterprises facing the practical question: "It worked in the demo, but can we trust it with real work?"

The agent management layer

Industry analysts are gradually converging on the term "agent control plane" to describe this new layer of enterprise AI. It is an apt phrase because the concept is familiar: much as Kubernetes unified infrastructure management, it evokes a platform that centrally manages how AI agents behave, a single system where many agents can be managed and observed, and where routing, security, and optimization take place.

Reality, however, is still a long way from that.

Agents need a control plane not because they are already replacing employees, but because enterprises are wiring probabilistic systems into deterministic business processes and discovering that someone has to manage the seam between them. Autonomy looks clean in an agent demo; in real enterprise systems, things play out far more messily.

Customer data sits in one system and contract data in another, exception handling lives in someone's inbox, and the policy document is a PDF last updated in 2021. The person who actually understood the workflow may well have left during the pandemic.

Into this already complicated environment, agents are now being added.

Which is why I partly sympathize with Google's control plane strategy while remaining wary of a vendor narrative that is a little too tidy. A unified agent platform, governance, monitoring, evaluation, observability, and simulation are all necessary. Gemini Enterprise in particular matters as an attempt to centralize the messy operational pieces enterprises have been stitching together on their own.

But a control plane should not be mistaken for the work itself.

Pilots are easy, production is hard

The data on agentic AI keeps repeating one message: expectations are running far ahead of operational maturity.

According to process automation vendor Camunda's "2026 State of Agent Orchestration and Automation" report, 71% of organizations say they are using AI agents, but only 11% actually moved them into production over the past year. And 73% admit there is a gap between their vision for agentic AI and reality.

Gartner offers a similar outlook: more than 40% of agentic AI projects are expected to be scrapped by the end of 2027, citing cost, unclear business value, and inadequate risk management.

To be clear, this is not a model problem. It is a classic enterprise software operations problem.

The same pattern shows up in security and governance. In a 2026 survey by generative AI management platform Writer, 67% of executives said they had experienced a data leak or security incident caused by unapproved AI tools.

Another 36% have no formal plan for overseeing AI agents, and 35% said they could not immediately shut down an agent that misbehaves.

Of those three figures, the last is the most alarming. These are software agents with access to enterprise systems, customer data, and organizational credentials, and yet more than a third of companies are not confident they could stop one quickly if something went wrong.

And we are supposed not to worry?

The agent is the least important part

The hidden truth of the agentic enterprise is that the agent itself may be the least important element of the architecture. All the attention and anticipation go to the agent, but the real work lies elsewhere: authentication and authorization, workflow boundaries, data quality, retrieval and memory, evaluation, audit trails, cost controls, and deciding which system serves as the single source of truth when an agent gets confused.

The announcements at Google Cloud Next did not prove that the agentic enterprise has arrived. What they showed is that if it does become real, it will look a lot like enterprise software always has when the stakes get high: it converges on governance rather than magic.

That is progress, but it is hardly glamorous.

If you want to pick winners in agentic AI, do not look for the company with the smartest agent; look for the one with clear data contracts, rigorous evaluation, a consistent authentication model, and minimal sprawl of unofficial "shadow AI." The industry tends to avoid that story, because talking about autonomous digital workers is far more exciting than discussing data lineage or access control.

But that boring part is exactly where enterprise software becomes real.

There is another reason to hold off on declaring the agent era: an agent's usefulness ultimately depends on data it can safely understand and use. Google clearly recognizes this. The "agentic data cloud" concept, which includes the Knowledge Catalog and the cross-cloud lakehouse strategy, is an acknowledgment that agents need trusted business context.

Without that context, an agent is not an enterprise operator but an articulate tourist wandering through your systems.

In the end, the most encouraging announcements at Google Cloud Next were not the ones that make agents more autonomous, but the ones that make them easier to manage. Agentic AI holds enormous potential, but to become real it must first prove it can be boringly reliable.
dl-ciokorea@foundryco.com


Dutch Health Tech Firm ChipSoft Confirms Destruction of Stolen Patient Data


The Cyber Express previously reported the ChipSoft cyberattack, in which ransomware actors stole patient data. Now, reports have surfaced from the Dutch medical software provider, noting that the compromised data has been destroyed, though key details about the incident remain undisclosed.

In an update issued on April 28, 2026, ChipSoft stated that all data collected during the cyberattack had been deleted. According to the company, cybersecurity specialists verified that the destruction was carried out in a “technically sound manner,” although no further explanation was provided about the methods used.

The company emphasized that preventing the publication of stolen data was a top priority. “With the support of cybersecurity experts, we managed to prevent the data from being published. Furthermore, the stolen data has been destroyed,” the statement read. However, ChipSoft has not clarified whether it paid a ransom to the attackers, despite earlier indications that negotiations had taken place.

“Protecting our customers’ data has always been our top priority. In this exceptional situation, that priority weighed very heavily,” the company added, hinting at the difficult decisions made during the ransomware attack response.

Timeline of the ChipSoft Cyberattack 

The ChipSoft cyberattack first came to light in early April 2026. On April 12, ChipSoft disclosed that it had fallen victim to a cyberattack on its systems earlier that week. As an immediate precaution, the company disabled connections to several key services, including its Care Portal, Care Platform, and HiX Mobile applications, starting April 8.

At the time, ChipSoft confirmed it had engaged Z-CERT, the Dutch healthcare cybersecurity expertise center, and external cybersecurity professionals to conduct a forensic investigation. The company acknowledged the disruption caused to healthcare providers and patients, noting that patient portals were temporarily unavailable and data exchange via the platform had been halted.

Data Theft Confirmed in the Netherlands 

By April 16, the investigation revealed that cybercriminals behind the ransomware attack had successfully stolen personal and medical data from several Dutch healthcare institutions. ChipSoft confirmed that affected organizations were being notified directly.

Hans Mulder, CEO of ChipSoft, addressed the breach, stating: “After forty years of dedication to reliable healthcare IT, it pains us that this situation has arisen. We cannot undo this data theft. However, we are doing everything we can to support the affected customers as best as possible in this situation.”

In contrast, a separate update on the same day confirmed that Belgian patient data had not been compromised in the cyberattack on ChipSoft systems.

Systems Shutdown and Gradual Recovery 

The cyberattack forced ChipSoft to shut down multiple services as a preventive measure. Systems such as Zorgplatform, Zorgportaal, and HiX Mobile were temporarily taken offline, affecting daily operations in healthcare institutions.

By April 17, after extensive analysis conducted in collaboration with cybersecurity experts and Z-CERT, ChipSoft announced that the affected systems were safe to use again. A phased rollout began shortly afterward, with healthcare institutions being informed directly about the restoration process.

Further progress was reported on April 24, when ChipSoft confirmed that most healthcare institutions had regained access to Zorgplatform. Connections to Zorgportaal were also being restored, allowing many patient portals to become operational again. The HiX Mobile app became available once institutions reactivated their systems.

Despite these advancements, ChipSoft cautioned that the recovery process required time and careful handling. The company acknowledged the strain placed on healthcare providers, stating that the precautionary measures had significantly impacted daily workflows and patient care.

What Google’s “unified stack” pitch at Cloud Next ‘26 really means for CIOs

April 24, 2026, 15:06

Google didn’t so much announce products at Cloud Next ’26 as try to reframe the real bottleneck to scaling AI: the fragmented architecture that CIOs have been piecing together on their own.

For years, enterprises have treated AI like a kit, with models, infrastructure, and data spread across different vendors and heterogenous environments, an approach that worked well enough in pilot mode, but has proven harder to scale into something dependable.

That, at least, is the problem that Google Cloud CEO Thomas Kurian chose to name, and own, on stage. “You have moved beyond the pilot. The experimental phase is behind us,” he said, before posing the more uncomfortable question for CIOs: “How do you move AI into production across your entire enterprise?”

His answer: “A unified stack.”

What that “unified stack” amounts to in practice, though, is Google stitching together layers it has historically sold and marketed separately into an architecture that represents a single operating fabric for enterprise AI.  

Kurian cast it as the “connective tissue” binding what are typically siloed layers, such as custom silicon, models, data, applications, and security, into a single, coordinated system. That translates into workload-specific TPUs to run and scale AI, Gemini Enterprise and the Gemini Enterprise Agent Platform to build and embed agents into business workflows, the Agentic Data Cloud to ground them in enterprise context, and a parallel push to secure both agents and the infrastructure they run on.

A turnkey answer to integration fatigue?

It’s a neat and timely argument for enterprises, said independent consultant David Linthicum, especially for those frustrated with stalled pilots as a result of fragmented AI stacks.

In addition, noted Ashish Chaturvedi, leader of executive research at HFS Research, most CIOs are drowning in integration tax, which compounds the costs of scaling an AI initiative. “The average enterprise has spent the last two years stitching together models from one vendor, orchestration from another, data pipelines from a third, and governance as an afterthought,”  Chaturvedi said. “Google, in contrast, is pitching a turnkey solution.”

That turnkey solution, said Shelly DeMotte Kramer, principal analyst at Kramer & Company, could be attractive on a number of fronts if CIOs are building on Google Cloud. It could reduce integration risk, offer faster pilot-to-production trajectories, and democratize AI across the organization and beyond IT via the Workspace Studio no-code agent builder.

Concerns around execution and clarity

However, Kramer is not confident about Google’s execution of its unified stack vision. “Google Cloud has consistently come in third place in terms of enterprise cloud share, with what could, in all candor, be called thinner organizational muscle for large-scale professional services engagements than what you might expect from AWS and Microsoft,” she said.

HyperFRAME Research’s AI stack lead, Stephanie Walter, also has doubts. She questioned the clarity of the offerings that Google is packaging and marketing as part of that vision.

“While the pitch will resonate with enterprises tired of stitching together products to scale AI, it lacks clarity,” she said. “Google announced a lot at once, and the way the AI product portfolio fits together is still somewhat unclear, so CIOs will like the ambition while still asking for a cleaner map of where Gemini Enterprise, the Agent Platform, the Application, and the data layer begin and end.”

Converging vendor visions add complexity

That ambiguity, analysts say, will be further deepened for CIOs as they try to evaluate Google’s pitch against converging visions from rivals AWS and Microsoft, who, since last year, have been promoting their own visions of moving AI pilots into production.

While the convergence in vendor pitches will simplify choices at a high level, it will add complexity in practice because the control planes, pricing, ecosystem depth, and interoperability across offerings vary meaningfully, Linthicum said.

“CIOs still have to map those differences to their existing estate, talent base, and governance model. Similar narratives do not mean equivalent operating realities,” he added.

That, according to Walter, risks leaving CIOs comparing architectures that sound strikingly alike on paper, even as their underlying trade-offs remain difficult to parse at an operational level.

The convergence in vendor pitches could also backfire on Google, Chaturvedi noted. “The more similar the top-line narratives become, the more the decision swings on non-technical factors such as existing relationships, migration costs, and trust,” he said.

If anything, that dynamic may push enterprises toward a more pragmatic split. Paul Chada, co-founder of agentic AI startup Doozer AI, expects CIOs to end up standardizing on two distinct layers when scaling AI: a primary agent control plane aligned with where enterprise applications and user workflows reside, and a separate data reasoning layer anchored in governed data environments.

“The dream of a single vendor owning both likely won’t survive procurement,” he said.

“Unified” could still mean complex pricing

Further, analysts pointed out that Google’s unified stack pitch could introduce concerns for CIOs that go beyond architectural clarity.

For example, Linthicum noted that bundling infrastructure, models, data services, and agents into a single narrative doesn’t necessarily simplify costs; rather, it makes pricing harder to predict and optimize.

“A unified product story can still produce a highly fragmented bill. CIOs should expect more pricing complexity,” he said.

And Mike Leone, principal analyst at Moor Insights and Strategy, added that the problem of pricing complexity around AI offerings doesn’t change when CIOs switch vendors. “Every hyperscaler is walking in the same direction,” he said.

That, said Dion Hinchcliffe, lead of the CIO practice at The Futurum Group, leaves CIOs with fewer levers to simplify costs at the vendor level and more responsibility to manage them internally. To that extent, he added that enterprises will need to lean more heavily on FinOps disciplines to regain control over increasingly complex and opaque AI spending.

Different strengths

There is, however, a more nuanced upside for CIOs willing to look past the unified vision pitch.

Kramer, for one, pointed to Google’s control over its own AI silicon as a potential differentiator. “That makes the comparatively better performance-per-dollar pitch for AI workloads at the infra level somewhat defensible,” she said.

At the same time, the analysts agreed, the competitive field, at least for CIOs, is far from settled.

“Microsoft looks best positioned on enterprise distribution and workflow adjacency. AWS is strongest on operational breadth, developer familiarity, and cloud maturity. Google is strongest where AI infrastructure, analytics, and model-platform integration matter most,” Linthicum said.

CIOs, in turn, should align vendor strengths with enterprise priorities, whether that’s driving user adoption, scaling operations, or deepening AI and data platform capabilities, he added.


Why Traditional Security Tools Fail-and How Unified AI Platforms Solve the Problem

April 17, 2026, 08:13

When More Tools Create More Problems

For years, organizations have approached cybersecurity with a simple mindset: add more tools to strengthen defenses. Firewalls, endpoint solutions, intrusion detection systems, and monitoring platforms have all been layered together to create what appears to be a comprehensive security posture. Yet, despite this growing investment, security outcomes have not improved


Securing non-human identities: automated revocation, OAuth, and scoped permissions

Agents let you build software faster than ever, but securing your environment and the code you write — from both mistakes and malice — takes real effort. Open Web Application Security Project (OWASP) details a number of risks present in agentic AI systems, including the risk of credential leaks, user impersonation, and elevation of privilege. These risks can result in extreme damage to your environments including denial of service, data loss, or data leaks — which can do untold financial and reputational damage. 

This is an identity problem. In modern development, "identities" aren't just people — they are the agents, scripts, and third-party tools that act on your behalf. To secure these non-human identities, you need to manage their entire lifecycle: ensuring their credentials (tokens) aren't leaked, seeing which applications have access via OAuth, and narrowing their permissions using granular RBAC.

Today, we are introducing updates to address these needs: scannable tokens to protect your credentials, OAuth visibility to manage your principals, and resource-scoped RBAC to fine-tune your policies.

Understanding identity: Principals, Credentials, and Policies

To secure the Internet in an era of autonomous agents, we have to rethink how we handle identity. Whether a request comes from a human developer or an AI agent, every interaction with an API relies on three core pillars:

  • The Principal (The Traveler): This is the identity itself — the "who." It might be you logging in via OAuth, or a background agent using an API token to deploy code.

  • The Credential (The Passport): This is the proof of that identity. In this world, your API token is your passport. If it’s stolen or leaked, anyone can "wear" your identity.

  • The Policy (The Visa): This defines what that identity is allowed to do. Just because you have a valid passport doesn't mean you have a visa to enter every country. A policy ensures that even a verified identity can only access the specific resources it needs.

When these three pillars aren't managed together, security breaks down. You might have a valid Principal using a stolen Credential, or a legitimate identity with a Policy that is far too broad.

Leaked token detection

Agents and other third-party applications use API tokens to access the Cloudflare API. One of the simplest ways that we see people leaking their secrets is by accidentally pushing them to a public GitHub repository. GitGuardian reports that last year more than 28 million secrets were published to public GitHub repositories, and that AI is causing leaks to happen 5x faster than before.

If an API token is a digital passport, then leaking it on a public repository is like leaving your passport on a park bench. Anyone who finds it can impersonate that identity until the document is canceled. Our partnership with GitHub acts like a global "lost and found" for these credentials. By the time you realize your passport is missing, we’ve already identified the document, verified its authenticity via the checksum, and voided it to prevent misuse.

We’re partnering with several leading credential scanning tools to help proactively find your leaked tokens and revoke them before they can be used maliciously. We know it’s not a matter of if, but when: eventually you, an employee, or one of your agents will make a mistake and push a secret somewhere it shouldn’t be.

GitHub

We’ve partnered with GitHub and are participating in their Secret Scanning program to find your tokens in both public and private repositories. If we are notified that a token has leaked to a public repository, we will automatically revoke the token to prevent it from being used maliciously. For private repositories, GitHub will notify you about any leaked Cloudflare tokens and you can clean these up.

How it works

We’ve shared the new token formats (below!) with GitHub, and they now scan for them on every commit. If they find something that looks like a leaked Cloudflare token, they verify the token is real (using the checksum), send us a webhook to revoke it, and then we notify you via email so you can generate a new one in Dashboard settings.

This means we plug the hole as soon as it’s found. By the time you realize you made a mistake, we've already fixed it. 

We hope this is the kind of feature you don’t need to use, but our partners are on the lookout for leaks to help keep you secure. 

Cloudflare One

Cloudflare One customers are also protected from these leaks. By configuring the Credentials and Secrets DLP profile, organizations can activate prevention everywhere a credential can travel:

  • Network Traffic (Cloudflare Gateway): Apply these entries to a policy to detect and block Cloudflare API tokens moving across your network. A token in a file upload, an outbound request, or a download is stopped before it reaches its destination.

  • Outbound Email (Cloudflare Email Security): Microsoft 365 customers can extend this same prevention to Outlook. The DLP Assist add-in scans messages before delivery, catching a token before it’s sent externally.

  • Data at Rest (Cloudflare CASB): Cloudflare’s Cloud Access Security Broker applies the same profile to scan files across connected SaaS applications, catching tokens saved or shared in Google Drive, OneDrive, Dropbox, and other integrated services.

The most novel exposure vector, though, is AI traffic. Cloudflare AI Gateway integrates with the same DLP profiles to scan and block both incoming prompts and outgoing AI model responses in real time.

Other credential scanners

The only way credential scanning works is if we meet you where you are, so we are working with several open source and commercial credential scanners to ensure you are protected no matter what secret scanner you use. 

How it works

Until now, Cloudflare’s API tokens were pretty generic looking, so they were hard for credential scanners to identify with high confidence. These automated security tools scan your code repositories looking for exposed credentials like API keys, tokens or passwords. The “cf” prefix makes Cloudflare tokens instantly recognizable with greater confidence, and the checksum makes it easy for tools to statically validate them. Your existing tokens will continue to work, but every new token you generate will use the scannable format so it’s easily detected with high confidence.

The new credential formats are:

  • User API Key (legacy global API key tied to your user account, full access): cfk_[40 characters][checksum]

  • User API Token (scoped token you create for specific permissions): cfut_[40 characters][checksum]

  • Account API Token (token owned by the account, not a specific user): cfat_[40 characters][checksum]
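
As a rough illustration of how these formats help, here is a minimal sketch of what a credential scanner might do with them. The regex tail and the validateChecksum() helper are placeholder assumptions; this post does not document Cloudflare's actual checksum algorithm.

// Sketch only: CF_TOKEN_PATTERN and validateChecksum() are illustrative assumptions,
// not Cloudflare's real validation logic.
const CF_TOKEN_PATTERN = /\bcf(?:k|ut|at)_[A-Za-z0-9_-]{40,}\b/g;

function findCandidateTokens(text) {
  // Collect anything that matches one of the new cf-prefixed credential formats.
  return text.match(CF_TOKEN_PATTERN) ?? [];
}

function validateChecksum(candidate) {
  // Placeholder: a real scanner recomputes the checksum over the token body
  // and compares it with the trailing characters before reporting a leak.
  return candidate.startsWith("cf");
}

const sample = 'CLOUDFLARE_API_TOKEN="cfut_' + "x".repeat(44) + '"';
console.log(findCandidateTokens(sample).filter(validateChecksum));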

Getting started

If you have existing API tokens, you can roll the token to create a new, scannable API token. This is optional, but recommended to ensure that your tokens are easily discoverable in case they leak. 

While API tokens are generally used by your own scripts and agents, OAuth is how you manage access for third-party platforms. Both require clear visibility to prevent unauthorized access and ensure you know exactly who — or what — has access to your data.

Improving the OAuth consent experience

When you connect third-party applications like Wrangler to your Cloudflare Account using OAuth, you're granting that application access to your account’s data. Over time, you may forget why you granted a third party application access to your Account in the first place. Previously, there was no central place to view & manage those applications. Starting today, there is.  

Going forward, when a third party application requests access to your Cloudflare account, you’ll be able to review: 

  • Which third-party application is requesting access, along with information about the application like Name, Logo, and the Publisher.

  • Which scopes the third-party application is requesting access to.

  • Which accounts to grant the third party application access to.


Not all applications require the same permissions; some only need to read data, others may need to make changes to your Account. Understanding these scopes before you grant access helps you maintain least-privilege.

We also added a Connected Applications experience so you can see which applications have access to which accounts, what scopes/permissions are associated with that application, and easily revoke that access as needed. 

Getting started

The OAuth consent and revocation improvements are available now. Check which apps currently have access to your accounts by visiting My Profile > Access Management > Connected Applications. 

For developers building integrations with Cloudflare, keep an eye on the Cloudflare Changelog for more announcements around how you can register your own OAuth apps soon! 

Fine-grained resource-level permissioning 

If the token is the passport, then resource-scoped permissions are the visas inside it. Having a valid passport gets you through the front door, but it shouldn't give you access to every room in the building. By narrowing the scope to specific resources — like a single Load Balancer pool or a specific Gateway policy — you are ensuring that even if an identity is verified, it only has the "visa" to go where it’s strictly necessary.

Last year, we announced support for resource-scoped permissions in Cloudflare’s role-based access control (RBAC) system for several of our Zero Trust products. This enables you to right-size permissions for both users and agents to minimize security risks. We’ve expanded this capability to several new resource-level permissions. The resource scope is now supported for:

  • Access Applications

  • Access Identity Providers

  • Access Policies

  • Access Service Tokens

  • Access Targets

We’ve also completely overhauled the API Token creation experience, making it easier for customers to provision and manage Account API Tokens right from the Cloudflare Dashboard.

How it works

When you add a member to your Cloudflare account or create an API Token, you typically assign that principal a policy. A Permission Policy is what gives a principal permission to take an action, whether that’s managing Cloudflare One Access Applications or DNS Records. Without a policy, a principal can authenticate, but they are not authorized to take any actions within an account.

Policies are made up of three components: a Principal, a Role, and a Scope. The Principal is who or what you're granting access to, whether that's a human user, a Non-Human Identity (NHI) like an API Token, or increasingly, an Agent acting on behalf of a user. The Role defines what actions they're permitted to take. The Scope determines where those permissions apply, and historically, that's been restricted to the entire account, or individual zones.
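
To make the shape of a policy concrete, here is a hedged sketch of creating a narrowly scoped user API token through the Cloudflare API. The zone ID and permission-group ID are placeholders, and the exact resource identifiers for the newer resource-scoped objects may differ; the roles and scope documentation is the authority.

// Sketch: creating a scoped API token whose policy is limited to a single zone.
// REPLACE_WITH_* values are placeholders; real permission-group IDs come from the API.
const policy = {
  effect: "allow",
  resources: {
    "com.cloudflare.api.account.zone.REPLACE_WITH_ZONE_ID": "*",
  },
  permission_groups: [{ id: "REPLACE_WITH_PERMISSION_GROUP_ID" }],
};

const res = await fetch("https://api.cloudflare.com/client/v4/user/tokens", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.CF_ADMIN_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ name: "scoped-agent-token", policies: [policy] }),
});
console.log((await res.json()).result?.id);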

New permission roles

We’re also expanding the role surface more broadly at both the Account & Zone level with the introduction of a number of new roles for many products. 

  • Account scope

    • CDN Management

    • MCP Portals

    • Radar

    • Request Tracer

    • SSL/TLS Management

  • Zone scope

    • Analytics

    • Logpush

    • Page Rules

    • Security Center

    • Snippets

    • Zone Settings

Getting started

The resource scope and all new account and zone-level roles are available today for all Cloudflare customers. You can assign account, zone, or resource-scoped policies through the Cloudflare Dashboard, the API, or Terraform. 

For a full breakdown of all available roles and how scopes work, visit our roles and scope documentation.

Secure your accounts

These updates provide the granular building blocks needed for a true least-privilege architecture. By refining how we manage permissions and credentials, developers and enterprises can have greater confidence in their security posture across the users, apps, agents, and scripts that access Cloudflare. Least privilege isn’t a new concept, and for enterprises, it’s never been optional. Whether a human administrator is managing a zone or an agent is programmatically deploying a Worker, the expectation is the same: they should only be authorized to do the job they were given, and nothing else.

Following today’s announcement, we recommend customers:

  1. Review your API tokens, and reissue them as new, scannable API tokens as soon as possible.

  2. Review your authorized OAuth apps, and revoke any that you no longer use.

  3. Review member and API Token permissions in your accounts, and ensure that users take advantage of the new account, zone, or resource-scoped permissions as needed to reduce your risk exposure.

Scaling MCP adoption: Our reference architecture for simpler, safer and cheaper enterprise deployments of MCP

April 14, 2026, 10:00

We at Cloudflare have aggressively adopted Model Context Protocol (MCP) as a core part of our AI strategy. This shift has moved well beyond our engineering organization, with employees across product, sales, marketing, and finance teams now using agentic workflows to drive efficiency in their daily tasks. But the adoption of agentic workflows with MCP is not without its security risks, which range from authorization sprawl and prompt injection to supply chain attacks. To secure this broad company-wide adoption, we have integrated a suite of security controls from both our Cloudflare One (SASE) platform and our Cloudflare Developer platform, allowing us to govern AI usage with MCP without slowing down our workforce.

In this blog we’ll walk through our own best practices for securing MCP workflows, by putting different parts of our platform together to create a unified security architecture for the era of autonomous AI. We’ll also share two new concepts that support enterprise MCP deployments:

We also talk about how our organization approached deploying MCP, and how we built out our MCP security architecture using Cloudflare products including remote MCP servers, Cloudflare Access, MCP server portals, and AI Gateway.

Remote MCP servers provide better visibility and control

MCP is an open standard that enables developers to build a two-way connection between AI applications and the data sources they need to access. In this architecture, the MCP client is the integration point with the LLM or other AI agent, and the MCP server sits between the MCP client and the corporate resources.

The separation between MCP clients and MCP servers allows agents to autonomously pursue goals and take actions while maintaining a clear boundary between the AI (integrated at the MCP client) and the credentials and APIs of the corporate resource (integrated at the MCP server). 

Our workforce at Cloudflare is constantly using MCP servers to access information in various internal resources, including our project management platform, our internal wiki, documentation and code management platforms, and more. 

Very early on, we realized that locally-hosted MCP servers were a security liability. Local MCP server deployments may rely on unvetted software sources and versions, which increases the risk of supply chain attacks or tool injection attacks. They prevent IT and security administrators from administering these servers, leaving it up to individual employees and developers to choose which MCP servers they want to run and how they want to keep them up to date. This is a losing game.

Instead, we have a centralized team at Cloudflare that manages our MCP server deployment across the enterprise. This team built a shared MCP platform inside our monorepo that provides governed infrastructure out of the box. When an employee wants to expose an internal resource via MCP, they first get approval from our AI governance team, and then they copy a template, write their tool definitions, and deploy, all the while inheriting default-deny write controls with audit logging, auto-generated CI/CD pipelines, and secrets management for free. This means standing up a new governed MCP server is minutes of scaffolding. The governance is baked into the platform itself, which is what allowed adoption to spread so quickly. 

Our CI/CD pipeline deploys them as remote MCP servers on custom domains on Cloudflare’s developer platform. This gives us visibility into which MCP servers are being used by our employees, while maintaining control over software sources. As an added bonus, every remote MCP server on the Cloudflare developer platform is automatically deployed across our global network of data centers, so MCP servers can be accessed by our employees with low latency, regardless of where they might be in the world.

Cloudflare Access provides authentication

Some of our MCP servers sit in front of public resources, like our Cloudflare documentation MCP server or Cloudflare Radar MCP server, and thus we want them to be accessible to anyone. But many of the MCP servers used by our workforce are sitting in front of our private corporate resources. These MCP servers require user authentication to ensure that they are off limits to everyone but authorized Cloudflare employees. To achieve this, our monorepo template for MCP servers integrates Cloudflare Access as the OAuth provider. Cloudflare Access secures login flows and issues access tokens to resources, while acting as an identity aggregator that verifies end user single-sign on (SSO), multifactor authentication (MFA), and a variety of contextual attributes such as IP addresses, location, or device certificates. 
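
As a rough sketch of what that integration looks like from the MCP server's side, a Worker can verify the JWT that Access attaches to each authenticated request before serving any tools. The team domain and application AUD tag below are placeholders, and the jose library is just one way to perform the verification.

// Sketch: verifying the Cloudflare Access JWT on incoming MCP requests.
// TEAM_DOMAIN and APP_AUD are placeholders for your own Access configuration.
import { createRemoteJWKSet, jwtVerify } from "jose";

const TEAM_DOMAIN = "https://your-team.cloudflareaccess.com";
const APP_AUD = "REPLACE_WITH_ACCESS_APP_AUD_TAG";
const JWKS = createRemoteJWKSet(new URL(`${TEAM_DOMAIN}/cdn-cgi/access/certs`));

export async function accessUserFromRequest(request) {
  const token = request.headers.get("Cf-Access-Jwt-Assertion");
  if (!token) return null; // unauthenticated: the caller should respond with a 401
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: TEAM_DOMAIN,
    audience: APP_AUD,
  });
  return payload.email; // the employee on whose behalf the MCP request is made
}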

MCP server portals centralize discovery and governance

MCP server portals unify governance and control for all AI activity.

As the number of our remote MCP servers grew, we hit a new wall: discovery. We wanted to make it easy for every employee (especially those that are new to MCP) to find and work with all the MCP servers that are available to them. Our MCP server portals product provided a convenient solution. The employee simply connects their MCP client to the MCP server portal, and the portal immediately reveals every internal and third-party MCP server they are authorized to use.

Beyond this, our MCP server portals provide centralized logging, consistent policy enforcement, and data loss prevention (DLP) guardrails. Our administrators can see who logged into what MCP portal and create DLP rules that prevent certain data, like personally identifiable information (PII), from being shared with certain MCP servers.

We can also create policies that control who has access to the portal itself, and what tools from each MCP server should be exposed. For example, we could set up one MCP server portal that is only accessible to employees that are part of our finance group that exposes just the read-only tools for the MCP server in front of our internal code repository. Meanwhile, a different MCP server portal, accessible only to employees on their corporate laptops that are in our engineering team, could expose more powerful read/write tools to our code repository MCP server.

An overview of our MCP server portal architecture is shown above. The portal supports both remote MCP servers hosted on Cloudflare, and third-party MCP servers hosted anywhere else. What makes this architecture uniquely performant is that all these security and networking components run on the same physical machine within our global network. When an employee's request moves through the MCP server portal, a Cloudflare-hosted remote MCP server, and Cloudflare Access, their traffic never needs to leave the same physical machine. 

Code Mode with MCP server portals reduces costs

After months of high-volume MCP deployments, we’ve paid out our fair share of tokens. We’ve also started to think most people are doing MCP wrong.

The standard approach to MCP requires defining a separate tool for every API operation that is exposed via an MCP server. But this static and exhaustive approach quickly exhausts an agent’s context window, especially for large platforms with thousands of endpoints.

We previously wrote about how we used server-side Code Mode to power Cloudflare’s MCP server, allowing us to expose the thousands of endpoints in the Cloudflare API while reducing token use by 99.9%. The Cloudflare MCP server exposes just two tools: a search tool lets the model write JavaScript to explore what’s available, and an execute tool lets it write JavaScript to call the tools it finds. The model discovers what it needs on demand, rather than receiving everything upfront.

We like this pattern so much, we had to make it available for everyone. So we have now launched the ability to use the “Code Mode” pattern with MCP server portals. Now you can front all of your MCP servers with a centralized portal that performs audit controls and progressive tool disclosure, in order to reduce token costs.

Here is how it works. Instead of exposing every tool definition to a client, all of your underlying MCP servers collapse into just two MCP portal tools: portal_codemode_search and portal_codemode_execute. The search tool gives the model access to a codemode.tools() function that returns all the tool definitions from every connected upstream MCP server. The model then writes JavaScript to filter and explore these definitions, finding exactly the tools it needs without every schema being loaded into context. The execute tool provides a codemode proxy object where each upstream tool is available as a callable function. The model writes JavaScript that calls these tools directly, chaining multiple operations, filtering results, and handling errors in code. All of this runs in a sandboxed environment on the MCP server portal powered by Dynamic Workers

Here is an example of an agent that needs to find a Jira ticket and update it with information from Google Drive. It first searches for the right tools:

// portal_codemode_search
async () => {
 const tools = await codemode.tools();
 return tools
  .filter(t => t.name.includes("jira") || t.name.includes("drive"))
  .map(t => ({ name: t.name, params: Object.keys(t.inputSchema.properties || {}) }));
}

The model now knows the exact tool names and parameters it needs, without the full schemas of tools ever entering its context. It then writes a single execute call to chain the operations together:

// portal_codemode_execute
async () => {
 const tickets = await codemode.jira_search_jira_with_jql({
  jql: 'project = BLOG AND status = "In Progress"',
  fields: ["summary", "description"]
 });
 const doc = await codemode.google_workspace_drive_get_content({
  fileId: "1aBcDeFgHiJk"
 });
 await codemode.jira_update_jira_ticket({
  issueKey: tickets[0].key,
  fields: { description: tickets[0].description + "\n\n" + doc.content }
 });
 return { updated: tickets[0].key };
}

This is just two tool calls. The first discovers what's available, the second does the work. Without Code Mode, this same workflow would have required the model to receive the full schemas of every tool from both MCP servers upfront, and then make three separate tool invocations.

Let’s put the savings in perspective: when our internal MCP server portal is connected to just four of our internal MCP servers, it exposes 52 tools that consume approximately 9,400 tokens of context just for their definitions. With Code Mode enabled, those 52 tools collapse into 2 portal tools consuming roughly 600 tokens, a 94% reduction. And critically, this cost stays fixed. As we connect more MCP servers to the portal, the token cost of Code Mode doesn’t grow.

Code Mode can be activated on an MCP server portal by adding a query parameter to the URL. Instead of connecting to your portal over its usual URL (e.g. https://myportal.example.com/mcp), you attach ?codemode=search_and_execute to the URL (e.g. https://myportal.example.com/mcp?codemode=search_and_execute).
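
For example, a client built on the open-source MCP TypeScript SDK (an assumption here; any MCP client that supports Streamable HTTP works the same way) would simply connect to the Code Mode URL and see only the two portal tools:

// Sketch: connecting an MCP client to a portal with Code Mode enabled.
// The portal URL reuses the example hostname from above; authentication
// against the portal is omitted from this sketch.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(
  new URL("https://myportal.example.com/mcp?codemode=search_and_execute"),
);
const client = new Client({ name: "codemode-demo", version: "1.0.0" });
await client.connect(transport);

const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // expect portal_codemode_search and portal_codemode_execute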

AI Gateway provides extensibility and cost controls

We aren’t done yet. We plug AI Gateway into our architecture by positioning it on the connection between the MCP client and the LLM. This allows us to quickly switch between various LLM providers (to prevent vendor lock-in) and to enforce cost controls (by limiting the number of tokens each employee can burn through). The full architecture is shown below.
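
As a hedged illustration, an MCP client (or any OpenAI-compatible caller) can be pointed at an AI Gateway endpoint instead of directly at the provider; the account and gateway IDs below are placeholders for your own gateway.

// Sketch: sending an OpenAI-compatible chat request through AI Gateway so that
// provider switching and per-employee cost controls happen at the gateway.
// ACCOUNT_ID and GATEWAY_ID are placeholders.
const url =
  "https://gateway.ai.cloudflare.com/v1/ACCOUNT_ID/GATEWAY_ID/openai/chat/completions";

const res = await fetch(url, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Summarize the open tickets in project BLOG." }],
  }),
});
console.log((await res.json()).choices?.[0]?.message?.content);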

Cloudflare Gateway discovers and blocks shadow MCP

Now that we’ve provided governed access to authorized MCP servers, let’s look into dealing with unauthorized MCP servers. We can perform shadow MCP discovery using Cloudflare Gateway. Cloudflare Gateway is our comprehensive secure web gateway that provides enterprise security teams with visibility and control over their employees’ Internet traffic.

We can use the Cloudflare Gateway API to perform a multi-layer scan to find remote MCP servers that are not being accessed via an MCP server portal. This is possible using a variety of existing Gateway and Data Loss Prevention (DLP) selectors, including:

  • Using the Gateway httpHost selector to scan for 

    • known MCP server hostnames (like mcp.stripe.com)

    • mcp.* subdomains using wildcard hostname patterns 

  • Using the Gateway httpRequestURI selector to scan for MCP-specific URL paths like /mcp and /mcp/sse 

  • Using DLP-based body inspection to find MCP traffic, even if that traffic uses URIs that do not contain the telltale mentions of mcp or sse. Specifically, we use the fact that MCP uses JSON-RPC over HTTP, which means every request contains a "method" field with values like "tools/call", "prompts/get", or "initialize." Here are some regex rules that can be used to detect MCP traffic in the HTTP body:

const DLP_REGEX_PATTERNS = [
  {
    name: "MCP Initialize Method",
    regex: '"method"\\s{0,5}:\\s{0,5}"initialize"',
  },
  {
    name: "MCP Tools Call",
    regex: '"method"\\s{0,5}:\\s{0,5}"tools/call"',
  },
  {
    name: "MCP Tools List",
    regex: '"method"\\s{0,5}:\\s{0,5}"tools/list"',
  },
  {
    name: "MCP Resources Read",
    regex: '"method"\\s{0,5}:\\s{0,5}"resources/read"',
  },
  {
    name: "MCP Resources List",
    regex: '"method"\\s{0,5}:\\s{0,5}"resources/list"',
  },
  {
    name: "MCP Prompts List",
    regex: '"method"\\s{0,5}:\\s{0,5}"prompts/(list|get)"',
  },
  {
    name: "MCP Sampling Create Message",
    regex: '"method"\\s{0,5}:\\s{0,5}"sampling/createMessage"',
  },
  {
    name: "MCP Protocol Version",
    regex: '"protocolVersion"\\s{0,5}:\\s{0,5}"202[4-9]',
  },
  {
    name: "MCP Notifications Initialized",
    regex: '"method"\\s{0,5}:\\s{0,5}"notifications/initialized"',
  },
  {
    name: "MCP Roots List",
    regex: '"method"\\s{0,5}:\\s{0,5}"roots/list"',
  },
];

The Gateway API supports additional automation. For example, one can use the custom DLP profile we defined above to block traffic, or redirect it, or just to log and inspect MCP payloads. Put this together, and Gateway can be used to provide comprehensive detection of unauthorized remote MCP servers accessed via an enterprise network. 
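
To make that concrete, here is a small sketch that applies the DLP_REGEX_PATTERNS defined above to an HTTP request body, the same classification a DLP profile would perform before a log, block, or redirect decision.

// Sketch: classifying a request body against the MCP DLP patterns defined above.
function classifyMcpTraffic(body) {
  return DLP_REGEX_PATTERNS
    .filter(({ regex }) => new RegExp(regex).test(body))
    .map(({ name }) => name);
}

const sampleBody = JSON.stringify({ jsonrpc: "2.0", method: "tools/call", id: 1 });
console.log(classifyMcpTraffic(sampleBody)); // ["MCP Tools Call"]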

For more information on how to build this out, see this tutorial.

Public-facing MCP Servers are protected with AI Security for Apps

So far, we’ve been focused on protecting our workforce’s access to our internal MCP servers. But, like many other organizations, we also have public-facing MCP servers that our customers can use to agentically administer and operate Cloudflare products. These MCP servers are hosted on Cloudflare’s developer platform. (You can find a list of individual MCPs for specific products here, or refer back to our new approach for providing more efficient access to the entire Cloudflare API using Code Mode.)

We believe that every organization should publish official, first-party MCP servers for their products. The alternative is that your customers source unvetted servers from public repositories where packages may contain dangerous trust assumptions, undisclosed data collection, and any range of unsanctioned behaviors. By publishing your own MCP servers, you control the code, update cadence, and security posture of the tools your customers use.

Since every remote MCP server is an HTTP endpoint, we can put it behind the Cloudflare Web Application Firewall (WAF). Customers can enable the AI Security for Apps feature within the WAF to automatically inspect inbound MCP traffic for prompt injection attempts, sensitive data leakage, and topic classification. Public-facing MCP servers are protected just like any other web API.

The future of MCP in the enterprise

We hope our experience, products, and reference architectures will be useful to other organizations as they continue along their own journey towards broad enterprise-wide adoption of MCP.

We’ve secured our own MCP workflows by: 

  • Offering our developers a templated framework for building and deploying remote MCP servers on our developer platform using Cloudflare Access for authentication

  • Ensuring secure, identity-based access to authorized MCP servers by connecting our entire workforce to MCP server portals

  • Controlling costs using AI Gateway to mediate access to the LLMs powering our workforce’s MCP clients, and using Code Mode in MCP server portals to reduce token consumption and context bloat

  • Discovering shadow MCP usage with Cloudflare Gateway

For organizations advancing on their own enterprise MCP journeys, we recommend starting by putting your existing remote and third-party MCP servers behind Cloudflare MCP server portals and enabling Code Mode to start benefiting from cheaper, safer, and simpler enterprise deployments of MCP.

Acknowledgements: This reference architecture and blog represent the work of many people across many different roles and business units at Cloudflare. This is just a partial list of contributors: Ann Ming Samborski, Kate Reznykova, Mike Nomitch, James Royal, Liam Reese, Yumna Moazzam, Simon Thorpe, Rian van der Merwe, Rajesh Bhatia, Ayush Thakur, Gonzalo Chavarri, Maddy Onyehara, and Haley Campbell.

Managed OAuth for Access: make internal apps agent-ready in one click

We have thousands of internal apps at Cloudflare. Some are things we’ve built ourselves, others are self-hosted instances of software built by others. They range from business-critical apps nearly every person uses, to side projects and prototypes.

All of these apps are protected by Cloudflare Access. But when we started using and building agents — particularly for uses beyond writing code — we hit a wall. People could access apps behind Access, but their agents couldn’t.

Access sits in front of internal apps. You define a policy, and then Access will send unauthenticated users to a login page to choose how to authenticate. 

Example of a Cloudflare Access login page

This flow worked great for humans. But all agents could see was a redirect to a login page that they couldn’t act on.

Providing agents with access to internal app data is so vital that we immediately implemented a stopgap for our own internal use. We modified OpenCode’s web fetch tool such that for specific domains, it triggered the cloudflared CLI to open an authorization flow to fetch a JWT (JSON Web Token). By appending this token to requests, we enabled secure, immediate access to our internal ecosystem.

While this solution was a temporary answer to our own dilemma, today we’re retiring this workaround and fixing this problem for everyone. Now in open beta, every Access application supports managed OAuth. One click to enable it for an Access app, and agents that speak OAuth 2.0 can easily discover how to authenticate (RFC 9728), send the user through the auth flow, and receive back an authorization token (the same JWT from our initial solution). 

Now, the flow works smoothly for both humans and agents. Cloudflare Access has a generous free tier. And building off our newly-introduced Organizations beta, you’ll soon be able to bridge identity providers across Cloudflare accounts too.

How managed OAuth works

For a given internal app protected by Cloudflare Access, you enable managed OAuth in one click:

Once managed OAuth is enabled, Cloudflare Access acts as the authorization server. It returns the www-authenticate header, telling unauthorized agents where to look up information on how to get an authorization token. They find this at https://<your-app-domain>/.well-known/oauth-authorization-server. Equipped with that direction, agents can just follow OAuth standards: 

  1. The agent dynamically registers itself as a client (a process known as Dynamic Client Registration — RFC 7591), 

  2. The agent sends the human through a PKCE (Proof Key for Code Exchange) authorization flow (RFC 7636)

  3. The human authorizes access, which grants a token to the agent that it can use to make authenticated requests on behalf of the user

Here’s what the authorization flow looks like:

If this authorization flow looks familiar, that’s because it’s what the Model Context Protocol (MCP) uses. We originally built support for this into our MCP server portals product, which proxies and controls access to many MCP servers, to allow the portal to act as the OAuth server. Now, we’re bringing this to all Access apps so agents can access not only MCP servers that require authorization, but also web pages, web apps, and REST APIs.
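
Here is a minimal sketch of the discovery half of that flow, assuming a placeholder internal app URL: probe the protected resource, follow the www-authenticate hint to the authorization server metadata, then hand off to a standard dynamic registration and PKCE flow.

// Sketch: discovering how to authenticate against an Access-protected app.
// The app URL is a placeholder; the metadata fields are standard RFC 8414 fields.
const appUrl = "https://wiki.internal.example.com/";

const probe = await fetch(appUrl);
if (probe.status === 401 || probe.status === 403) {
  console.log("Challenge:", probe.headers.get("www-authenticate"));

  // Managed OAuth publishes authorization server metadata at this well-known path.
  const metadata = await (
    await fetch(new URL("/.well-known/oauth-authorization-server", appUrl))
  ).json();

  // From here a compliant agent registers dynamically (RFC 7591) and runs PKCE (RFC 7636).
  console.log("register:", metadata.registration_endpoint);
  console.log("authorize:", metadata.authorization_endpoint);
  console.log("token:", metadata.token_endpoint);
}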

Mass upgrading your internal apps to be agent-ready

Upgrading the long tail of internal software to work with agents is a daunting task. In principle, in order to be agent-ready, every internal and external app would ideally have discoverable APIs, a CLI, a well-crafted MCP server, and have adopted the many emerging agent standards.

AI adoption is not something that can wait for everything to be retrofitted. Most organizations have a significant backlog of apps built over many years. And many internal “apps” work great when treated by agents as simple websites. For something like an internal wiki, all you really need is to enable Markdown for Agents, turn on managed OAuth, and agents have what they need to read protected content.

To make the basics work across the widest set of internal applications, we use Managed OAuth. By putting Access in front of your legacy internal apps, you make them agent-ready instantly. No code changes, no retrofitting. Instead, just immediate compatibility.

It’s the user’s agent. No service accounts and tokens needed

Agents need to act on behalf of users inside organizations. One of the biggest anti-patterns we’ve seen is people provisioning service accounts for their agents and MCP servers, authenticated using static credentials. These have their place in simple use cases and quick prototypes, and Cloudflare Access supports service tokens for this purpose.

But the service account approach quickly shows its limits when fine-grained access controls and audit logs are required. We believe that every action an agent performs must be easily attributable to the human who initiated it, and that an agent must only be able to perform actions that its human operator is likewise authorized to do. Service accounts and static credentials become points at which attribution is lost. Agents that launder all of their actions through a service account are susceptible to confused deputy problems and result in audit logs that appear to originate from the agent itself.

For security and accountability, agents must use security primitives capable of expressing this user–agent relationship. OAuth is the industry standard protocol for requesting and delegating access to third parties. It gives agents a way to talk to your APIs on behalf of the user, with a token scoped to the user’s identity, so that access controls correctly apply and audit logs correctly attribute actions to the end user.

Standards for the win: how agents can and should adopt RFC 9728 in their web fetch tools

RFC 9728 is the OAuth standard that makes it possible for agents to discover where and how to authenticate. It standardizes where this information lives and how it’s structured. This RFC became official in April 2025 and was quickly adopted by the Model Context Protocol (MCP), which now requires that both MCP servers and clients support it.

But outside of MCP, agents should adopt RFC 9728 for an even more essential use case: making requests to web pages that are protected behind OAuth and making requests to plain old REST APIs.

Most agents have a tool for making basic HTTP requests to web pages. This is commonly called the “web fetch” tool. It’s similar to using the fetch() API in JavaScript, often with some additional post-processing on the response. It’s what lets you paste a URL into your agent and have your agent go look up the content.

Today, most agents’ web fetch tools won’t do anything with the www-authenticate header that a URL returns. The underlying model might choose to introspect the response headers and figure this out on its own, but the tool itself does not follow www-authenticate, look up /.well-known/oauth-authorization-server, and act as the client in the OAuth flow. But it can, and we strongly believe it should! Agents already do this to act as remote MCP clients.

To demonstrate this, we’ve put up a draft pull request that adapts the web fetch tool in Opencode to show this in action. Before making a request, the adapted tool first checks whether it already has credentials; if it does, it uses them to make the initial request. If the tool gets back a 401 or a 403 with a www-authenticate header, it asks the user for consent to be sent through the server’s OAuth flow.

Here’s how that OAuth flow works. If you give the agent a URL that is protected by OAuth and complies with RFC 9728, the agent prompts the human for consent to open the authorization flow:

…sending the human to the login page:

…and then to a consent dialog that prompts the human to grant access to the agent:

Once the human grants access to the agent, the agent uses the token it has received to make an authenticated request:

Any agent from Codex to Claude Code to Goose and beyond can implement this, and there’s nothing bespoke to Cloudflare. It’s all built using OAuth standards.
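To make that flow concrete, here is a minimal TypeScript sketch of a web fetch tool performing the discovery and retry. It is illustrative only: the well-known paths follow RFC 9728 and RFC 8414, while the consent prompt and OAuth client helpers are hypothetical stand-ins for whatever the agent framework already provides.

```typescript
// Hypothetical stand-ins for pieces an agent framework would already provide.
async function requestUserConsent(authServer: string): Promise<boolean> {
  // A real agent prompts the human here; this sketch assumes consent is granted.
  console.log(`Requesting consent to authenticate with ${authServer}`);
  return true;
}

async function runAuthorizationCodeFlow(asMetadata: any, resourceUrl: string): Promise<string> {
  // Placeholder: a real implementation runs the OAuth authorization code flow
  // (with PKCE) against asMetadata.authorization_endpoint / token_endpoint.
  return "access-token-obtained-via-oauth";
}

async function webFetch(url: string, existingToken?: string): Promise<Response> {
  const headers: Record<string, string> = existingToken
    ? { Authorization: `Bearer ${existingToken}` }
    : {};
  const res = await fetch(url, { headers });

  // Only attempt discovery when the resource signals that authentication is required.
  if ((res.status !== 401 && res.status !== 403) || !res.headers.get("www-authenticate")) {
    return res;
  }

  // RFC 9728: the protected resource publishes metadata at a well-known path
  // (the www-authenticate header may also point at it directly).
  const origin = new URL(url).origin;
  const prm = await (await fetch(`${origin}/.well-known/oauth-protected-resource`)).json();
  const authServer: string | undefined = prm.authorization_servers?.[0];
  if (!authServer) return res;

  // RFC 8414: fetch the authorization server metadata (endpoints, scopes, etc.).
  const asMetadata = await (
    await fetch(new URL("/.well-known/oauth-authorization-server", authServer))
  ).json();

  // Ask the human, run the flow, then retry the original request with the token.
  if (!(await requestUserConsent(authServer))) return res;
  const token = await runAuthorizationCodeFlow(asMetadata, url);
  return fetch(url, { headers: { Authorization: `Bearer ${token}` } });
}
```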

We think this flow is powerful, and that supporting RFC 9728 can help agents with more than just making basic web fetch requests. If a REST API supports RFC 9728 (and the agent does too), the agent has everything it needs to start making authenticated requests against that API. If the REST API supports RFC 9727, then the client can discover a catalog of REST API endpoints on its own, and do even more without additional documentation, agent skills, MCP servers or CLIs. 

Each of these plays an important role with agents — Cloudflare itself provides an MCP server for the Cloudflare API (built using Code Mode), the Wrangler CLI, Agent Skills, and a Plugin. But supporting RFC 9728 helps ensure that even when none of these are preinstalled, agents have a clear path forward. If the agent has a sandbox to execute untrusted code, it can just write and execute code that calls the API that the human has granted it access to. We’re working on supporting this for Cloudflare’s own APIs, to help your agents understand how to use Cloudflare.

Coming soon: share one identity provider (IdP) across many Cloudflare accounts

At Cloudflare our own internal apps are deployed to dozens of different Cloudflare accounts, which are all part of an Organization — a newly introduced way for administrators to manage users, configurations, and view analytics across many Cloudflare accounts. We have had the same challenge as many of our customers: each Cloudflare account has to separately configure an IdP so that Cloudflare Access uses our identity provider. It’s critical that this is consistent across an organization — you don’t want one Cloudflare account to inadvertently allow people to sign in with just a one-time PIN rather than requiring that they authenticate via single sign-on (SSO).

To solve this, we’re currently working on making it possible to share an identity provider across Cloudflare accounts, giving organizations a way to designate a single primary IdP for use across every account in their organization.

As new Cloudflare accounts are created within an organization, administrators will be able to configure a bridge to the primary IdP with a single click, so Access applications across accounts can be protected by one identity provider. This removes the need to manually configure IdPs account by account, which is a process that doesn’t scale for organizations with many teams and individuals each operating their own accounts.

What’s next

Across companies, people in every role and business function are now using agents to build internal apps, and expect their agents to be able to access context from internal apps. We are responding to this step-function growth in internal software development by making the Workers Platform and Cloudflare One work better together — so that it is easier to build and secure internal apps on Cloudflare.

Expect more to come soon, including:

  • More direct integration between Cloudflare Access and Cloudflare Workers, without the need to validate JWTs or remember which of many routes a particular Worker is exposed on (a sketch of the JWT validation this removes appears after this list)

  • wrangler dev --tunnel — an easy way to expose your local development server when you’re building something new and want to share it with others before deploying

  • A CLI interface for Cloudflare Access and the entire Cloudflare API

  • More announcements to come during Agents Week 2026
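For context on the first item above, here is a rough sketch of the JWT validation a Worker behind Access typically performs today. It assumes the jose library; the team domain and AUD tag are placeholders you would take from your own Zero Trust configuration.

```typescript
import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholders: substitute your Access team domain and the application's AUD tag.
const TEAM_DOMAIN = "https://your-team.cloudflareaccess.com";
const POLICY_AUD = "your-application-aud-tag";

// Access publishes its signing keys at a certs endpoint on the team domain.
const JWKS = createRemoteJWKSet(new URL(`${TEAM_DOMAIN}/cdn-cgi/access/certs`));

export default {
  async fetch(request: Request): Promise<Response> {
    // Access attaches a signed assertion to authenticated requests.
    const token = request.headers.get("Cf-Access-Jwt-Assertion");
    if (!token) return new Response("missing Access token", { status: 403 });

    try {
      // Verify signature, issuer, and audience before trusting the identity.
      const { payload } = await jwtVerify(token, JWKS, {
        issuer: TEAM_DOMAIN,
        audience: POLICY_AUD,
      });
      return new Response(`hello, ${String(payload.email ?? "authenticated user")}`);
    } catch {
      return new Response("invalid Access token", { status: 403 });
    }
  },
};
```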

Enable Managed OAuth for your internal apps behind Cloudflare Access

Managed OAuth is now available, in open beta, to all Cloudflare customers. Head over to the Cloudflare dashboard to enable it for your Access applications. You can use it for any internal app, whether it’s one built on Cloudflare Workers, or hosted elsewhere. And if you haven’t built internal apps on the Workers Platform yet — it’s the fastest way for your team to go from zero to deployed (and protected) in production.

B2B Authentication Provider Comparison: Features, Pricing & SSO Support (2026)

This comprehensive guide compares the leading B2B authentication providers in 2026, including Auth0, Okta, SSOJet, MojoAuth, FusionAuth, and Keycloak.

The article explores enterprise SSO, SCIM provisioning, pricing models, developer experience, and authentication protocols such as SAML, OAuth, and OpenID Connect. It also includes feature comparisons, real-world SaaS use cases, pricing analysis, and future identity trends like passkeys and zero-trust security.

The post B2B Authentication Provider Comparison: Features, Pricing & SSO Support (2026) appeared first on Security Boulevard.

Introducing Intelligence Center 3.7: Faster decisions with clearer context across defense and enterprise

Counting intelligence outputs is simple: volume, velocity, coverage. The real question is this: does your intelligence improve decisions under pressure, with confidence you can defend?

Free TIP Bundles to test, validate, and operationalize threat intelligence faster

You cannot confidently choose threat intelligence integrations and services when you have to commit before you can validate operational impact. That is how you end up with tools that look good on paper, but do not always reduce triage time, improve detection quality, or support response the way you hoped.

Disarming disinformation: How EclecticIQ helps you analyze and track influence operations with the DISARM Framework

Disinformation is no longer just a nuisance.  It’s a weapon leveraged by both state and non-state actors.  For information operations analysts tracking influence campaigns across elections, national security threats, and coordinated disinformation efforts, the challenge is growing. Whether you work in a government agency, intelligence service, election security organization, or corporate trust and safety team, the tools at your disposal were not built for this fight.  

Deduplication, done right: Full control, full context, one entity

Threat intelligence teams deal with a constant influx of data from multiple providers, often describing the same threat actor, malware, or vulnerability in slightly different ways. Instead of speeding up analysis, this duplication adds friction and slows decisions. 


Telemetry Pipeline: How It Works and Why It Matters in 2026

25 de Março de 2026, 08:31
Telemetry Data Pipeline

A telemetry pipeline has become a core layer in modern security operations because teams no longer send data from applications, infrastructure, and cloud services straight into a single backend and hope for the best. In 2026, most environments are distributed across cloud, hybrid, and on-prem systems, which means more services, more data sources, more formats, and more operational complexity for teams that already struggle to keep visibility, control costs, and respond quickly. 

Splunk’s State of Security 2025 found that 46% of security professionals spend more time maintaining tools than defending the organization. Cisco’s research adds that 59% deal with too many alerts, 55% face too many false positives, and 57% lose valuable investigation time because of gaps in data management. When too much raw telemetry flows into the stack without filtering, enrichment, or routing, the result is higher bills, slower investigations, and more noise for already stretched teams.

That is why telemetry pipelines are gaining momentum. They give organizations a control layer to normalize, enrich, route, and govern telemetry before it reaches SIEM, observability, or storage platforms. What began primarily as a way to control volume and cost is quickly becoming a must for modern security operations. Gartner suggests that by 2027, 40% of all log data will be processed through telemetry pipeline products, up from less than 20% in 2024.

As that model matures, the next logical step is not just to manage telemetry better, but to make it useful earlier. If teams are already adding a pipeline to reduce noise, control spend, and improve routing, it makes sense to move part of the detection process closer to the stream itself rather than waiting for every event to land in downstream tools first. Solutions like SOC Prime’s DetectFlow act as an additional detection layer running directly on the stream. Instead of using the pipeline only for transport and optimization, DetectFlow applies tens of thousands of Sigma rules on live Kafka streams with Apache Flink, tags and enriches events in flight, and helps teams act on higher-value signals much earlier in the flow.

What Is Telemetry?

Before talking about telemetry pipelines, it is important to define telemetry itself.

Telemetry is the evidence systems leave behind while they run. It shows how applications, infrastructure, and services behave in real time, including performance, failures, usage, and health. 

For enterprises, that evidence is valuable because it shows what users are actually experiencing, where bottlenecks form, when failures begin, and where suspicious activity starts to flicker. For security teams, telemetry is even more important because it becomes the raw material for detection, investigation, hunting, and response.

Put differently, telemetry is the trail of digital footprints your environment leaves behind. Useful on its own, but much more powerful when it is organized before the tracks disappear into the mud.

What Are the Main Types of Telemetry Data?

Most teams work with four main telemetry categories grouped under the MELT model: Metrics, Events, Logs, and Traces.

Metrics

Metrics are numerical measurements collected over time, such as CPU usage, memory consumption, latency, throughput, request volume, and error rate. They help teams track system health, identify trends, and spot anomalies before they become visible outages.

Events

Events capture notable actions or state changes inside a system. They usually mark something important that happened, such as a user login, a deployment, a configuration update, a purchase, or a failover. Events are especially useful because they often connect technical activity to business activity.

Logs

Logs are timestamped records of discrete activity inside an application, system, or service. They provide detailed evidence about what happened, when it happened, and often who or what triggered it. Logs are essential for debugging, troubleshooting, auditing, and security investigations.

Traces

Traces show the end-to-end path of a request as it moves across different services and components. They help teams understand how systems interact, how long each step takes, and where delays or failures occur. Traces are especially valuable in distributed systems and microservices environments.

Some platforms also break telemetry into more specific categories, such as requests, dependencies, exceptions, and availability signals. These help teams understand incoming operations, external service calls, failures, and uptime. 
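To make the MELT distinction a bit more concrete, here is a small TypeScript sketch of the typical shape of each signal type. The field names are illustrative examples, not any particular vendor’s or standard’s schema.

```typescript
// Illustrative shapes for the four MELT signal types; fields are examples only.
interface MetricPoint {
  name: string;                      // e.g. "http.request.duration"
  value: number;                     // the numeric measurement
  unit: string;                      // "ms", "bytes", "percent"
  timestamp: string;                 // ISO 8601
  labels: Record<string, string>;    // host, service, region, ...
}

interface TelemetryEvent {
  action: string;                    // "user.login", "deployment.completed"
  actor?: string;                    // who or what triggered it
  timestamp: string;
  attributes: Record<string, unknown>;
}

interface LogRecord {
  timestamp: string;
  severity: "debug" | "info" | "warn" | "error";
  message: string;                   // free-text detail of what happened
  source: string;                    // application, system, or service name
}

interface TraceSpan {
  traceId: string;                   // ties all spans of one request together
  spanId: string;
  parentSpanId?: string;             // position in the end-to-end call path
  operation: string;                 // "checkout-service: POST /orders"
  startTime: string;
  durationMs: number;                // where delays or failures occur
}
```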

Telemetry Data Pros and Cons

Telemetry data can be one of the most valuable assets in modern operations, but only when it is managed with purpose. Done well, it gives teams a real-time view of how systems behave, how users interact with services, and where risks or inefficiencies begin to form. Done poorly, it becomes just another stream of noisy, expensive data.

Telemetry Data Benefits

The biggest advantage of telemetry is visibility. By collecting and analyzing metrics, logs, traces, and events, teams can see what is happening across applications, infrastructure, and services in real time.

Key benefits include:

  • Real-time visibility into system health, performance, and user activity
  • Proactive issue detection by spotting anomalies before they turn into outages or incidents
  • Improved operational efficiency through automated monitoring and faster workflows
  • Faster troubleshooting by giving teams the context needed to identify root causes quickly
  • Better decision-making through data-backed insights for product, operations, and security teams

To get the full value, telemetry needs to be consolidated and handled consistently. A unified telemetry layer helps reduce mess across tools, improves scalability, and makes data easier to analyze and act on.

Telemetry Data Challenges

Telemetry also comes with real challenges, especially as data volumes grow. The most common ones include:

  • Security and privacy risks when sensitive data is collected or stored without strong controls
  • Legacy system integration across different formats, sources, and older technologies
  • Rising storage and ingestion costs when too much low-value data is kept in expensive platforms
  • Tool fragmentation that makes correlation and investigation harder
  • Interoperability issues when systems do not follow consistent standards or schemas

This is exactly why telemetry strategy matters. The goal is not to collect more data for the sake of it, but to collect the right data, shape it early, and route it where it creates the most value. In cybersecurity, that difference is critical. The right telemetry can speed up detection and response, while unmanaged telemetry can bury important signals under cost and noise.

How to Analyze Telemetry Data 

The best way to analyze telemetry data is to stop treating analysis as the last step. In practice, good analysis starts much earlier, with clear goals, structured collection, smart routing, and storage policies that keep useful data accessible without flooding downstream tools. 

Define Goals

Start with the question behind the data. Are you trying to improve performance, reduce MTTR, monitor customer experience, detect security threats, or control SIEM costs? Once that is clear, decide which signals matter most and which KPIs will show progress. For a product team, that may be latency and error rate. For a SOC, it may be detection coverage, false positives, and investigation speed. This is also the stage to set privacy and compliance boundaries so teams know what data should be collected, masked, or excluded from the start. 

Configure Collection

Once goals are clear, configure the tools that will collect the right telemetry from the right places. That usually means deciding which applications, hosts, cloud services, APIs, endpoints, and identity systems should send logs, metrics, traces, and events. It also means setting practical rules for sampling, field selection, filtering, and schema consistency.

Shape and Route the Data 

Before data reaches SIEM, observability, or storage platforms, it should be shaped to fit the goal. That can mean normalizing records into consistent schemas, enriching events with identity or asset context, filtering noisy data, redacting sensitive fields, and routing each signal to the destination where it creates the most value.
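As a rough illustration of what shaping and routing can look like in practice, here is a minimal TypeScript sketch. The event fields, the enrichment lookup, and the destination names are assumptions made for the example, not any particular pipeline product’s API.

```typescript
// Illustrative shape-and-route step; field names and destinations are examples only.
type RawEvent = Record<string, unknown>;

interface ShapedEvent {
  timestamp: string;
  source: string;
  user?: string;
  action: string;
  severity: "low" | "high";
  attributes: Record<string, unknown>;
}

function shape(raw: RawEvent, assetOwners: Map<string, string>): ShapedEvent | null {
  // Filter obvious noise early (e.g. health-check chatter).
  if (raw["action"] === "healthcheck") return null;

  return {
    // Normalize into a consistent schema.
    timestamp: String(raw["ts"] ?? new Date().toISOString()),
    source: String(raw["src"] ?? "unknown"),
    action: String(raw["action"] ?? "unknown"),
    // Enrich with identity/asset context before analysts ever see the event.
    user: assetOwners.get(String(raw["host"])),
    severity: raw["action"] === "privilege_escalation" ? "high" : "low",
    // Redact sensitive fields instead of forwarding them downstream.
    attributes: { ...raw, password: undefined, token: undefined },
  };
}

function route(event: ShapedEvent): "siem" | "observability" | "cold-storage" {
  if (event.severity === "high") return "siem";                 // real-time detection
  if (event.source.startsWith("app.")) return "observability";  // ops metrics and dashboards
  return "cold-storage";                                        // bulk, long-retention logs
}
```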

Store Data With Intent

Not all telemetry needs the same retention period, storage tier, or query speed. High-value operational and security data may need to stay hot for rapid search and alerting, while bulk historical data can move to cheaper long-term storage. The key is to align retention with investigation needs, compliance obligations, and cost tolerance. 

Analyze, Alert, and Refine

Only after that foundation is in place does analysis become truly useful. Dashboards, alerts, anomaly detection, and visualizations work much better when the underlying telemetry is already clean, consistent, and routed with purpose. Machine learning and AI can make this process more effective by helping teams spot unusual patterns, detect anomalies faster, and identify changes that may be easy to miss in high-volume environments.

That is especially important in security operations, where the real challenge is turning telemetry into better decisions with less noise. This is exactly why a pipeline-based approach becomes so valuable. When telemetry is already being normalized, enriched, and routed upstream, analysis can start earlier, before raw events pile up in costly SIEM platforms.

Solutions like DetectFlow place detection logic, threat correlation, and Agentic AI capabilities directly in the pipeline. At the pre-SIEM stage, DetectFlow can correlate events across log sources from multiple systems, while Flink Agent and AI help surface the attack chains that matter in real time and reduce false positives. In practice, that means teams can move detection left and deliver cleaner, richer, and more actionable signals downstream.

Telemetry and Monitoring: Main Difference

Telemetry and monitoring are closely related, but they are not the same thing. Telemetry is the process of collecting and transmitting data from systems and applications. It captures raw signals such as metrics, logs, traces, and events, then sends them to a central place for analysis. Monitoring is what teams do with that data to understand system health, performance, and availability. It turns telemetry into dashboards, alerts, and reports that help people act on what they see.

The difference matters because many organizations still build their strategy around dashboards and alerts alone. Monitoring is important, but it is only one use of telemetry. Security teams also rely on telemetry for investigation, hunting, root-cause analysis, and detection engineering. In other words, telemetry is the foundation, while monitoring is one of the ways that foundation is used.

In fact, telemetry is like the nervous system, constantly gathering signals from every part of the body. Monitoring is like the brain, interpreting those signals and deciding what needs attention. Telemetry feeds monitoring. Without telemetry, there is nothing to monitor. Without monitoring, telemetry remains a raw signal with no clear action attached.

What Is a Telemetry Pipeline?

A telemetry pipeline is the operating layer between telemetry sources and telemetry destinations. It collects signals from applications, hosts, cloud platforms, APIs, identity systems, endpoints, and networks, then processes that data before sending it onward.

The easiest way to think about it is that telemetry sources produce data, but the pipeline gives that data direction. Without a pipeline, downstream tools become catch-all warehouses. With a pipeline, telemetry can be standardized, routed by value, and governed according to policy. That is especially important for security operations, where one class of data may need real-time detection while another belongs in lower-cost retention or long-term investigation storage.

From a business perspective, the value is straightforward:

  • Lower cost by reducing unnecessary downstream ingestion
  • Better signal quality through normalization and enrichment
  • Less analyst fatigue by cutting noisy, low-value events earlier
  • More flexibility to send each data type where it creates the most value
  • Stronger governance through filtering, redaction, and policy-based routing

 

How Does the Telemetry Pipeline Work?

At a high level, a telemetry pipeline works through three core stages: ingest, process, and route. Together, these stages turn raw telemetry from many sources into clean, useful data to act on.

Ingest

The first stage is ingestion. This is where the pipeline collects telemetry from across the environment: applications, cloud services, containers, endpoints, identity systems, network tools, and infrastructure components. In modern environments, this stage must handle multiple signal types at once, including logs, metrics, traces, and events, often arriving at very different volumes and speeds.

Process

The second stage is processing, and this is where most of the value is created. Data is cleaned, normalized, enriched, filtered, and optimized before it reaches downstream systems. That can include removing duplicates, standardizing schemas, enriching records with identity or threat context, redacting sensitive fields, or reducing noisy data that creates cost without adding much value.

This is also where optimization and governance come in. Instead of treating all telemetry as equally important, teams can shape data according to business and security priorities. High-value signals can be enriched and preserved. Low-value records can be reduced, tiered, or dropped. Sensitive information can be handled according to the compliance policy. In other words, processing is where the pipeline stops being a transport mechanism and becomes a control mechanism. 

Route

The final stage is routing. Once telemetry has been shaped, the pipeline sends it to the right destinations. Security-relevant events may go to a SIEM or an in-stream detection layer. Operational metrics may go to observability tooling. Bulk logs may go to lower-cost storage. Archived data may be retained for compliance or long-term investigation. The point is that the same data no longer has to go everywhere in the same form.

By integrating collection, processing, and routing into one flow, a telemetry pipeline turns data from a flood into a controlled stream. It does not just move telemetry. It makes telemetry usable.

What Kind of Companies Need Telemetry Data Pipelines?

Any company running modern digital systems needs telemetry. The real difference is how urgently it needs to manage that telemetry well. Telemetry pipelines become especially important when blind spots are expensive, which usually means complex infrastructure, regulated data, customer-facing services, or constant security pressure. AWS’s observability guidance is explicitly built for cloud, hybrid, and on-prem environments, which already describes most enterprise estates.

That need shows up across many industries. Technology and SaaS companies rely on telemetry pipelines to protect uptime and customer experience. Financial institutions use them to monitor transactions, improve fraud detection, and keep audit data under control. Healthcare organizations use them to balance reliability with privacy and compliance. Retailers, telecom providers, manufacturers, logistics firms, and public-sector agencies need them because scale and continuity leave very little room for guesswork.

For security teams, the case is even sharper. Telemetry becomes the evidence layer behind detection, triage, investigation, and response. That is why the better question is no longer whether a company needs telemetry, but whether it is still treating telemetry like raw exhaust, or finally managing it like the strategic asset it has become.

How SOC Prime Turns Telemetry Pipelines Into Detection Pipelines

Telemetry pipelines started as a smarter way to move, shape, and control data before it reached expensive downstream platforms. SOC Prime extends that idea further with DetectFlow, which turns the pipeline into an active detection layer instead of using it only for transport and optimization. 

DetectFlow can run tens of thousands of Sigma detections on live Kafka streams, chain detections at line speed, drastically reduce the volume of potential alerts, and surface attack chains that are then further correlated and pre-triaged by Agentic AI before they hit the SIEM. It also brings real-time visibility, in-flight tagging and enrichment, and ensures infrastructure scalability that goes beyond traditional SIEM limits. That moves detection left, closer to the data, earlier in the flow, and far less dependent on costly downstream solutions.

For cybersecurity teams, that is the larger takeaway. Telemetry pipelines are not just an observability upgrade or a cost-control tactic. They are becoming a core part of modern cyber defense. And when detection logic, correlation, and AI move into the pipeline itself, telemetry stops being something teams store and search later and becomes something they can act on in real time.

 



The post Telemetry Pipeline: How It Works and Why It Matters in 2026 appeared first on SOC Prime.


SOC Prime Launches DetectFlow Enterprise To Enhance Security Data Pipelines with Agentic AI

12 de Março de 2026, 05:03
SOC Prime releases DetectFlow Enterprise

BOSTON, MA, March 12, 2026 – SOC Prime today announced the release of DetectFlow Enterprise, a solution that brings real-time threat detection to the ingestion layer, turning data pipelines into detection pipelines.

Running tens of thousands of Sigma detections on live Kafka streams with millisecond MTTD using Apache Flink, DetectFlow Enterprise enables security teams to detect, tag, enrich, and correlate threat data in flight before data reaches downstream systems such as SIEM, EDR, and Data Lakes. This gives organizations a way to expand detection coverage earlier in the processing flow, enrich security telemetry before downstream analysis, and scale detection on infrastructure they already have.

As detection volumes continue to grow, many SOC teams face the same set of operational challenges, such as delayed detections, rising ingestion costs, infrastructure bottlenecks, fragmented visibility across tools, and difficulty scaling rule coverage without adding more operational overhead. DetectFlow Enterprise is designed to address those pressures by moving detection closer to the data pipeline itself, where events can be inspected, enriched, and correlated in real time.

This release reflects a practical shift in how detection is operationalized. Rather than treating the pipeline as a transport layer alone, DetectFlow Enterprise turns it into an active part of the detection workflow. Teams can manage detections from cloud or local sources, stage and validate updates, and roll out changes safely with full traceability and zero downtime. This new architectural approach also establishes DetectFlow Enterprise as a foundation for unified CI/CD workflows across the SOC Prime Platform, supporting more scalable and efficient security operations.

Teams can also run thousands of detections directly on streaming pipelines with real-time visibility and in-flight tagging and enrichment. They can correlate events across multiple log sources at the pre-SIEM stage, helping surface the attack chains that matter in real time while reducing noise and false positives.

By performing correlation before data reaches the SIEM, DetectFlow Enterprise allows teams to evaluate full telemetry streams against thousands of rules without the performance and cost trade-offs of downstream ingestion. Built on SOC Prime’s Detection Intelligence dataset, shaped by 11 years of continuous threat research and detection engineering, DetectFlow uses Flink Agent to assemble detections, events, and relevant active threat context for AI-powered analysis. This helps security teams surface high-confidence attack chains, improve investigative clarity, and accelerate response to critical threats.

“I have spent most of my career working across threat detection, SIEM, EDR, and SOC operations, and one challenge remained constant. Detection logic was always constrained by the performance and economics of the underlying stack. With DetectFlow Enterprise, we are giving teams a way to move beyond those constraints by turning the data pipeline into an active detection layer, running rules at stream speed, enriching telemetry in flight, and helping organizations scale detection without rearchitecting the rest of their security environment.”

Andrii Bezverkhyi, CEO and Founder of SOC Prime

DetectFlow is designed to work with existing ingestion architecture, requiring no changes to established SIEM workflows. It supports both air-gapped and cloud-connected deployments, allowing organizations to keep data under their control while extending detection across the broader security ecosystem. It can achieve an MTTD of 0.005–0.01 seconds and help organizations increase rule capacity on existing infrastructure by up to ten times.

About SOC Prime

SOC Prime has built and operates the world’s largest AI-Native Detection Intelligence Platform for SOC teams. Trusted by over 11,000 organizations, the company delivers real-time, cross-platform detection intelligence that helps security teams to anticipate, detect, validate, and respond to cyber threats faster and more effectively.

Pioneering the Security-as-Code approach, SOC Prime applies its Detection Intelligence to over 56 SIEM, EDR, Data Lake, and Data Pipeline platforms. The company continuously improves the breadth and quality of its threat coverage, shipping top-quality signals for AI SOCs and security analysts.

For more information, visit https://socprime.com or follow us on LinkedIn & X.



The post SOC Prime Launches DetectFlow Enterprise To Enhance Security Data Pipelines with Agentic AI appeared first on SOC Prime.


Ransom & Dark Web Issues Week 2, March 2026

Por:ATCP
11 de Março de 2026, 12:00
ASEC Blog publishes Ransom & Dark Web Issues Week 2, March 2026: Qilin ransomware attack targeting a well-known dermatology clinic in South Korea and the Korean branch of a global advertising company [1], [2]; KillSec and Everest ransomware attacks targeting a South Korean exhibition management platform and an elevator manufacturer [1], […]

Mission-ready threat intelligence: Aligning with doctrine through Defense TIP

The defense community deserves a threat intelligence platform that speaks their language. With our new Defense TIP mode, EclecticIQ aligns fully with NATO and US military doctrine, eliminating the friction caused by mismatched terminology, structure, and limited interoperability with joint and coalition intelligence workflows. This is a mission-ready capability built to meet the strategic and operational demands of modern defense intelligence.


Insights into Claude Code Security: A New Pattern of Intelligent Attack and Defense

26 de Fevereiro de 2026, 05:57

On February 20, 2026, AI company Anthropic released a new code security tool called Claude Code Security. The release came at a moment when global capital markets were highly sensitive to AI technology disrupting the traditional software industry, and it quickly triggered sharp market fluctuations and a fall in the stock prices of major […]

The post Insights into Claude Code Security: A New Pattern of Intelligent Attack and Defense appeared first on NSFOCUS, Inc., a global network and cyber security leader that protects enterprises and carriers from advanced cyber attacks.

The post Insights into Claude Code Security: A New Pattern of Intelligent Attack and Defense appeared first on Security Boulevard.


What Is a Security Data Pipeline Platform: Key Benefits for Modern SOC

24 de Fevereiro de 2026, 13:23

Security teams are drowning in telemetry: cloud logs, endpoint events, SaaS audit trails, identity signals, and network data. Yet many programs still push everything into a SIEM, hoping detections will sort it out later.

The problem is that “more data in the SIEM” doesn’t automatically translate into better detection. It often translates into chaos. Many SOCs admit they don’t even know what they’ll do with all that data once it’s ingested. The SANS 2025 Global SOC Survey reports that 42% of SOCs dump all incoming data into a SIEM without a plan for retrieval or analysis. Without upstream control over quality, structure, and routing, the SIEM becomes a dumping ground where messy inputs create messy outcomes: false positives, brittle detections, and missing context when it matters most.

That pressure shows up directly in the analyst experience. A Devo survey found that 83% of cyber defenders are overwhelmed by alert volume, false positives, and missing context, and 85% spend substantial time gathering and connecting evidence just to make alerts actionable. Even the mechanics of SIEM-based detection can work against you. Events must be collected, parsed, indexed, and stored before they’re reliably searchable and correlatable.

Cost is part of the same story. Forrester notes that “How do we reduce our SIEM ingest costs?” is one of the top inquiry questions it gets from clients. The practical answer is data pipeline management for security: route, reduce, redact, enrich, and transform logs before they hit the SIEM. Done well, this reduces spend and makes telemetry usable by enforcing consistent fields, stable schemas, and healthier pipelines so data turns into detections.

The demand pushes security teams to borrow a familiar idea from the data world. ETL stands for Extract, Transform, Load. It pulls data from multiple sources, transforms it into a consistent format, and then loads it into a target system for analytics and reporting. IBM describes ETL as a way to consolidate and prepare data, and notes that ETL is often batch-oriented and can be time-consuming when updates need to be frequent. Security increasingly needs the real-time version of this concept because a security signal loses value when it arrives late.

That is why event streaming has become so relevant. Apache Kafka sees event streaming as capturing events in real time, storing streams durably, processing them in real time or later, and routing them to different destinations. In security terms, this means you can normalize and enrich telemetry before detections depend on it, monitor telemetry health so the SOC does not go blind, and route the right data to the right place for response, hunting, or retention.

This is where Security Data Pipeline Platforms (SDPP) enter the picture. An SDPP is the solution located between sources and destinations that turns raw telemetry into governed, security-ready data. It handles ingestion, normalization, enrichment, routing, tiering, and data health so downstream systems can rely on clean and consistent events instead of compensating for broken schemas and missing context.

What Is a Security Data Pipeline Platform (SDPP)?

A Security Data Pipeline Platform (SDPP) is a centralized system that ingests security telemetry from many sources, processes it in-flight, and delivers it to one or more destinations, including SIEM, XDR, SOAR, and Data Lakes. The SDPP’s job is to take raw security data as it arrives, shape it properly, and deliver it downstream in a form that is consistent, enriched, and ready for detection and response. The shift is subtle but important. Instead of treating log management as “collect and store,” an SDPP treats it as “collect, improve, then distribute.”

In practice, SDPPs commonly support:

  • Collection from agents, APIs, syslog, cloud streams, and message buses
  • Parsing and normalization to consistent schemas (e.g., OCSF-style concepts)
  • Enrichment with asset, identity, vulnerability, and threat intel context
  • Filtering and sampling to reduce noise and control spend
  • Routing to multiple destinations (and different formats per destination)

Unlike legacy data pipelines that mainly move data from point A to point B, an SDPP adds intelligence and governance. It treats security data as a managed capability that can be standardized, observed, and adapted as environments change. That matters as teams adopt hybrid SIEM plus Data Lake strategies, scale cloud infrastructure for detection & response, and standardize telemetry for correlation & automation.

What Are the Key Capabilities of a Security Data Pipeline?

A security data pipeline turns raw telemetry into something usable before it hits your security stack. The most effective pipelines do two things at once. They improve data quality, and they control where data goes, how long it stays, and what it looks like when it arrives.

Ingest at Scale

A modern security data pipeline must collect continuously, not occasionally. That means cloud logs, SaaS audit feeds, endpoint telemetry, identity signals, and network data, pulled via APIs, agents, and streaming transports.

Transform in Flight

In-flight transformation is where the pipeline earns its value. As data flows, fields are parsed, key attributes are extracted, and formats are normalized into stable schemas. This reduces errors from inconsistent data and keeps correlation logic portable across tools. At the same time, noise can be filtered, events sampled, and privacy or redaction rules applied in a controlled, measurable, and reversible way. The result is clean, reliable data that’s ready for detection and action as it moves through the system.

Enrich With Context

Enrichment transforms daily SOC work by bringing context to the data before it reaches analysts. Instead of spending time manually gathering information, the pipeline adds identity and asset details, environment tags, vulnerability insights, and threat intelligence so events are ready for triage and correlation.

Route and Tier

Routing is where telemetry becomes truly governed. Instead of sending all data to a single destination, the pipeline applies policies to deliver the right events to SIEM, XDR, SOAR, and Data Lakes. Data is stored by value, with clear hot, warm, and cold retention paths, and can be accessed quickly when investigations require it. By handling different formats and subsets for each tool, routing keeps the pipeline organized, consistent, and fully managed across environments, turning raw streams into reliable, actionable telemetry.

Monitor Data Health

Pipelines need their own observability. Missing data, unexpected schema changes, or sudden spikes and drops can create blind spots that may only be noticed during an incident. A strong Security Data Pipeline Platform provides observability across the system, making these issues visible early and supporting safe rerouting if a destination fails.

AI Assistance

Teams are increasingly comfortable with relevant AI assistance in pipelines, especially for repetitive tasks like parser generation when formats change, drift detection, clustering similar events, and QA. The goal is not autonomous decision-making; it is faster, more consistent pipeline operation with human control.

Detect in Stream

Some teams are now running detections directly in the data stream, turning their pipelines into active detection layers. Tools like SOC Prime’s DetectFlow enable this by applying tens of thousands of Sigma rules to live Kafka streams using Apache Flink, tagging and enriching events in real time before they reach systems like SIEM. The goal is not to replace centralized analytics, but to prioritize critical events earlier, improve routing, and reduce mean time to detect (MTTD).
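As a simplified sketch of what detection in the stream means (illustrative TypeScript only, not DetectFlow’s implementation, which compiles Sigma rules and runs them on Kafka streams with Apache Flink):

```typescript
// Illustrative in-stream detection: events are tagged as they flow through the
// pipeline, so only pre-labelled, higher-value signals continue downstream.
interface StreamEvent {
  eventId: number;            // e.g. a Windows Security event ID
  commandLine?: string;
  user?: string;
  tags: string[];
}

interface StreamRule {
  id: string;
  title: string;
  tag: string;                // e.g. an ATT&CK technique label
  matches: (e: StreamEvent) => boolean;
}

// A single hand-written rule standing in for a compiled Sigma rule.
const rules: StreamRule[] = [
  {
    id: "proc-dump-lsass",
    title: "Possible credential dumping via procdump",
    tag: "attack.t1003.001",
    matches: (e) =>
      e.eventId === 4688 &&
      (e.commandLine ?? "").toLowerCase().includes("procdump") &&
      (e.commandLine ?? "").toLowerCase().includes("lsass"),
  },
];

// Evaluate every rule against each event in flight and tag the matches.
function processInStream(event: StreamEvent): StreamEvent {
  for (const rule of rules) {
    if (rule.matches(event)) event.tags.push(rule.tag, rule.id);
  }
  return event;
}
```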

What Challenges Do SDPPs Help to Solve?

Security Data Pipeline Platforms exist because modern SOC pain is not only “too many logs.” It is the friction between data collection and real detection outcomes. When telemetry is late, inconsistent, expensive to store, and hard to query at scale, the SOC ends up working around the data instead of working on threats. The main challenges SDPPs help solve are the following:

  • Data arrives too late to be useful. SIEM-based detection is not instant. Events must be collected, parsed, ingested, indexed, and stored before they are reliably searchable and correlatable. In real environments, correlation can take 15+ minutes depending on ingestion and processing load. SDPPs reduce this gap by shaping telemetry in-flight so downstream systems receive cleaner, normalized events sooner, and by routing high-priority data on faster paths when needed.
  • “Store everything” breaks the budget. Event data growth makes the default approach unaffordable. Even if you can pay to ingest everything, you still end up indexing and retaining huge volumes that do not improve detection outcomes. SDPPs help teams set clear policies, so high-value security events go to real-time systems, while bulk or long-retention logs are routed to cheaper tiers with predictable rehydration during investigations.
  • Detection logic can’t keep up with log volume. Average SOCs deploy roughly 40 rules per year, while practical SIEM rule programs and performance limits often cap usable coverage in the hundreds. More telemetry lands, but detection content does not scale at the same pace. SDPPs close the gap by reducing noise, stabilizing schemas, and preparing data so each rule has a higher signal value and works more consistently across environments.
  • ETL is not enough on its own. ETL is great for extracting, transforming, and loading data for analytics and reporting, often in batch. Security needs the continuous version of that idea. Telemetry arrives as a stream, formats change frequently, and detections need consistent schemas plus health monitoring to stay reliable. SDPPs complement ETL-style workflows by providing security-specific processing for streaming logs, schema drift handling, and operational observability.
  • Threats iterate faster than your query budget. AI-driven campaigns can evolve malicious payloads in minutes, which punishes workflows that depend on slow query cycles and manual evidence stitching. SIEMs also impose practical ceilings, including hard caps like under 1,000 queries per hour, depending on platform and licensing. SDPPs help by making each query more effective through normalization and enrichment, and by reducing the need for brute-force querying via smart routing, filtering, and tagging upstream.

What Are the Benefits of a Security Data Pipeline Platform?

When security teams talk about “too much data,” they rarely mean they want less visibility. They mean the work has become inefficient. Analysts waste time stitching context together, detections break when schemas drift, and leaders end up paying for ingest that does not move risk down.

A Security Data Pipeline Platform changes the day-to-day reality by putting one layer in charge of how telemetry is prepared and where it goes. For SOC teams, that means events arrive cleaner, more consistent, and easier to investigate. For the business, it means you can scale detection and retention without turning SIEM spend and operational noise into a permanent paycheck.

Therefore, key benefits of using Security Data Pipeline Platforms include the following:

  • Less noise, more signal. By filtering low-value events, deduplicating repeats, and adding context before events reach alerting systems, the SDPP helps analysts focus on what actually matters.
  • Lower SIEM and storage spend. The pipeline controls what gets sent to expensive destinations, routing high-value events to real-time systems while pushing bulk telemetry to cheaper tiers.
  • Less manual burden and rework. Transformation and routing rules live once in the pipeline instead of being rebuilt across tools and environments.
  • Stronger governance and compliance. Centralized policies simplify privacy controls, data residency constraints, and retention rules.
  • Fewer blind spots and surprises. Silence detection and telemetry health monitoring surface missing logs, drift, and delivery failures before incidents do.

How Can a Security Data Pipeline Platform Help Your Business?

At a business level, a Security Data Pipeline Platform is about making security operations predictable. When telemetry is governed upstream, leadership gets clearer answers to three questions that usually stay messy in mature environments: what data matters, where it should live, and what it should cost to operate at scale.

One practical impact is budget planning that survives data growth. Instead of treating ingestion as an uncontrollable variable, the pipeline makes volume a managed policy. You can set targets, prove what was reduced, and preserve the context that supports detection and compliance. That predictability turns cost reduction into operational freedom rather than a risky cut.

Another impact is standardization that unlocks reuse. When normalization is done once and applied everywhere, detection content and correlation logic can be reused across environments instead of being rewritten per source or per destination. That reduces the hidden maintenance costs that slow rollouts and drain engineering time.

A third impact is flexibility without lock-in. Intelligent routing and tiering let you align data to purpose, not vendor limitations. High-priority telemetry stays hot for response, broader datasets support hunting in cheaper stores, and long-retention logs can be archived with a clear rehydration path for investigations. The pipeline keeps the data layer stable while destinations evolve.

Finally, pipelines support operational assurance. Many organizations worry more about missing telemetry than noisy telemetry because quiet failures create blind spots that surface during incidents and audits. A pipeline that monitors source health and drift makes gaps visible early and improves confidence in security reporting.

Unlocking More SDPP Value With SOC Prime DetectFlow

Security data pipelines already help you collect, shape, and route telemetry with intent. SOC Prime’s DetectFlow adds an in-stream detection layer that turns your data pipeline into a detection pipeline. It runs Sigma rules on live Kafka streams using Apache Flink, tags and enriches matching events in flight, and routes high-priority matches downstream without changing your SIEM ingestion architecture.

DetectFlow, an in-stream detection layer for SDPP

This directly targets the detection coverage gap. There are 216 MITRE ATT&CK techniques and 475 sub-techniques, yet the average SOC ships ~40 rules per year, and many SIEMs start to struggle around ~500 custom rules. DetectFlow is built to run tens of thousands of Sigma rules at stream speed with sub-second MTTD versus 15+ minutes common in SIEM-first pipelines. Because it scales with your infrastructure, you avoid vendor caps, keep data in your environment, support air-gapped or cloud-connected deployments, and unlock up to 10× rule capacity on existing infrastructure.

DetectFlow vs Traditional Approach: Benefits for SOC Teams

For more details, reach out to us at sales@socprime.com or kick off your journey at socprime.com/detectflow.



The post What Is a Security Data Pipeline Platform: Key Benefits for Modern SOC appeared first on SOC Prime.


OneClaw: Discovery and Observability for the Agentic Era

18 de Fevereiro de 2026, 13:04

Autonomous agents and personal AI assistants are moving from experimentation to enterprise reality. Tools like OpenClaw (formerly Moltbot and Clawdbot), Nanobot and Picoclaw are being embedded across development environments, cloud workflows, and operational pipelines. They install quickly, evolve dynamically, and often operate with deep system-level access. For CISOs and security leaders, this presents a new governance challenge: How do you secure what you can’t see?

OneClaw, built by Prompt Security from SentinelOne®, was created to answer that question. It is a lightweight discovery and observability tool built to help secure AI usage by providing broad, organization-wide visibility into OpenClaw deployments without disrupting workflows or slowing down innovation. Where agent sprawl is accelerating faster than policy frameworks can adapt, OneClaw restores clarity, accountability, and executive oversight where it matters most.

Understanding The Risks Behind OpenClaw

While most security programs assume that AI applications and agents are vetted, IT sanctioned, registered, and monitored, OpenClaw agents can be:

  • Installed directly by developers
  • Extended through public skills and plugins
  • Granted persistent memory and tool access
  • Configured to run autonomously, and
  • Connected to external services and APIs.

They can also operate quietly in local environments, call tools automatically, schedule tasks via cron jobs, and access sensitive systems – all outside of most application inventory controls. This manifests in two main concerns:

  • Shadow Agent Proliferation – How many OpenClaw instances are running across the enterprise?
  • Data Egress & External Exposure – Where are agents sending information, and to whom?

Without centralized observability, these risks persist. As assistants such as OpenClaw, Nanobot, Picoclaw and other autonomous agents continue to emerge every day, OneClaw is designed to eliminate the opacity around the inherent risks they bring to businesses.

OpenClaw is only the beginning of a much larger shift toward autonomous agents and personal AI assistants embedded across the enterprise. In addition to OpenClaw, OneClaw already supports visibility coverage for emerging frameworks such as Nanobot and Picoclaw, extending the same discovery and observability capabilities across multiple agent ecosystems.

For CISOs, this future-proof approach is critical. Rather than deploying point solutions for each new tool that appears, OneClaw establishes a unified oversight layer for the entire category of new assistants, copilots, and autonomous workflows that continue to surface. The goal is not simply to manage one platform, but to provide lasting control over a rapidly expanding attack surface, ensuring that as agent adoption accelerates, security visibility and accountability keep pace.

Empowering Teams with Observability

OneClaw provides structured discovery and observability across OpenClaw deployments, leveraging a scanner that automatically detects supported CLIs and inspects the local .openclaw directory within user environments. It parses session logs, configuration files, and runtime artifacts to summarize meaningful usage patterns and surface critical operational details. OneClaw also captures browser activity performed by agents to deliver visibility into external interactions and potential data exposure paths.

From this data, OneClaw then summarizes active skills and installed plugins, recent tool and application usage, scheduled cron jobs, configured communication channels, available nodes and AI models, security configurations, and autonomous execution settings.

OneClaw outputs are structured in JSON format, making them flexible for integration into existing security ecosystems. Organizations can conduct local reviews, ingest the output into SIEM platforms, feed centralized monitoring systems, and correlate it with identity, endpoint, and cloud telemetry. For security leaders invested in unified visibility, this ensures that agent observability does not become siloed.
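As a purely hypothetical illustration of what such a report could contain, based only on the categories listed above (the actual OneClaw JSON schema is not published in this post), the shape might resemble:

```typescript
// Hypothetical per-endpoint report shape; the real OneClaw schema may differ.
interface OneClawReport {
  host: string;
  agents: {
    framework: "openclaw" | "nanobot" | "picoclaw";
    activeSkills: string[];         // installed skills and plugins
    recentTools: string[];          // recent tool and application usage
    cronJobs: string[];             // scheduled autonomous tasks
    channels: string[];             // configured communication channels
    models: string[];               // available nodes and AI models
    autonomousExecution: boolean;   // autonomous execution settings
    outboundDomains: string[];      // external interactions seen in browser activity
  }[];
}
```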

For CISOs, this transforms AI agent behavior from invisible background activity into auditable telemetry that is integrated into broader enterprise security and risk management discussions.

From Raw Telemetry to Executive Intelligence

Though many tools generate logs, very few translate them into governance-ready insights. OneClaw provides a fully centralized dashboard that aggregates reports across all employees. Rather than just reviewing isolated endpoint-level findings, security leaders get visualized deployment trends, risk heatmaps, organization-wide exposure mapping, and skill usage distribution.

In a matter of minutes, the tool can be deployed via Jamf, Intune, Kandji or SentinelOne Remote Ops.

This elevates the conversation from technical detail to strategic oversight, which is critical as CISOs are now being asked for AI agent inventories, and whether the agents are operating within policy, can access sensitive systems, and are transmitting data externally.

With OneClaw, CISOs can gain visibility into:

  • How many OpenClaw agents are deployed enterprise-wide
  • What percentage is configured for autonomous execution
  • Which teams are installing high-risk skills
  • Where agents are making outbound connections
  • How agent adoption changes day to day

This level of structured visibility empowers security leaders to build the foundational layer for proactive governance for agentic AI security and have effective conversations about the short and long-term risks from OpenClaw adoption in their organization.

Security-Focused Behavioral Analysis

OneClaw does not assume all autonomy is dangerous. Instead, it surfaces where autonomy exists so that leaders can make informed, balanced policy decisions and strengthen controls with precision. For example, security teams may determine that autonomous execution is appropriate within a sandboxed development environment but requires approvals in production systems. They may allow specific, vetted skills while restricting newly installed public plugins pending review. They also may decide that outbound communication to approved internal APIs is acceptable, while flagging unknown external domains for investigation.

By making these conditions visible, OneClaw enables proportionate governance rather than blanket restrictions. Teams can confidently approve safe automation, enforce guardrails where needed, and document oversight for executive and regulatory reporting. The result is not reduced innovation, but safer acceleration where autonomous agents operate within clearly defined boundaries, and security leaders remain firmly in control of how and where that autonomy is trusted.

Transparency Without Disruption

OneClaw was designed to deliver visibility into autonomous activity without interrupting the work those agents (and the teams deploying them) are meant to accelerate. Rather than introducing friction into development pipelines or altering how OpenClaw, Nanobot, or Picoclaw operate, it acts as an observability layer that security teams can deploy quietly across the environment.

The approach matters: Security leaders are not looking to slow down innovation, but they are accountable for what happens if agent behavior crosses policy boundaries or introduces risk. OneClaw gives CISOs the context needed to act decisively using the cybersecurity controls they already trust. It does not replace prevention capabilities; it makes them smarter by ensuring autonomous activity is no longer invisible.

By surfacing where agents exist, how they are configured, and what they are interacting with, OneClaw allows organizations to distinguish between productive automation and risk behavior that warrants intervention. Security teams can then enforce standards through existing controls without imposing blanket restrictions that only frustrate developers.

The result? This is a model aligned with how modern security actually operates – observe first, contextualize risk, and then apply controls proportionately. OneClaw makes transparency a force multiplier for the security stack, and empowers CISOs to have confidence that autonomous innovation continues under watchful, informed oversight instead of blind trust.

Chaos to Clarity | Why This Matters Now

Agentic AI discovery is no longer optional. OpenClaw’s rapid growth and decentralized infrastructure are creating a large and often unmanaged attack surface. With skills being frequently installed from public repositories and agents being granted deep permissions, configuration drift is occurring silently. Agent ecosystems can fall prey to exploitation through malicious skills and supply chain manipulation.

OneClaw supports how CISOs defend against OpenClaw-related risks by providing deep visibility and observability of agentic AI use and autonomous behavior, detecting risky configurations early on, and quantifying the exposures before incidents occur.

As adoption increases and autonomy deepens, OneClaw works to restore visibility while allowing powerful agents to accelerate development pipelines. CISOs have the clarity needed to govern autonomous systems responsibly, while getting structured discovery, centralized reporting, and security-focused analysis.

Start getting visibility into agent activity and security insights into OpenClaw deployments across your organization here. To learn more about securing your OpenClaw agents, register for our upcoming webinar happening Tuesday, March 3, 2026.

Join the Webinar
SentinelOne AI & Intelligence Leaders Discuss How to Secure OpenClaw Agents on March 3, 2026 at 10:00AM PST / 1:00PM EST