
The DSPM promise vs the enterprise reality

The data sprawl problem is worse than anyone admits

Before a DSPM tool can protect data, it must find it. That sounds straightforward. In practice, it is the first place most programs quietly begin to unravel.

Enterprises have been operating in hybrid and multi-cloud environments for a long time. Data has followed every workflow — into Salesforce, into SharePoint, into dozens of S3 buckets that were created by developers who have since moved on, and into collaboration tools adopted during the pandemic without any formal data classification policy attached. Nobody tracked it systematically. Research from Cyera’s 2024 DSPM Adoption Report found that 90% of the world’s data was created in just the last two years, and total data volume by 2025 reached 181 zettabytes. Security teams are being asked to govern a landscape that is growing faster than any tool or team was designed to handle.

When DSPM scanners go to work on a large enterprise environment, the volume of findings almost always exceeds initial expectations — sometimes by an order of magnitude. One organization I worked with discovered sensitive customer PII in seventeen cloud storage locations that they had no formal record of. Another found regulated financial data sitting in a collaboration workspace that had been shared with an external contractor two years prior and never revoked.

The visibility is genuinely valuable. But, as Wiz notes in their DSPM framework, visibility without remediation capacity is just a longer list of things that can go wrong. And that is exactly where the first real friction begins.

Ownership is a political problem, not a technical one

DSPM tools are exceptionally good at identifying data risk. They are not designed to resolve the organizational question of who is responsible for fixing it. That question, in most enterprises, does not have a clean answer.

Security teams surface the finding. The data sits in a business unit’s environment. The IT team may own the cloud account, but the data owner is in Finance, HR, or a product team operating on a separate roadmap and budget cycle. When the DSPM platform generates a remediation ticket, the question of who closes it — and who gets measured on closing it — is rarely answered in advance.

This creates what I call the remediation gap. Findings accumulate. Risk scores rise. But nothing gets fixed, because no single team has both the authority and the incentive to fix it. Security points at the business. The business points at IT. IT points at the data owner. The data owner has a product launch in six weeks and no security budget. Forcepoint’s DSPM implementation research confirms this pattern: Even capable platforms underdeliver when rollout turns into a scanning project with unclear ownership and remediation that lives in a permanently deferred backlog.

I have watched this dynamic play out in organizations across industries. It is not a technology failure. It is a governance failure — and no DSPM platform in the market today ships with a solution to it. That solution must be built by leadership, before deployment, with teeth. That means defined data ownership models, escalation paths and accountability metrics that connect to performance conversations, not just security dashboards.

Classification debt is real, and it compounds

Every DSPM implementation depends on one foundational input: A coherent data classification framework. Most enterprises do not have one that is current, enforced, or agreed upon across business units.

Most organizations have a policy document written five years ago that nobody applies consistently, plus a growing volume of unstructured content that was never classified at all. According to a 2024 industry survey cited by Securiti, 83% of IT and cybersecurity leaders say that lack of visibility into data contributes significantly to their weak security posture — a figure that points directly at the classification gap sitting underneath most programs.

DSPM tools apply machine learning to infer sensitivity from data patterns — and they are increasingly good at it. But inference is not a substitute for intentional classification. False positives create noise. False negatives create blind spots. Both erode trust in the platform over time. And once analysts stop trusting the findings, the program stalls regardless of how sophisticated the tooling is.

The harder truth is that many organizations use the DSPM project as a forcing function to finally build the classification framework they should have built years ago. That is not inherently wrong. But it dramatically expands the scope and timeline, and it requires business stakeholder engagement that security teams are rarely resourced to drive on their own. Executives who budget for a DSPM tool without budgeting for the classification work alongside it are setting their programs up for a slow, expensive drift toward shelfware.

Integration complexity is systematically underestimated

DSPM vendors will show you a connector library that spans AWS, Azure, GCP, Microsoft 365, Salesforce, Snowflake and a long list of other platforms. What the demo does not show you is what happens when your specific version of a legacy ERP system does not match the connector’s assumptions or when your on-premises database sits behind a network segment the cloud-native scanner cannot reach without significant architectural change.

Enterprise environments are heterogeneous by nature. Palo Alto Networks’ market analysis puts the DSPM market on a trajectory toward $2 billion by 2025, growing at rates between 25% and 37% annually — a reflection of just how aggressively organizations are investing in this space. But investment velocity and implementation maturity are not the same thing. The average large organization runs hundreds of distinct data stores across multiple cloud providers, legacy systems and third-party SaaS applications. Getting DSPM coverage across all of them is not a deployment — it is an ongoing engineering program.

Connectors break when APIs change. New data sources appear with every acquisition and product build. Maintaining coverage requires dedicated resources that are rarely factored into the initial business case. Executives should push their vendors on exactly which environments will have full coverage at go-live versus which ones are on a roadmap with no committed timeline. The distinction matters enormously because a DSPM deployment with significant coverage gaps gives a false sense of security that can be more dangerous than no deployment at all.

This is a point worth reinforcing with your procurement team: Gartner’s Market Guide for DSPM explicitly flags that organizations can no longer separate data visibility from data control — and that coverage depth, not just breadth, is the critical variable when evaluating platforms.

Alert fatigue arrives faster than expected

A fully operational DSPM deployment in a large enterprise will generate findings at a volume that most security operations teams are not built to absorb. The irony is that the better the tool works, the faster alert fatigue sets in.

Risk prioritization is the answer in theory. In practice, prioritization logic requires ongoing tuning that takes months of calibration with your specific data environment. Varonis, in their DSPM guidance for CISOs, makes the point directly: The goal should not be to generate a list of findings but to surface meaningful, actionable alerts that can be remediated — ideally with automation doing the heavy lifting. Most implementations fall well short of that standard in the early months.

In the meantime, analysts are triaging hundreds of findings per week, many of which turn out to be acceptable risks or known exceptions. Teams burn out. Findings get acknowledged and deprioritized. The board dashboard shows a healthy posture score that no longer reflects ground reality. Zscaler’s analysis of cloud data security challenges identifies this precisely: Security teams need AI and ML-powered prioritization not just to reduce noise but to help analysts focus effort on the data exposures that could realistically lead to a breach.
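The kind of prioritization logic these vendors describe can be sketched in a few lines. This is a purely hypothetical illustration: the fields, weights and exception handling are invented for the sketch and are not taken from any DSPM product, but it shows the basic shape of ranking findings by sensitivity and exposure while dropping known, accepted exceptions from the queue.

```python
# Hypothetical illustration of DSPM-style finding prioritization.
# Field names and weights are invented for this sketch.
FINDINGS = [
    {"id": "F1", "sensitivity": 3, "public": True,  "exception": False},
    {"id": "F2", "sensitivity": 5, "public": False, "exception": True},
    {"id": "F3", "sensitivity": 5, "public": True,  "exception": False},
]

def risk_score(finding):
    # Publicly exposed data is weighted far more heavily; known
    # exceptions score zero so analysts never see them again.
    score = finding["sensitivity"] * (10 if finding["public"] else 1)
    return 0 if finding["exception"] else score

# Analysts triage only nonzero findings, riskiest first.
queue = sorted((f for f in FINDINGS if risk_score(f) > 0),
               key=risk_score, reverse=True)
print([f["id"] for f in queue])  # ['F3', 'F1']
```

Even a toy model like this makes the tuning burden visible: every weight and every exception rule is a judgment call that has to be calibrated against your environment, which is exactly why the early months of a deployment are noisy.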

This is not an argument for turning off the tool. It is an argument for honest capacity planning. If your security operations team is already stretched, a DSPM deployment without additional analyst headcount or a meaningful automation investment is not going to improve your security posture. It is going to add a new category of noise to an already overloaded function.

What good looks like

None of the friction described here is insurmountable. Organizations that get DSPM right tend to share a few common attributes that have nothing to do with which vendor they chose.

They treat DSPM as an organizational change program, not a technology deployment. They invest in governance structures before they deploy scanners. They define data ownership at the business unit level with clear accountability, and they build that accountability into how people are measured and managed. They budget for the classification work alongside the tooling. They phase their integration roadmap honestly, scope the first phase to environments where coverage will be complete, and build confidence before expanding.

They also pay attention to what Microsoft’s research on enterprise data security posture flags as the underlying imperative: Organizations must stop seeing data security as a collection of individual tools and start treating it as a holistic program anchored in measurable business outcomes. That shift in framing changes everything — from how the board conversation is structured to how remediation accountability is assigned across the business.

Most importantly, they have executive sponsorship that goes beyond signing the purchase order. The CISOs who successfully land DSPM programs are the ones who have a CFO, COO, or CEO who understands that data security risk is a business risk — and who is willing to hold business unit leaders accountable for their piece of it.

DSPM, at its best, gives your enterprise the situational awareness it needs to make informed decisions about data risk. The organizations that leverage awareness as a genuine security improvement are the ones that walk in with eyes open — prepared for the friction, staffed for the remediation work and governed for the accountability.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Column | Realistic control over flashy AI: the agent strategy Google laid out

Of everything Google announced last week at its annual conference, Google Cloud Next 2026, the most notable item was not a new model or a new TPU. Nor was it yet another way to spread Gemini across the enterprise.

Rather, the headline message reads as an admission, and at the same time as something close to a warning.

Agents need supervision

We already knew this, but as the saying goes, "to know and not to act is not yet to know," and actually putting it into practice is another matter. We tend to treat agents as digital employees busily getting things done, but they are also vulnerable software systems holding credentials, budgets, memory and access to sensitive data. On top of that, they tend to fail in ways that are costly and hard to trace.

That is the essential message of Google Cloud Next 2026. Many read the event as Google moving to capture the agentic enterprise market, but the more interesting reading is that Google showed up to bring it under control.

Google did, of course, lean heavily on the "agentic cloud," a theme no event skips these days. It announced the Gemini Enterprise agent platform, an eighth-generation TPU (Tensor Processing Unit), new Workspace Intelligence AI features and a range of integrations meant to weave AI naturally across the enterprise. Taken purely as a celebration of the agent era, the announcements were more than sufficient.

But strip away the showmanship and a more important message emerges. For the past two years, enterprises have been enthusiastic about AI agents; they have now reached the stage where those agents must be kept from damaging the company's reputation, causing financial losses or exposing sensitive information.

This is not a criticism of Google. Quite the opposite: it may be the most practically valuable message of the event.

"Trust, but verify"

The moment AI moves beyond merely talking and starts taking real actions, essential questions pour out in any enterprise environment: who approved this, what data did it use, which systems did it access, why did it act that way, what did it cost, and how can it be stopped if necessary?

Much of what Google announced is structured as answers to those questions.

Google's emphasis makes this clear. Knowledge Catalog is designed to supply trustworthy business context from across enterprise data to ground agents' judgments. Gemini Enterprise gained capabilities for managing and monitoring agents, including long-running ones.

Workspace added features to monitor, control and audit agents' data access, reducing the risk of prompt injection, oversharing and data exfiltration. Google Cloud also introduced agent-defense capabilities and a Wiz-based security layer to protect agents across cloud and AI development environments.

These are not tools for when systems work perfectly. They were built for enterprises facing the very practical question: "It worked in the demo, but can we trust it with real work?"

The agent management layer

Industry analysts are gradually converging on the term "agent control plane" to describe this new layer of enterprise AI. The term fits because the concept is familiar: much as Kubernetes centrally manages infrastructure, it evokes a platform that centrally manages the behavior of AI agents. In other words, an integrated system for managing and observing many AI agents in one place while handling routing, security and optimization.

But reality is still far from that stage.

Agents need a control plane not because they are already replacing employees, but because enterprises are wiring probabilistic systems into deterministic business processes and discovering that someone has to manage the seam between them. Autonomy looks clean in agent demos; in real enterprise systems, things play out far more messily.

Customer data sits in one system and contract data in another, exception handling lives in someone's inbox, and the policy document is often a PDF last updated in 2021. The person who actually understood the workflow may well have left the company during the pandemic.

Into this already complicated environment, agents are now being added.

This is why I sympathize with parts of Google's control-plane strategy while remaining wary of an overly tidy vendor narrative. A unified agent platform, governance, monitoring, evaluation, observability and simulation are all necessary. Gemini Enterprise matters in particular as an attempt to centralize the messy operational pieces enterprises have been stitching together on their own.

But the control plane must not be mistaken for the work itself.

Pilots are easy, operations are hard

The data on agentic AI keeps repeating one message: expectations are running far ahead of operational maturity.

According to the "2026 State of Agentic Orchestration and Automation" report from workflow automation vendor Camunda, 71% of organizations say they are using AI agents, but only 11% actually put them into production over the past year. And 73% admit there is a gap between their agentic AI vision and reality.

Gartner's outlook is similar: it expects more than 40% of agentic AI projects to be scrapped by the end of 2027, citing cost, unclear business value and inadequate risk management.

To be clear, this is not a model problem. It is a classic enterprise software operations problem.

The same pattern appears in security and governance. In a 2026 survey by Writer, a generative AI management platform, 67% of executives said they had experienced a data leak or security incident caused by unapproved AI tools.

Another 36% said they have no formal plan for supervising AI agents, and 35% said they cannot immediately shut down an agent when something goes wrong.

Of those three figures, the last is the most alarming. These are software agents with access to enterprise systems, customer data and organizational credentials, and yet more than a third of companies are not confident they can stop one quickly when trouble hits.

Should we really not be worried?

The agent is the least important part

The hidden truth of the agentic enterprise is that the agent itself may be the least important element of the architecture. All the attention and excitement go to the agents, but the real work lies elsewhere: authentication and authorization, workflow boundaries, data quality, retrieval and memory, evaluation regimes, audit trails, cost controls, and deciding which system serves as the single source of truth when an agent gets confused.

The announcements at Google Cloud Next did not prove that the agentic enterprise has arrived. What they showed instead is that if the agentic enterprise does materialize, it will look much like enterprise software always looks when it reaches a serious stage: it converges on governance-centered structure rather than magical innovation.

That is progress, but it is hardly glamorous progress.

If you want to pick winners in the agentic AI market, don't look for the company with the smartest agents. Look for the ones with clear data contracts, rigorous evaluation regimes, a coherent authentication model and minimal "shadow AI" sprawl. The industry is reluctant to tell that story, because talking about autonomous digital workers is far more exciting than discussing data lineage or access control.

But it is in exactly that "boring" territory that enterprise software becomes real.

There is another reason to hesitate before declaring that the agent era has arrived: an agent's usefulness ultimately depends on data it can safely understand and use. Google clearly recognizes this. The "agent data cloud" concept, including the Knowledge Catalog cross-cloud lakehouse strategy, is an acknowledgment that agents need trustworthy business context.

Without that context, an agent is not an enterprise worker but a well-spoken tourist wandering through your systems.

In the end, the most encouraging announcements at this Google Cloud Next were not technologies that make agents more autonomous. They were capabilities that make agents easier to manage. Agentic AI holds enormous potential, but to become real it must first prove itself "boringly reliable."

The secure intelligence framework: Architecting AI systems for a data-driven world

When I first started deploying AI systems at scale, I made the same mistake most technology leaders make: I treated security and data architecture as problems to solve after the intelligence layer was built. We moved fast, we shipped models and we celebrated early wins. Then, six months in, we discovered that one of our machine learning pipelines was inadvertently exposing sensitive customer data to downstream systems that had no business accessing it. No breach, no headlines, but it was a wake-up call that reshaped how I think about AI architecture entirely.

The truth is, most organizations are building AI the wrong way. They invest heavily in model performance, infrastructure and compute, but treat data governance and security as afterthoughts. In my experience working across industries, this approach creates systems that are technically impressive but fundamentally fragile. Intelligence without integrity is just sophisticated risk.

This article outlines the framework I developed, which I now call the Secure Intelligence Framework, and how any technology leader can apply it to build AI systems that are both powerful and trustworthy.

Why security must be designed in, not bolted on

The instinct to move fast when deploying AI is understandable. Business pressure is real and AI projects often begin as proofs of concept that quietly grow into production systems before anyone has thought seriously about security.

But this sequencing is dangerous. According to the IBM Cost of a Data Breach Report 2024, the average cost of a data breach reached $4.88 million globally, and organizations without AI and automation embedded in their security operations paid significantly more. Poorly architected AI systems expand an organization’s attack surface, creating new vulnerabilities through model APIs, training data pipelines and inference endpoints that traditional security frameworks were never designed to address.

The deeper problem is cultural. When security is treated as a deployment checklist rather than a design principle, teams inevitably cut corners under deadline pressure. I have seen organizations launch production AI systems with no access logging, no output monitoring and no rollback plan because those conversations happened after the build, not before it. By that point, the architecture is already set and retrofitting security is expensive, disruptive and often incomplete.

When I redesigned our AI architecture, I started from a single principle: every layer of the system must assume that every other layer is potentially compromised. This is zero-trust thinking applied to AI, and it changes everything about how you design data flows, access controls and model governance. The NIST AI Risk Management Framework offers a strong foundation here; it is one of the first documents I share with any team beginning a serious AI deployment.

Figure 1: The secure intelligence framework’s data, model and governance layers.

Sunil Kumar Mudusu

The 3 layers of a secure AI system

The Secure Intelligence Framework is built on three interdependent layers. Each must be addressed independently and then integrated as a whole.

The data layer

This is where most vulnerabilities begin. I have seen organizations connect machine learning models directly to production databases with minimal access controls, reasoning that the model itself is not a user and therefore does not pose a risk. This thinking is wrong and expensive.

Data pipelines must enforce least-privilege access; every component of the AI system should access only the specific data it needs, nothing more. At one organization I worked with, implementing role-based access controls at the pipeline level alone reduced sensitive data exposure by over 60% without any impact on model performance. Equally important is data lineage. You must be able to answer, at any point, exactly what data trained a given model, where it came from and who had access to it. Without lineage, you cannot audit, you cannot comply and you cannot debug when something goes wrong.
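A deny-by-default pipeline gate with lineage recording can be sketched in a few lines of Python. Everything here (the `PipelineACL` class, the component and dataset names) is hypothetical and intended only to show the shape of least-privilege reads plus lineage capture, not any specific platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class PipelineACL:
    # component name -> set of dataset names it may read
    grants: dict = field(default_factory=dict)
    # append-only record of (component, dataset) reads
    lineage: list = field(default_factory=list)

    def allow(self, component, dataset):
        self.grants.setdefault(component, set()).add(dataset)

    def read(self, component, dataset, loader):
        # Deny by default: a component reads only what it was granted.
        if dataset not in self.grants.get(component, set()):
            raise PermissionError(f"{component} may not read {dataset}")
        # Record lineage so "what data trained this model?" is answerable.
        self.lineage.append((component, dataset))
        return loader(dataset)

acl = PipelineACL()
acl.allow("churn-model-trainer", "customer_features")

rows = acl.read("churn-model-trainer", "customer_features",
                loader=lambda name: [{"id": 1}, {"id": 2}])
print(len(rows))    # 2
print(acl.lineage)  # [('churn-model-trainer', 'customer_features')]
```

The design choice worth noting is that access control and lineage live in the same choke point: every read that is permitted is also recorded, so the audit trail cannot drift out of sync with the access policy.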

The model layer

Once data is governed properly, attention turns to the models themselves. The key risks here are model inversion attacks, where adversaries extract training data from model outputs, and prompt injection in large language model deployments, where malicious inputs manipulate model behavior.

Defending against these threats means treating model endpoints like any other sensitive API: authentication, rate limiting, output filtering and adversarial testing built into the deployment pipeline as standard practice. The OWASP Top 10 for Large Language Model Applications is one of the most practical references I have found for model-layer risk; it catalogs the exact attack patterns that keep AI security teams up at night. When we deployed an NLP system for internal knowledge management, we added an output review layer that scanned responses for personally identifiable information before returning results to users. It added 40 milliseconds of latency. It was worth every millisecond.
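As a rough illustration of the output review idea, here is a minimal redaction pass. The patterns and the `review_output` function are invented for this sketch; a production filter would rely on a vetted PII detection library or service, not two hand-written regexes.

```python
import re

# Illustrative patterns only: real PII detection needs far broader
# coverage (names, addresses, account numbers, context-aware matching).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def review_output(text: str) -> str:
    """Redact recognizable PII from a model response before it reaches the user."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(review_output("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [REDACTED EMAIL], SSN [REDACTED SSN].
```

The point of the sketch is architectural rather than the regexes themselves: the review layer sits between the model and the user, so every response pays a small, predictable latency cost in exchange for a hard guarantee that unreviewed output never leaves the system.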

The governance layer

This is the layer most often overlooked because it feels administrative rather than architectural. In reality, governance is what holds the other two layers together over time.

Governance means clear ownership for every model in production: who built it, who maintains it and who is accountable for its outputs. It means model versioning and rollback capabilities. And it means regular audits of both model performance and data access patterns. Microsoft’s Responsible AI Standard and Google’s Model Cards framework are both practical starting points that I have adapted in my own work. Neither is a plug-and-play solution, but both offer structured thinking that can be tailored to almost any organizational context.
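To make the versioning-and-rollback requirement concrete, here is a toy registry sketch. The class and field names are invented for illustration; a real deployment would use a proper model registry (MLflow, SageMaker Model Registry or similar) rather than an in-memory structure, but the invariants are the same: every version has a named owner, and rolling back is a first-class operation.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str  # who is accountable for this model's outputs

class ModelRegistry:
    """Toy registry: one active version per model, with rollback."""
    def __init__(self):
        self.history = {}  # model name -> list of ModelRecord; last is active

    def register(self, record):
        self.history.setdefault(record.name, []).append(record)

    def active(self, name):
        return self.history[name][-1]

    def rollback(self, name):
        # Revert to the previous version; never drop the last one.
        if len(self.history[name]) > 1:
            self.history[name].pop()
        return self.active(name)

reg = ModelRegistry()
reg.register(ModelRecord("churn", "1.0", "ml-platform-team"))
reg.register(ModelRecord("churn", "1.1", "ml-platform-team"))
print(reg.active("churn").version)    # 1.1
print(reg.rollback("churn").version)  # 1.0
```

Because ownership travels with every version record, the audit question "who is accountable for what is running right now?" has a one-line answer at any point in time.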

What this looks like in practice

Implementing this framework does not require rebuilding everything at once. I introduced it using a phased approach over three quarters.

In the first quarter, we focused on the data layer: auditing pipelines, implementing access controls and establishing lineage tracking. Unglamorous work, but it surfaced three data access issues we had not previously known existed. In two cases, internal teams had been querying datasets they were never authorized to use, simply because no restriction had been put in place.

In the second quarter, we addressed the model layer: hardening endpoints, introducing output filtering and embedding adversarial testing into our CI/CD pipeline. The team developed a security-first mindset that made these changes feel natural rather than imposed.

In the third quarter, we formalized governance, assigning model owners, establishing review cycles and integrating model audits into existing IT processes. By year-end, we had a system our security team, legal team and business stakeholders could all trust. New AI projects that previously took weeks to approve were being scoped and greenlit in days because the foundational questions had already been answered at the architecture level.

Figure 2: Three-quarter phased implementation roadmap with outcomes per phase.

Sunil Kumar Mudusu

Trust is architected, not assumed

Security and intelligence are not in tension; they are complementary. The discipline that makes an AI system secure also makes it more reliable, more auditable and more explainable to the stakeholders who need to trust it.

AI is not a technology problem. It is a trust problem.

If you are building AI systems without a structured approach to data governance and security, you are not moving faster than your competitors. You are accumulating technical debt that will cost far more than the speed ever gained. The organizations that lead in AI over the next decade will not be those that deploy the most models; they will be those that deploy models people can trust.

Start with the data. Secure the model. Govern everything. The rest is execution.


Google adds end-to-end Gmail encryption to Android, iOS devices for enterprises

Google has taken a big step forward by extending end-to-end encryption to Android and iOS devices for Gmail client-side encryption (CSE) users, says an expert.

“All in all, this is a welcome update, especially in light of recent concerns surrounding WhatsApp’s encryption methods,” said Gartner analyst Avivah Litan. “Google’s approach offers verifiable customer-managed keys and ensures the provider does not have access to encrypted content.”

This, she said, addresses allegations raised in the January 2026 lawsuit against Meta regarding their internal access to customer encrypted message data.

Meta has reportedly said the claims are false, and that WhatsApp messages remain protected by default. The suit’s allegations have not been proven in court.

Litan noted that Google’s encryption update is only for organizations subscribing to its Enterprise Plus with Assured Controls edition. Messages and attachments are encrypted directly on-device, with encryption keys managed externally by the customer.

“For CSOs in regulated industries, this development is significant, as it supports secure mobile communication, compliance with regulations such as HIPAA [the U.S. Health Insurance Portability and Accountability Act] and GDPR [the European General Data Protection Regulation], and reduces the risk of plaintext data exposure on mobile devices,” she said. “External recipients retain the ability to reply via a web portal.”

However, Litan added, the capability remains opt-in, requires premium licensing and administrative configuration, and disables several Gmail functions, including AI features and comprehensive search, on encrypted content. But, she pointed out, the limitations are consistent with those in Gmail web and desktop implementations.

It’s also a capability that Microsoft doesn’t provide. A Microsoft spokesperson said in an email that the company doesn’t currently offer end-to-end Outlook encryption on mobile, although messages can be digitally signed and encrypted. 

In its April 9 announcement, Google said Workspace users can compose and read end-to-end encrypted messages natively within the Gmail app on Android and iOS without the need to download extra apps or use mail portals. Users with a Gmail E2EE license can send an encrypted message to any recipient, regardless of their email address. If the recipient uses the Gmail app, the encrypted message will be delivered as a normal message thread to their inbox, but if not, they can seamlessly and securely read and reply in their own native browser. This, Google said, ensures that all users have a simple and secure interface, regardless of their email service or device.

Google Workspace admins will need to enable the Android and iOS clients in the CSE admin interface to give users access to the new capability. This can be done in the Admin Console.

End users also need to be taught the new process: To add client-side encryption to any message, they must click the lock icon and select ‘additional encryption’. Then they can compose a message and add attachments as they normally do.

Forrester Research Senior Analyst Andrew Cornwall noted the biggest benefit for enterprises is that Workspace admins or Google can disable the ability to take screenshots and screen recordings when users read an encrypted message in the Gmail app. That will prevent Android and iOS recipients from forwarding a message as an image, he said, noting that Google can also disable screenshots in Android Chrome for business users and presumably will do this when Android users with email programs other than Gmail open a message in a browser.

From a user’s perspective, he added, this encryption gives Gmail an advantage over third-party email programs like Outlook and Thunderbird, which won’t automatically decrypt messages that have been encrypted using Google’s encryption mechanism. Unlike some encryption methods, Gmail doesn’t require the exchange of a key in advance, so users will be more likely to use it.

However, he pointed out, Google’s client-side encryption doesn’t encrypt headers or message senders, so an attacker with access to the device can still get some potentially sensitive information even with encryption enabled.

“If you’re planning to use Gmail to commit financial crimes or plan a revolution,” he added, “you should know that Google controls the display and often the keyboard on devices they build. Even if emails are encrypted on device, your messages may still be available while being read or composed.”

And while end-to-end encryption (E2EE) is considered by experts to be an excellent protection against the hijacking of data in transit, it won’t protect data on compromised devices, stolen and hacked devices, or in unencrypted backups.

David Shipley, CEO of security awareness provider Beauceron Security, noted the extension of Gmail end-to-end encryption to mobile platforms will help organizations meet privacy compliance requirements. “On the downside,” he added, “this is going to be a powerful tool for criminals. If they spin up a Google Workspace tenant and send encrypted messages to end users who aren’t on Gmail, in those cases, users will get a link to a new portal to read the sent message which will not be intercepted by a lot of security tools like email filters.”

This article originally appeared on Computerworld.
