AI sprawl: Why your productivity trap is about to get expensive

May 8, 2026, 07:00

I have seen this movie before.

A decade ago, at Tesla, our Finance team faced a data crisis. We had information scattered across accounting, supply chain and delivery systems, all disconnected, all using different structures. The engineering team was rightfully focused on Full Self-Driving (FSD) and manufacturing. So, we did what productivity-hungry teams always do: We built our own solution. We taught ourselves Structured Query Language (SQL), normalized the data with creative IF-THEN logic and created our own reporting database.

It worked beautifully. Until it became a governance nightmare. The VP of Engineering hated our siloed system with embedded business logic. We eventually handed it over to IT, but not before our workaround forced the company to finally resource a proper data team.

The pattern is always the same: Productivity-hungry teams build workarounds faster than the organization can govern them, and by the time leadership notices, the workarounds have become the infrastructure.

That was more than a decade ago. The pattern took years to unfold.

Today, I am watching the exact same dynamic play out in insurance and industries across the board, but compressed into months, not years. AI adoption is sprawling across organizations, led by the same productivity-hungry individuals, but without central platforms or governance. Leadership has not created space for safe experimentation, so adoption spreads like a city without a highway system. The difference? Back then, we were building SQL databases. In 2026, we are building AI agents. And the cost of fragmentation is exponentially higher.

What is AI sprawl?

AI sprawl is what happens when the cost of building AI drops faster than an organization can govern it. Teams spin up models, agents and automations independently. Each one works in isolation. None of them connect. The result is fragmented data, drifting decisions and intelligent systems that quietly get abandoned.

It happens because execution has become cheap. Large Language Model (LLM) APIs, no-code tools and cloud infrastructure have made spinning up AI trivially easy. A claims team builds an automation to speed adjudication. Underwriting builds a model to assess risk. Customer service deploys a chatbot. Each initiative delivers local value. No single project looks like a problem.

But collectively, they create an ungovernable landscape.

Over the past 18 months, the GenAI acceleration intensified what IDC calls the GenAI scramble: scattered, fragmented and sometimes redundant applications launched by business-led initiatives without central oversight. Many organizations have fallen into what researchers describe as a productivity trap: Focusing on short-sighted value generation instead of scalability, which limits their ability to create reusable capabilities across departments.

AI sprawl is everywhere

A major property and casualty carrier recently invited us to speak with their innovation leadership about implementing process automation. We spoke with more than 10 key stakeholders across multiple lines of business and found more than a dozen different POCs and local solutions across claims intake, underwriting and fraud detection.

Six of them were solving overlapping problems. None shared data infrastructure. Two had been abandoned months earlier but were still running and still being billed.

This is not an outlier. It is the norm.

AI sprawl persists because it is insidious, hiding in plain sight unless you look for it. Business units move fast, build independently and solve immediate problems. IT discovers shadow AI only when something breaks, when an audit is triggered or when a vendor renewal surfaces a tool nobody knew existed. And the symptom multiplies as more innovative teams spring up across the organization.

The 4 hidden costs of sprawl

AI sprawl creates costs that compound over time, many of which are not visible in any single budget line. It results in a dangerous cascade of failures:

  1. Governance becomes impossible. Companies cannot govern what they cannot see. When AI systems scatter across departments, audit trails fragment. Bias monitoring becomes inconsistent. Explainability standards vary by team.
  2. Scaling stalls. Disconnected systems cannot integrate. Every new initiative starts from scratch instead of building on shared infrastructure.
  3. Maintenance and redundant spending multiply. Teams that built AI to accelerate their work end up spending most of their time maintaining it. One carrier reported that 60% of their AI engineering capacity was devoted to maintaining existing tools rather than building new capabilities. Meanwhile, teams unknowingly pay for overlapping capabilities because nobody has a complete view of AI spending.
  4. Talent drains away. The best AI engineers want to solve hard problems. When they are cornered into spending their time maintaining fragmented infrastructure, they walk out the door.

Why traditional governance fails

Seventy percent of large insurers are investing in AI governance frameworks. Yet only 5% have mature frameworks in place. This gap is not about commitment or resources. It is about a category mistake.

For the last two decades, enterprise software governance worked because the software itself worked a certain way. Systems were point solutions. A claims platform did claims. A policy admin system did policy admin. Each tool had a clear owner, a defined scope and a predictable boundary. Governance could wrap around the edges, through access controls, audit logs, change management, vendor reviews, because the edges were visible. We governed the perimeter because the perimeter was the product.

AI is not a point solution. It is foundational technology, closer to electricity or a database than to a piece of software. It does not sit inside a defined boundary; it flows across every process, every decision and every department that touches data. And because it flows, it cannot be governed at the perimeter.

This is why carriers applying the old playbook keep running in place. Policy documents, oversight committees and compliance checklists were designed to govern systems that stood still. AI does not stand still. It is built, modified, retrained and extended by the same teams it is meant to serve, often in the same week. By the time a governance committee reviews it, three more versions exist somewhere else in the organization.

The failure is not that carriers are governing AI badly. It is that they are governing it as if it were software, when it’s actually infrastructure. Infrastructure requires a different discipline: Shared foundations, common standards and the assumption that everyone will build on top of it. You do not govern electricity by reviewing each appliance. You govern it by standardizing the grid.

Until carriers make that shift, their frameworks will keep maturing on paper while sprawl compounds underneath.

3 questions every insurance CIO should be able to answer

If the failure of traditional governance is a category mistake, the first job of leadership is to check which category they are actually operating in. These three questions are not meant to produce tidy answers. They are meant to reveal whether you are still governing AI as software when you should be governing it as infrastructure.

1. Are you governing AI at the perimeter, or at the foundation?

Look at your current AI governance artifacts, such as the policies, the committees, the review processes. Are they designed to wrap around individual tools after they are built, or to set shared standards that every tool must be built on top of? Perimeter governance asks, “Is this specific model compliant?” Foundational governance asks, “Does every model in this organization inherit the same definitions, the same lineage and the same guardrails by default?” If your governance only kicks in at review time, you’re still treating AI like software. You’re already behind.
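The difference can be made concrete in code. A minimal sketch, assuming a hypothetical shared base class an organization publishes as its foundation; `GovernedModel`, `ClaimsRiskModel` and the guardrail logic are illustrative, not any vendor’s API:

```python
from abc import ABC, abstractmethod
from datetime import datetime, timezone

class GovernedModel(ABC):
    """Base class every team builds on: definitions, lineage and
    guardrails are inherited by default, not bolted on at review time."""

    def __init__(self, name: str, owner: str, data_sources: list[str]):
        self.name = name
        self.owner = owner                # accountable team, always recorded
        self.data_sources = data_sources  # declared lineage for inputs
        self.audit_log: list[dict] = []   # uniform audit trail, same everywhere

    @abstractmethod
    def _infer(self, features: dict) -> float:
        """Team-specific logic: the only part a team writes itself."""

    def predict(self, features: dict) -> float:
        # Inherited guardrail: reject inputs outside the declared lineage.
        unknown = set(features) - set(self.data_sources)
        if unknown:
            raise ValueError(f"{self.name}: undeclared inputs {sorted(unknown)}")
        score = self._infer(features)
        # Inherited lineage: every prediction is logged the same way.
        self.audit_log.append({
            "model": self.name,
            "owner": self.owner,
            "inputs": sorted(features),
            "score": score,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return score

class ClaimsRiskModel(GovernedModel):
    """A team's model: it writes _infer and inherits everything else."""
    def _infer(self, features: dict) -> float:
        # Toy scoring logic standing in for a real model.
        return min(1.0, features.get("claim_amount", 0) / 100_000)

model = ClaimsRiskModel("claims-risk", "claims-team", ["claim_amount", "region"])
print(model.predict({"claim_amount": 25_000}))  # 0.25, with an audit entry recorded
```

Under this shape, a team never decides whether to add lineage or input checks; it gets them by inheriting, which is what governing at the foundation means in practice.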

2. If you standardized one thing across your entire organization tomorrow, what would create the most leverage and why haven’t you?

Every carrier has a list of things they know should be standardized but have not been. Shared definitions for core entities. Common ways of handling unstructured inputs. A single source of truth for how decisions get logged. The question is not which item belongs at the top of the list; most CIOs already know. The question is what has been blocking the standardization: Is it political, budgetary or organizational? Because that blocker, whatever it is, is also what is letting sprawl compound. Governance frameworks cannot fix foundational decisions that have been deferred.

3. When a new AI initiative launches next quarter, what will it automatically inherit from what already exists?

This is the real test. In a point-solution world, every new system is built fresh and governance is applied afterward. In a foundational world, every new system inherits shared standards, shared definitions, shared oversight before a single line of code is written. If the honest answer is “it will inherit nothing, and we will govern it after the fact,” then you do not have an AI governance problem. You have an AI foundation problem, and no amount of policy will close the gap.

The uncomfortable truth is that most carriers will answer these questions honestly and discover they are still operating from the old playbook. It is a signal that the work to be done is not more governance, but different governance, the kind that assumes AI is the ground floor, not the top floor.

This article is published as part of the Foundry Expert Contributor Network.


The AI agent management battle in the multicloud era: MS and Google strategies diverge

May 8, 2026, 03:23

Microsoft (MS) and Google are strengthening their AI agent controls so that enterprise IT organizations can keep up with tools that access corporate data and carry out work across a range of business applications.

On May 1, MS made ‘Agent 365’ generally available to enterprise customers. The service helps organizations discover, manage and secure AI agents. Notably, it covers not only agents in MS environments but also those running across third-party SaaS, cloud and on-premises environments.

On the 4th, Google announced an AI control center for ‘Workspace’. The feature focuses on giving administrators a single, central view of AI usage, security settings, data protection policies and privacy safeguards.

The timing of these announcements reflects a shift in how enterprises use AI. Many companies are no longer stuck at the chatbot-testing stage; they are moving in earnest to deploy agents that access corporate systems and perform work on users’ behalf.

This shift also changes how CIOs and CISOs view AI agents inside the enterprise.

Biswajeet Mahapatra, principal analyst at research firm Forrester, explained: “By placing agent controls alongside identity, access, data and workload management, vendors are positioning AI governance as an operational domain jointly owned by IT and security. For CIOs, this means AI agents must be managed like any other digital workforce, with lifecycle management, cost visibility and integration into service management frameworks.”

The CISO’s role is expanding as well. Beyond traditional concerns such as model risk and data leakage, organizations now need mechanisms to continuously control the behavior of increasingly autonomous agents and to minimize the impact when risks materialize.

Lian Jye Su, chief analyst at Omdia, stressed: “AI governance is emerging as a core component of every AI-based enterprise application. As adoption expands beyond pilots to the whole enterprise, governance must be built in from the AI build stage onward.”

How MS and Google differ

MS’s ‘Agent 365’ and Google’s AI control center address similar governance problems, but they start from different places.

Omdia’s Su explained: “Given that enterprises are adopting AI ever more aggressively across multicloud and hybrid IT environments, the two approaches are complementary. Each is optimized for AI workloads in its own environment, so companies heavily invested in a particular vendor will find the native AI governance experience far smoother.”

Forrester’s Mahapatra reads the difference as a matter of platform scope, not governance maturity. In his analysis, MS treats AI agents as ‘enterprise actors’ to be managed across the organization, while Google tends to focus more on how AI operates within collaboration data and user content.

“Because the two approaches cover different control domains, it is hard to call them outright competitors,” Mahapatra said, “but unless an enterprise standardizes on both ecosystems at once, it is hard to call them fully complementary either.” He added: “Over time, as each model becomes more tightly coupled to its own productivity and data platform, there is a growing risk that AI governance decisions will be driven by a particular vendor choice rather than by enterprise architecture strategy.”

Pareekh Jain, CEO of Pareekh Consulting, offered a more neutral view. “The two approaches are complementary and competitive at the same time,” Jain said. “Especially for enterprises that use both MS and Google, AI governance is likely to become even more tightly bound to each vendor’s underlying platform.”

The risks that remain

The new controls help enterprises get a better handle on AI agents, but analysts note that they do not resolve the larger risks: shadow AI, third-party integrations and accountability for autonomous behavior.

Jain pointed out that shadow AI agents can still emerge through developer tools, browser extensions, local assistants, SaaS copilots and unsanctioned tool integrations. Third-party integrations, he added, may also spread faster than security validation can keep pace.

“Audit logs show what happened,” Jain said, “but they cannot always explain why an autonomous agent chose to act the way it did.”

As a result, when an agent takes an action that creates business or security risk, the enterprise faces hard questions about control and accountability. Better logs, in other words, do not automatically settle questions of responsibility or control.

Forrester’s Mahapatra pointed out that the biggest gaps are likely to arise outside the native platforms. Shadow agents created through low-code tools, external APIs and SaaS applications can bypass central controls and operate with excessive or inherited privileges.

“Third-party integrations extend an agent’s reach, but visibility into its subsequent behavior and data propagation often does not keep pace,” Mahapatra said. “When agents chain across multiple systems, auditability is uneven, making it hard to distinguish intent from outcome, and accountability remains unclear when an autonomous agent causes real business or security impact.”

The consensus among experts, then, is that the baseline controls MS and Google provide help, but are unlikely to cover the entire AI agent landscape. Enterprises that combine multicloud, multiple SaaS products, development platforms and browser-based AI assistants will need to build a governance framework of their own that goes beyond any single vendor’s console.
dl-ciokorea@foundryco.com


I gave our developers an AI coding assistant. The security team nearly mutinied

May 6, 2026, 09:00

I’ve sat in enough risk meetings to know the sound a bad surprise makes before anyone names it. It usually starts with a pause. Then a throat gets cleared. Then someone says, “We may need to bring the CISO into this.”

That happened over a developer tool.

Not a breach. Not a regulator. Not ransomware at 2:00 a.m. A coding assistant.

At first, I thought the reaction was overcooked. I’d seen the same pattern in other boardrooms and delivery teams. A new tool appears. Engineers like it because it saves time. Leadership likes it because it promises more output without hiring half a city. Security hates it because security has the social burden of being the adult in the room when everyone else is buying fireworks.

I backed the rollout because the case was clean on paper. Developers were drowning in repetitive work. Deadlines were tightening. Technical debt had started breeding in the dark. The assistant could draft tests, explain old code, suggest refactors and help junior engineers stop treating Stack Overflow like an underground pharmacy. And this was no longer fringe behavior. In 2025, Microsoft said that 15 million developers were already using GitHub Copilot, and the tool has spread further since then.

So yes, I approved it.

Then security nearly revolted.

That week taught me something I now say to clients more bluntly than I used to. AI coding tools do not just change software delivery. They change the terms of trust inside the company. They force you to answer ugly questions about control, proof, accountability and review discipline. Most public coverage still stares at productivity. The harder story sits elsewhere. Governance.

The part that looked sensible

The truth is, I didn’t approve the tool because I was dazzled. I approved it because I’ve spent years watching good people waste good hours on bad repetition.

You can only tell a team to “be strategic” so many times before they start laughing at you. Developers were buried under boilerplate, documentation drift, brittle legacy code and the kind of ticket churn that makes bright people look tired. A coding assistant looked like a relief. Not magic. Relief.

That distinction matters.

In advisory work, I’ve learned that many poor decisions do not begin as foolish decisions. They begin as reasonable decisions made inside an outdated control model. That’s what this was. The business case made sense. The mistake was assuming the old review system could keep up with the new speed.

That old assumption dies hard. Leaders often think software risk changes when the code changes. Often, it changes earlier, as production conditions change. If a machine now drafts what humans once wrote line by line, the issue is not only code quality. It is code volume, code origin and the shrinking time between suggestion and production.

That is a different risk shape.

Why security lost its patience

The security team was upset because they could see the math.

Code output was about to rise. Review time was not.

That gap is where trouble rents office space.

Many non-security leaders still imagine the concern is simple. “The AI might write bad code.” That’s the kindergarten version. The real concern is broader and nastier. Who reviewed the output? What hidden package did the model nudge into the build? What sensitive context got pasted into the prompt window? Which junior engineer trusted the suggestion because it sounded calm and looked polished? Which policy assumed human authorship when the draft came from somewhere else?

Those are not philosophical questions. They are operating questions.

Recent security work has made this much harder to dismiss. Snyk described a February 2026 case in which a vulnerability chain turned an AI coding tool’s issue triage bot into a supply chain attack path. That is the sort of sentence that makes security teams sit up straight and ask for names, logs and meeting invites.

And that is before you get to the quieter problem. AI-generated code can look tidy long before it is safe. Security people know that neat syntax can hide weak controls, lazy validation, poor handling of secrets and dependency choices nobody meant to own.

So when the team escalated, they weren’t staging a mutiny over a plugin. They were reacting to a change in production logic that nobody had yet governed.

What the fight was really about

Once the temperature dropped, the shape of the dispute became obvious to me. It was not engineering versus security. It was speed versus proof.

More precisely, it was four things:

  1. Velocity. The assistant increased output far faster than assurance could keep pace.
  2. Visibility. We did not have a clear sight of where the tool was used, what prompts were fed into it, what code it influenced or what external components it smuggled into the discussion.
  3. Validation. Existing checks were built for a world in which humans produced most of the first draft. That world is fading. When code generation speeds up, review cannot stay ceremonial.
  4. Governance. Nobody had written the rules that mattered most. Which use cases were fine? Which were off-limits? Who owned the risk of acceptance? What evidence would prove that the tool was used safely enough?

That last point gets too little airtime. Governance sounds dull until you don’t have it. Then it becomes the difference between controlled use and polite chaos.

NIST’s recent work on monitoring deployed AI systems makes the same point more broadly. Organizations need post-deployment measurement and monitoring because real-world behavior drifts, surprises occur and governance after launch remains immature. Different setting, same lesson. You cannot inspect your way out of weak operating design.

What we did next

We did not ban the tool. That would have been theatre dressed as courage.

We also did not wave it through and tell security to “partner more closely.” I’ve heard that sentence enough times to know it usually means, “Please absorb more risk with better manners.”

We did something less dramatic and more useful. We narrowed the rollout and rewrote the conditions of trust.

Low-risk use cases stayed in play. Drafting tests. Explaining old functions. Helping with documentation. Suggesting boilerplate. Those were manageable.

High-risk areas got tighter boundaries. Auth flows. Secrets handling. Encryption logic. Infrastructure-as-code for sensitive environments. Anything tied to regulated data or material security controls. Those needed a stricter review or stayed out of scope.

We also drew a hard line on prompt hygiene. No customer data. No credentials. No confidential architecture details dropped into a chat window because someone wanted a faster answer on a Friday afternoon. You would think that goes without saying. It does not.
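That hard line can be enforced in tooling, not just policy. A minimal sketch of a pre-send hygiene check; the patterns are illustrative only, and `send_to_assistant` is a hypothetical wrapper, not a real API. A production version would plug in the organization’s own secret scanners and PII detectors:

```python
import re

# Illustrative blocklist; real deployments should use proper
# secret-scanning and PII-detection tooling instead of a few regexes.
BLOCKLIST = [
    (re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"), "credential"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "possible card number"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "AWS access key id"),
]

def check_prompt(prompt: str) -> list[str]:
    """Return the hygiene violations found in a prompt, empty if clean."""
    return [label for pattern, label in BLOCKLIST if pattern.search(prompt)]

def send_to_assistant(prompt: str) -> str:
    """Hypothetical gateway: refuse the call before anything leaves the building."""
    violations = check_prompt(prompt)
    if violations:
        raise PermissionError(f"prompt blocked: {', '.join(violations)}")
    return f"assistant response to: {prompt!r}"  # stand-in for a real API call

print(check_prompt("explain this regex"))               # []
print(check_prompt("use password=hunter2 to connect"))  # ['credential']
```

The point is where the check runs: at the gateway every prompt passes through, so hygiene does not depend on each engineer remembering the rule on a Friday afternoon.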

Then we raised the review standard. Human sign-off meant real sign-off, not a quick skim and a merge. Scanning had to cover dependencies and code changes with more discipline. Provenance mattered more. Logging mattered more. Exception paths had to be explicit, not social.
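Those review conditions can live in a merge gate rather than a document. A minimal sketch under assumed field names; `ai_assisted`, `provenance_note` and the rest are hypothetical, standing in for whatever your review tooling actually records:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    author: str
    ai_assisted: bool
    provenance_note: str = ""              # which tool touched which files
    approvals: list[str] = field(default_factory=list)
    scans_passed: bool = False             # dependency and code scans

def merge_gate(cr: ChangeRequest) -> list[str]:
    """Return the reasons a change may not merge; empty means it may."""
    blockers = []
    if not cr.scans_passed:
        blockers.append("dependency and code scans must pass")
    if cr.ai_assisted and not cr.provenance_note:
        blockers.append("AI-assisted change needs a provenance note")
    # Real sign-off: at least one approver who is not the author.
    if not any(approver != cr.author for approver in cr.approvals):
        blockers.append("needs human sign-off from someone other than the author")
    return blockers

cr = ChangeRequest(author="ana", ai_assisted=True,
                   provenance_note="assistant drafted unit tests in billing/",
                   approvals=["ben"], scans_passed=True)
print(merge_gate(cr))  # []
```

Returning named blockers instead of a boolean keeps exception paths explicit: whoever overrides the gate has to say exactly what they are overriding.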

Most importantly, security moved from late-stage critic to co-designer. That changed the tone. The question stopped being, “Can we use this?” and became, “Under what conditions can we trust its use enough to defend it later?”

That small shift matters more than many policy documents.

What both sides got right — and wrong

Developers were right about the waste. They were right that these tools remove drudgery. They were right that refusing every new capability is not a strategy. A team that cannot experiment eventually decays into compliance theatre and backlog sorrow.

They were wrong to assume readable code is trustworthy code. They were wrong to treat assistance as neutral. Tools shape behavior. That is what tools do. Once suggestions arrive fast and fluently, people accept more than they admit.

Security was right about review debt. Right about supply chain exposure. Right about data leakage risk. Right that governance should not arrive three incidents late, wearing a blazer and a lessons-learned slide.

They were wrong at first, as many security teams are when they feel cornered. They made the conversation sound like a moral referendum. That never helps. If security cannot offer a usable path, the business routes around it. Then you get the worst of both worlds: Secret adoption and public optimism.

I don’t say that with smugness. I say it because I’ve watched good teams damage each other by defending the right thing in the wrong way.

The bigger lesson for leaders

This is where the story stops being about one rollout and starts becoming board material.

If your developers can now produce more code with less effort, your governance burden rises even if your headcount does not. The old ratio between output and oversight has broken. Many firms have not adjusted.

That matters because software governance is no longer just about secure coding standards or release gates. It is about production conditions. Who can generate? Under what rules? With what evidence? Across which risk zones? With whose approval? And if something goes wrong, who owns the final act of acceptance?

Those questions sound administrative until the first incident report lands, and nobody can explain whether the flawed logic was written, suggested, copied, reviewed or merely assumed.

The market is moving quickly. Microsoft’s own recent security reporting says organizations adopting AI agents need observability, governance and security now, not later. Snyk is making a similar argument from the perspective of the software supply chain. Visibility first. Then prevention. Then governance that holds under pressure.

That is why I now advise something that used to sound severe and now sounds merely accurate. If you deploy AI coding tools without redesigning your control model, you are not buying productivity. You are buying ambiguity at machine speed.

What you should ask before you approve the next tool

You do not need a grand doctrine. You need a few hard questions asked before excitement turns into policy by accident.

Where can this tool be used, and where can’t it be used?

What data may enter it?

How will you know when the generated code reaches production?

What review standard applies when the first draft came from a machine?

Who can approve exceptions?

What logs, scans and decision records will let you defend the setup six months later, when memories blur and staff rotate?

That is not bureaucracy. That is self-respect.

I still believe these tools have value. I’d be foolish not to. But I trust them the way I trust a very fast junior colleague with a beautiful writing style and uneven judgment. Useful. Impressive. Worth keeping. Not someone you leave unsupervised near the crown jewels.

The near-mutiny turned out to be healthy. It forced the truth into the room before a failure did. Security was not blocking progress. They were objecting to unmanaged speed. Developers were not being reckless. They were asking for relief from the grind. Leadership’s job was not to pick a side. It was to write a better contract between them.

That is the part that too many firms still miss.

The argument was never only about a coding assistant. It was about whether we still knew how to govern work once the work started moving faster than our habits. That is a much bigger story. And if you listen carefully, you can hear it starting in many companies right now.

This article is published as part of the Foundry Expert Contributor Network.


AI is spreading decision-making, but not accountability

May 6, 2026, 07:00

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears and the question of who’s responsible is immediately raised.

As AI moves from experimentation into production, accountability is no longer a technical concern; it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, understands that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.


Jessica Eaves Mathews, founder, Leverage Legal Group

LLG

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”


Chris Drumgoole, president, global infrastructure services, DXC Technology

DXC

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.


Ojas Rege, SVP and GM, privacy and data governance, OneTrust

OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud prevention platform Trustpair. “It increases the number of points where things can go wrong.”


Simon Elcham, CAIO, Trustpair

Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”


Jonathan Tushman, CTO, Hi Marley

Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision escalates, often to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.
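Even a minimal inventory can make shadow AI visible. The sketch below is illustrative only; the record fields, system names, and vendors are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical enterprise AI inventory."""
    name: str
    owner: str                  # accountable business owner, if known
    vendor: str
    data_classes: list = field(default_factory=list)
    sanctioned: bool = False    # False flags a potential shadow AI tool

inventory = [
    AISystemRecord("support-chatbot", "cx-team", "vendor-x",
                   ["internal"], sanctioned=True),
    AISystemRecord("browser-assistant", "unknown", "vendor-y"),
]

# The unsanctioned entries are the shadow AI surface to investigate first.
shadow_ai = [r.name for r in inventory if not r.sanctioned]
```

Even this toy version forces the questions that matter: who owns each system, what data it touches, and whether anyone approved it.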

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, attention turns to enterprise leadership and, in operational terms, to the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.

Microsoft, Google push AI agent governance into enterprise IT mainstream

May 5, 2026, 06:38

Microsoft and Google are adding new controls for AI agents, as enterprise IT teams try to keep up with tools that can access corporate data and act across business applications.

Microsoft’s Agent 365, made generally available for commercial customers on May 1, is designed to help organizations discover, govern, and secure AI agents, including those operating across Microsoft, third-party SaaS, cloud, and local environments.

Google’s new AI control center for Workspace, announced this week, focuses more specifically on giving administrators a centralized view of AI usage, security settings, data protection controls, and privacy safeguards within Workspace.

The timing reflects a shift in enterprise AI use. Many companies are no longer just testing chatbots, but are beginning to use agents that can reach corporate systems and carry out tasks on behalf of users.

Analysts said the shift changes how CIOs and CISOs should think about AI agents inside the enterprise.

“By placing agent controls alongside identity, access, data, and workload management, vendors are positioning AI governance as an operational discipline owned jointly by IT and security,” said Biswajeet Mahapatra, principal analyst at Forrester. “For CIOs, this means AI agents now need to be managed like any other digital workforce, with lifecycle oversight, cost visibility, and integration into service management.”

For CISOs, that broadens the mandate beyond model risk and data leakage. As agents are given more autonomy, security teams will need a more continuous way to control what they can do and contain the impact when their actions create risk.
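One way to picture that continuous control is a gate that holds higher-risk agent actions for a human checkpoint. The threshold, field names, and kill switch below are hypothetical, not drawn from any vendor’s product:

```python
RISK_THRESHOLD = 0.7  # hypothetical cutoff for automatic execution

def gate_action(action: dict) -> str:
    """Return a disposition for an agent-proposed action.

    Low-risk actions proceed automatically; anything at or above the
    threshold waits for a human checkpoint, and a kill switch blocks all.
    """
    if action.get("kill_switch"):
        return "blocked"
    if action.get("risk_score", 1.0) >= RISK_THRESHOLD:  # unknown risk is held
        return "pending_human_approval"
    return "allowed"
```

The design choice worth noting is the default: an action with no risk score is held, not waved through, so containment does not depend on every agent scoring itself correctly.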

The announcements also elevate AI governance to a “core component of all AI-assisted enterprise applications,” signaling to CIOs and CISOs that governance will need to be built into AI deployments as adoption moves from pilots to enterprise-wide enablement, according to Lian Jye Su, chief analyst at Omdia.

Where Microsoft and Google differ

Microsoft Agent 365 and Google’s AI control center address related governance problems, but from different starting points.

“Given how enterprises are increasingly deploying AI in multicloud and hybrid IT environments, these two are complementary,” Su said. “They are highly optimized for AI workloads within their respective environments, meaning enterprises heavily invested in one vendor will find the native AI governance experience to be far smoother.”

According to Mahapatra, enterprises should see the distinction as a matter of platform scope rather than governance maturity. Microsoft’s approach treats AI agents as enterprise actors that require broad organizational oversight, while Google’s controls are more narrowly focused on how AI interacts with collaboration data and user content.

“These are not fully competing approaches because they govern different control planes, but they are not truly complementary either unless an enterprise standardizes on both ecosystems,” Mahapatra said. “Over time, each model reinforces governance capabilities that are tightly coupled to its underlying productivity and data platforms, which increases the risk that AI governance decisions become implicitly tied to vendor choice rather than enterprise architecture strategy.”

Pareekh Jain, CEO of Pareekh Consulting, took a middle view, saying the approaches are both complementary and competitive, especially as enterprises using both Microsoft and Google may find AI governance becoming more closely tied to each vendor’s underlying platform.

Risks left to resolve

The new controls may give enterprises better visibility into AI agents, but analysts said they do not eliminate bigger risks related to shadow AI, third-party integrations, and accountability for autonomous actions.

According to Jain, shadow AI agents can still emerge through developer tools, browser extensions, local assistants, SaaS copilots, and unsanctioned tool connections. Third-party integrations, he said, could also expand faster than security teams can validate them.

“Audit logs may show what happened, but not always why an autonomous agent chose an action,” Jain said.

That leaves enterprises with difficult questions when an agent takes actions that create business or security risks. Better logs do not automatically settle questions of control or responsibility.

Mahapatra said the biggest gaps are likely to remain outside the boundaries of native platforms. Shadow agents created through low-code tools, external APIs, or embedded SaaS applications can bypass central controls and operate with excessive or inherited permissions.

“Third-party integrations often expand agent reach without equivalent visibility into downstream actions or data propagation,” Mahapatra said. “Auditability remains uneven when agents chain actions across systems, making it hard to reconstruct intent versus outcome. Accountability is still unresolved when autonomous agents trigger material business or security impacts, since ownership is split across users, developers, and platform controls.”

The message for enterprises is that native controls from Microsoft or Google may help, but they are unlikely to cover the full agent landscape. Companies using multiple clouds, SaaS tools, developer platforms, and browser-based AI assistants will still need governance that extends beyond any single vendor’s console.

This article originally appeared on Computerworld.

From copilot to control plane: Where serious AI governance starts

May 1, 2026, 08:00

Serious AI governance means setting the rules for identity, model access, permissions, logging and human approval before AI tools or agents are allowed to operate inside business workflows. The practical starting point is to identify where AI is already touching repositories, tickets, internal knowledge and business systems, then establish a minimum common control set across those entry points.

The first enterprise AI conversations I kept getting pulled into sounded like tooling debates.

Which copilot should we allow? Which model should we approve? How quickly can teams start using it in the IDE? How much faster will developers move?

Those are reasonable opening questions. In my experience, they are rarely the questions that determine whether AI scales safely inside an enterprise. They are just the entry point.

More than once, I have watched a meeting begin with a simple request to approve an AI coding assistant and end twenty minutes later in a debate about repository access, model approvals, prompt retention, audit trails and whether an agent should be allowed anywhere near a deployment workflow. That is the pattern that matters.

What I have seen instead is a predictable progression. First comes enthusiasm around copilots and coding assistants. Teams want faster code completion, quicker debugging, better documentation and help writing tests. Then the conversation shifts. Leaders start asking what these tools can see, where prompts go, which models are approved, whether responses are retained and how generated output should be reviewed. Then the issue gets bigger again. Once AI starts interacting with repositories, tickets, pipelines, internal knowledge, APIs and systems of record, the problem is no longer the assistant itself. It is the control plane around it.

That is why I no longer think this is mainly a coding tools story. Software development is simply where the governance problem becomes visible first. The broader enterprise issue is whether there is a shared layer for identity, permissions, approved model access, secure context, auditability and action boundaries before AI becomes an execution surface inside the business.

Software development is where the issue surfaces first

Development teams encounter this shift early because the platforms themselves are already moving beyond simple assistance. GitHub Copilot policy controls now let organizations govern feature and model availability, while GitHub’s enterprise AI controls provide a centralized place to manage and monitor policies and agents across the enterprise. GitHub has also made its enterprise AI controls and agent control plane generally available, explicitly positioning them as governance features for deeper control and stronger auditability. That is a sign that governance is starting to surface directly in product design.

Google is sending a similar signal. Gemini Code Assist is framed as support to build, deploy and operate applications across the full software development lifecycle, not just as an IDE helper. Its newer agent mode documentation describes access to built-in tools and Google’s data governance documentation says Standard and Enterprise prompts and responses are not used to train Gemini Code Assist models and are encrypted in transit. When vendors start documenting lifecycle coverage, tool access, data governance and validation expectations, the market is already telling you what matters next.

Microsoft is even more explicit. Microsoft Agent 365 is described as a control plane for AI agents, with unified observability through telemetry, dashboards and alerts. Microsoft’s Copilot architecture and data protection model put equal emphasis on permissions, data flow, Conditional Access, MFA, labeling and auditing. In other words, the control-plane idea is no longer theoretical. Major platforms are operationalizing it.

That is why the productivity-only debate misses the larger point. DORA’s 2025 report argues that AI primarily acts as an amplifier, magnifying an organization’s existing strengths and weaknesses, and that the biggest gains come from the surrounding system, not from the tool by itself. The DORA AI Capabilities Model pushes the same idea further by laying out the organizational capabilities required to get real value from AI-assisted software development. That lines up with what I have seen in practice. Enterprises do not fail because a model is impressive or unimpressive. They fail when they mistake local tool adoption for operating readiness.

The developer productivity research is mixed, which is exactly why leadership should be careful. MIT Sloan summarized field research showing productivity gains from AI coding assistants, especially among less-experienced developers. METR’s 2025 trial, by contrast, found that experienced open-source developers using early-2025 AI tools took longer in that setting. I do not read those findings as contradictions. I read them as a warning against building enterprise strategy around a narrow “hours saved in the IDE” lens. For leaders, the implication is simple: Mixed productivity data is a reason to strengthen governance and operating discipline, not to make strategy from benchmark claims alone.

The shift from assistant to execution layer

The real change happens when AI stops being a suggestion surface and starts becoming an execution surface.

That threshold arrives faster than many leaders expect. GitHub’s coding agent can create pull requests, make changes in response to comments and work in the background before requesting review. GitHub also documents centralized agent management and policy-compliant execution patterns using hooks to log prompts and control which tools Copilot CLI can run. Once a tool can act inside the delivery system, permission design stops being optional.

Anthropic’s documentation makes the same shift visible from another angle. Claude Code is described as an agentic coding tool that reads a codebase, edits files, runs commands and integrates with development tools. Anthropic’s sandboxing work explains how filesystem and network isolation were added to reduce permission prompts while improving safety. Its work on advanced tool use describes dynamic discovery and loading of tools on demand rather than preloading everything into context. Once tools can be discovered dynamically and invoked during work, governance must move above the assistant.

This is usually the point when the room changes. What started as a discussion about developer productivity becomes a discussion about identity, authority, logging, approval boundaries and who owns the risk if an AI-enabled action causes real enterprise impact. The issue is no longer, “Did the assistant help write code?” The issue becomes, “Who authorized this path from context to action?”

Serious governance starts above the tool

If an organization is serious about AI, governance must start above the assistant.

The first control is identity. Who is acting: A human, a service account, a bot or an agent? Microsoft’s Copilot architecture and agent management guidance make this concrete by tying access to user authorization, Conditional Access and MFA. That is the right instinct. AI does not remove the identity problem. It sharpens it.

The second control is permissions. What can the actor read, write, retrieve or execute? This is where many early deployments are still too loose. If an AI tool can read internal knowledge, query systems, write to a repository or trigger workflows, those capabilities need clear tiering just as privileged human access does. In practice, that usually means mapping agent permissions onto existing identity and access models so read, write, query and execution rights follow least-privilege rules rather than tool convenience. That can mean giving an agent read access to internal knowledge, limited write access in development environments and no production execution rights without an explicit approval boundary.
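The tiering described above can be sketched as a deny-by-default permission table. The agent name, resources, and grants below are purely illustrative, a minimal model of least-privilege for agents rather than any particular platform’s policy format:

```python
from enum import Enum

class Action(Enum):
    READ = "read"
    WRITE = "write"
    EXECUTE = "execute"

# Hypothetical permission tiers: each agent gets explicit grants per
# resource, mirroring least-privilege rules for human access.
AGENT_POLICY = {
    "docs-assistant": {
        "internal-knowledge": {Action.READ},
        "dev-repo": {Action.READ, Action.WRITE},
        "prod-deploy": set(),  # nothing in production without an approval boundary
    },
}

def is_allowed(agent: str, resource: str, action: Action) -> bool:
    """Deny by default: allow only what the policy explicitly grants."""
    return action in AGENT_POLICY.get(agent, {}).get(resource, set())
```

The point of the sketch is the shape, not the contents: rights follow the policy table, not tool convenience, and an agent or resource missing from the table gets nothing.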

The third control is approved model access. GitHub now lets organizations govern model and feature availability in Copilot. Google documents edition-specific data handling and validation expectations. Enterprises need a way to decide which models are allowed for which workloads and data classes. Otherwise, every team ends up inventing its own routing logic and risk posture.
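One common-sense shape for such a decision is an allowlist keyed by data classification. The model names and data classes below are hypothetical, a sketch of central routing rather than any team’s actual logic:

```python
# Hypothetical allowlist mapping data classifications to approved models.
APPROVED_MODELS = {
    "public": {"model-a", "model-b"},
    "internal": {"model-a"},
    "restricted": set(),  # restricted data stays out of external models entirely
}

def route_request(data_class: str, requested_model: str) -> str:
    """Refuse any model not approved for the data class in question."""
    allowed = APPROVED_MODELS.get(data_class, set())
    if requested_model not in allowed:
        raise PermissionError(
            f"{requested_model!r} is not approved for {data_class!r} data"
        )
    return requested_model
```

Centralizing even this small table means one risk posture instead of one per team, and one place to change when a model’s approval status changes.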

The fourth control is secure context. This is where real exposure often sits: Connectors, retrieval, embedded knowledge, prompts and tool calls. Anthropic’s work on context engineering for agents is useful because it shows how agents increasingly load data just in time through references and tools. That is powerful, but it also means context discipline matters as much as model discipline.

The fifth control is auditability. If a system suggests code, opens a ticket, retrieves enterprise content, triggers a tool or initiates a change, the enterprise needs evidence. GitHub’s enterprise agent monitoring and Microsoft’s auditing model both point in this direction. Governance without reconstructable evidence is not governance. It is optimism.
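A reconstructable evidence trail starts with a record like the following. The field set is illustrative, my own minimal sketch rather than any vendor’s audit schema:

```python
import json
import time
import uuid

def audit_event(actor, actor_type, action, target, approved_by=None):
    """Build one append-only audit record for an AI-initiated action."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": actor,
        "actor_type": actor_type,    # "human", "service", or "agent"
        "action": action,
        "target": target,
        "approved_by": approved_by,  # the human checkpoint, if one applied
    }
    return json.dumps(record)
```

Note that the record ties together the three things an incident review needs: who acted, under what kind of identity, and whether a human approved the step.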

The standards are already telling us this

The control-plane framing matters because it aligns with where the standards bodies are already going.

NIST’s Secure Software Development Framework says secure practices need to be integrated into each SDLC implementation. NIST SP 800-218A extends that logic with AI-specific practices for model development throughout the software lifecycle. NIST’s Generative AI Profile treats generative AI as a risk-management problem spanning design, development, use and evaluation rather than as a narrow feature rollout. That is consistent with what enterprises are now learning in practice: Once AI touches real delivery and operating processes, governance becomes architectural.

The security community is saying the same thing. OWASP’s LLM Top 10 flags prompt injection, sensitive information disclosure, supply chain vulnerabilities and excessive agency as core risk areas. Those are not merely model-quality issues. They are control issues that show up when AI has context, tools and authority.

Software supply chain discipline matters here, too. SLSA ties stronger software trust to provenance and tamper resistance, while OpenSSF’s MLSecOps whitepaper and its Security-Focused Guide for AI Code Assistant Instructions show that AI-assisted development now needs explicit security practice in both pipelines and prompting. In an AI-assisted delivery environment, provenance and secure instruction design become more important, not less.

The market is moving toward a real control-plane layer

This is not just a framework conversation anymore. It is becoming a market category.

Forrester’s agent control plane research described enterprise needs across three functional planes: Building agents, embedding them into workflows and managing and governing them at scale. That matters because it validates the idea that governance has to sit outside the build plane if it is going to remain consistent as agents proliferate.

The market signal is clear. Microsoft is calling Agent 365 a control plane. GitHub has generally available enterprise AI controls and an agent control plane. Airia’s governance launch explicitly positions governance as a distinct layer alongside security and orchestration. The category is converging around the same problem statement: If agents can act, someone has to govern the conditions under which that action is allowed. Any control-plane solution worth serious consideration should work across models and tools while preserving policy consistency, auditability and clear operational boundaries.

The real leadership question

When this becomes real, I usually stop asking which assistant a team prefers and start asking different questions:

  • Who is the actor, and under what identity does it run?
  • What can it read, what can it write and what can it execute?
  • Which models, endpoints and data flows are approved?
  • What evidence survives an audit, an incident review or a board-level question?
  • Where are the mandatory human checkpoints before an AI-assisted action becomes an enterprise action?

Those questions change the quality of the conversation quickly. They move the discussion out of demo mode and into operating model territory. That is also where alignment starts, because governance becomes a cross-functional operating issue for architecture, security, engineering and risk rather than a tooling preference inside one team.

In the conversations I have been in, that is usually the point when the room stops talking about tools and starts talking about control.

The wrong question for this phase is, “Which copilot should we standardize on?”

The better question is, “What control plane will govern AI wherever it runs?”

That is where serious enterprise AI governance starts.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

What is TOGAF? An EA framework for aligning technology to business

May 1, 2026, 06:00

TOGAF definition

The Open Group Architecture Framework (TOGAF) is an enterprise architecture methodology that offers a high-level framework for enterprise software development. TOGAF helps organize the development process through a systematic approach aimed at reducing errors, maintaining timelines, staying on budget, and aligning IT with business units to produce quality results.

The Open Group developed the framework in 1995, and by 2016, 80% of Global 50 companies and 60% of Fortune 500 companies used it. TOGAF is free for organizations to use internally, but not for commercial purposes. Businesses can, however, have tools, software or training programs certified by The Open Group. There are currently eight certified TOGAF tools and 71 accredited courses offered by 70 organizations.

In 2022, The Open Group announced the latest update to the framework and released the TOGAF Standard, 10th Edition, to replace the previous edition, TOGAF 9.2. The Open Group states that the 10th Edition will help businesses operate more efficiently and will provide more guidance and simpler navigation for applying the TOGAF framework.

As more organizations adopt AI technology, the TOGAF 10 framework can help businesses navigate developing and implementing AI-driven enterprise architecture. The methodology’s focus on compliance and security can guide organizations through the development and implementation process of AI architecture, while mitigating risk.

TOGAF framework overview

Like other IT management frameworks, TOGAF helps businesses align IT goals with overall business goals, while helping to organize cross-departmental IT efforts. TOGAF helps businesses define and organize requirements before a project starts, keeping the process moving quickly with few errors.

TOGAF 10 brings a stronger focus to organizations using agile methodologies, making it easier to tailor the framework to an organization’s specific needs. The latest edition uses a modular structure that is simpler to follow and implement, whatever the industry.

The TOGAF framework is broken into two main parts: the fundamental content and the extended guidance. The fundamental content includes the essentials and best practices of TOGAF that form the foundation of the framework, offering a basic starting point for anyone looking to apply it. The extended guidance covers specific topics such as agile methods, business architecture, data and information architecture, and security architecture, and is expected to evolve over time as more best practices are established.

The Open Group states that TOGAF is intended to accomplish the following:

  • Ensure everyone speaks the same language
  • Avoid lock-in to proprietary solutions by standardizing on open methods for enterprise architecture
  • Save time and money, and utilize resources more effectively
  • Achieve demonstrable ROI
  • Provide a holistic view of an organizational landscape
  • Act as a modular, scalable framework that enables organizational transformation
  • Enable organizations of all sizes across all industries to work off the same standard for enterprise architecture

Agile framework for agentic AI

TOGAF 10 offers a flexible and adaptable framework for designing, integrating, and governing agentic AI systems. Following the framework’s principles can help IT leaders ensure AI architecture aligns with business goals, while also maintaining governance and ethical standards. TOGAF’s flexibility also enables enterprises to grow and adapt their AI architectures as the technology and its use cases evolve.

When approaching agentic AI development, TOGAF 10 can help organizations:

  • Establish key stakeholders and identify key tools and principles
  • Identify risk and address ethical questions around AI
  • Ensure proper compliance and governance of AI
  • Guide the overall integration of AI with existing infrastructure
  • Offer a framework for identifying skills, data, and technology gaps in the organization necessary for AI transformation
  • Demonstrate strategic alignment between AI architecture and business goals

TOGAF business benefits

The framework helps organizations implement software technology in a structured and organized way, with a focus on governance and meeting business objectives. Software development relies on collaboration among multiple departments and business units both inside and outside of IT, and TOGAF helps address any issues around getting key stakeholders on the same page.

TOGAF is intended to help create a systematic approach to streamline enterprise architecture and the development process so that it can be replicated, with as few errors or problems as possible as each phase of development changes hands. By creating a common language that bridges gaps between IT and the business side, it helps bring clarity to everyone involved.

It’s an extensive document — but you don’t have to adopt every part of the framework. Businesses are better off evaluating their needs to determine which parts of the framework to focus on. With the modular updates to the TOGAF Standard 10th Edition, creating a custom TOGAF framework should be easier than ever. Organizations can start with the core fundamentals, and then pick and choose parts to adopt from the extended guidance portion of the document.

TOGAF certification and training

On releasing TOGAF 10, The Open Group decided to keep TOGAF 9.1 certification exams as-is, while introducing three new exams to address updates made to the framework. TOGAF 9.1 Level 1 and Level 2 cover the foundations of TOGAF and ensure that past certifications do not become obsolete in the face of an updated framework. The three new exams include the TOGAF Enterprise Architecture Foundation, TOGAF Enterprise Architecture Practitioner, and TOGAF Business Architecture Foundation.

These certifications are combined into learning path options appropriate for differing levels of experience. The first of the three learning paths is the Team level, for those in roles that require a basic understanding of enterprise architecture or who work in customer service. The second is the Practitioner level, for anyone at the management level or who is responsible for developing enterprise architecture. The third and final learning path is the Leader level, for those establishing an enterprise architecture capability.

The TOGAF certification scheme is especially useful for enterprise architects, because it’s a common methodology and framework used in the field. It’s also a vendor-neutral certification that has global recognition. Earning your certification will demonstrate your ability to use the TOGAF framework to implement technology and manage enterprise architecture. It will validate your abilities to work with TOGAF as it applies to data, technology, enterprise applications, and business goals.

According to PayScale, a TOGAF certification can boost your salary for the following roles:

Job Title | Average Salary | With TOGAF Certification
IT enterprise architect | $158,795 | $166,414
Solutions architect | $135,178 | $157,089
Software architect | $139,438 | $170,000
IT director | $131,727 | $152,949

For more IT management certifications, see “20 IT management certifications for IT leaders.”

TOGAF tools

The Open Group keeps an updated list of TOGAF-certified tools, which includes the following software:

  • Alfabet AG: planningIT 7.1 and later
  • Avolution: ABACUS 4.0 or later
  • BiZZdesign: BiZZdesign Enterprise Studio
  • BOC Group: ADOIT
  • Orbus Software: iServer Business and IT Transformation Suite 2015 or later
  • Planview: Troux
  • Software AG: ARIS 9.0 or later
  • Sparx Systems: Enterprise Architect v12

For more tools that support enterprise architecture and digital transformation, see our list of the top 20 enterprise architecture tools.

The evolution of TOGAF

TOGAF is based on TAFIM (Technical Architecture Framework for Information Management), an IT management framework developed by the US Department of Defense in the 1990s. TAFIM served as a reference model for enterprise architecture, offering insight into the DoD’s own technical infrastructure, including how it’s structured, maintained, and configured to align with specific requirements. The DoD hasn’t used TAFIM since 1999, and it has been eliminated from all process documentation.

The Architecture Development Method (ADM) is at the heart of TOGAF. The ADM helps businesses establish a process around the lifecycle of enterprise architecture. The ADM can be adapted and customized to a specific organizational need, which can then help inform the business’s approach to information architecture. The ADM helps businesses develop processes with multiple checkpoints and firmly established requirements, so that the process can be repeated with minimal errors.

TOGAF was released in 1995, expanding on the concepts found in the TAFIM framework. TOGAF 7 was released in December 2001 as the “Technical Edition,” followed by TOGAF 8 Enterprise Edition in December 2002; it was then updated to TOGAF 8.1 in December 2003. The Open Group took over TOGAF in 2005 and released TOGAF 8.1.1 in November 2006. TOGAF 9 was introduced in 2009, with new details on the overall framework, including increased guidelines and techniques. TOGAF 9.1 was released in 2011, and the most recent version, TOGAF 10, was released in 2022.



Column | Multi-vendor project failures mostly begin with governance

April 29, 2026, 04:09

Waiting for a vendor to fix a program on its own is not a strategy. It is a quietly accumulating cost, one that keeps growing while everyone in the room maintains the illusion that the process is working.

I have experienced both situations. In one, the client already knows something is wrong and needs the evidence and language to act; in the other, the client does not yet recognize the problem. In the latter case, the program looks manageable, the vendor looks professional, and steering committee meetings run on time. Yet the warning signs are already in plain sight, waiting for someone to call them out.

The second situation matters more, because there is still time to respond. Most companies, however, do not move until that window begins to close.

The warning signs most companies miss appear in the design phase

The earliest signal is not a schedule slip or a failed deliverable. It shows up in language. When the phrase “path to green” starts appearing in status reports and steering committee decks, the program has already admitted it is not green. It has shifted from managing execution to managing the narrative.

Look at what the steering committee is actually doing. If meetings focus on reporting last month’s results rather than forecasting next month, leadership has been reduced from a decision-making body to an audience. The vendor effectively controls the agenda, the framing, and the cadence of what gets disclosed.

The most serious signal is when the program sponsor learns of material issues through internal reports rather than from the vendor. That is not a simple communication gap; it is a deliberate choice about what information to share. When this pattern appears in SAP, Oracle, or Salesforce programs, the trust at the core of governance has already broken down.

When you see these signals, there is no reason to wait for the next steering committee. Demand data that can be independently verified, and ask the vendor for forecasts, not reports. If they cannot explain where the project will be in 60 days, they are managing the client’s perception, not the project.

The master conductor role: an unresolved conflict of interest

There is a recurring pattern in multi-vendor projects involving global consulting firms such as Accenture, Deloitte, and PwC. The master conductor, or program integration coordinator, is quick to point out the client’s problems, other vendors’ shortcomings, and delays in external dependencies. Yet it almost never speaks with the same directness about its own firm’s issues.

This is not a matter of individual temperament but a structural conflict of interest. The firm serving as master conductor is also responsible for delivering against its own statement of work (SOW). With the information access and reporting authority that come with the governance role, it is likely to act in ways that protect its own delivery risk.

For this reason, the master conductor role must be structurally separated from the vendor’s delivery role. The ideal arrangement is an independent integration management organization with no stake in the outcome. In practice, this more often means designating a separate team or individual within an existing vendor, provided they report directly to the client and are accountable to the steering committee rather than to their own firm’s leadership.

There is no perfect firewall in this structure. Instead, there is a behavioral test: watch how the role or team handles information unfavorable to its own firm. Does it share and escalate such issues with the same urgency it brings to the client’s problems, or does it hide its own issues while highlighting everyone else’s?

A master conductor who proactively discloses and escalates even its own delivery organization’s failures is doing the job. One who only points at the client and the other vendors is defending its own contract, not the project.

Make this structure explicit before the next SOW is signed. Define the master conductor role separately from the delivery role, name the responsible individual or team, set the reporting line directly to the client, and use the behavioral test to verify the role is actually being performed.

Waiting is not neutral

The cost of delayed action is far more concrete than most companies assume. In a multi-vendor environment with several system integrators engaged at once, even a single month of schedule slippage can cost each firm millions to tens of millions of dollars. That is not scope expansion; it is the result of governance failing to hold the timeline.

The commercial risk appears even earlier. When scope is unclear and the integration plan is unstable, vendors have no baseline to price against. The result is a wide spread between time-and-materials and fixed-price quotes for the same scope. That is not a mere pricing difference; it is governance uncertainty converted into contract terms, and the client ends up absorbing the burden.

The problem is hard to see because the teams on the ground are generally professional and hard-working. But the issue is not effort; it is authority and incentives. The program manager running the engagement cannot authorize additional resources or approve spend across organizational lines. Their job is to maintain the relationship and manage profitability, which is not the same as fundamentally fixing the client’s project.

Effective intervention starts with leadership

Once the problem is clear and the client has decided to act, what drives real change is not a governance document or a regular meeting, but a direct conversation between the client’s and the vendor’s most senior executives. This must involve executives who do not run day-to-day operations but are accountable for the outcome.

These conversations work because they change the incentive structure. The vendor’s industry leader or partner needs the project to become a success story; failure becomes a burden within their own organization. They hold authority the delivery organization does not: they can assign the firm’s best people, absorb costs beyond the contract, and rapidly redeploy staff, decisions that can change the trajectory of the project.

The intervention must also be structured deliberately. Senior executives from both sides should participate, and new talent should be brought in as a clear sign of the vendor’s commitment. Reviews should continue at a regular cadence until the project is back on track, with time-bound accountability on both sides. Throughout, make it clear that not only the project but the partnership itself is under evaluation.

This is sustained leadership engagement, not a one-off meeting, and it does not replace existing governance; it makes governance actually work.

The only recovery signal you can trust

If the leadership conversation has worked, the results will show in how the vendor responds. It must present not just an optimistic plan or promises, but specifics: where it failed, what it is changing, and where the client also has performance gaps to close.

A vendor that admits only its own faults is still managing the relationship. A vendor that clearly lays out both its own failures and what the client needs to fix is building a structure of shared accountability. That is the real test of trust.

Project failures are almost always the product of factors on both sides: delayed decisions, insufficient internal resources, requirements changed after design was locked. A vendor that names these alongside its own problems is not deflecting responsibility; it is investing in a better outcome.

If the executive meeting produces only promises and formal expressions of commitment, keep the pressure on. Real engagement shows up as specific admissions of fault, clearly assigned resources, and a sober assessment of both sides.

Ultimate accountability rests with the client: be the judge in the middle

Through it all, ultimate accountability rests with the client. The master conductor is responsible for execution and integration, but judgment itself cannot be outsourced. That is not a mere division of roles; it is the essence of governance.

A vendor’s status report is not neutral data. It is a curated narrative assembled by people whose compensation, future contracts, and personal reputations are at stake. What the report contains matters, but what it leaves out says even more.

The client must therefore act as the judge in the middle: verify the data, cross-check it, and ask the questions the report does not answer. Look not only at what is included but at what is missing.

If the steering committee is hearing only good news, that does not mean the project is going well; it may be a sign that someone is selecting only what leadership wants to hear.

Demand forecasts, not status reports. Secure evidence that can be independently verified, and when the vendor presents its own problems alongside the client’s, take it seriously. That is not deflection; it is a sign governance is working properly.

The warning signs will not always be obvious, but the time to respond is limited. Waiting is never a strategy.
dl-ciokorea@foundryco.com


You selected the right vendors. Now govern them like you mean it.

April 27, 2026, 07:00

Waiting for your vendor to fix a program isn’t a strategy. It’s a cost, accumulating quietly while everyone in the room maintains the fiction that the process is working.

I’ve been in both rooms. The room where the client already knows something is wrong and needs the language and the evidence to act, and the room where the client doesn’t know yet. The program feels manageable, the vendor is professional, the steering committee meetings run on time, and the warning signs are sitting in plain sight waiting for someone to name them.

That second room is the more important one. Because the window to act is still open. And most clients don’t move until it’s started to close.

Warning signs most clients miss appear in design

The earliest signal is rarely a missed milestone or a failed deliverable. It appears in language. When the phrase “path to green” starts appearing in status reports and steering committee decks, the program has already accepted it’s not green. It’s shifted from managing execution to managing the narrative.

Watch what the steering committee is actually doing. If it’s consistently hearing about what happened last month rather than what’s forecast for next month, leadership has been converted from a decision-making body into an audience. The vendor controls the agenda, the framing, and the cadence of what gets surfaced.

The most serious signal is when a program sponsor hears about material issues from their own direct reports that the vendor hasn’t raised in the room. That’s not a communication gap but a calculated decision about what leadership is ready to hear. When that pattern appears in SAP, Oracle, or Salesforce programs, the trust that makes the governance model function has already eroded.

When you see these signals, don’t wait for the next steering committee. Start demanding data that can be independently corroborated. Ask the vendor to forecast, not report. If they can’t tell you where the program will be in 60 days, they’re managing your perception, not your program.

Your master conductor has a conflict of interest you’re not addressing

A pattern I’ve seen consistently across multi-vendor programs involving Accenture, Deloitte, PwC, and others is that the master conductor, or program integration coordinator, is quick to name the client’s gaps, other vendors’ shortcomings, and third-party dependencies running behind. What they almost never do is name their own firm’s failures with the same directness in the same room.

That’s not a personality issue but a structural conflict. The firm serving as master conductor is delivering against its own statement of work (SOW), and the governance position gives them access to information, reporting authority, and narrative control they’ll use, consciously or not, to protect their own delivery track.

This is why I advise clients to treat the master conductor and program integration coordinator role as structurally separate from the vendor delivery role. Ideally, that means an entirely separate firm: an independent integrator with no delivery stake in the outcome. In practice, it’s more often a designated individual or group within the project management or transformation office, carved from one of the existing vendors, reporting directly to the client and accountable to the steering committee, not to their own firm’s engagement leadership.

There’s no true firewall in that model, but there’s a behavioral test. Watch what that role or team does with information that reflects badly on their own firm. Do they surface it or escalate it with the same urgency they bring to client gaps? Do they forecast problems on their own track, or only on everyone else’s?

A master conductor who’ll escalate failures that implicate their own delivery team is doing the job. One who only calls out the client and the other vendors is protecting the engagement.

Before the next SOW is signed, make it structural. Define the master conductor role separately from the delivery role, name the individual or team, set the reporting line directly to the client, and use the behavioral test to determine whether the role is being performed or merely filled.

Waiting isn’t neutral

The financial cost of waiting is more specific than most clients realize. In a multi-vendor environment where two or three system integrators are billing against active SOWs, every month of schedule extension carries a material cost, potentially millions to tens of millions of dollars per firm, not because scope expanded, but because governance didn’t hold the timeline.

The commercial exposure appears even earlier. When scope boundaries are unclear and the integrated plan is unstable, vendors have no reliable baseline to price against. The result is predictable: a significant spread between a time-and-materials estimate and a fixed fee quote for the same scope. That spread is not a pricing difference. It’s the vendor converting your governance uncertainty into their contract protection. The client absorbs it either way.
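To make that spread concrete, here is a back-of-the-envelope sketch with purely hypothetical numbers (the cost figure and the uncertainty premium are invented for illustration; real bids price risk far less mechanically). A fixed-fee quote bakes the vendor's view of scope instability into the price, while a T&M estimate leaves that risk with the client:

```python
# Hypothetical numbers only: how unstable scope turns into a pricing spread.
base_effort_cost = 10_000_000   # vendor's expected cost to deliver (USD)
scope_uncertainty = 0.40        # vendor's risk view of the unstable plan

tm_estimate = base_effort_cost                          # T&M: client carries overrun risk
fixed_fee = base_effort_cost * (1 + scope_uncertainty)  # fixed fee: risk priced in

spread = fixed_fee - tm_estimate
print(f"T&M estimate:  ${tm_estimate:,.0f}")   # $10,000,000
print(f"Fixed-fee bid: ${fixed_fee:,.0f}")     # $14,000,000
print(f"Spread:        ${spread:,.0f}")        # $4,000,000
```

The spread is the price of governance uncertainty: tighten the scope baseline and the integrated plan, and the premium shrinks; leave them unstable, and the client pays it one way or the other.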

What makes the waiting feel reasonable is that the vendor’s day-to-day team is usually professional and working hard. So the problem is authority and incentive, not effort. The program manager running the engagement can’t authorize additional resources or commit spend across organizational lines. Their job is to manage the relationship, protect their firm’s margin, and keep the engagement profitable. Fixing your program isn’t the same job.

The window to act is real and short. A senior executive at the vendor can absorb costs, bring new talent, and make commitments the delivery team has no authority to make. But that authority diminishes as the program ages. The more that’s been billed and the more scope has shifted, the harder it is for even a motivated senior executive to make the client whole. Clients who act in design or early build have options that clients who wait until three months before go-live don’t.

The intervention that works is a leadership one

When the signals are clear and the client is ready to act, the intervention that moves the needle isn’t a governance document or a scorecard meeting but a top-to-top conversation between client and vendor senior leadership. This includes execs who aren’t running the day-to-day program but have something personal at stake in the outcome.

That conversation works because it activates a different set of incentives. The vendor’s senior executive, the sector partner and industry leader whose name is on the relationship, needs your program to be referenceable. They don’t want a PR failure on a flagship engagement, nor do they want to explain to their firm’s leadership why a major client program collapsed. They have authority their delivery team doesn’t: power to assign their best resources, ability to absorb costs the SOW or change order doesn’t cover, and they can accelerate staffing decisions and make commitments that change what the program can do. They have skin in the game their team doesn’t.

Also, structure the engagement deliberately: senior executives on both sides, new talent brought in as a visible signal of vendor investment, a cadence that continues until the data shows the program is back on track, time-bound accountability on both sides, and an explicit understanding that the relationship itself is under review, not just the program.

This is sustained leadership engagement, not a one-time meeting, and it doesn’t replace the governance model. It enforces it.

The only recovery signal worth trusting

When the top-to-top works, you’ll know it by what the vendor brings back to the table. Not reassurances or a revised plan with optimistic milestone dates, but facts about where they failed, what they’re changing, and, most critically, where the client has performance gaps that also need to close.

A vendor who comes back and accepts blame is still managing the relationship. A vendor who says “we failed here and here, these are the specific changes we’re making, and you have a gap here we need you to address” is engaged and mutually accountable. That’s the integrity test.

It runs both ways because program failure almost always does. Slow client decisions. Unavailable business resources. Requirements that shifted after design was locked. A vendor who names those things alongside their own failures isn’t deflecting, they’re investing in an outcome. That’s the signal the recovery is real.

If the executive meeting produces only promises and general commitment, keep the pressure on. Real engagement looks like specific admissions, named resources, and a willingness to hold the mirror up to both sides of the table.

You hold the accountability. Be the human in the middle.

Through all of it, the client holds the ultimate accountability. The master conductor holds the responsibility for execution and integration across the vendor ecosystem. That distinction isn’t administrative. It means the client can’t outsource their judgment, regardless of how rigorous the governance model looks on paper.

Think of it this way: the vendor can hallucinate. Not out of malice, but because every status report is a curated narrative produced by people whose compensation, future work, and professional reputation depend on how that narrative lands. The program deck isn’t neutral data; it’s information filtered through interests. What’s present tells you something. What’s absent, however, tells you more.

Be the human in the middle. Verify, cross-reference, ask questions the deck didn’t answer, and notice what’s missing as much as what’s there. If the steering committee is only hearing good news, that’s a sign someone is deciding what leadership is ready to hear, not that the program is running well.

Demand forecasts, not status reports. Look for hard evidence that can be independently corroborated. When the vendor names a client performance gap alongside their own, take it seriously. That’s the accountability model working the way it’s supposed to, not a deflection.

The warning signs may not always be apparent, though. The window is open, but won’t stay that way, so waiting isn’t a strategy.


What is ITIL? Your guide to the IT Infrastructure Library

April 24, 2026, 06:00

What is ITIL?

The goal of ITIL is for organizations to create predictable IT environments, and deliver the best service possible to customers and clients by streamlining processes and identifying opportunities to improve efficiency.

ITIL has always focused on integrating IT into the business — something that’s become increasingly important as technology becomes vital to every business unit. ITIL 4, the latest iteration of the ITIL framework, maintains the original focus with a stronger emphasis on fostering an agile and flexible IT department. And as organizations embrace AI, using the principles outlined in the ITIL 4 framework can allow for better service optimization and free up employees to focus on higher-priority tasks and IT projects.

ITIL 4 in AI-driven service management

AI is a transformative technology, and it can help support the service management practices outlined in ITIL 4. It’s technology that’s already been deployed in ITSM environments to support the IT service desk, especially through features such as chatbots, automated ticketing systems, and continuous threat monitoring. AI also has the potential to help alleviate some human error that can occur with IT service management, supporting IT departments by increasing automation.

Integrating AI into the ITIL 4 framework can also alleviate repetitive work for IT employees by enabling the automation of routine processes, which can reduce the time it takes to resolve tickets and solve IT issues. Other areas for automation include ticket logging, ticket prioritization, generating automated replies to inquiries, incident routing, and identifying key data points for continuous service improvement. AI-driven chatbots can help organizations handle simple inquiries that would normally clog up ticketing systems, keeping them clear for higher-priority tickets.

Alongside ITIL 4 principles, AI can be integrated to support several different areas of service management including:

  • Predictive monitoring for incident and problem management, identifying and even resolving potential issues before they escalate.
  • Automating tasks for the IT service desk, such as categorizing, assigning, and resolving tickets for IT staff.
  • Continuous monitoring for events and potential issues, alleviating human workers from some of the burden of identifying potential threats.
  • Providing immediate assistance to end-users to resolve common questions, freeing up representatives for more complex service calls.
  • Creating more personalized service offerings by analyzing historical data and user profiles.
  • Analyzing service data to identify what needs to be improved or changed over time.
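To give a feel for the ticket categorization and routing mentioned above, here is a minimal rule-based sketch. The keywords, categories, and priority numbers are all illustrative and not drawn from any real ITSM product; a production deployment would more likely use an ML or LLM classifier behind the service-desk tool's API:

```python
# Hypothetical rule table: keyword set -> (category, priority), 1 = most urgent.
RULES = [
    ({"outage", "down", "unreachable"}, "incident", 1),
    ({"password", "login", "locked"}, "access", 2),
    ({"install", "upgrade", "request"}, "service_request", 3),
]

def triage(ticket_text: str) -> tuple[str, int]:
    """Return (category, priority) for a raw ticket description."""
    words = set(ticket_text.lower().split())
    for keywords, category, priority in RULES:
        if words & keywords:  # any rule keyword present in the ticket
            return category, priority
    return "general", 4  # no match: fall through to a human-reviewed queue

print(triage("Email server is down for the whole office"))  # ('incident', 1)
print(triage("Please install Visio on my laptop"))          # ('service_request', 3)
```

Even this toy version shows the ITIL 4 payoff: routine tickets get classified and routed instantly, while anything the rules can't handle falls through to a person instead of being silently mis-filed.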

What are the ITIL 4 guiding principles?

ITIL 4 contains seven guiding principles that were adopted from the most recent ITIL Practitioner Exam, which covers organizational change management, communication, and measurement and metrics. These principles include:

  • Focus on value
  • Start where you are
  • Progress iteratively with feedback
  • Collaborate and promote visibility
  • Think and work holistically
  • Keep it simple and practical
  • Optimize and automate

ITIL 4 focuses on company culture and integrating IT into the overall business structure. It encourages collaboration between IT and other departments, especially as other business units increasingly rely on technology to get work done. There’s also a strong emphasis on customer feedback since it’s easier than ever for businesses to understand their public perception, as well as customer satisfaction and dissatisfaction.

For more information on the benefits of the latest version of ITIL, see “ITIL 4: ITSM gets agile.”

How do I put ITIL into practice?

ITIL is a collection of e-books, but merely going on a reading binge won’t improve your IT operations. To effectively implement ITIL, you need to have everyone on board to adopt new procedures and best practices. Consider what type of consulting, training, and certifications you might want to take advantage of to prepare for the transition as well.  

And before implementing ITIL at your organization, there are several questions you should answer, such as what problems your organization is trying to solve and what your route to continual service improvement is.

For a deeper look at putting ITIL into practice, see “7 questions to ask before implementing ITIL” and “How to get started with ITIL.”

What is ITIL certification and is it worth it?

The ITIL 4 certification scheme includes the ITIL Foundation and the ITIL Master exams. After passing the ITIL Foundation exam, the certification scheme splits into two paths with the option of the ITIL Managing Professional (MP) or ITIL Strategic Leader (SL) certifications, each of which has its own modules and exams. Those who complete both paths qualify for the ITIL Master designation, which is the highest level of certification offered.

The MP exam is designed for IT practitioners involved with technology and digital teams throughout the organization, not just in the IT department. This path will teach professionals everything they need to know about running successful IT projects, teams, and workflows.

Modules include:

  • ITIL Specialist – Create, Deliver, and Support
  • ITIL Specialist – Drive Stakeholder Value
  • ITIL Specialist – High Velocity IT
  • ITIL Strategist – Direct, Plan, and Improve

The SL exam is designed for those who deal with all digitally enabled services, and not just those that fall under IT operations. This path focuses on how technology directs business strategy and how IT plays into that.

Modules include:

  • ITIL Strategist – Direct, Plan, and Improve
  • ITIL Leader – Digital and IT Strategy

For an in-depth look at ITIL certification, see “ITIL certification guide: Costs, requirements, levels, and paths.”

How does ITIL help business?

A well-run IT organization that manages risk and keeps the infrastructure humming not only saves money but also enables everyone in the organization to do their jobs more effectively. For example, brokerage firm Pershing reduced its incident response time by 50% in the first year after restructuring its service desk according to ITIL guidelines, enabling users with problems to get back to work much quicker.

ITIL provides a systematic and professional approach to the management of IT service provision, and offers benefits including reduced costs and improvements to productivity, use of skills and experience, IT services using proven best practices, delivery of third-party services, and customer satisfaction through a more professional approach to service delivery. According to PeopleCert, ITIL can also help businesses improve services by helping businesses manage risk, disruption, and failure; strengthening customer relations by delivering efficient services that meet their needs; establishing cost-effective practices; and building a stable environment that still allows for growth, scale, and change.

For a deeper look at how to get the most from ITIL, see “6 tips for ITIL implementation success.”

What will ITIL cost?

Getting started involves purchasing the ITIL publications as hardcopy, PDF, or ePub, or through an online subscription directly from PeopleCert. Then there’s the cost of training, which fluctuates each year.

The course leading to the initial Foundation Certificate typically runs for two days, and courses leading to higher certifications can be a week or more. And add to that the inevitable cost of re-engineering some processes to comply with ITIL guidelines, and adjustment of help desk or other software to capture the information you need for tracking and generating metrics.

How does ITIL reduce costs?

Corporations and public sector organizations that have successfully implemented ITIL best practices report huge savings.

For example, in its Benefits of ITIL paper, Pink Elephant reports that Procter & Gamble saved about $500 million over four years by reducing help desk calls and improving operating procedures. Nationwide Insurance achieved a 40% reduction in system outages and estimates a $4.3 million ROI over three years, and Capital One reduced its business-critical incidents by 92% over two years.

Without buy-in and cooperation from IT staff, however, any implementation is bound to fail. Bringing best practices into an organization is as much a PR job as it is a technical exercise. And it’s impossible to plan for every failure, event, or incident so it’s not an exact science. You won’t know the exact ROI on ITIL until you implement it within your organization and use it effectively. Ultimately, since ITIL is a framework, it can only be as successful as corporate buy-in allows. But embracing certifications, training, and investing in the shift will help increase the chances of success and savings.



How poor data foundations can undermine AI success

April 17, 2026, 07:00

The promise of AI is immense, but poor-quality data undermines every attempt to derive any value from it. Without the right inputs, AI produces unreliable, incomplete, and even misleading outcomes.

For the average enterprise, data exists in many forms across many systems, says Brian Sathianathan, CTO at Iterate.ai, and integrating structured and unstructured data is harder than most AI pilots account for. “Structured data from operational systems is rarely as tidy as teams are assuming, and unstructured data, like scanned documents and forms, requires a different preparation process before it can be matched and used effectively,” he says, adding this might explain why businesses hit a wall when trying to move beyond POC.

Organizations with impressive POCs typically succeed because they rely on curated datasets, manual workarounds, and tightly controlled environments, says Rhian Letts, head of group technology strategy at Investec. The real challenge lies in converting pilots into reliable, production-grade implementations. Scaling, she adds, requires resilient pipelines, consistent definitions, operational support, and integration into real workflows. It also raises the bar for governance.

“Many data governance frameworks were designed for human-paced consumption,” she says. “AI significantly increases both the speed and volume of data demand and introduces non-human consumers. Governance, therefore, needs to evolve to become more automated, real-time, and explicit about provenance and permissions.”

For Daniel Acton, CTO at technology firm ADG, too many organizations rush to do something with AI without properly analyzing what they actually want to do with it. “AI can be useful, but if you feed AI data that’s incomplete and inaccurate, or if it doesn’t have the data needed to teach the machine to do what you want it to do, the results will be underwhelming,” he says.

Another core issue is a lack of standardized, high-fidelity metadata. “The quality of metadata is the hardest challenge to overcome,” says Brett Pollak, executive director for workplace technology and infrastructure services at UC San Diego. “Metadata is the essential connective tissue that allows an AI agent to interpret a user’s prompt and map it correctly to the intersection of specific columns and rows. Most organizations have unique, institution-specific interpretations of data that are rarely documented properly or kept current.” This creates a translation gap where an agent might have access to the data but lacks the context to understand what a specific field represents in a business context.

Data, data everywhere

Just because obstacles exist, though, doesn’t mean progress needs to pause. “AI use should be aligned to current maturity,” says Letts. “Rather than treating imperfect data as a constraint, organizations can ask how AI might help improve and better connect the data they already have.” Sathianathan agrees, adding that within the new LLM world, even small amounts of accurate data can have significant value. “With traditional machine learning just a few years ago, you needed a lot of data to train models,” he says. “Today, since most LLMs come with highly pre-packaged knowledge, all you need is sufficient amounts of the right data to get it ready for your domain.”

For organizations that have already deployed structured data warehousing, the new barrier is the transition from human-centric storage to machine-actionable delivery, says Pollak. “Readiness now means ensuring your data is wrapped in specific metadata, exposed via modern protocols like MCP servers, and governed by a selective exposure strategy that ensures agents only act on what’s governed,” he says.

Shift your mindset around data

Today, many organizations want to move quickly from data disorder to being data-driven. If that’s the end goal, CIOs and tech leaders need to treat data as a first-class citizen within their organizations. As part of this shift, data can no longer be seen as a by-product of business systems, but rather as a core output that should be managed with the same level of care as any other product or service. When this happens, business leaders can unlock insights and value they didn’t know existed.

Also, according to Letts, a use-case-led approach is critical. Trying to fix every dataset across an organization is neither practical nor necessary. Meaningful value can be unlocked even where data is imperfect by focusing on the right use cases. By prioritizing five to 10 high-value use cases and mapping the data required to deliver them in production, it’s easier to focus efforts. Foundations can then be strengthened to serve those priorities.

With AI, the threshold for what’s good enough has lowered for many use cases, particularly those focused on productivity and knowledge work, she adds. AI models can extract value from context and connect dots, even where data isn’t perfectly structured. But higher-stakes use cases demand higher quality and stronger controls. “The key is to be explicit about purpose, risk, and operational dependency,” she says. “Lower-risk use cases can move faster with well-described and well-governed context, while higher-risk applications require tighter thresholds.”

Prioritize ownership, governance, and security

All governance frameworks, policies, standards and procedures should be reviewed with AI in mind, adds Letts. Many were designed for human-paced consumption, whereas AI increases speed, scale, and integration across both structured and unstructured data. So validating ownership of critical data elements and establishing a shared business understanding of their meaning is essential to progress. Standardized definitions and metadata should also ensure that questions like “what does this mean?” and “where did it come from?” can always be answered. “AI access must be secure by default,” she adds. “This means having least privilege, audit trails, handling of sensitive data, and strong controls around retrieval. It should always be demonstrable what a model can and cannot access.”
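Letts’ combination of least privilege and audit trails can be sketched as a thin wrapper in front of the data layer: the agent reaches only allow-listed datasets, and every attempt, permitted or not, is logged. All identifiers below are hypothetical:

```python
import datetime

# Per-agent allow-list: the agent can only read datasets explicitly granted.
ALLOWED = {
    "claims-summarizer": {"claims.documents", "claims.status"},
}
AUDIT_LOG = []  # every access attempt is recorded, allowed or not

def run_query(dataset: str, query: str):
    """Stand-in for the real data layer."""
    return [{"dataset": dataset, "query": query}]

def fetch(agent_id: str, dataset: str, query: str):
    """Enforce least privilege and append an audit record for each attempt."""
    allowed = dataset in ALLOWED.get(agent_id, set())
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "dataset": dataset,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent_id} may not read {dataset}")
    return run_query(dataset, query)
```

Because denied attempts are logged too, the audit trail makes it demonstrable what the model tried to access, not just what it retrieved.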

Additionally, organizations must be mindful of data privacy when using AI, too. “Agentic AI systems require a different level of data access than traditional enterprise apps,” says Sathianathan. “Data needs to be analyzed, not just queried, at scale. That’s a big change to privilege models, and IT and security leaders need to think carefully about where all that data is going and what access the AI system really requires.” The same is true, he adds, if the LLM processing that data is running within or outside an organization’s four walls, and such decisions should be considered before deployment, not after. 

Use AI to fill in the gaps

In areas where the business might be falling short, consider using AI to draft and update your organization-specific data definitions, suggests Pollak. “Prioritize establishing a rigorous human-in-the-loop process to ensure this connective tissue is accurate and current.” Additionally, it’s possible to use LLMs and smaller language models to clean up data in certain areas with restrictive prompts, adds Sathianathan. This way, you can process data efficiently and avoid wasting resources by pumping massive amounts of data into large cloud-based LLMs.

Being AI-ready isn’t a one-time milestone, says Letts. AI capabilities are evolving quickly, which means the threshold for readiness shifts over time. It’s essential to improve end-to-end lineage, build shared semantics and ontology so data is consistently understood, increase interoperability across platforms and domains, and tighten how AI systems access data so it remains secure, auditable, and fit for purpose. “Thresholds change as use cases evolve,” she says, “so data readiness must be treated as an ongoing discipline rather than a completed task.”

MuleSoft Agent Fabric adds new ways to keep AI agents in line

April 15, 2026, 15:22

Salesforce first sought to tackle AI agent sprawl last year with Agent Fabric, a suite of capabilities and tools inside its MuleSoft AnyPoint Platform. Now, it’s seeking to further rein in unruly AI agents on its platform and those of other vendors too, with new governance tools and deterministic controls.

When enterprises adopt multiple agentic AI products, they can end up with redundant or siloed workflows scattered across teams and platforms, undermining operational efficiency and complicating governance as they try to scale AI safely and responsibly.

Agent Fabric, introduced in September 2025, started out as a place for enterprises to register, view, interconnect and govern agents. In January it added a deterministic scripting tool and the ability to scan for new agents and add them to the registry.

But enterprises still need more help to bring their AI agents under control, so Salesforce is adding more features.

First up is an expansion of the deterministic controls in the form of Agent Script for Agent Broker, an intelligent routing service inside Agent Fabric that is designed to connect agents across domains, dynamically matching user tasks with the best-fit agent. Salesforce said the controls will help developers codify workflows in multi-agent systems in order to ensure consistent and reliable outputs.

Rather than leave probabilistic agents to make all the decisions about how to resolve a problem, introducing an element of unpredictability, Agent Script for Agent Broker enables enterprises to steer some of the decision-making according to predetermined rules that require fewer computing resources than running a large language model.
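Salesforce has not published Agent Script’s internals, so the following is only a generic sketch of the deterministic-first pattern described here: fixed rules resolve the predictable cases cheaply, and only unmatched requests fall back to a model-based router. All route names are invented:

```python
# Deterministic-first routing sketch (not Salesforce's actual Agent Script
# syntax): ordered keyword rules handle predictable requests without an LLM.
ROUTES = [
    ("refund", "billing-agent"),
    ("password", "it-helpdesk-agent"),
    ("invoice", "billing-agent"),
]

def llm_pick_agent(task: str) -> str:
    """Stand-in for a probabilistic, model-based router."""
    return "general-agent"

def route(task: str) -> str:
    text = task.lower()
    for keyword, agent in ROUTES:
        if keyword in text:
            return agent        # deterministic: no model call, no variance
    return llm_pick_agent(task)  # fallback only for ambiguous requests
```

The design trade-off is exactly the one Kramer describes below: rules give predictable handoffs for known cases, while the model still handles whatever the rules cannot anticipate.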

That’s welcome news for Robert Kramer, managing partner at KramerERP.

“Pure autonomous agents don’t necessarily work in production as enterprises need to ensure predictable outcomes. The deterministic controls should facilitate a secure handoff of control and rules while still allowing the model to engage in reasoning when it’s appropriate,” he said. “It’s a balance between control and flexibility, which is the norm for most real deployments.”

For Rebecca Wettemann, principal analyst at Valoir, providing both deterministic and probabilistic options within Agent Fabric enables developers and agent builders to take the lower-cost route to more accurate and predictable results from agentic systems.

Enterprises will have to wait to put this deterministic orchestration feature into production, though: Still in beta testing, it won’t be generally available until June 2026.

Centralized LLM governance tackles cost

Beyond orchestration, Salesforce has added a new LLM Governance capability in AI Gateway, the control layer within Agent Fabric that provides centralized visibility of token usage, costs, and data flows for third-party models.

Enterprises will be able to use LLM Governance, now generally available, to help them keep their AI operations on budget, Salesforce said.

This is becoming increasingly important as CIOs seek to bring disparate AI systems under centralized control and justify spiralling AI costs.

Info-Tech Research Group advisory fellow Scott Bickley warned that without centralized governance like this, different teams around a company may choose different models, negotiate their own API contracts, and manage token budgets locally.

“This results in sprawling costs, inconsistent security postures, and no enterprise-wide policy enforcement,” he said. “By positioning AI Gateway as the choke point through which all LLM traffic flows, enterprises gain visibility into AI usage patterns, the models in use, purpose of the usage, and cost data.”
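Bickley’s “choke point” idea can be sketched as a single function all LLM calls pass through, accumulating token and cost data per team and model. The pricing figures and model names below are invented for illustration:

```python
from collections import defaultdict

# Invented per-1K-token prices; a real gateway would load these from config.
PRICE_PER_1K_TOKENS = {"model-a": 0.01, "model-b": 0.03}

# Central usage ledger keyed by (team, model).
usage = defaultdict(lambda: {"tokens": 0, "cost": 0.0})

def gateway_call(team: str, model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Record token usage and cost for one LLM call routed through the gateway."""
    tokens = prompt_tokens + completion_tokens
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
    record = usage[(team, model)]
    record["tokens"] += tokens
    record["cost"] += cost
    return cost

gateway_call("finance", "model-a", 800, 200)
gateway_call("finance", "model-a", 1500, 500)
```

Because every call flows through one function, the ledger answers the questions Bickley lists: who is using which model, for what volume, and at what cost.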

MCP additions simplify integration

Salesforce is also adding new Model Context Protocol (MCP) features, including MCP Bridge, to make it easier to access legacy APIs, and Informatica-hosted MCPs, which it says will simplify how agents interact with enterprise data and APIs.

These could save developers time and simplify the building of cross-environment, multi-agent systems.

Bickley said MCP Bridge will help enterprises with thousands of legacy APIs (REST, SOAP, GraphQL) built long before MCP existed.

“Agents speaking MCP cannot call those APIs natively so they require wrappers around the API endpoint; this would be a massive engineering lift. MCP Bridge allows these APIs to be exposed as MCP-compatible tools without modifying the underlying code,” he said.
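The bridging idea, greatly simplified and not the actual MCP wire format, amounts to generating an agent-readable tool descriptor from a legacy endpoint’s existing definition, leaving the API itself untouched. The endpoint below is hypothetical:

```python
# Hypothetical catalog of legacy endpoints built long before MCP existed.
LEGACY_APIS = {
    "get_policy": {
        "method": "GET",
        "url": "https://legacy.example.com/policies/{policy_id}",
        "params": ["policy_id"],
    },
}

def as_tool_descriptor(name: str) -> dict:
    """Expose a legacy endpoint as an agent-readable tool description,
    without modifying the underlying API code."""
    api = LEGACY_APIS[name]
    return {
        "name": name,
        "description": f"{api['method']} {api['url']}",
        "input_schema": {p: "string" for p in api["params"]},
    }
```

This is the wrapper generation Bickley calls a “massive engineering lift” when done by hand per endpoint; a bridge automates it across thousands of APIs.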

And Wettemann said Informatica-hosted MCPs will further reduce development overhead by bringing built-in data quality and governance capabilities into agent workflows, which is particularly critical for enterprises in regulated industries and those with heightened risk concerns.

But Bickley added a note of caution. “APIs can behave oddly and have their own nuanced behavior,” he said. “Enterprises should test how MCP Bridge handles edge cases.”

Informatica-hosted MCPs will not be a miracle solution either, he warned: “Even if the Informatica data quality and governance capabilities are cleanly integrated in the Agent Fabric registry, these are not instantaneous operations. Checking data fields for accuracy, deduplication, and cross-system matching take time and carry latency measured in milliseconds or even multiple seconds, and that is pre-integration.”

A pivot for MuleSoft?

Bickley sees the updates as a broader strategy for Salesforce to reposition MuleSoft, which it acquired in 2018 for $5.7 billion, from a traditional API integration platform to an infrastructure layer for enterprise AI agents.

By layering orchestration, governance, and connectivity into Agent Fabric, Salesforce appears to be trying to position MuleSoft as the system of record for how agents are discovered, routed, and governed across the enterprise, deepening its role beyond API management into core AI infrastructure, he said.

Not all CIOs will welcome that move.

“If your agent control plane runs on Agent Fabric, switching costs rise materially, and the more agents you register, the more orchestration rules and governance policies defined, the more difficult it becomes to move to an alternative solution,” the analyst said.

As with any critical infrastructure dependency, “CIOs need to ask: What is the exit path? What components of Agent Fabric are portable and what is locked in? What’s the pricing model? What is the integration depth with non-Salesforce agents and data sources?” he said.

For now, though, enterprises have plenty of AI agent orchestration options to choose from.

This article first appeared on InfoWorld.

The real cost of manual access — and why CIOs are paying attention

April 15, 2026, 07:00

In my nearly two decades as an identity practitioner — including leading identity programs at global financial institutions and serving as a CISO — I’ve seen a recurring pattern that quietly erodes enterprise velocity. I call it “Monday morning friction.”

The symptoms often look mundane, but they are systemically expensive:

  • The project stall: A cloud migration pauses while an engineer waits days for approval on a single resource.
  • The executive “dark” period: A newly hired leader spends their first week unable to access the very dashboards they were hired to oversee.
  • The security workaround: A developer uses a shared credential because the formal request process is too slow for the current sprint.

In large enterprises, these moments are often dismissed as routine IT friction. In practice, they are signals of manual access governance quietly slowing the pace of the business.

When I sat in the CISO chair, the pressure was binary: Keep the organization secure without becoming the “Office of No.” What has become increasingly clear in boardroom conversations is that manual access governance is no longer just a security concern. It has evolved into a persistent source of operational friction that slows the very transformation CIOs are tasked with accelerating.

The productivity tax of the “I don’t know” loop

The most significant hidden cost in governance isn’t software — it is lost time.

Research from Lakeside Software’s 2024 IT Leaders Report shows that employees lose nearly an hour each week to IT-related friction, with access delays and technical hurdles among the primary contributors. In a 10,000-employee enterprise, that translates into hundreds of thousands of productive hours annually spent waiting, escalating or troubleshooting.

This creates what I’ve seen repeatedly: The “copy-paste” model of onboarding. A new employee is told to replicate the access of someone else in a similar role. Over time, those inherited permissions accumulate. What begins as expedience becomes structural privilege creep.

The SaaS paradox: Modern tools, manual workflows

Most enterprises no longer rely on spreadsheets for governance. They use sophisticated identity governance and administration (IGA) platforms. Yet the presence of modern interfaces has not eliminated manual intervention.

Today’s “manual trap” is less visible. It’s the human-in-the-loop model that requires managers to interpret cryptic entitlements and click “approve” on decisions they may not fully understand.

Even in organizations with advanced identity tooling, automation frequently stops halfway. HR systems, identity directories, provisioning engines and application logs may each function well in isolation — but the human often becomes the integration layer between them. That integration work carries a cost. Every escalation pulls focus from higher-value work and pulls the CIO further away from digital acceleration goals.

Governance as a spend signal

Increasingly, CIOs are asking a broader question: Can identity governance help manage SaaS sprawl?

Identity data holds a powerful, underused signal. Authentication frequency and inactivity patterns reveal where access no longer aligns with usage. When viewed through an operational lens, identity governance becomes a shadow IT discovery tool.

For CIOs managing margin pressure and platform rationalization, this reframes identity from a cost center to a potential efficiency lever. If an identity platform can flag that a significant portion of a SaaS tier is unused because the governance signal shows zero logins in 90 days, it moves from a security checkbox to a procurement asset.
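The zero-logins-in-90-days signal is straightforward to compute once last-login data is exported from the identity platform. A minimal sketch with invented users:

```python
import datetime

def flag_unused_seats(last_logins: dict, today: datetime.date,
                      threshold_days: int = 90) -> list:
    """Return users whose last authentication is older than the threshold
    (or who never logged in) -- candidates for license reclamation."""
    cutoff = today - datetime.timedelta(days=threshold_days)
    return sorted(
        user for user, last in last_logins.items()
        if last is None or last < cutoff
    )

today = datetime.date(2026, 4, 15)
seats = {
    "alice": datetime.date(2026, 4, 10),  # active
    "bob": datetime.date(2025, 12, 1),    # stale, > 90 days
    "carol": None,                        # provisioned, never used
}
print(flag_unused_seats(seats, today))
```

In practice the same scan, run per application, turns the identity platform into the shadow-IT and SaaS-spend discovery tool described above.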

Approval fatigue and governance debt

Manual governance often creates the illusion of control. A manager clicking “approve” feels like oversight. In practice, high-volume approval queues create approval fatigue.

When access requests arrive described in dense shorthand — such as FIN-PRD-DB-USR-RW — most managers lack the time or context to dissect each entitlement. Over time, approvals become reflexive. This is where governance debt accumulates.
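One mitigation is to expand cryptic entitlement codes into plain language before they reach an approver’s queue. A sketch, with segment meanings invented for illustration (real mappings would come from the IGA catalog):

```python
# Invented segment dictionary; a production version would be sourced from
# the identity governance catalog and kept current by data owners.
SEGMENTS = {
    "FIN": "Finance domain", "PRD": "production environment",
    "DB": "database", "USR": "user-level account",
    "RW": "read-write access", "RO": "read-only access",
}

def describe(entitlement: str) -> str:
    """Expand an entitlement code into words a busy approver can act on."""
    parts = [SEGMENTS.get(p, f"unknown segment '{p}'")
             for p in entitlement.split("-")]
    return ", ".join(parts)

print(describe("FIN-PRD-DB-USR-RW"))
```

Flagging unknown segments, rather than guessing, keeps reflexive approvals from papering over gaps in the catalog.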

Like technical debt, governance debt is the byproduct of incremental shortcuts. The interest on that debt is paid not only in risk, but in downtime, rework and fragmented visibility.

The scaling problem: AI and machine identities

Manual governance models were designed for a workforce of humans. That denominator is changing. In cloud-forward environments, non-human identities — such as service accounts, bots and AI agents — already outnumber human users. These identities are created and modified at the speed of code.

A governance model that depends on manual review does not scale for AI. As CIOs invest in automated workflows and autonomous agents, identity governance increasingly needs to transition from a human-centric process to a higher-velocity automated control plane.

Identity as an operational control system

The friction surrounding access governance is often framed as a security trade-off: Safety versus speed. In practice, the issue is fragmentation.

When identity operates in isolation, organizations rely on people to bridge the gaps. Human coordination becomes the control plane. That is expensive, slow and prone to error.

Viewed through this lens, identity governance is an operational control system that influences onboarding speed, engineering throughput and workforce productivity. CIOs who recognize its role in shaping workflow velocity and cost transparency gain a competitive edge. Governance does not have to function as an emergency brake; it can become part of the engine.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Micro and macro agents: The emerging architecture of the agentic enterprise

April 14, 2026, 08:00

Artificial intelligence is entering a new phase. For the past decade, enterprises have focused primarily on predictive analytics and automation — using machine learning models to classify data, detect patterns and improve decision making. Today, a new paradigm is emerging: Agentic AI, systems capable of autonomously executing tasks and coordinating complex workflows.

Yet despite the rapid growth of AI agents, the term itself is often used loosely. Many organizations describe any AI-powered automation as an “agent,” even when it performs only a single function. As enterprises move toward large-scale deployment of autonomous systems, a clearer framework is needed to understand how these systems will be structured.

One useful way to think about the emerging architecture is through the distinction between micro agents and macro agents — two complementary layers that together form the foundation of the agentic enterprise.

The rise of micro agents

Most AI systems being deployed today can be best described as micro agents.

Micro agents are specialized AI systems designed to perform narrow, well-defined tasks within a workflow. They typically operate within existing applications and platforms, augmenting specific functions rather than managing entire processes.

Examples of micro agents are increasingly common across industries:

  • A document extraction agent that reads contracts or insurance policies
  • A fraud detection agent that analyzes transactional anomalies
  • A summarization agent that condenses large volumes of text
  • A classification agent that categorizes customer requests
  • A risk scoring agent that evaluates underwriting inputs

These agents are powerful because they combine machine learning models, large language models and automation tools to complete tasks that previously required human intervention.

In many ways, micro agents resemble AI-powered microservices. Each is optimized for a specific capability and integrated into a broader digital workflow.

However, micro agents have an inherent limitation: They operate at the task level, not the workflow level.

The emergence of macro agents

The next stage in enterprise AI will be defined by the rise of macro agents.

Macro agents operate at a higher level of abstraction. Rather than performing a single task, they coordinate multiple micro agents to complete an end-to-end business process.

Macro agents are, therefore, goal-oriented systems. Their objective is not simply to perform an activity but to deliver an outcome.

Macro agents can also integrate with systems that require real-time decisions and dynamic engagement.

Consider a typical insurance claims process. Traditionally, this workflow involves numerous steps:

  • First notice of loss intake
  • Document analysis
  • Damage assessment
  • Fraud detection
  • Coverage validation
  • Payment authorization

A macro agent could orchestrate each of these steps by coordinating specialized micro agents responsible for individual tasks. The macro agent would manage the workflow, evaluate outcomes and ensure the process is completed successfully.

This orchestration capability fundamentally changes the role of AI in enterprises. Instead of acting as a set of isolated tools, AI begins to function more like a coordinated digital workforce.
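The claims workflow above can be sketched as a macro agent driving a pipeline of micro agents, each represented here by a stand-in function, with an outcome check (the fraud score) between steps:

```python
# Each micro agent is a stand-in function that enriches the claim record.
def intake(claim):         return {**claim, "intake": "done"}
def analyze_docs(claim):   return {**claim, "docs": "parsed"}
def assess_damage(claim):  return {**claim, "damage": 2500}
def detect_fraud(claim):   return {**claim, "fraud_score": 0.1}
def validate_cover(claim): return {**claim, "covered": True}
def authorize_pay(claim):  return {**claim, "paid": claim["damage"]}

PIPELINE = [intake, analyze_docs, assess_damage,
            detect_fraud, validate_cover, authorize_pay]

def macro_agent(claim: dict) -> dict:
    """Drive the workflow end to end, evaluating outcomes between steps."""
    for step in PIPELINE:
        claim = step(claim)
        if claim.get("fraud_score", 0) > 0.8:  # outcome check between steps
            claim["status"] = "escalated to human adjuster"
            return claim
    claim["status"] = "settled"
    return claim
```

The macro agent owns the outcome ("settled" or "escalated"), while each micro agent owns only its task, which is the division of labor described above.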

The key point is that macro agents are outcome-based, which is what businesses want.

The need for governance: Meta agents

As organizations deploy networks of interacting agents, another challenge quickly emerges: Governance.

The struggle for good AI governance is real: Many organizations deploying AI recognize the need for guardrails, but few have figured out how to build a mature governance system.

Autonomous systems that make decisions, coordinate tasks and execute actions must be monitored carefully to ensure they stay compliant, secure and aligned with business objectives.

This creates the need for a third layer in the agentic architecture: Meta agents.

Meta agents oversee and monitor other agents. Their responsibilities may include:

  • Monitoring risk and model behavior
  • Validating regulatory compliance
  • Auditing decision logic
  • Managing cost and resource consumption
  • Escalating decisions to human operators when necessary

In essence, meta agents serve as the governance layer of the agentic enterprise, ensuring that autonomy does not come at the expense of control.
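A meta agent’s pre-execution check can be sketched as a policy gate that approves, escalates, or blocks an action proposed by another agent. The policy values below are invented:

```python
# Invented policy: a real meta agent would load this from a governance store
# and also cover compliance rules, audit logging, and model-behavior checks.
POLICY = {
    "max_cost_per_action": 5.0,          # dollars, per autonomous action
    "blocked_actions": {"delete_records"},  # always needs a human
}

def meta_review(action: str, estimated_cost: float) -> str:
    """Gate an agent-proposed action before it executes."""
    if action in POLICY["blocked_actions"]:
        return "blocked: requires human approval"
    if estimated_cost > POLICY["max_cost_per_action"]:
        return "escalated: cost exceeds budget"
    return "approved"
```

Only the "approved" path proceeds autonomously; the other two keep a human in the loop, which is how autonomy stays within control.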

The need for governance is critical, and meta agents will be key to balancing governance with innovation in the age of AI. According to Ian Ruffle, head of data and insight at UK breakdown specialist RAC, “Success is about having the right relationships and never trying to sweep issues under the carpet.”

The agentic enterprise stack

Together, these layers form what can be described as the agentic enterprise stack:

  • Meta agents: Governance and oversight. Monitoring, compliance and risk management across agent systems.
  • Macro agents: Workflow intelligence. Coordination of multi-step processes and delivery of business outcomes.
  • Micro agents: Task execution. Specialized systems responsible for discrete capabilities and actions.

This layered architecture reflects how large-scale AI systems will likely evolve. Instead of deploying isolated tools, enterprises will build interconnected ecosystems of agents, each operating at a different level of responsibility.

This framework could move today’s ERP platforms beyond systems of record toward a new generation of systems of intelligence.

Where most companies are today

Despite growing interest in agentic AI, most organizations remain in the micro-agent stage.

Many AI initiatives focus on improving individual tasks — automating document processing, generating summaries, or assisting customer service representatives. These use cases deliver meaningful productivity gains, but they represent only the early phase of the agentic transformation.

The real shift will occur when enterprises begin to deploy macro agents capable of managing entire workflows, coordinating dozens of micro agents in the background.

At that point, AI moves beyond augmentation and begins to operate as an operational system for work itself.

Implications across industries

The emergence of agentic architectures will have profound implications across industries.

In financial services and insurance, macro agents could manage complex processes such as underwriting decisions, claims resolution and regulatory reporting.

In healthcare, macro agents may coordinate patient intake, diagnosis support and care management workflows.

In manufacturing and supply chains, agent systems could orchestrate procurement, logistics and production planning.

Across sectors, the defining shift will be the transition from AI tools that assist humans to AI systems that manage workflows autonomously while remaining governed by human oversight.

From automation to autonomous

The evolution from micro agents to macro agents represents more than a technological upgrade. It signals a fundamental shift in how organizations think about work.

Digital transformation modernized technology, while intelligent transformation modernizes the enterprise itself.

Ultimately, success will not be determined by who can showcase the most impressive agent, but by who can develop the most trustworthy agentic ecosystem — one that is secure by design, outcome-oriented and embraced by employees who feel empowered rather than displaced.

For decades, enterprise technology has focused on improving the efficiency of human tasks. Agentic systems instead aim to restructure how work itself is executed, distributing responsibilities across networks of autonomous systems.

In this emerging model, micro agents act as the specialized workers, macro agents serve as workflow managers and meta agents provide the governance and oversight required for responsible autonomy.

This approach moves organizations from a model where humans initiate AI agents to one where AI initiates AI agents, sometimes with a human overseeing the outcome.

Organizations that understand and design for this layered architecture, and are willing to redesign workflows and roles, will be best positioned to build the agentic enterprises of the future. Adopting this architecture will translate value creation into value realization.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?
