The CIO succession gap nobody admits

May 8, 2026, 06:00

I have sat with three CIOs in the last two years who wanted to leave their seat and could not. One was being recruited into a larger enterprise role. One was ready to retire. One had been offered a board seat that required stepping down. In every case, the same thing stopped them. When the CEO asked who could step in, the CIO could not give a credible name. The person they had been calling their number two was technically brilliant and operationally reliable, and every one of them had been groomed into an architect, not a leader. The board would not approve an external hire during an active transformation. So the CIO stayed. One of them is still stuck.

The CIO role has the weakest succession bench in the C-suite, and most CIOs discover it the same way those three did. Not during a quarterly talent review. Not during a board retreat. They discover it the moment they try to leave. By then, the decision is already made for them. This is a leadership design problem: CIOs build it into their own orgs, and their successors inherit it when it is too late to fix quickly.

The architect trap

I have watched the same pattern form in almost every IT organization I have worked in. The people who rise to the top of the CIO’s direct reports are the ones who can hold the most architectural complexity in their heads. They are the ones the CIO trusts with the platform decisions, the vendor consolidations, the integration maps. They earn that trust legitimately. They are excellent at what they do.

But architectural trust is a different currency than leadership trust. When a CIO promotes based on architectural depth, what they get is a deputy who can design the org but cannot run it. I have seen deputies who have never owned a P&L conversation with a CFO. Deputies who have never delivered hard news to a business unit president. Deputies who have never had to defend a budget line item in a room full of people trying to take it from them. They were not hiding from those conversations. The CIO was holding the conversations for them because the CIO was good at those conversations and the deputy was good at the architecture.

The result is a bench that looks deep from inside the IT org and looks empty from the boardroom. I have watched a CEO walk out of a succession conversation saying, “I like your people, but I cannot see any of them in your chair.” That is not a compliment to the CIO. That is a verdict on how the CIO built the team.

Three moves I make before I need them

After watching this happen enough times, I stopped treating succession as something I would address later and started treating it as a design choice I had to make inside my first year. I changed how I build the bench in three ways, and I make each move early enough that the person has time to grow into it or fail out of it.

First, I give them a standing decision domain, not a “next in line” title. A deputy who is told they are being groomed for the CIO seat will manage their career instead of their work. A deputy who is given full authority over, say, all vendor escalations above a defined threshold will start making real decisions in real rooms with real consequences. That is where judgment gets built. The domain has to be something I would otherwise own myself. If I am still approving everything inside it, I am building a forwarder, not a successor.

Second, I put them in rooms where they have to lose something. One of the most damaging things a CIO can do is protect a high-potential deputy from conflict. I used to do this without realizing it. I would pull the hard conversations back to my level because I wanted to spare the deputy the political damage. The deputy came out looking clean and came out completely unprepared. Now I deliberately put deputies into conversations where they have to defend a position against a peer executive who will push back hard. Sometimes they hold the line. Sometimes they fold. Either outcome tells me something I needed to know before anyone was counting on them.

Third, I make the bench visible to the board before I have to. If the board does not know my top two or three deputies by name and track record, I do not have a succession plan. I have private notes. The CIOs I described at the beginning of this article all had deputies they believed in. None of those deputies had ever presented to the board on anything substantive. The board had no reference point. So when the succession question came up, the deputies did not exist in the board’s imagination, and the CIO’s personal endorsement was not enough to create them.

The first time I put a deputy in front of the board, they came back different. The board did not go easy on them. They came back knowing what a board conversation actually feels like, which meant the next one would not be a first impression. The board needs reps with my deputies before the seat is vacant. Once it is vacant, the reps are a job interview and a job interview is not where anyone does their best work.

What the gap actually costs

The cost of a shallow bench is not abstract. I have seen CIOs delay their own career moves by eighteen months or longer because they could not produce a credible successor. I have seen organizations pay two and a half times market to hire externally because the internal candidate did not survive a board interview. I have seen transformations stall because the CIO could not delegate enough to step back and think, because there was no one qualified to hold what they put down.

The cost to the deputies is also real. The architect-track deputy who spends six or seven years being the CIO’s most trusted technical lieutenant, and then gets passed over for the CIO role because the board does not see a leader, rarely recovers that momentum. Some of them leave. Some of them stay and quietly disengage. A few of them become the reason the new CIO’s first ninety days are harder than they should be. None of that is the deputy’s fault. It is the consequence of a design choice the previous CIO made years earlier, usually without knowing they were making it.

CIO.com has published strong guidance on this, including work on "grow your own CIO" strategies that treat succession as a deliberate pipeline rather than an accident of tenure.

The test is simple. If you had to leave in ninety days, could you hand the CEO a name and get a nod? If you cannot picture that nod, you do not have a successor. You have a list of people you like and trust, which is not the same thing. The successor you can actually name is the one you built on purpose, not the one who happened to look ready when the chair emptied. I have learned this by watching peers run out of time to build what they meant to build. I am trying not to be one of them.

Column | Competitiveness beyond technology: what sets great FDEs apart

May 7, 2026, 03:25

Despite AI investment on an unprecedented scale, most enterprises have hit an "integration wall." The technology works well in isolation, and proofs of concept (PoCs) produce impressive results.

But the moment they try to apply AI in production, where it touches real customers, affects revenue, and carries real risk, enterprises hesitate. And for good reason: AI systems are inherently non-deterministic.

Unlike conventional software, which behaves in predictable ways, large language models (LLMs) can produce unexpected results. They may state wrong information with confidence, hallucinate facts that do not exist, or respond in ways that clash with brand tone. For risk-sensitive enterprises, this uncertainty is a barrier that no degree of technical sophistication can overcome.

The pattern is the same across industries. Looking back on my experience helping enterprises adopt AI, I have seen organization after organization build an impressive AI demo and still fail to get past the integration stage. The technology was ready and the business case was sound, but the organization's risk tolerance could not keep up, and no one knew how to bridge the gap between what AI can do in an experimental setting and what is acceptable in production. That is how you arrive at the conclusion that the real problem is not the technology, but the people who can put it to work.

A few months ago I joined Andela, an IT talent platform. From that vantage point, the capability enterprises need comes into sharper focus: the forward deployed engineer, or FDE. Data analytics company Palantir coined the term to describe the customer-facing technical talent essential to deploying its platform inside government agencies and enterprises. More recently, leading AI labs, hyperscalers, and startups have adopted the model; OpenAI, for example, deploys experienced FDEs to accelerate platform adoption among high-value customers.

There is one thing CIOs must understand, however. Until now, this capability has been concentrated inside AI platform companies in service of their own growth strategies. For an enterprise to get over the integration wall and put AI into real operation, it must acquire and develop FDE capability internally.

What makes an FDE

The defining trait of an FDE is the ability to connect technical solutions to business outcomes in a way conventional engineers cannot. An FDE is not simply a developer who builds systems; they are closer to a translator who works at the intersection of engineering, architecture, and business strategy.

They are the expedition leaders who guide an organization through the uncharted territory of generative AI. Above all, they clearly understand that deploying AI to production is not just a technical problem but a risk management problem, so earning the organization's trust through the right guardrails, monitoring, and risk-control strategies is essential.

In 15 years working at Google Cloud and Andela, I have met only a handful of people with all of these capabilities. What sets them apart is not a single skill but the combination of four core competencies.

First is problem-solving ability and judgment. AI output is typically 80 to 90% accurate, but the remaining 10 to 20% can contain errors that are all the more dangerous: results that look plausible but are wrong, or solutions so needlessly complex they are hard to apply in practice.

A great FDE has the contextual understanding to spot these errors. They quickly identify low-quality AI output and recommendations that ignore important business constraints. Most important, they can design systems that keep these risks under control, managing them through output validation, human-in-the-loop processes, and deterministic fallback responses that take over when the model is uncertain. That capability is what separates a merely impressive demo from an operational system executives will actually approve for deployment.
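
As a minimal sketch of that guardrail pattern, assuming a hypothetical `call_llm` that returns an answer with a confidence score (nothing here is from the article or any specific product): validate the output, fall back to a deterministic response when the model is uncertain, and escalate failed validations to a human.

```python
# Minimal guardrail sketch (illustrative, not the author's implementation).
# Assumptions: `call_llm` returns (text, confidence); `escalate_to_human`
# returns a human-approved reply; rules and thresholds are placeholders.

from dataclasses import dataclass

FALLBACK_RESPONSE = "I can't answer that reliably; routing you to a specialist."
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class GuardedAnswer:
    text: str
    source: str  # "model", "fallback", or "human"

def validate_output(text: str) -> bool:
    """Placeholder business-constraint checks (length, banned claims)."""
    return bool(text) and len(text) < 2000 and "guaranteed" not in text.lower()

def answer(query: str, call_llm, escalate_to_human) -> GuardedAnswer:
    text, confidence = call_llm(query)
    if confidence < CONFIDENCE_THRESHOLD:
        # Deterministic fallback when the model is uncertain.
        return GuardedAnswer(FALLBACK_RESPONSE, "fallback")
    if not validate_output(text):
        # Human-in-the-loop for outputs that fail validation.
        return GuardedAnswer(escalate_to_human(query, text), "human")
    return GuardedAnswer(text, "model")
```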

Second is solution engineering and design. An FDE must translate business requirements into technical architecture while balancing real-world trade-offs in cost, performance, latency, and scalability. For a given use case, a small language model with lower inference cost can outperform the latest large model, and the FDE must be able to explain that choice in economic terms rather than in terms of technical polish.

Above all, this means putting simplicity first. The fastest way over the integration wall usually starts with a minimum viable product (MVP) that solves 80% of the problem with the right guardrails. A solution whose risks can realistically be managed beats a complex system that tries to cover every edge case and ends up creating uncontrollable risk.

Third is customer and stakeholder management. The FDE acts as the primary technical interface to the business, explaining how the technology works to executives who may have little AI experience. What those executives actually care about, though, is not the technology itself but risk, timelines, and business impact.

This is exactly where the FDE earns the organization's trust and lays the groundwork for scaling AI into production. The FDE translates AI's non-deterministic nature into a risk framework executives can understand: how far the impact would spread if something goes wrong, what monitoring is in place, and what the rollback plan is. By making AI's uncertainty visible and manageable, this work plays the central role of making it acceptable to risk-sensitive decision-makers.
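
Purely as an illustration of that framing (the structure and field values below are hypothetical, not the author's artifact), the framework can be captured as a short brief the FDE hands to decision-makers:

```python
# Hypothetical "risk brief" an FDE might prepare for executives
# (illustrative only; all field values are placeholders).

from dataclasses import dataclass

@dataclass
class AIRiskBrief:
    use_case: str
    blast_radius: str      # what is affected if the system misbehaves
    monitoring: str        # how misbehavior would be detected
    rollback_plan: str     # how to revert to the previous process

brief = AIRiskBrief(
    use_case="AI-drafted replies for tier-1 support tickets",
    blast_radius="single ticket; no writes to billing or account data",
    monitoring="sampled human review of 5% of drafts; daily error dashboard",
    rollback_plan="feature flag off; agents revert to manual drafting",
)
```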

Fourth is strategic alignment. An FDE ties AI implementations directly to measurable business outcomes, judging and advising on which opportunities can deliver real results and which are technically interesting but carry too much risk for the value.

They also consider not just the initial rollout but operating costs and long-term maintenance. It is this business-first perspective, combined with the ability to assess risk objectively, that makes an FDE something more than an excellent software engineer.

People who have all four competencies share a common profile. Most began their careers in technical roles such as software development and likely have a computer science education. They then built expertise in a specific industry, along with the flexibility and curiosity to keep learning in a fast-changing environment. Because the combination is so rare, these people tend to be concentrated in large tech companies and command high compensation.

The CIO's dilemma

If FDEs are such a scarce resource, what options are left for the CIO?

One is to wait for the talent market to grow the supply naturally, but that takes considerable time. And every month an AI project sits stalled at the integration wall, the gap widens between companies creating real value and companies still showing the board demos. AI's non-deterministic nature is not going away; if anything, as model performance improves, the potential for unpredictable behavior may grow. In the end, the companies that succeed will not be the ones waiting for the technology to become entirely risk-free, but the ones with the internal capability to put AI into production responsibly and confidently.

The alternative is to develop FDEs internally. This is harder than hiring, but it is the only scalable solution. Fortunately, FDE capability can be built systematically, given the right talent pool and focused, structured training. Andela has built a program that converts experienced engineers into FDEs and has accumulated effective methods along the way.

A strategy for building an FDE talent pool

Start by selecting the right candidates. Not every excellent engineer can become an FDE. Look for experienced software engineers whose curiosity extends beyond the technical domain: solid core development skills plus experience in data science and cloud architecture. Understanding of a specific industry is an especially important accelerant; someone with experience in healthcare regulation or financial risk frameworks will grow far faster than someone learning the domain from scratch.

The technical curriculum has three stages. The foundation stage builds a basic understanding of AI and machine learning: LLM concepts, prompt design techniques, Python proficiency, how tokens work, and basic agent architectures. These are the baseline skills.

The intermediate stage is hands-on tooling, with core skills that map to the three roles an FDE performs:

  • First, RAG (retrieval-augmented generation): connecting enterprise data to models accurately and reliably.
  • Second, agentic AI: designing multi-step reasoning and workflows with the right controls and validation steps.
  • Third, production readiness: deploying solutions with monitoring, guardrails, and incident response processes in place.

These skills are learned by building and deploying systems that account for real production risk.
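
For a sense of what the RAG portion of that curriculum exercises, here is a deliberately minimal sketch, assuming hypothetical `embed` and `generate` callables that stand in for whatever embedding model and LLM a team actually uses:

```python
# Minimal RAG sketch (illustrative). `embed` and `generate` are hypothetical
# stand-ins for a real embedding model and an LLM call.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, embed, k=3):
    """Rank documents by embedding similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(embed(doc), q), reverse=True)
    return ranked[:k]

def answer_with_rag(query, corpus, embed, generate):
    """Ground the model's answer in the retrieved enterprise documents."""
    context = "\n\n".join(retrieve(query, corpus, embed))
    prompt = (
        "Answer using ONLY the context below. If the context is insufficient, "
        f"say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```

The "answer only from context, or say so" instruction is the point of the exercise: it is one of the simplest guardrails against the hallucination risk described earlier.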

The advanced stage covers deeper topics such as model internals and fine-tuning. This translates into the ability to solve problems when standard approaches fail: responding to novel situations on the spot rather than just following a runbook, and enough expertise to explain to stakeholders such as the CISO why a given approach is safe.

Non-technical skills matter as much as technical ones. An FDE must be able to move conversations away from the technology and reframe them around business problems and risk mitigation. Demanding stakeholder management is also essential, including sensitive issues such as scope changes, schedule slips, and the uncertainty of non-deterministic systems. Most important of all is judgment: making sound decisions under uncertainty and inspiring confidence in executives who are being asked to accept a new kind of technology risk.

It is also important to set realistic expectations for both the organization and the candidates. Even the most structured program will not turn everyone into an FDE. But securing even a few FDEs can dramatically speed the climb over the integration wall. In practice, a single FDE embedded in a business unit can deliver more than many conventional engineers working in isolation without business context, because the FDE understands precisely that the essence of the problem is not the technology.

Where the AI era will be decided

Companies that secure FDE capability can get over the integration wall. They turn impressive demos into operational systems that create real value, and build on those successes to progressively expand the organization's trust.

Companies that do not are likely to remain stalled, investing in AI without delivering tangible results, while competitors willing to take on more risk capture the market.

When I joined Andela, I judged that AI would not fully replace human capability. I still believe that. But humans must evolve too, and the FDE is a model of where that evolution leads: deep technical understanding, business sense, risk management skill, and the flexibility to keep up with constant change.

CIOs who invest in this capability now will do more than keep pace with the technology. They can become the ones who actually realize the enterprise AI value that has so far proved elusive.

AI is spreading decision-making, but not accountability

May 6, 2026, 07:00

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears and the question of who’s responsible is immediately raised.

As AI moves from experimentation into production, accountability is no longer a technical concern, it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, understands that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is that there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.

width="1240" height="827" sizes="auto, (max-width: 1240px) 100vw, 1240px">

Jessica Eaves Mathews, founder, Leverage Legal Group

LLG

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”

width="1240" height="827" sizes="auto, (max-width: 1240px) 100vw, 1240px">

Chris Drumgoole, president, global infrastructure services, DXC Technology

DXC

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.

width="1240" height="827" sizes="auto, (max-width: 1240px) 100vw, 1240px">

Ojas Rege, SVP and GM, privacy and data governance, OneTrust

OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”

width="1240" height="827" sizes="auto, (max-width: 1240px) 100vw, 1240px">

Simon Elcham, CAIO, Trustpair

Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty with AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”

width="1240" height="827" sizes="auto, (max-width: 1240px) 100vw, 1240px">

Jonathan Tushman, CTO, Hi Marley

Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision often escalates to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, it often points at enterprise leadership, and, in operational terms, to the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.


How UKG puts AI to work for frontline employees

May 6, 2026, 07:00

As organizations rebrand themselves as AI companies, most of the conversation is focused on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota, CIO of UKG, one of the largest HR tech platforms in the market, which delivers a workforce operating platform utilized by 80,000 organizations in 150 countries, explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, who account for 80% of the world’s workforce. This is important because when companies rebrand themselves as AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets and our long history with the frontline workforce position us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowdsourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. Its impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. It’s IT that’s very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user would own the business context, and the developer, who owns the technology, brings that business idea to life. Going forward, the builder has the business context to create the right prompts to let AI do the building, and the human in the loop is no longer the technology builder, but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away; however, they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. How should they be thinking about the ways AI is changing the workforce?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.


Agentic AI is rewiring the SDLC

May 4, 2026, 09:00

The next wave of AI in software development goes beyond better code generation: agents are starting to take on responsibility throughout planning, design, build, test, release and operations. In the teams I work with, this is already changing team dynamics, leadership priorities and what CIOs must do to maintain quality, security and control.

The biggest shift I see is genuine delegation: AI can now draft backlog items, inspect codebases, propose implementation paths, create tests, summarize reviews and prep releases before teams fully agree on ‘done.’ This marks a shift from AI as an assistant to AI as an active participant. That is why this topic matters for CIOs right now. With Google I/O on May 19–20 and Microsoft Build on June 2–3, attention will continue to rise around AI coding models, agentic development workflows and the platforms that now span planning through operations. Microsoft and GitHub are embedding agents more deeply into the engineering workflow.

Gemini Code Assist, GitHub Copilot’s coding agent, OpenAI Codex and Claude Code all reflect the same direction: AI is beginning to participate across planning, building, testing, reviewing and operations, not just within the editor. Google is trying to extend coding assistance into broader lifecycle support. Amazon is leaning into operationalization. OpenAI and Anthropic are pushing agentic coding and repository reasoning. Newer prompt-to-app platforms such as Lovable and Replit are compressing the path from idea to working application. The market signal is clear: AI is moving beyond code suggestion and into software delivery itself.

For business and technology executives, the strategic question is no longer whether AI can generate output. It is whether the organization can use AI to improve delivery without creating faster paths to weak requirements, inconsistent standards, poor testing and vague governance. That is why I frame this conversation around software delivery rather than relying too heavily on the older SDLC label. SDLC still makes sense, but it sounds too procedural for what is actually happening. Agentic AI is not just accelerating tasks inside a fixed lifecycle. It is rewiring the operating model of delivery. Recent DORA research reinforces what I see in practice: AI tends to amplify an organization’s existing strengths and weaknesses and the biggest returns come not from the tool alone, but from improving the delivery system around it.

Where agentic AI is creating the most value

The first place CIOs should focus on is where agentic AI is creating measurable value across the lifecycle. In planning and requirements, AI can already do meaningful first-pass work. Teams can ask it to inspect an existing codebase, summarize dependencies, suggest implementation paths, draft user stories, refine acceptance criteria and surface tradeoffs before engineers begin building. Used well, that reduces administrative drag and improves consistency. It also changes where the bottleneck appears. What I see most often is that teams adopt agentic tools expecting a boost, but the first real bottleneck appears upstream when acceptance criteria are too loose for the agent to interpret safely. The teams that struggle most are not the ones with weak prompts. They are the ones with vague intent. AI amplifies ambiguity as efficiently as it amplifies insight. OpenAI’s guidance for AI-native engineering teams describes agents contributing to scoping, ticket creation and other lifecycle work well before code is merged.

A practical model of agentic AI across the software delivery lifecycle.

Vipin Jain

In architecture and design, the real gain is not that AI can produce more diagrams. It can help teams compare options faster, trace dependencies, expose inconsistencies and document decisions with less manual effort. But architecture is not just pattern matching. It is a judgment about resilience, security, compliance, integration, cost and long-term business fit. The strongest teams use AI to explore options while architects define the guardrails, review points and non-functional requirements that the system must adhere to. In an agentic environment, architecture becomes more important, not less, because someone still has to define what the system is allowed to do. What I see in the strongest teams also matches Anthropic’s experience: simpler, well-bounded agent patterns usually outperform elaborate multi-agent complexity when the goal is reliable software delivery.

Build, test and review are changing even faster. GitHub Copilot’s coding agent, Claude Code, Amazon Q Developer, OpenAI Codex and Google’s broader agentic tooling all point in the same direction: the market is moving from AI-assisted coding to AI-assisted flow. In practice, that means agents can decompose work, generate code, create tests, run checks, summarize failures and prepare work for human review. The important metric is no longer lines of code per developer. It is the amount of safe, reviewable work the team can move through the pipeline without increasing rework. That is a more executive-relevant measure because it ties AI to throughput and quality rather than just speed. Benchmarks such as SWE-bench matter here because they test models against real repository-level software tasks, rather than isolated code snippets, which is much closer to the work CIOs are actually trying to improve.

Deployment, operations and maintenance are where the enterprise’s stakes become highest. This is the point that many organizations underestimate. Writing code is visible. Governing agent behavior in production is harder, less glamorous and much more important. In the teams I see gaining the most value, leaders are using AI to support release readiness, detect anomalies, summarize incidents, draft remediation steps and improve documentation around recurring issues. I have also seen teams pilot agents successfully in build, then stall at release because no one had clearly defined what the agent could change on its own, what required approval or who owned rollback when something went wrong. The organizations that make progress are the ones that answer those questions early. That is where trust is built. That is also why the market is shifting toward governed runtime and operations support, not just coding help; Amazon Bedrock AgentCore is one example of that broader move toward secure deployment, monitoring and controlled agent operation at scale.

How roles and teams are evolving

Agentic AI changes agile teams by shifting what roles contribute. Developers spend less time on first drafts and more time steering AI, validating diffs, hardening edge cases and managing exceptions. Their leverage shifts from typing speed to judgment—knowing what to trust, challenge or escalate. Leaders should recognize this meaningful change in role identity.

Architects also move up the value chain. In traditional environments, they often spend too much time creating static documentation that teams interpret unevenly. In agentic environments, the more valuable work is defining executable guardrails: approved patterns, tool boundaries, policy controls, integration rules and quality gates that both humans and agents can follow. That makes architecture more operational and more consequential.

QA, platform and SRE teams also gain influence. Testing becomes less about writing every case manually and more about building evaluation strategies, validating behavior, instrumenting pipelines and preserving rollback discipline. The closer AI moves to release and operations, the more essential traceability, observability and control become.

Product owners and business analysts also need to raise their game. When requirements are fuzzy, human teams usually compensate through conversation. Agents often execute fuzziness literally. In practice, that means the teams that benefit most from agentic AI are the ones that improve intent, edge-case thinking and acceptance discipline.

One more shift deserves attention: pro-code and low-code are converging. Microsoft’s Copilot Studio, IBM WatsonX Orchestrate, Lovable and Replit are lowering the barrier between idea and execution for a broader set of contributors. That is good news for experimentation and business alignment, but it also raises the risk of software sprawl outside shared architecture and security controls. CIOs should not dismiss these tools as toys, nor let them float free of governance. The most effective organizations will connect pro-code and low-code through common guardrails rather than force a false choice between them.

width="1024" height="519" sizes="auto, (max-width: 1024px) 100vw, 1024px">
How agentic AI is shifting the center of gravity for core delivery roles.

Vipin Jain

What CIOs should do now

As roles and delivery processes evolve, what concrete actions should CIOs consider now? The organizations I see getting the most from agentic AI are not treating it as a coding-assistant bakeoff. They are redesigning the delivery system around it. That starts with intent. Leaders should raise the quality of requirements before work enters agentic pipelines. If the business outcome, constraints and acceptance criteria are unclear, the AI will often produce technically plausible but strategically wrong work.

Next comes guardrails and autonomy. Leaders should define what agents can do on their own, what requires approval, what systems and data they can touch and what evidence the pipeline must capture. This is not bureaucracy for its own sake. It is the difference between acceleration and avoidable damage. Teams need clear security rules, architecture patterns, approval boundaries and rollback paths before they scale autonomy. Google Research offers a useful counterweight to the hype here: more agents do not automatically produce better outcomes, especially when the task design, coordination model and workflow are weak.
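
One illustrative way to encode such guardrails, with every action name and threshold a hypothetical placeholder rather than a recommended standard, is a default-deny policy object the pipeline consults before an agent acts:

```python
# Illustrative agent autonomy policy (hypothetical names and scopes).
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    autonomous_actions: frozenset   # agent may do these unattended
    approval_required: frozenset    # human sign-off needed first
    data_scopes: frozenset          # systems/data the agent may touch
    evidence: tuple                 # artifacts the pipeline must capture

POLICY = AgentPolicy(
    autonomous_actions=frozenset({"draft_code", "generate_tests", "open_pr"}),
    approval_required=frozenset({"merge_to_main", "deploy", "schema_change"}),
    data_scopes=frozenset({"repo:app", "ci:staging"}),
    evidence=("diff", "test_report", "review_summary"),
)

def authorize(action: str, policy: AgentPolicy = POLICY) -> str:
    """Route an agent-requested action: allow, escalate, or block."""
    if action in policy.autonomous_actions:
        return "allow"
    if action in policy.approval_required:
        return "escalate"   # pause for human approval
    return "block"          # default-deny anything not listed

assert authorize("generate_tests") == "allow"
assert authorize("deploy") == "escalate"
assert authorize("delete_database") == "block"
```

The default-deny branch is the design choice that matters: anything not explicitly granted pauses rather than proceeds.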

The management system leaders need for agentic software delivery.

Vipin Jain

Then comes observability. If an agent drafts code, generates tests, touches data, triggers a workflow or influences a release decision, leaders should be able to see that activity, evaluate it and audit it later. This is where many pilots remain weak. They prove that AI can do something. They do not prove that the organization can repeatedly trust it. That is why more formal evaluation matters. Microsoft’s guidance on agent evaluators is useful here because it focuses on operational signals leaders actually need: task completion, task adherence, intent resolution and tool-call accuracy.
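
As a rough sketch of what tracking those signals could look like (a generic harness with made-up records, not Microsoft’s evaluator API; intent resolution is omitted because it requires semantic judgment rather than simple counting):

```python
# Hypothetical evaluation harness for agent runs (illustrative only;
# not a vendor SDK). Each record summarizes one completed agent task.

runs = [
    {"task_completed": True,  "followed_instructions": True,  "tool_calls": 4, "tool_calls_correct": 4},
    {"task_completed": True,  "followed_instructions": False, "tool_calls": 6, "tool_calls_correct": 5},
    {"task_completed": False, "followed_instructions": True,  "tool_calls": 2, "tool_calls_correct": 1},
]

def rate(records, key):
    """Fraction of runs where the boolean signal held."""
    return sum(r[key] for r in records) / len(records)

completion = rate(runs, "task_completed")            # task completion
adherence = rate(runs, "followed_instructions")      # task adherence
tool_accuracy = (
    sum(r["tool_calls_correct"] for r in runs)
    / sum(r["tool_calls"] for r in runs)             # tool-call accuracy
)

print(f"completion={completion:.0%} adherence={adherence:.0%} "
      f"tool_accuracy={tool_accuracy:.0%}")
```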

Finally, leaders should change how they measure success. Code volume and demo velocity are weak proxies. Better measures include defect escape, rework, release confidence, cycle time for work that reaches production safely and the percentage of work that moves through the pipeline with clear evidence and human accountability. Start with bounded use cases such as maintenance tasks, test generation, documentation, technical debt reduction and lower-risk feature work with strong review. Build supervision muscle before you try to scale autonomy.

The executive takeaway

The strategic mistake I see most often is treating this moment as a tool refresh or a beauty contest among AI coding platforms. Google, Microsoft, Amazon, OpenAI, Anthropic and the next wave of prompt-to-app players matter because they signal where the market is going. But the winning question for leaders is not which demo looks smartest. It is whether the organization is redesigning software delivery so AI can contribute without weakening quality, security or control.

More generated code is not the prize. Better software delivery is. The enterprises that win will connect business intent to engineering execution more tightly, instrument agent behavior more rigorously and redesign team roles around judgment, supervision and accountability. They will make AI part of the team, not just another tab in the IDE.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.

The rise of the double agent CIO

May 4, 2026, 07:00

CIOs of B2B SaaS companies are just as responsible for representing technology as they are for running it. In an environment where the buyer is often another CIO, however, the role becomes something fundamentally different. It’s no longer confined to internal execution. It extends into the market, customer conversations, and the moments that ultimately shape revenue, trust, and long-term relationships. So the modern SaaS CIO operates as a true double agent, running the business from within while representing it to the market.

Box CIO Ravi Malick sits squarely in that duality. After serving as CIO of Vistra Energy, a company defined by legacy systems and industrial scale, he stepped into a digitally native, founder-led SaaS business in 2021 where technology is inseparable from the business itself. He now leads internal tech while engaging directly with CIOs of companies evaluating Box, bringing a perspective shaped by both worlds. What stands out in Malick’s perspective isn’t how different the role is, but how much more expansive it’s become.

What stays the same, what evolves

The core tension of the CIO role hasn’t changed. “There’s always more demand than you have the capacity or funding for,” Malick says. Prioritization, alignment to business strategy, and the constant need to modernize while operating at scale still define the job. The difference, however, is the environment in which those challenges now exist.

At Box, Malick operates inside a workforce where technology fluency is high and expectations are even higher. “I partner with 3,000 technologists who love to solve problems with technology,” he says. That creates a powerful advantage, but also a new kind of pressure. Demand for tools, platforms, and innovation is constant, and AI has only accelerated it.

That dynamic is further shaped by Box’s leadership. As a founder-led company, technology conversations extend well beyond the CIO’s organization. “It’s a different dynamic when your CEO is a founder and a technologist,” Malick says. “You’re as much a steward of incoming ideas as you are a generator of them.” That relationship creates both pace and perspective, requiring the CIO to operate as both orchestrator and partner in shaping how technology evolves across the business.

In that context, the CIO is leading within a highly informed, highly engaged organization where expectations for speed and innovation are constant. The challenge isn’t modernization as a one-time effort, but ensuring the tech stack continuously evolves and scales with the business.

Balancing the internal mandate with external pull

What truly differentiates the role in SaaS is what happens outside the enterprise, and the pressure that comes with it. The CIO is still accountable for running IT, ensuring security, and maintaining operational excellence. At the same time, there’s growing expectation to show up externally, engage customers, and directly support revenue.

Malick doesn’t present that balance as seamless. “It’s a daily challenge,” he says. “But sometimes not balanced so well.” There’s a constant push and pull between internal priorities and external demands, and in many cases, revenue pulls hard. The opportunity to influence deals, build relationships, and contribute to growth elevates the strategic importance of the role, but it doesn’t remove the responsibility for the day job.

What allows Malick to operate effectively in both worlds is the strength of the foundation behind him. He points to the maturity of his leadership team, operating model, and internal processes as critical enablers. With clear structures, strong leaders, and disciplined execution in place, he has the bandwidth to spend meaningful time externally. It isn’t always a perfect balance, but it’s a deliberate one.

From operator to peer in the market

Through Box’s customer zero program, Box on Box, Malick operates as both CIO and practitioner, bringing firsthand experience into customer conversations. “I can take how we build at Box to customer conversations,” he says. That perspective shifts the dialogue away from product positioning, and toward the realities of execution.

In a market where CIOs are constantly being pitched, that distinction carries weight. “They want to know how it works from the perspective of someone managing it,” he says, adding he leans into that by being transparent about both successes and missteps. “We share the challenges and false starts we’ve managed through.”

That candor builds credibility, and credibility builds trust. After all, people buy from people they trust, and in enterprise technology, says Malick, peer-to-peer conversations are a faster path to trust than demos. 

The external dimension of the role also holds a symbiotic relationship with internal responsibilities. Malick brings customer conversations back into Box, using them to inform how he thinks about technology decisions and broader strategy. He describes the CIO community as uniquely open, even therapeutic, where leaders candidly share challenges and exchange ideas. That openness creates a feedback loop where external insights sharpen internal execution, and internal experience strengthens external credibility.

What this means for the CIO role

What makes Malick’s perspective especially relevant is that the lesson isn’t limited to SaaS. As technology becomes more central to growth, customer experience, and business model change, CIOs in every industry are being pulled closer to the front office. The shift is about becoming more fluent in how technology translates into trust, speed, and commercial impact, not just becoming more visible.

For Malick, one of the biggest lessons is the role now demands a different kind of leadership than many CIOs were originally trained for. “Don’t make assumptions, and don’t assume something’s easy or intuitive,” Malick says. In a world where technology is reshaping how people work in real time, communication becomes a strategic discipline. CIOs have to explain change, absorb feedback, and keep translating between technical possibility and business reality.

The rise of AI adds another dimension to the double agent role. CIOs are building the content foundation that AI needs to be effective, and ensuring the organization can experiment with AI without sacrificing compliance or control. In a fast-paced technology company, ideas, opinions, and new technologies come from every direction. So the CIO isn’t simply the expert with the answers but often the one managing velocity itself, deciding where to push and where to hold.

“You have to figure out when you need to be in the fast lane and when you don’t,” Malick says. That kind of judgment is becoming more critical as technology moves to the center of the business, and it’s another reason CIOs are stepping into CEO and COO roles.

As AI accelerates the pace of change and creates the potential to decouple revenue growth from headcount growth, that ability to manage speed, scale, and tradeoffs becomes a defining leadership capability. That’s why the SaaS CIO should matter to leaders far beyond software. With AI transforming every industry, the role is becoming a preview of where the profession is headed — not just to run technology, but help shape how the company grows, how it shows up in the market, and how it earns trust. The double agent CIO may sound like a SaaS phenomenon. Increasingly, though, it looks more like the future of the job.


Agentic AI is reshaping business ecosystems — CIOs must choose their role carefully

May 1, 2026, 07:00

From systems to ecosystems to agents

A shift has been underway for some time: value creation is moving from slow and firm-centric to rapid and co-created across a network of participants. Customers don’t experience systems; they experience outcomes. Those outcomes are assembled across a network of partners, platforms and capabilities that must work together as one.

Consider NVIDIA. Its Blackwell platform is not simply a product; it is an ecosystem. Chips, software frameworks, developer tools and partner innovations combine to deliver AI capability at scale. What appears seamless to the customer is a highly coordinated system of interdependent contributors.

The CIO’s responsibility is to ensure alignment among the technology, the agents and the role the organization plays in the ecosystem.

That requires the agentic AI strategy to shift from static alignment to continuous alignment, in which architecture, governance and intelligent systems evolve in real time.

This shift is at the core of Digital Momentum: Architecture that actively shapes how value is created, adapted and delivered in an outcome-oriented world.

Not all agents are created equal

One of the biggest mistakes organizations are making right now is treating agentic AI as a plug-and-play solution, assuming all agents, whether internal or ecosystem-facing, can be designed the same way. Context defines the agent, and context determines how it must be designed.

However, there’s a fundamental difference between internal agents, which automate tasks inside systems the enterprise fully controls, and ecosystem-facing agents, which act across organizational boundaries.

These ecosystem agents don’t just execute tasks; they negotiate, coordinate and influence results in environments that are partially controlled and partially influenced by stakeholders.

Ecosystem agents must be designed with precision. They cannot be general-purpose actors with broad autonomy or poorly defined functionality. To function effectively, agents need four things, each tied to their role in the ecosystem (see the sketch after this list):

  • Limited, purpose-built context so they can act quickly without being overwhelmed or unpredictable.
  • Clearly defined responsibilities, tightly aligned to a specific mission.
  • Bounded authority, ensuring decisions stay within acceptable risk thresholds.
  • Embedded governance, built into how they operate and not layered on afterward.
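
To make those four constraints concrete, here is a minimal sketch of how they might be expressed as a typed agent specification. It is illustrative only: the EcosystemAgentSpec name, the spend threshold and the action set are assumptions, not a reference to any particular agent framework.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EcosystemAgentSpec:
    mission: str                      # clearly defined responsibility
    context_sources: tuple[str, ...]  # limited, purpose-built context
    allowed_actions: frozenset[str]   # bounded authority: explicit action set
    max_spend_usd: float              # bounded authority: hard risk threshold
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str, cost_usd: float = 0.0) -> bool:
        """Embedded governance: every action is checked and logged as part
        of execution, not layered on afterward."""
        ok = action in self.allowed_actions and cost_usd <= self.max_spend_usd
        self.audit_log.append((action, cost_usd, ok))
        return ok

# Example: a narrowly scoped negotiation agent (all values hypothetical).
negotiator = EcosystemAgentSpec(
    mission="negotiate renewal terms with logistics partners",
    context_sources=("contract_db", "partner_sla_feed"),
    allowed_actions=frozenset({"propose_terms", "request_quote"}),
    max_spend_usd=50_000,
)
assert negotiator.authorize("propose_terms", cost_usd=20_000)
assert not negotiator.authorize("sign_contract")  # outside bounded authority
```

The point of the sketch is the shape, not the syntax: every ecosystem agent gets a mission, a scoped context, an explicit action set and a governance check it cannot skip.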

Research into AI-driven organizations consistently shows that intelligent systems perform well only when aligned with operating models and value delivery. The same principle applies to agentic systems. Without alignment, autonomy doesn’t create value; it creates instability.

4 role types that define agentic strategy

To operate effectively in an agent-driven ecosystem, CIOs must be explicit about the roles their organization plays and how agents fill them:

1. Orchestrator agent: Designing the system

Orchestrators define how value is assembled across the ecosystem. They control integration points, set standards and often own the customer relationship.

What it requires

  • Strong architectural control over interfaces and workflows
  • Coordination of agent behavior at scale
  • Governance embedded directly into runtime execution

CIO decision lens

  • Where to enforce control vs. allow flexibility
  • How agents interact, trigger actions and make decisions
  • What governance must be codified into the system

2. Complementor agent: Differentiating at the edge

Complementors extend the ecosystem with specialized capabilities, providing the differentiated experience and domain expertise that matter most.

What it requires

  • Deep, defensible domain expertise
  • Context-aware agents that operate within orchestrated workflows
  • Rapid adaptability as the ecosystem’s needs evolve

CIO decision lens

  • Where to differentiate vs. conform
  • How much autonomy agents should have within external systems
  • How to expose capabilities to remain indispensable

3. Supplier agent: Powering the solution

Suppliers provide the infrastructure and core services that ecosystems depend on.

What it requires

  • High reliability and scalability
  • Standardized, consumable services
  • Consistent performance at ecosystem scale

CIO decision lens

  • Where to compete on cost, performance or specialization
  • How to expose services for reuse
  • Where to invest to avoid commoditization

4. The consumer agent: Using the solution

Consumer agents act as customer proxies, selecting and composing orchestrated solutions on the user’s behalf.

What it requires

  • Flexibility across providers and platforms
  • Strong governance over external dependencies
  • Trust frameworks for reliable outcomes

CIO decision lens

  • How much control to retain vs. delegate
  • How to govern external agents
  • How to ensure predictable outcomes

The bottom line for CIOs

The mistake many organizations make is designing agents generically. Agent behavior, authority and governance must be shaped by the role you play in the ecosystem.

Get that alignment right, and agentic AI becomes a force multiplier.
Get it wrong, and you introduce instability at the very point where value is created.

Agentic strategy: Aligning AI to your evolving role in the ecosystem

Once agents are deployed, CIOs need to ask: How will those agents remain aligned to our evolving role in the ecosystem as strategic priorities shift?

AI is a continuous expression of how your organization creates value. As markets shift, partnerships evolve and strategy changes, your role in the ecosystem must evolve as well, and your agents must adapt to it. Figure 1 illustrates these roles and how they interact dynamically across the ecosystem.

[Figure 1: The agent ecosystem. Credit: Brice Ominski]

When organizations fail to realign agent behavior as their role evolves, misalignment sets in and the consequences compound quickly:

  • Orchestrators lose control over increasingly complex ecosystems
  • Complementors become interchangeable as differentiation erodes
  • Suppliers are pushed toward utility status, competing primarily on cost
  • Consumers lose predictability in outcomes they depend on

In an agentic world, competitive advantage doesn’t come from deploying agents; it comes from continuously realigning them.

Control value and risk in agentic systems

As ecosystems become agent-driven, risk doesn’t disappear; it changes shape. CIOs should look for the following risks:

  • Platform dependency. Your operating model becomes tied to another organization’s ecosystem
  • Value imbalance. Orchestrators capture disproportionate value
  • Architectural lock-in. Integration and agent decisions limit future flexibility
  • Capability absorption. Differentiated capabilities get embedded into the platform
  • Trust gaps. Autonomous agents require stronger identity, auditability and policy enforcement
  • Intelligence displacement. Control over data and learning loops shifts elsewhere

Ignoring these risks doesn’t reduce them; it only delays the point at which they become constraints.

What CIOs should do next to prepare their agentic strategy

The organizations that succeed won’t be the fastest adopters. They will be the most deliberate.

  1. Make your ecosystem role explicit. Define whether you are orchestrating, extending, supplying or assembling value.
  2. Map control vs. dependency early. Understand where decision authority resides and where it may erode.
  3. Design agents as bounded actors. Scope, authority and decision rights must be clearly defined.
  4. Embed governance and trust into execution. Governance must be codified, enforced at runtime and continuously observable (a minimal sketch follows this list).
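
As a sketch of what step 4 can look like in practice, the example below codifies a policy check that runs before each agent action and emits a telemetry event either way, so governance is enforced at runtime and continuously observable. The policy names and the emit() hook are illustrative assumptions, not a specific product’s API.

```python
import functools
import json
import time

# Illustrative policies; real ones would come from a governed policy store.
POLICIES = {
    "auto_approve": lambda ctx: ctx.get("amount_usd", 0) < 10_000,
    "share_data": lambda ctx: ctx.get("partner_tier") == "trusted",
}

def emit(event: dict) -> None:
    # Stand-in for a telemetry pipeline: every decision is observable.
    print(json.dumps(event))

def governed(policy: str):
    """Decorator that enforces a named policy at runtime."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(ctx: dict, *args, **kwargs):
            allowed = POLICIES[policy](ctx)
            emit({"ts": time.time(), "action": fn.__name__,
                  "policy": policy, "allowed": allowed})
            if not allowed:
                raise PermissionError(f"{fn.__name__} blocked by {policy}")
            return fn(ctx, *args, **kwargs)
        return inner
    return wrap

@governed("auto_approve")
def approve_invoice(ctx: dict, invoice_id: str) -> str:
    return f"approved {invoice_id}"

print(approve_invoice({"amount_usd": 4_200}, "INV-17"))  # allowed and logged
```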

Closing perspective

In this environment:

  • Orchestrators shape how value is assembled
  • Complementors differentiate where it matters most
  • Suppliers provide the foundation
  • Consumers determine how value is composed

The CIO’s responsibility is to ensure alignment across all three: Technology, agents and ecosystem position.

This shift toward continuous, outcome-driven alignment is at the core of what I explore in Digital Momentum — how architecture evolves from supporting the business to actively shaping value creation in real time.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.

This article is published as part of the Foundry Expert Contributor Network.

Your AI coding agent isn’t a tool. It’s a junior developer. Treat it like one

April 23, 2026, 07:00

Yet that is precisely how most organizations are deploying AI coding agents today. The prevailing narrative around “AI-powered development” frames these systems as productivity tools. Vibe-coding and agentic coding are considered something closer to a faster autocomplete or a more sophisticated IDE plugin. Flip the switch, the story goes, and suddenly your engineering organization becomes dramatically more efficient. Everyone is “all in” on the first hand of cyber-Texas Hold ’Em. That mental model is wrong.

AI coding agents are not tools. They behave far more like junior developers: Capable, energetic, sometimes brilliant, but entirely able to cause catastrophic damage if given autonomy before they understand and respect the environment they’re operating in.

The organizations that treat AI coding agents like tools will create and accumulate technical debt at unprecedented speed. The organizations that treat them like junior engineers by onboarding them as talent, pairing with them and teaching them context will unlock the productivity gains everyone is chasing. The difference between those outcomes is not the technology. It is the management model.

The lesson every engineer learns early

Midway through the DevOps phase of my career, I worked at the CME Group, where the exchange operates one of the most critical financial infrastructures on the planet. The CME processes roughly a quadrillion dollars’ worth of contracts annually and, at the time, ran across five datacenters with more than 10,000 servers, including racks of Oracle Exadata systems costing hundreds of thousands of dollars each. The biggest SIFI of SIFIs.

You did not get root access to that environment on day one.

Instead, you were paired with a mentor. Your mentor was part of a buddy system for onboarding new hires and was effectively a docent for the infrastructure. My mentor was a deeply technical manager named Matt, one of the most capable engineers I have ever worked with. His job wasn’t simply to show me which commands to run or where to find documentation. His job was to teach me how to ask the platform, a system of systems, meaningful questions.

When you’re managing infrastructure at that scale, every question returns thousands of answers.

  • Are the matching engines pinned correctly to CPU cores?
  • Are cgroups configured properly for workload isolation?
  • Which RAID arrays are starting to show drive failures?
  • Are firmware and BIOS versions aligned across production and QA?

None of this can be learned through a quick tutorial or a training video. You learn by doing. You learn by working through the ticket queue, performing dry runs, preparing rollback plans and executing changes within narrow maintenance windows (a few minutes per week).

The lesson wasn’t simply technical. It was epistemological. Engineering expertise is not about knowing commands. It is about knowing which questions matter and how to understand the response. And that knowledge only develops through mentorship, iteration and experience.

Why the pair-programming model matters

The software industry already solved this problem decades ago through a practice called pair programming. In agile teams, a senior developer pairs with a junior one. They work together on the same problem in real time. The junior developer contributes energy and fresh thinking, while the senior developer contributes experience and judgment. The result is faster capability development without sacrificing quality. At first, it might seem like an expensive allocation of resources, but when you think it through, it is really a strong knowledge management technique.

AI coding tools are like a super smart baby: a nascent intelligence as eager as any recent college graduate, but with little real-world experience to draw on and no body of hard-won lessons in software development, release engineering and debugging. That description should sound familiar. It is essentially the profile of a junior developer.

The implication is obvious once you see it: the most effective deployment model for AI coding agents is the same pairing model that works for human developers. Human plus agent.

Not a human supervising an agent after the fact. Not just a human reviewing pull requests from an automated pipeline. But genuine co-development, with contextual education on why the vulnerability should not be introduced in the first place. When that pairing works, the productivity gains are real. When it doesn’t, you ship vulnerabilities faster than your security team can ever hope to triage them.

What the agent gets wrong first

The first time I worked alongside a coding model on a real security problem, the mistake it made was subtle but revealing. I was experimenting with ways to harden an API without introducing latency or complexity on the client side. The goal was to produce a transparent security uplift that improved the API’s defensive posture without forcing developers to substantially change how they interacted with the service.

The model generated plausible suggestions quickly. Too quickly. Some of the techniques it proposed were technically correct but operationally obsolete. Others referenced security mechanisms that had been deprecated. Still others ignored non-functional requirements around compliance or performance. In other words, the model surfaced relevant information but lacked the judgment to distinguish wheat from chaff. 

There is also a tendency to accept the legitimacy of the ask rather than question the assumptions and baseline parameters of the situation. The agent is not going to think outside the box (unless it is hallucinating a nonexistent function, package or library that solves the problem). It assumes the question it has been asked to solve is a legitimate and valid one.

Humans develop that discernment over time. It’s part of how we move from data to information to knowledge to wisdom, what information scientists call the DIKW pyramid.

Models don’t struggle their way up that pyramid. They jump directly to conclusions. The struggle, however, is a messy process of trial, failure and iteration, but it is where human experience and knowledge form. That knowledge is then further refined and distilled into wisdom. When that process is skipped, real expertise never develops. This is why treating AI coding agents as tools is dangerous. Tools don’t need to exercise judgment. Junior developers do.

How trust actually develops

Think about the best junior engineer you ever worked with. How long did it take before you trusted them to work independently? Rarely less than months. Oftentimes a year or more.

Trust emerges gradually. It grows from observing how someone works through problems: how they document changes, how they write tests, how they think about rollback procedures and how they anticipate edge cases and race conditions. In my own teams, I’ve always preferred a management philosophy of 100% freedom and 100% responsibility (the Netflix culture deck, circa 2009).

Engineers on my teams are expected to behave like owners of the company. They are indoctrinated to commit infrastructure changes as code. They document their reasoning. They attach testing artifacts to their pull requests. We track progress not just by time spent but by contributions: Commits, documentation, testing evidence and operational discipline.

That process shapes junior engineers into reliable ones. The exact same logic applies to AI coding agents. Trust should expand progressively (see the sketch after this list).

  • At first, the agent proposes little code snippets and stanzas.
  • Then it drafts functions and packages up libraries.
  • Eventually, it might implement entire features, but only after proving it understands the environment and the risk appetite of the company.
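
One way to make that progression explicit is to encode it, much as you would encode promotion criteria for people. The sketch below is a minimal illustration; the tier names and thresholds are assumptions, not a feature of any coding-agent product.

```python
# Scope widens only as reviewed contributions accumulate.
# Tier thresholds are illustrative assumptions.
TIERS = [
    ("snippets", 0),     # proposes small snippets and stanzas
    ("functions", 25),   # drafts functions and packages up libraries
    ("features", 100),   # implements entire features
]

class AgentTrust:
    def __init__(self) -> None:
        self.approved = 0  # contributions merged after human review

    @property
    def scope(self) -> str:
        # TIERS is ordered narrow to broad; take the widest tier earned.
        return [name for name, need in TIERS if self.approved >= need][-1]

    def record_review(self, merged: bool) -> None:
        if merged:
            self.approved += 1

agent = AgentTrust()
print(agent.scope)  # snippets
for _ in range(30):
    agent.record_review(merged=True)
print(agent.scope)  # functions
```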

Skip those steps, and you aren’t accelerating development. You’re accelerating chaos, driven by FOMO and FUD.

Learning from more than one chef

Over the course of my career, I’ve worked across a wide range of industries: dot-com era web development in San Francisco, trading infrastructure in European financial markets, cloud transformations for legacy enterprises and large-scale infrastructure engineering.

Each environment changed how I thought about software and security. The dot-com era taught speed and experimentation. European financial institutions taught rigorous project governance (PRINCE2 anyone?). Large-scale options and commodity exchanges taught what real operational resilience looks like.

Those experiences fundamentally reshaped how I approach engineering problems. AI agents will benefit from the same diversity. Pairing them with multiple engineers and rotating pairings over time will expose them to different coding styles, architectural philosophies and security techniques. The goal is best practices, but not monolithic best practices aggregated and homogenized by token-prediction algorithms trained on billions of lines of code. Just as aspiring chefs learn from multiple masters, agents improve faster when exposed to varied expertise.

A warning for CISOs

Many security leaders today are under pressure to reduce developer headcount because executives believe AI can absorb the workload. This assumption misunderstands both security and AI. If an organization already has strong security discipline, with well-documented architectures, clear coding standards and mature review processes, then AI agents will amplify that core mindset and culture.

But if the organization has weak security habits, AI will amplify those weaknesses even faster. Human knowledge is like sunlight. Large language models are more like moonlight. A mere reflection of that knowledge. You cannot build a thriving ecosystem entirely under moonlight. Sooner or later, you need the sun, despite what the vampires and werewolves howling at the moon might lead you to believe.

The real promise of AI development

None of this is an argument against AI coding tools. Used properly, they are extraordinary collaborators. They can surface patterns across massive codebases, accelerate documentation and help engineers explore alternative designs more quickly than ever before.

But unlocking that potential requires the right mental model. Not as a tool, but as a junior developer. Onboard them. Pair with them. Teach them your systems, regale them with your stories of isolating a bug or race condition that took weeks to pinpoint. Rotate them across your teams. Expand their responsibilities gradually as trust develops.

That investment phase is what transforms AI from a novelty into a genuine multiplier. And like every good mentorship relationship in engineering, the payoff compounds over time. Treat your AI coding agent like a disposable tool and you’ll get disposable code (aka slop).

Treat it like a junior developer and you might just raise up the best engineering partner you’ve ever had.

This article is published as part of the Foundry Expert Contributor Network.

The changing face of IT: From operator to orchestrator

April 22, 2026, 07:00

For decades, IT organizations were measured by stability, uptime, cost efficiency and service delivery. Success meant systems ran reliably, incidents were minimized and budgets were controlled.

That model is no longer enough.

In today’s environment, defined by cost pressure, supply chain volatility and accelerating digital expectations, the role of IT is fundamentally transforming. The modern CIO is no longer just an operator of systems, but an orchestrator of business value.

The new mandate: Business value over technology

Digital transformation was once synonymous with technology modernization. But leading organizations have learned a hard truth: Technology does not create value, outcomes do.

Today, CIOs are accountable for:

  • Margin improvement and cost reduction
  • Faster product development cycles
  • Supply chain resilience
  • Operational efficiency and quality

This requires a fundamental shift in mindset: “Don’t sell technology. Enable business value and let technology follow.”

Every digital investment must tie directly to measurable impact: EBIT uplift, working capital improvement and productivity gains, not just system upgrades.

Run and transform: The dual engine of modern IT

The transition from operator to orchestrator is anchored in a dual mandate:

Run the business + Transform the business

Run the business ensures:

  • Secure, resilient IT and OT environments
  • Stable ERP and plant operations
  • Compliance and cybersecurity
  • Predictable service delivery

Transform the business drives:

  • Data, AI and automation at scale
  • Digital capabilities across engineering, manufacturing and supply chain
  • Agile, product-centric ways of working

The differentiator is not managing these separately but orchestrating them seamlessly together.

This orchestration is what elevates IT from a support function to a strategic partner.

From projects to products: Rewiring the operating model

Traditional IT is structured around projects and technology silos. High-performing organizations are shifting to product and platform operating models aligned to business value streams.

This means:

  • Product teams own outcomes, not just delivery
  • Platform teams enable reuse, scalability and speed
  • Business and IT operate as one integrated team

The impact is significant:

  • Faster decision-making
  • Clear accountability for outcomes
  • Reduced duplication and total cost

The guiding principle becomes simple: Standardize first. Digitize second. Scale through platforms.

Digital thread: Unlocking end-to-end value

One of the biggest unlocks in industrial enterprises is the digital thread connecting engineering, manufacturing, supply chain and commercial systems into a unified ecosystem.

When connected, organizations gain:

  • Real-time visibility across the value chain
  • Faster product development cycles
  • Cost transparency from design to delivery
  • Predictive, data-driven decision-making

Without this integration, enterprises operate in silos — resulting in inefficiencies, delays and margin erosion.

The digital thread is not just a technology concept; it is a business capability multiplier.

AI as a force multiplier, not a side initiative

Artificial intelligence is rapidly becoming embedded across every business function — but its true value lies not in isolated use cases, but in scaling intelligence across the enterprise.

Leading organizations are moving beyond experimentation to:

  • Embed AI into core workflows (engineering, quality, supply chain)
  • Automate decision-making at scale
  • Enable predictive and prescriptive insights

Examples include:

  • Predictive quality models reducing defects before they occur
  • AI-driven quoting improving margins and win rates
  • Intelligent supply chain analytics optimizing inventory and logistics

The shift is clear:

From dashboards → to decisions → to autonomous execution

However, AI’s success depends on two critical enablers: Trusted data and organizational adoption.

Citizen development: Scaling innovation beyond IT

One of the most powerful — and often underestimated — levers of transformation is citizen development.

In a world where demand for digital solutions far exceeds IT capacity, empowering business users to build solutions is no longer optional; it is essential.

Citizen development enables:

  • Faster identification and execution of use cases at the plant and function level
  • Reduced dependency on centralized IT teams
  • Increased ownership and adoption of digital solutions

But this is not about uncontrolled proliferation. Successful organizations balance empowerment with governance through (a sketch of codified guardrails follows this list):

  • Standardized platforms (low-code/no-code, data, automation)
  • Clear guardrails for security, data and architecture
  • Digital champions embedded within business functions
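
As a sketch of what such guardrails can look like when codified, here is an illustrative policy a platform team might enforce before a citizen-built app is promoted to production. Every name and threshold here is an assumption, not a reference to a specific low-code platform.

```python
# Illustrative guardrails for promoting a citizen-built app to production.
GUARDRAILS = {
    "data_classes_allowed": {"public", "internal"},  # no restricted data
    "sso_required": True,
    "max_untested_flows": 0,
    "architecture_review_over_users": 500,  # review before broad rollout
}

def can_promote(app: dict) -> tuple:
    """Check a citizen-built app against the platform guardrails."""
    failures = []
    if not app["data_classes"] <= GUARDRAILS["data_classes_allowed"]:
        failures.append("restricted data class in use")
    if GUARDRAILS["sso_required"] and not app["uses_sso"]:
        failures.append("SSO not enabled")
    if app["untested_flows"] > GUARDRAILS["max_untested_flows"]:
        failures.append("untested flows present")
    if (app["expected_users"] > GUARDRAILS["architecture_review_over_users"]
            and not app["architecture_reviewed"]):
        failures.append("architecture review missing")
    return (not failures, failures)

ok, why = can_promote({
    "data_classes": {"internal"}, "uses_sso": True,
    "untested_flows": 0, "expected_users": 120,
    "architecture_reviewed": False,
})
print(ok, why)  # True [] -> safe to promote
```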

The role of IT shifts from builder to platform provider, coach and orchestrator of innovation.

When done right, citizen development creates a multiplier effect, turning every function into a contributor to digital transformation.

Observability & AIOps: Managing complexity at scale

As digital ecosystems grow, so does complexity. Traditional monitoring approaches, reactive and fragmented, are no longer sufficient.

The next frontier is AI-driven observability and AIOps, where:

  • Logs, metrics and events are continuously analyzed
  • Anomalies are detected proactively
  • Automated remediation reduces downtime
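
In pattern form, the loop is simple. Here is a minimal sketch, not a production AIOps stack: a rolling z-score over a latency metric flags anomalies and triggers a remediation hook. The window size, the threshold and the restart_service() action are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

WINDOW, THRESHOLD = 60, 3.0
samples: deque = deque(maxlen=WINDOW)

def restart_service() -> None:
    print("remediation: restarting service")  # stand-in for a runbook action

def observe(latency_ms: float) -> None:
    # Flag the new sample if it sits far outside the recent distribution.
    if len(samples) >= 2:
        mu, sigma = mean(samples), stdev(samples)
        if sigma > 0 and abs(latency_ms - mu) / sigma > THRESHOLD:
            restart_service()  # automated remediation
    samples.append(latency_ms)

for v in [102, 99, 101, 100, 98, 103, 240]:  # the last value is anomalous
    observe(v)
```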

This shift enables organizations to:

  • Improve reliability and resilience
  • Reduce operational cost
  • Build internal intelligence rather than relying on external vendors

Observability becomes a core orchestration capability, enabling IT to manage increasingly complex digital environments with confidence.

Talent, culture and leadership: The real differentiators

Technology alone does not transform organizations; people, culture and leadership do.

Key shifts include:

  • Skills: Building capabilities in data, AI and automation across the organization
  • Culture: Driving speed, experimentation and continuous learning
  • Leadership: Ensuring strong sponsorship and business-led digital adoption

The most successful organizations empower business teams to identify opportunities, while IT provides the platforms and governance to scale them.

Governance: From control to value realization

Modern governance is no longer about approvals — it is about outcomes.

Effective models focus on:

  • Alignment to business priorities
  • Transparent portfolio management
  • Continuous tracking of value (EBIT, cost, productivity)

The key question shifts from “Is this project on track?” to “Is this delivering measurable business value?”

Conclusion: The CIO as orchestrator-in-chief

The CIO role has fundamentally evolved — from operator to orchestrator.

Today’s CIO must:

  • Align technology to business outcomes
  • Integrate data, platforms and processes
  • Enable innovation at scale across the enterprise

The organizations that will lead are not those that adopt the most technology, but those that orchestrate technology, data, AI and people into measurable outcomes.

In a world of constrained budgets and rising expectations, the mandate is clear: Run with discipline. Orchestrate with intent. Transform with measurable impact.

This article is published as part of the Foundry Expert Contributor Network.

Stopping power: The leadership skill that separates modern IT leaders

April 9, 2026, 08:00

Most portfolios don’t lack initiatives. They lack stopping power. Once funding is approved and a program is publicly endorsed, the system favors continued support over learning, even when evidence weakens. IT leaders are increasingly judged on whether they can prevent the enterprise from drifting into sunk cost inertia while protecting credibility. The ability to stop misaligned work without triggering political fallout has become a quiet leadership superpower.

That matters because the role itself has shifted. Research from McKinsey shows that nearly two-thirds of top-performing companies say their technology leaders are “very involved” in crafting enterprise strategy. Deloitte’s Tech Exec Survey reports that 80% of tech executives say their responsibilities have expanded significantly and that more than a third now manage a P&L. When technology leadership is part of enterprise strategy, portfolio stopping is not an IT hygiene issue. It is a business leadership capability.

I learned this the hard way in portfolio reviews that looked “healthy” on paper. Everything was green. Delivery teams were busy. Stakeholders were still showing up. Yet the conversations felt like defense, not discovery. Nobody could say, out loud, that the original rationale had changed. When a portfolio can’t stop, it stops being a portfolio. It becomes a backlog with a budget.

Stopping power is not about being harsh or impatient. It is about building an operating model where truth has a safe path to the surface, where commitments are reversible by design and where capacity can move toward real growth rather than protecting yesterday’s narrative.

MIT CISR describes an enterprise IT operating model as the accountabilities, processes, platforms, metrics and behaviors that define how the IT unit and business units collaborate. In the AI era, it must scale the use and reuse of data and AI while managing risks like cyber threats and privacy breaches. Stopping power is one of the operating-model behaviors many organizations are missing: The ability to reverse commitments when evidence changes.

Why portfolios lose stopping power

Stopping is difficult because the mechanics of enterprise work reward continuation.

First, money quickly turns into identity. After funding, leaders are no longer defending an idea; they are defending their credibility. The longer a program runs, the more it becomes a proxy for competence rather than just outcomes.

Second, governance often measures motion rather than learning. Status is tracked weekly. Evidence is rarely revisited. Early assumptions are treated as history, not as hypotheses that must keep earning their place. The enterprise excels at delivery updates but struggles with decision updates.

Third, decision rights are often unclear at the moment that matters most: the decision to stop. MIT CISR frames governance as the allocation of decision rights and accountabilities: Who has authority, who is accountable and how decisions stay coherent as you devolve power to teams. When those rights are fuzzy, stopping becomes a negotiation and negotiations default to “keep going.”

Finally, initiative volume keeps climbing. AI is making ideation cheaper. The portfolio inflates. Delivery capacity does not. Leaders end up managing overload through quiet deferral rather than explicit choices. That is how strategic work dies: Not through failure but through dilution.

Build stopping power into governance

Stopping power is not a speech. It is a system. The goal is to make stopping a normal outcome of good governance, not a scandal.

Require an exit plan before entry

If an initiative cannot explain how it will stop, it is not ready to start. I ask for four items upfront (a sketch of the intake record follows this list):

  • A clear hypothesis: What must be true for this to be worth the investment
  • A measurable time to value: What evidence we will see by a specific date
  • A reversibility plan: How we unwind contracts, integrations, data and process changes
  • A named decision owner: Who can say stop, and who must be consulted
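
For teams that run intake through structured forms or code, those four items are easy to enforce mechanically. A minimal sketch, with illustrative field names and example values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InitiativeIntake:
    hypothesis: str          # what must be true to be worth the investment
    value_evidence: str      # the evidence we expect to see...
    value_by: date           # ...and the date we expect to see it by
    reversibility_plan: str  # contracts, integrations, data, process unwind
    decision_owner: str      # who can say stop

    def ready_to_start(self) -> bool:
        # No exit plan, no entry: every field must be filled in.
        return all([self.hypothesis, self.value_evidence,
                    self.reversibility_plan, self.decision_owner])

intake = InitiativeIntake(
    hypothesis="Self-serve quoting lifts win rate by five points",
    value_evidence="Win-rate delta on two pilot product lines",
    value_by=date(2026, 9, 30),
    reversibility_plan="Month-to-month vendor terms; feature-flagged rollout",
    decision_owner="VP, Commercial Platforms",
)
assert intake.ready_to_start()
```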

McKinsey’s work on the evolving technology leadership mandate also emphasizes the shift from annual planning to more continuous decision making and tighter business-tech strategy cocreation. That shift only works if exit is treated as a first-class part of strategy, not an afterthought.

Install kill-switch gates throughout the delivery lifecycle

The easiest time to stop is early, before a program acquires institutional gravity. I use three decision gates:

  • Gate 1, viability: Is the problem still real, is the customer still there and is the baseline measured?
  • Gate 2, value signal: Is there evidence of adoption, cycle time improvement, risk reduction or revenue impact?
  • Gate 3, scale decision: Are the operating model, data controls, security and support model ready to expand?

These are not stage gates that slow delivery. They are decision gates that prevent false certainty.

Separate portfolio truth from portfolio theatre

Most steering committees mix three incompatible activities: Status reporting, issue escalation and investment decisions. That creates a bias toward performative confidence. A simple fix is to run a separate evidence review cadence focused on what changed:

  • Which assumptions were tested
  • Which signals improved or degraded
  • Which constraints moved (regulatory, supply chain, talent, vendor terms)
  • What that implies for continuing, changing or stopping

This is where stopping becomes normal, because the forum is built for learning.

Make capacity reallocation explicit

Stopping without reinvestment looks like austerity. Stopping with reinvestment looks like leadership. Every stop decision should include a reallocation statement: What capacity is freed and where it is going. When teams see that stopping creates room for meaningful work, the politics shift.

Deloitte’s research on digital operating models highlights how reporting structure, ownership and team capabilities influence value realization. In practice, stopping power improves when ownership is clear, and the leader responsible for value has the authority to make hard calls.

Make stopping culturally safe in the AI era

Governance creates the mechanics. Culture determines whether people use them.

Use a narrative protocol that avoids blame loops

If stopping is framed as “who messed up,” leaders will hide signals. If it is framed as “what did we learn and what is the best next bet,” leaders will surface reality earlier. The language matters:

  • We are stopping because the hypothesis did not hold.
  • We are stopping because the economy has changed.
  • We are stopping because the dependency landscape shifted.
  • We are stopping to protect focus and create capacity for higher value work.

Celebrate early stops as competence

Many organizations reward persistence more than course correction. That is backwards. A well-run stop is proof of good judgment. A simple ritual helps: A quarterly “best stop” award that highlights teams who proved something quickly, made a clear recommendation and freed capacity.

Protect teams from reputational shrapnel

When programs stall, delivery teams often take the hit, even when the problem lies upstream: Unclear objectives, shifting priorities and missing decision rights. Leaders need to separate execution quality from investment quality explicitly. That is how you keep talent.

Now add AI, because it raises the stakes. BCG notes that GenAI can drive material productivity gains in the tech function and that tech leaders also need to help the broader organization scale responsibly while avoiding shadow IT. PwC’s 2026 CIO agenda likewise points to AI reshaping governance, operating models and culture. The practical implication is simple: Experimentation must be paired with clear kill criteria, guardrails and accountability. Without that, innovation becomes sprawl.

This is also where CEO alignment matters. The World Economic Forum has argued that CEOs set the vision for AI. At the same time, technology leaders build the processes that realize it, and that success depends on speed with structure, so momentum does not become chaos. Stopping power is the mechanism that turns that partnership into an execution advantage: It prevents the organization from scaling the wrong things.

A practical starter kit: The stop protocol pack

If you want to operationalize stopping power, start with a lightweight protocol pack and pilot it on a subset of initiatives next quarter.

Stop criteria

Stop when one of these is true (a checklist sketch follows this list):

  • The business problem is no longer priority one.
  • The hypothesis is not holding, and the next test is not worth the cost.
  • Time-to-value is missed without a credible recovery plan.
  • Risk or compliance exposure crosses a defined threshold.
  • A dependency changes the economics (platform decision, vendor terms, data constraints).
  • The program is consuming scarce capacity that could be used for a higher-value bet.
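
These criteria work best when they are evaluated explicitly rather than debated implicitly. Below is a minimal sketch of the checklist as code; the signal names are illustrative inputs, not a real telemetry schema.

```python
# Stop when any one criterion fires.
STOP_CRITERIA = {
    "problem_no_longer_priority": lambda s: not s["problem_is_priority"],
    "hypothesis_failed": lambda s: (not s["hypothesis_holding"]
                                    and not s["next_test_worth_cost"]),
    "time_to_value_missed": lambda s: (s["ttv_missed"]
                                       and not s["credible_recovery"]),
    "risk_threshold_crossed": lambda s: s["risk_score"] > s["risk_limit"],
    "economics_changed": lambda s: s["dependency_shift"],
    "capacity_better_used": lambda s: s["opportunity_cost_high"],
}

def stop_decision(signals: dict) -> list:
    """Return the criteria that fired; any non-empty result means stop."""
    return [name for name, test in STOP_CRITERIA.items() if test(signals)]

fired = stop_decision({
    "problem_is_priority": True, "hypothesis_holding": False,
    "next_test_worth_cost": False, "ttv_missed": False,
    "credible_recovery": True, "risk_score": 2, "risk_limit": 5,
    "dependency_shift": False, "opportunity_cost_high": False,
})
print(fired)  # ['hypothesis_failed'] -> stop, with a one-page rationale
```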

Kill switch gates

  • Viability gate at 30 days
  • Value signal gate at 60 to 90 days
  • Scale gate before any enterprise rollout

Narrative protocol

  • The decision owner communicates the stop with a one-page rationale.
  • The delivery lead communicates what was learned and what assets are reusable.
  • The sponsor communicates the reallocation plan and the next focus area.
  • A short retro captures what to change in future intake, not who to blame.

Exit plan requirement

  • Contract unwind path and timeline.
  • Data and integration cleanup plan.
  • User communication plan and transition support.
  • Asset reuse plan (code, patterns, data models, controls).

If your next portfolio review has 30 initiatives and zero stop decisions, that is not a sign of excellence. It is a sign that truth has nowhere to go. Build stopping power, then use it.

This article is published as part of the Foundry Expert Contributor Network.

The path to CIO

April 9, 2026, 07:00

After more than three decades in enterprise technology at IBM and years advising organizations on digital strategy, here is what I have learned about what it takes to reach the top technology role and thrive once you get there.

At some point in my decades at IBM, I stopped being the smartest technologist in the room. Not because I got worse at technology, but because the people working for me got so much better at it. That shift, when it happened, was disorienting. My identity had been built around technical mastery and suddenly the job required something else entirely.

I have been thinking about that transition, including during my years as a consultant at Citibank and in my work with the MIT Sloan CIO community. I write about it regularly on my blog. I keep returning to it because I keep seeing talented technologists get stuck at the same inflection point on the way to the CIO role and other senior executive roles. They are waiting for their technical credentials to carry them the rest of the way. They won’t. Here is what will.

A role transformed

When I started my career in the 1970s, enterprise technology was largely about the overall IT infrastructure: the computers and software in the data centers, and the networks that reached out to the users.

The CIO often reported to the CFO because cost control was the primary concern. The internet changed that permanently once ERP forced every part of the business to integrate around shared systems, raising a question that elevated the CIO’s standing for good. If a technology transformation touches every corner of the business, who leads it? Not the CEO. Not the CFO. The CIO stepped into that space and never stepped back.

From cloud to mobile to AI, each technology wave has compounded the scope and strategic weight of the role. The CIO now occupies a position comparable in importance to the CFO. No organization can afford one who is not trusted, capable and deeply connected to the business. If you want this job, you must understand the weight being put on your shoulders before you pursue it.

Technical credibility is the floor, not the ceiling

Here is where many aspiring technology leaders get stuck. They assume that the path to CIO runs through deepening technical expertise. Get better at the technology, they think, and the rest will follow. In my experience, that is precisely backwards.

Yes, you need genuine technical credibility, and you need to understand what is possible, what is risky and the tradeoffs. However, beyond a certain level in any organization, the people who work for you will know far more about the specific technology than you do. That is not a failure. It is the natural and healthy state of a large, capable team. The CIO who believes their primary job is to be the most technically expert person in the room will likely not make it, because they are misreading what the job requires.

When I moved out of pure research and into broader leadership roles at IBM, I had to absorb this lesson quickly. My job was no longer to produce the best technical work myself. My job was to identify where technology was heading, to build and lead the teams that could get IBM there and to connect their work to the realities of the market.

The transition from technologist to technology leader is the first major inflection point on the path to CIO. Build your technical foundation early and deliberately. Then consciously shift where you invest your personal development energy.

The skills that get you there

When I think about what separates the CIOs who reach the top and stay there, I keep coming back to three things: management capability, the ability to earn trust across the organization and the ability to communicate effectively.

On management: The CIO oversees some of the most complex programs in any enterprise, balancing in-house capability against vendor partnerships, keeping security embedded rather than bolted on, maintaining momentum on multi-year programs that cut across every business unit. None of this is learned from a textbook. It is learned by seeking out progressively larger management challenges early and often.

On trust: The CIO must be trusted by the business. That means speaking the language of the boardroom as fluently as the language of IT. The heads of manufacturing, finance, sales and operations all must believe the CIO understands their problems, not just the technology.

I saw this dynamic play out repeatedly across my career at IBM and in my advisory work. Capable technologists fail in CIO roles because the business does not trust them. Less technically brilliant executives thrive because they have earned genuine organizational credibility. I long ago identified this business alignment as the defining differentiator for a CIO who creates lasting enterprise value.

On communication: If you cannot explain what technology does, why it matters and what it will cost to the board, to your peers, to the press and to your customers, you will not make it. I have watched smart people fail at this level for exactly that reason. Communication is not a soft skill at the CIO level. It is as critical as any technical credential.

How careers progress toward the CIO role

There is a traditional path to the CIO role, and it mirrors the CFO’s. Nobody becomes CFO without first managing the finances of a business unit, then progressively larger ones, demonstrating capability at each stage. The CIO path works the same way.

In large organizations, almost every major business unit has its own technology leader, effectively a divisional CIO reporting both to that unit’s head and to the enterprise CIO. I observed this structure at IBM and at Citibank. That dual-reporting role is where many of the best CIO careers are made, where you learn to serve a business function while maintaining technical credibility. If you have CIO ambitions and the chance to take one of these roles, take it.

Does it matter whether you are at a technology company, a bank or a manufacturer? It matters less than most people think. The challenges scale with the size and complexity of the organization, not the industry. The sector shapes the context; the required skills are the same.

Do you need an MBA? Not necessarily, though many effective CIOs have one. What you unquestionably need is a genuine understanding of the business you are serving. A CIO who cannot speak credibly about the pressures facing the head of sales or supply chain will not earn the trust the role demands. Develop business acumen deliberately through education, mentorship and by putting yourself in roles where business outcomes and technology decisions intersect.

One more point: build your external network. Peer communities and forums like the MIT Sloan CIO Symposium matter more than many rising technology leaders realize. They are where you gain perspective beyond your own organization and build the external credibility that opens doors.

What I would tell my younger self

If I could go back and speak to the researcher I was when I first joined IBM, I would tell him that technology is the entry ticket. Everything else is the job.

The people who make it to the top and stay there are not necessarily the most brilliant technologists. They are the ones who made the shift from being experts to being leaders, who earned trust across the organization and who understood that their job was not to master the technology but to lead the people who do.

That transition is available to anyone willing to make it. But it requires recognizing, early, that the skills that got you here are not the skills that will take you further. The sooner you start building the other ones, the better.

This article is published as part of the Foundry Expert Contributor Network.
