Your CEO just got AI FOMO. Here are 6 tips on what to do next.

Every CIO I know has had some version of this conversation: their CEO comes back from a golf trip with their buddy, or a conference with peers, and is told AI is about to automate everything at their company, from HR to marketing and finance. No humans in the loop, just AI. The CEO then calls an all-hands Monday morning, and the CIO is suddenly on the hook to make it all happen.

The instinct for CEOs to chase unsubstantiated claims is understandable; they're responding to competitive pressure. But that leaves CIOs responsible for closing the gap between ambition and reality. Making AI work in an organization with decades of accumulated process, permission frameworks, and cultural inertia is very different from deploying it in a demo.

The best response isn’t to push back on the ambition but to redirect it. Translate the CEO’s vision into an honest map of what has to happen for the organization to get there, including the infrastructure, governance, and training. That helps convert the knee-jerk compulsion to move faster into a concrete plan that leadership can get behind.

Here’s what CIOs should actually be focused on to get where their CEOs want them to go, regardless of what’s discussed on the links.

1. Start where AI can build its own credibility

The hype machine wants you to climb Everest on day one. Instead, identify the repetitive tasks where AI can prove itself on familiar ground — the workflows your team already knows well, where results are easy to verify and the bar for trust is attainable.

The goal is the Eureka moment when a skeptic on your team sees a real result and becomes a believer. Those moments compound. When someone has seen AI make their work easier in a context they understand, they’re more likely to help you move things forward. You can’t force that change, but you can engineer the conditions for it.

2. Models will commoditize. Context will not.

Every few months, a new model claims to be smarter, faster, and cheaper than the last one. Don’t be distracted by that race. The lasting advantage in enterprise AI doesn’t come from which model you’re running; it comes from the quality, governance, and semantic clarity of the data feeding it. Enterprises that invest in consistent business definitions, well-structured data, and clear lineage will outperform those that don’t, regardless of which model is in fashion. Context is your competitive moat. Focus on building that.

3. Nail down the permissions

In a world of dashboards, you know exactly what data will appear on a given page, so you can set permissions in advance for who can access it. In an AI world, the system can generate outputs that were never pre-designed. So how do you determine who has the right to see a result that was never anticipated?

Before deploying any agent that acts on someone’s behalf, such as filing a request, surfacing payroll data, or populating a record, first determine whether your existing permissions and access control frameworks can handle outputs that were never planned for. Most can’t. This is a prerequisite of what your CEO is asking for: the unglamorous infrastructure work that determines whether your AI is trustworthy in production. It needs to happen before you scale, not after.
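One common way to handle outputs that were never pre-designed is to gate each generated answer on the permissions of every source record it drew on: if the requester cannot see all the sources, they cannot see the output. The sketch below is a hypothetical illustration of that idea, not a real framework; `Record`, `can_view`, and `gate_output` are made-up names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    record_id: str
    acl: frozenset  # roles allowed to read this record

def can_view(user_roles: set, sources: list) -> bool:
    """A generated output is visible only if every underlying source is."""
    return all(user_roles & rec.acl for rec in sources)

def gate_output(answer: str, sources: list, user_roles: set) -> str:
    # Check permissions at answer time, because the output itself
    # was never anticipated by any dashboard-style, design-time ACL.
    if can_view(user_roles, sources):
        return answer
    return "[withheld: output derived from records outside your access scope]"

payroll = Record("pay-001", frozenset({"hr", "finance"}))
org = Record("org-001", frozenset({"hr", "finance", "employee"}))

# An employee asking a question whose answer draws on payroll data is blocked.
print(gate_output("Average salary is ...", [payroll, org], {"employee"}))
```

The design choice worth noting is that the check runs per output, at request time, rather than per page at design time, which is exactly the shift the dashboard-era frameworks were never built for.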

4. Build an editing culture, not a writing one

For decades, engineers, analysts, and operations teams have been trained to write code, build reports, and define new processes. AI upends that. The skill now is editing — auditing what the system produces, catching what it got wrong, and knowing where to push back.

The truth is most people aren’t naturally good at editing because they’ve never had to be. That’s a skills gap that needs to be closed early on. Invest in helping engineers, analysts, and managers develop the judgment to evaluate AI outputs, not just generate them. Editing must become a core enterprise competency.

5. Measure behavior change, not tool adoption

Login data is a vanity metric. If your engineers are accessing AI coding tools but aren’t changing how they build, you haven’t adopted anything. The metric that makes more sense is productivity output. In agile terms, a team that completes 20 story points per sprint should hit about 28 with AI, not because the tools are magic, but because the repetitive work gets faster. If you’re not seeing that, you’re measuring the wrong thing. Pay attention to output, not usage metrics.

6. Reframe your organization’s relationship with failure

The instinct to de-risk everything made sense when software deployments were expensive and slow to reverse. AI works differently. The outputs are probabilistic, the iteration cycles are fast, and being overly cautious can cost valuable time. CIOs need to give teams permission to experiment in ways that feel uncomfortable by traditional enterprise standards, all while building the feedback loops that make fast failure safe. That culture shift has to be modeled from the top.

FOMO isn’t going away

CEOs will keep getting pulled into cycles of urgency and FOMO, and that pressure will keep landing on CIOs. The organizations that make real progress will be the ones that redirect that energy into infrastructure that makes AI trustworthy, measurement systems that show what’s working, and cultural changes that make adoption stick. That’s the agenda that’ll move your organization forward.

The CIO succession gap nobody admits

I have sat with three CIOs in the last two years who wanted to leave their seat and could not. One was being recruited into a larger enterprise role. One was ready to retire. One had been offered a board seat that required stepping down. In every case, the same thing stopped them. When the CEO asked who could step in, the CIO could not give a credible name. In each case, the person they had been calling their number two was technically brilliant and operationally reliable, but had been groomed into an architect, not a leader. The board would not approve an external hire during an active transformation. So the CIO stayed. One of them is still stuck.

The CIO role has the weakest succession bench in the C-suite, and most CIOs discover it the same way those three did. Not during a quarterly talent review. Not during a board retreat. They discover it the moment they try to leave. By then, the decision is already made for them. This is a leadership design problem CIOs build into their own orgs, and they confront it only when it is too late to fix quickly.

The architect trap

I have watched the same pattern form in almost every IT organization I have worked in. The people who rise to the top of the CIO’s direct reports are the ones who can hold the most architectural complexity in their heads. They are the ones the CIO trusts with the platform decisions, the vendor consolidations, the integration maps. They earn that trust legitimately. They are excellent at what they do.

But architectural trust is a different currency than leadership trust. When a CIO promotes based on architectural depth, what they get is a deputy who can design the org but cannot run it. I have seen deputies who have never owned a P&L conversation with a CFO. Deputies who have never delivered hard news to a business unit president. Deputies who have never had to defend a budget line item in a room full of people trying to take it from them. They were not hiding from those conversations. The CIO was holding the conversations for them because the CIO was good at those conversations and the deputy was good at the architecture.

The result is a bench that looks deep from inside the IT org and looks empty from the boardroom. I have watched a CEO walk out of a succession conversation saying, “I like your people, but I cannot see any of them in your chair.” That is not a compliment to the CIO. That is a verdict on how the CIO built the team.

Three moves I make before I need them

After watching this happen enough times, I stopped treating succession as something I would address later and started treating it as a design choice I had to make inside my first year. I changed how I build the bench in three ways, and I make each move early enough that the person has time to grow into it or fail out of it.

First, I give them a standing decision domain, not a “next in line” title. A deputy who is told they are being groomed for the CIO seat will manage their career instead of their work. A deputy who is given full authority over, say, all vendor escalations above a defined threshold will start making real decisions in real rooms with real consequences. That is where judgment gets built. The domain has to be something I would otherwise own myself. If I am still approving everything inside it, I am building a forwarder, not a successor.

Second, I put them in rooms where they have to lose something. One of the most damaging things a CIO can do is protect a high-potential deputy from conflict. I used to do this without realizing it. I would pull the hard conversations back to my level because I wanted to spare the deputy the political damage. The deputy came out looking clean and came out completely unprepared. Now I deliberately put deputies into conversations where they have to defend a position against a peer executive who will push back hard. Sometimes they hold the line. Sometimes they fold. Either outcome tells me something I needed to know before anyone was counting on them.

Third, I make the bench visible to the board before I have to. If the board does not know my top two or three deputies by name and track record, I do not have a succession plan. I have private notes. The CIOs I described at the beginning of this article all had deputies they believed in. None of those deputies had ever presented to the board on anything substantive. The board had no reference point. So when the succession question came up, the deputies did not exist in the board’s imagination, and the CIO’s personal endorsement was not enough to create them.

The first time I put a deputy in front of the board, they came back different. The board did not go easy on them. They came back knowing what a board conversation actually feels like, which meant the next one would not be a first impression. The board needs reps with my deputies before the seat is vacant. Once it is vacant, the reps are a job interview and a job interview is not where anyone does their best work.

What the gap actually costs

The cost of a shallow bench is not abstract. I have seen CIOs delay their own career moves by eighteen months or longer because they could not produce a credible successor. I have seen organizations pay two and a half times market to hire externally because the internal candidate did not survive a board interview. I have seen transformations stall because the CIO could not delegate enough to step back and think, because there was no one qualified to hold what they put down.

The cost to the deputies is also real. The architect-track deputy who spends six or seven years being the CIO’s most trusted technical lieutenant, and then gets passed over for the CIO role because the board does not see a leader, rarely recovers that momentum. Some of them leave. Some of them stay and quietly disengage. A few of them become the reason the new CIO’s first ninety days are harder than they should be. None of that is the deputy’s fault. It is the consequence of a design choice the previous CIO made years earlier, usually without knowing they were making it.

CIO.com has published strong guidance on this, including work on “grow your own CIO” strategies that treat succession as a deliberate pipeline rather than an accident of tenure.

The test is simple. If you had to leave in ninety days, could you hand the CEO a name and get a nod? If you cannot picture that nod, you do not have a successor. You have a list of people you like and trust, which is not the same thing. The successor you can actually name is the one you built on purpose, not the one who happened to look ready when the chair emptied. I have learned this by watching peers run out of time to build what they meant to build. I am trying not to be one of them.

This article is published as part of the Foundry Expert Contributor Network.

AI strategy as performance: Are CIOs leading innovation, or just acting it out?

Every few years, CIOs face the same question: “What is our company doing about the technology everyone is talking about?” Today, that question is aimed at AI. The pressure is real. The competitive environment is unforgiving, and boards are right to demand progress.

The problem is how that pressure gets absorbed. In many organizations, responding to board demands has turned into a kind of performance. Pilots pile up, vendor relationships multiply, and progress reports circulate internally. From the outside, it looks like an organization investing seriously in AI. In reality, how the business operates has barely changed. The infrastructure AI depends on, the redesign of workflows, and the preparation of data remain untouched.

I have seen multiple board presentations where the AI slide listed 15 active pilots. Three were described as “promising,” one was on hold over data-access issues, and none was tied to a measurable business outcome. This is “AI strategy theater”: it satisfies the board’s question on the surface without actually answering it.

When pilots become a portfolio

A pilot exists to answer a single question: Is this technology worth scaling for a specific use case? It is time-boxed, tied to a defined use case, and ends in a yes-or-no conclusion. That is its job. What is happening in many organizations today is something different.

When board pressure is high, the path of least resistance is to start something. Identify a use case, negotiate with a vendor, stand up a proof of concept, and report on it. Visible activity gets generated, and next quarter’s governance question gets satisfied. Meanwhile, the hard work of workflow integration, data infrastructure, and change management keeps getting deferred.

McKinsey’s 2025 State of AI report found that 88% of companies use AI in at least one function, but only 32% have reached the scaling phase. The gap between experimentation and value creation is wide, and most organizations are stuck inside it. McKinsey points to unredesigned workflows as the main reason: the AI is there, but the business processes around it have not changed.

Pilots launched in isolation do not connect with one another, and they do not produce the data infrastructure or integration architecture needed to scale AI. What remains is a portfolio that costs money to maintain and an AI investment narrative with nothing behind it. Vendors also have every incentive to keep spinning up new pilots. A proof of concept produces impressive results in a constrained environment; whether it works in production is the customer’s problem. Once the contract is signed, the vendor’s role is done.

Missing governance erodes the CIO’s credibility

The second problem the pressure creates is ungoverned AI decision-making spreading through the organization. When the board’s mandate to push AI reaches the business units, each unit starts acting on its own judgment. Finance signs a contract for a tool that never passed IT architecture review. Operations runs an automation pilot that touches production data. Marketing experiments with customer data that has not cleared compliance review.

This is shadow IT expanding at the speed of AI. A major software investment would normally go through procurement and architecture review, but those processes never get applied to a tool that can be adopted in an afternoon and show results within days. By the time IT grasps the full picture, the business units have already concluded that AI “works” or “doesn’t work,” based on tools that were never designed for the enterprise.

As small failures accumulate, trust is quietly lost. AI investments that produce no results, governance problems discovered after the fact, business units that start routing around IT: once these patterns pile up, the board will eventually confront the problem. The CIOs who are pulling ahead are those who lead the transition from experimentation to value creation themselves and actively manage that process.

What disciplined execution requires

Organizations that have succeeded in moving AI from proof of concept to production share one trait: they made explicit, documented decisions about where to invest, and they held that line even when the pressure to add more pilots was high.

Concretely, that means maintaining a short list of initiatives that meet defined criteria before they start. The workflow is well understood and owned by a business leader with the authority to change it. The data is accessible and in good order. Success is defined before deployment, not after. Proposals that do not meet the bar do not start. Holding back on visible activity while the board demands progress and vendors offer attractive terms is easier said than done.

Building internal capability matters just as much. Tools will keep multiplying. The strategic question is whether the organization is cultivating a genuine capability to evaluate, integrate, and govern AI, or settling for vendor dependence. The former compounds into organizational advantage over time. The latter looks like capability from the outside, but underneath it is dependence.

The one metric that measures AI leadership

There is a form of AI leadership that looks good in board presentations and generates almost no operational value. Pilots are running and progress reports circulate. The question “what is this actually doing for the business?” tends to go unasked, because answering it would complicate the narrative.

AI leadership is ultimately measured by one thing: how many pilots survived long enough to change how the business actually operates. Most of what is being built right now may never get there.

8 tips for becoming a more agile IT leader

Our world is spinning so fast that getting off course from intended outcomes can happen quickly. And it isn’t just technology that’s catalyzing change. The business climate, economic conditions, rules of engagement, and even people’s belief systems and behaviors are rapidly shifting to the point that trying to keep up is like chasing a cheetah on roller skates.

To lead in this climate, you have to hone your ability to pivot, pull the plug, or pounce on a new opportunity with little lead time. You can’t make a decision, install a system, or set a team to work on a project, then move on, as you might have done a few short years ago. You have to be able to change your mind, admit you no longer stand behind a decision or aren’t confident in a particular project, and set a new course toward a better destination.

Having an agile mind and a flexible worldview is vital to IT leadership today. But how do you achieve that?

I spoke to IT experts and leaders who have struggled with and mastered this skill. Here are the agility tricks they employ to stay flexible.

Keep asking questions

“Historically, CIOs come into an organization, assess, then try to add value,” says Sathish Muthukrishnan, chief information, data, and digital officer at Ally Financial. “That could take a year. Then they spend another six months developing strategy. From year three onwards, they might implement strategy. That was the traditional playbook.”

So, the current pace of change is, in itself, an enormous pivot for a role so complex, Muthukrishnan says.

The first step to becoming an agile leader then is to accept that the old playbook won’t work. The second step, he says, is to keep asking questions.

“I ask questions so I can deepen my understanding, orient myself,” he says. “Has the context changed? Has technology changed? Have people changed? If so, why are we doing what we were doing three, four, or five months ago?”

There are some things that have not changed, he says. Learning is the same, though what you learn and the way you apply it is different. And the need for your leadership has only increased.

“The human qualities that set you apart as a leader are becoming even more relevant in an AI-first world,” he says. “It’s no longer, ‘I’m the expert. I know. I’ve done this, I’ve seen this,’ that sets you apart. The thing that sets you apart is having the courage to say I am not tied to my previous beliefs. I’m changing them because of this reason. I’m making a pivot because of these reasons. Courage and conviction go together.”

Trust the navigation — and your teams

“You have to lead with purpose and clarity. That’s important for the organization. But you need a lot of flexibility when it comes to the execution,” says Manny Rivelo, CEO of ConnectWise.

Like a ship on a wild sea, you have a destination in mind. Getting there, though, requires navigating through a lot of tumult.

“You have to be able to respond quickly to change,” Rivelo says. “It can be anything from a market shift, the technology, or internal organizational challenges. You don’t want to lose sight of that long-term strategy, but you may have to pivot along the way. It’s not only about moving with speed, but with flexibility.”

Just like that ship navigating rough seas, you have to get accurate readings and trust your navigators to know how to steer through the chaos.

“How you collect information is important,” he says. “I look at it as a signal-to-noise ratio. What is the signal that’s driving you to go someplace, and what is just noise? How do you remove the noise so you can focus on what the signal is telling you.”

Rivelo believes in facts and data. But you also need to be able to test your own assumptions and, to do that, you have to trust your team, he says.

“You have to build diverse teams that are willing to challenge your thinking,” he advises. “In my experience, you can’t train for that. You have to hire for it.”

Rivelo digs deep when hiring to find people who have a history of being opinionated and, especially, curious.

“Curiosity is one of the greatest gifts you can have as a leader. You need to be curious enough to disrupt yourself and not assume that, because we are doing things a certain way, we have to continue. The best idea should win — wherever it comes from,” he says.

Empty your cup

According to one Zen parable, you have to empty your cup before you can fill it. To learn, you have to accept that you don’t already know.

“For me, being agile means seeing the truth and not making assumptions,” says Dr. Akvile Ignotaite, founder and data scientist at System Akvile. “I go into new projects thinking, ‘Let’s see what we can learn.’ And I learn from the data.”

It sounds simple. But when you have achieved a leadership role, you likely got there because of your expertise. You have become accustomed to people expecting you to know what to do. Letting go of that expert role is, Ignotaite admits, a process.

“I try to keep a very open mind,” she says. “I make assumptions, then measure those assumptions against real user data and behavior. I can’t know everything. The speed we live in is too fast.”

Use the ‘hot-shot rule’

Every day is full of decisions and responsibilities. It’s easy to get caught up in that and keep navigating toward a goal without stopping to check whether you are headed in the right direction. To stay flexible, Ingrid Curtis, CEO at Sparq, likes to test wind direction frequently with what she calls the “hot-shot rule.”

“This is not a concept I created,” she says. It is a mental exercise that helps people to let go of a decision, path, or progress that is no longer serving their purposes.

“Imagine you’ve been fired,” she says. “Who’s the hot shot that’s coming in to take over? What do you think they will do that you aren’t doing?”

The hot shot can be fictional or a real-life leader from the tech or business world.

“There are plenty of big, wild entrepreneurs to choose from,” she says. “They come with this huge persona. And we’ve seen that it has gotten some of them — the WeWork founder, Elizabeth Holmes, and others — in serious trouble. But there is also admiration for this flagrant, ‘I’m willing to do whatever it takes’ kind of leadership.”

It’s surprising, she says, how much this game allows people to disconnect from minutia and look at their job with fresh perspective. It’s fascinating to watch it unlock ideas.

“We all allow ourselves to be hamstrung,” she says. “Yet you imagine someone else would disregard those self-imposed restrictions and be able to get the thing done. Suddenly, with that perspective, you are able to do that, too.”

Rethink your approach to decision-making

“Everyone frames agility as a personality trait — be flexible, stay curious, embrace change,” says Nik Kale, principal engineer at Cisco Systems. “All of that is fine, but personality does not scale.”

Agility, he says, is less about mindset and more about structure.

“Adaptable leaders aren’t the ones with the most flexible temperament,” he says. “They’re the ones who build decision-making systems to absorb change without breaking.”

One big part of this structure, he says, is sorting decisions by weight. Some decisions are reversible. Others are not. Those two types belong in different piles: slow down and ponder irreversible decisions; decide fast and iterate on reversible ones.

“Many leaders do the opposite,” he says. “They agonize over things that don’t matter and rush through things that do.”

For reversible decisions, schedule a point where you will stop to reevaluate them.

“I put reassessment dates on the calendar,” he says. That way changing your mind is part of the process. “It won’t hurt anybody’s ego if we planned to reevaluate that decision.”

This structure, he believes, overcomes the risk decision-makers face when they change their mind.

“Admitting you were wrong, in most corporate cultures, is expensive — reputationally, career-wise, politically. People double down on failing strategies because the cost of admitting they were wrong feels higher than the cost of failure,” Kale says. “Courage shouldn’t be a prerequisite for good decisions.”

Factor in the fact that permanence is a thing of the past

According to Ram Palaniappan, CTO at TEKsystems Global Services, when the software you use every day changes almost daily, clinging to the idea that anything you decide today won’t change tomorrow is holding on to a world that no longer exists.

This is especially true when working with AI, he says. When you make a decision about something repeatable, and offload the work to AI, verifying the results is essential because an AI will amplify mistakes. This also helps you learn to trust the AI.

This kind of mental agility, he says, means making decisions you are willing to unmake if the output doesn’t match expectations. It requires people to stay alert and keep learning, and that goes not just for leaders but for entire teams, he adds.

“We ask our teams to spend a percentage of time upskilling,” he says. “We set goals. We provide a learning path. Then we allow them to apply what they learn in a lab facility.”

The idea is, he says, to learn to let go of the way it was.

“Tech companies change their products, sometimes daily,” he says. “We all have to be able to move like that.”

Let go of the idea that anything you decide is permanent. Decide quickly. Then check how that decision is doing.

Exercise your emotional muscles

According to Sarah Noll Wilson, founder of The Noll Wilson Group and author of Don’t Feed the Elephants, many technical leaders believe that emotion has nothing to do with their decisions. But that can make you blind to the power emotion has over them.

“When you build your emotional skillset, it gives you access to a higher level of self-awareness and intellectual humility,” she says.

Curiosity is one emotional skill. “Instead of making you fear discovering a bad decision, curiosity can make it fun to wonder — with interest and even excitement — where you might be wrong,” she explains.

Another emotional skill is to let go of the idea that it is your expertise that’s needed.

“Some problems are technical,” she says. “Those are clear and typically solved with expertise. But some are adaptive challenges. In that case, the problem might not be clear and solving it requires learning, not expertise.”

Fear is another emotion that drives resistance to change. People don’t fear change, they fear loss, she says. “Ask yourself, ‘What am I losing?’ or ‘What am I afraid I’m going to lose?’”

One of the practices her team uses to increase emotional self-awareness, she says, is a courageous audit. This is a process where leaders examine what they want to be — an agile leader, for example — and interrogate behaviors that conflict with that goal.

“A question you can ask is, ‘What do I do or not do that’s in conflict with being an agile leader?” she says. “Do I protect my ideas or my team’s ideas? Do I dismiss ideas from people who aren’t in my field or ‘in’ group? Who gets to submit ideas? Who doesn’t?”

These exercises are designed to raise your awareness of the emotional reactions that affect your decisions and to help you develop the ability to be comfortable with uncertainty.

Change how you measure and build

According to Shahrzad Rafati, founder and CEO of RHEI, keeping a plastic viewpoint requires you to fundamentally change how you build technology and measure success.

“When you spend two years building an enterprise tool, your ego becomes tied to its deployment. You lose agility because you are financially and emotionally invested in the solution, rather than the problem,” she says.

“Instead of measuring success with metrics like uptime or deployment milestones, measure workforce elevation. When your metric is ‘Did it elevate human output and strategic thinking?’ you won’t hesitate to kill a failing project.”

The second step, she says, is to find a way to experiment quickly and cheaply. “We no longer live in a world where prototyping costs millions of dollars. You can ‘vibe code’ an idea, stand up a specialized agent, and test its capabilities almost instantly.”

“Use this to your advantage,” she says, “by lowering the stakes of your experiments. If testing a hypothesis costs nothing, your willingness to abandon a bad idea and admit you were wrong goes up exponentially.”

Column: Competitiveness beyond technology, and what sets great FDEs apart

Despite AI investment on an unprecedented scale, most enterprises have run into an “integration wall.” The technology works well in isolated environments, and proofs of concept (PoCs) produce impressive enough results.

But the moment they try to apply AI to operational environments, where it touches real customers, affects revenue, and carries genuine risk, companies hesitate. For good reason: AI systems are inherently non-deterministic.

Unlike conventional software, which behaves predictably, large language models (LLMs) can produce unexpected results. They can state wrong information with confidence, “hallucinate” facts that do not exist, or respond in ways that clash with brand tone. For risk-sensitive enterprises, this uncertainty becomes a barrier that no level of technical sophistication can overcome.

The pattern appears across industries. Looking back on my experience supporting enterprise AI adoption, I have seen organization after organization build an impressive AI demo and then fail to get past the integration stage. The technology was ready and the business case was sound, but the organization’s risk tolerance could not keep up. Nor did anyone know how to bridge the gap between what AI can do in an experimental setting and what is permissible in production. That leads to the conclusion that the real problem is not the technology but the people who can actually apply it.

A few months ago, I joined Andela, an IT talent platform. From that vantage point, the capability enterprises need becomes clear: the forward deployed engineer, or FDE. The term was first used by the data analytics company Palantir to describe the customer-facing technical staff essential to deploying its platform inside government agencies and enterprises. More recently, leading AI labs, hyperscalers, and even startups have adopted the model. OpenAI, for example, deploys experienced FDEs to accelerate platform adoption among high-value customers.

But there is one thing CIOs must understand. So far, this capability has been concentrated on serving the growth strategies of AI platform companies. For an enterprise to get over the integration wall and put AI into real operation, it must secure and develop FDE capability internally.

What makes an FDE

The defining trait of an FDE is the ability to connect technical solutions to business outcomes in a way conventional engineers cannot. An FDE is not simply a developer who builds systems. They are closer to a “translator” who operates at the intersection of engineering, architecture, and business strategy.

They are the expedition leaders who guide the organization through the uncharted territory of generative AI. In particular, they clearly understand that deploying AI into production is not merely a technical problem but a risk-management problem, so earning the organization’s trust through appropriate guardrails, monitoring, and risk-control strategies is essential.

In 15 years at Google Cloud and Andela, I have met only a handful of people with all of these abilities. What distinguishes them is not a single skill but the combination of four core competencies.

The first is problem-solving and judgment. AI output is roughly 80 to 90% accurate, but the remaining 10 to 20% can contain errors that are all the more dangerous. Sometimes the results look plausible but are wrong; sometimes they are needlessly complex and hard to apply in practice.

Great FDEs have the contextual understanding to identify these errors. They quickly spot low-quality AI output and recommendations that ignore critical business constraints. Most importantly, they can design systems that control these risks: output validation, human-in-the-loop processes, and deterministic fallback responses that take over when the model is uncertain. That capability is the decisive difference between a merely impressive demo and an operational system executives will actually approve for deployment.
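The guardrail pattern described here, validating the model’s output and falling back to a deterministic response when confidence is low, can be sketched as follows. Everything in this example is illustrative: `call_model` is a stand-in for a real LLM call, and the threshold is arbitrary.

```python
FALLBACK = "I'm not confident enough to answer; routing to a human reviewer."
CONFIDENCE_THRESHOLD = 0.8  # arbitrary; tune per use case

def call_model(prompt: str) -> tuple[str, float]:
    # Stand-in for a real LLM call that also yields a confidence score.
    return ("The invoice total is $1,240.", 0.55)

def validate(text: str) -> bool:
    # Domain checks go here: schema, allowed values, business constraints.
    return bool(text.strip())

def answer(prompt: str) -> str:
    text, confidence = call_model(prompt)
    if confidence < CONFIDENCE_THRESHOLD or not validate(text):
        # Deterministic fallback instead of a risky guess: this is the
        # human-in-the-loop escape hatch the passage describes.
        return FALLBACK
    return text

print(answer("What is the invoice total?"))
```

The point of the pattern is not the specific checks but that the system’s failure mode is designed in advance, which is what lets an executive sign off on it.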

The second is solution engineering and design. An FDE must translate business requirements into technical architecture while balancing real-world trade-offs in cost, performance, latency, and scalability. For some use cases, a small language model with low inference costs will outperform the latest large model, and the FDE must be able to explain that choice in economic terms, not just in terms of technical polish.

Above all, they put simplicity first. The fastest way over the integration wall usually starts with a minimum viable product (MVP), fitted with appropriate guardrails, that solves 80% of the problem. A realistically manageable solution beats a complex system that tries to cover every edge case and ends up creating uncontrollable risk.

The third is customer and stakeholder management. FDEs serve as the main technical interface with the business and must explain how the technology works to executives with little AI experience. What those executives actually focus on, however, is not the technology itself but risk, timelines, and business impact.

This is where FDEs earn the organization’s trust and lay the groundwork for scaling AI into real operations. They translate AI’s non-deterministic nature into a risk framework executives can understand: if something goes wrong, how far does the impact reach? What monitoring is in place? What is the rollback plan? By making AI’s uncertainty visible and manageable, they play the central role in getting risk-sensitive decision-makers to accept it.

The fourth is strategic alignment. FDEs tie AI implementations directly to measurable business outcomes. They judge and advise on which opportunities can deliver real results, and which are technically interesting but carry excessive risk relative to their value.

They also weigh operating costs and long-term maintenance, not just initial adoption. When this business-first perspective is combined with the ability to assess risk objectively, the FDE becomes something more than an excellent software engineer.

People with all four competencies share a common profile. Most began their careers in technical roles such as software development, likely with a computer science education. They then built expertise in a specific industry and developed the flexibility and curiosity to keep learning in a fast-changing environment. Because the combination is so rare, they tend to be concentrated at large technology companies and command high compensation.

The CIO’s dilemma

If FDEs are this scarce a resource, what options remain for the CIO?

One is to wait for the talent market to catch up, but that takes considerable time. Every month an AI project sits stalled at the integration wall, the gap widens between companies that are creating real value and those still showing demos to the board. AI’s non-deterministic nature is not going away; if anything, as models grow more capable, the potential for unpredictable behavior may grow with them. In the end, the companies that succeed will not be the ones waiting for the technology to become entirely risk-free, but the ones with the internal capability to put AI into production responsibly and confidently.

The alternative is to grow FDEs internally. That is harder than hiring, but it is the only scalable answer. Fortunately, FDE capability can be developed systematically, given the right talent pool and focused, structured training. Andela has built a program that converts experienced engineers into FDEs and has accumulated effective methods along the way.

A strategy for building an FDE talent pool

Start by selecting the right candidates. Not every excellent engineer can become an FDE. Look for experienced software engineers whose curiosity extends beyond the technical domain. Candidates with solid development fundamentals and experience in data science and cloud architecture are a good fit. Knowledge of a specific industry is an especially important accelerant: someone with experience in healthcare regulation or financial risk frameworks will ramp up far faster than someone learning the domain from scratch.

The technical curriculum runs in three stages. The foundational stage builds a basic understanding of AI and machine learning: LLM concepts, prompt design techniques, Python proficiency, how tokens work, and basic agent architectures. These are table-stakes skills.

The intermediate stage is hands-on tooling, with core skills corresponding to the “three roles” an FDE plays:

  • First, retrieval-augmented generation (RAG): connecting enterprise data to models accurately and reliably.
  • Second, agentic AI: designing multi-step reasoning and workflows with appropriate controls and verification steps.
  • Third, production readiness: being able to deploy solutions with monitoring, guardrails, and incident-response processes in place.

이러한 역량은 실제 운영 환경의 리스크를 고려한 시스템을 직접 구축하고 배포하는 과정을 통해 습득된다.
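The first of these roles, RAG, can be sketched in a few lines: retrieve the internal documents most relevant to a query, then ground the model’s prompt in them. The keyword-overlap retriever, sample policy snippets, and function names below are illustrative assumptions, not Andela’s curriculum or any specific product; production systems typically use vector embeddings and an LLM API, neither shown here.

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts, used as a crude relevance signal."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query; keep the top k."""
    q = tokenize(query)
    scored = sorted(
        documents,
        key=lambda d: sum((tokenize(d) & q).values()),  # shared-word count
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in the retrieved enterprise context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Refund requests over $500 require manager approval.",
    "The cafeteria opens at 8 a.m. on weekdays.",
    "Refunds are processed within five business days.",
]
print(build_prompt("How long do refunds take?", docs))
```

The design point is the separation of concerns: retrieval quality, not model choice, determines whether the answer reflects the enterprise’s actual policies, which is why the article treats context as the durable skill.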

The advanced stage goes deeper into model internals and fine-tuning. That depth translates into the ability to solve problems when standard approaches fail, improvising for novel situations rather than simply following a playbook. It also requires enough expertise to explain to stakeholders such as the CISO why a given approach is safe.

Non-technical skills matter as much as technical ones. FDEs must be able to shift technology-centric conversations toward business problems and risk mitigation. Demanding stakeholder management is also essential, covering sensitive issues such as scope changes, schedule slips, and the uncertainty of non-deterministic systems. Above all, they need judgment: making sound decisions under uncertainty and earning the confidence of executives who are being asked to accept a new category of technical risk.

It’s also important to set realistic expectations for both the organization and the candidates. Even the best-structured program won’t turn everyone into an FDE. But securing even a few FDEs can dramatically accelerate progress past the integration wall. In practice, a single FDE embedded in a business unit can outperform many conventional engineers working in isolation without business context, because the FDE understands that the real problem isn’t the technology.

Where the AI era will be won

Companies that build FDE capability can get past the integration wall. They turn impressive demos into production systems that create real value, and use each success to steadily expand organizational trust.

Companies that don’t are likely to stay stuck, investing in AI without delivering meaningful results, while competitors willing to take on more risk capture the market.

When I joined Andela, I believed AI would not fully replace human capability. I still do. But humans must evolve as well, and the FDE exemplifies the direction of that evolution: deep technical understanding, business sense, risk management, and the flexibility to keep adapting to constant change.

CIOs who invest in this capability now can do more than keep pace with the technology; they can become the ones who finally realize the enterprise AI value that has so far proved elusive.
dl-ciokorea@foundryco.com

The CIO: from ‘technology manager’ to ‘value designer.’ The perspective Japanese CIOs need as AI adoption stalls

Noyuri Mima
Learning environment designer and learning scientist; professor emeritus at Future University Hakodate. Member of the Science Council of Japan (26th and 27th terms) and auditor of the University of Electro-Communications. Studied at the University of Electro-Communications (computer science), Harvard Graduate School of Education, and the University of Tokyo graduate school (cognitive psychology). Works across computing, education, and cognitive science to promote communication, talent development, and network building. Has also served as a visiting researcher at the MIT Media Lab, a member of the NHK Board of Governors, and a visiting researcher at UC Berkeley’s AI research lab and Center for Human-Compatible AI. Former deputy director of Miraikan, the National Museum of Emerging Science and Innovation.

The real reason AI adoption is slow in Japan

While generative AI adoption races ahead worldwide, most Japanese companies have yet to deploy it in earnest. The great majority have tried ChatGPT at work but haven’t embedded it across the organization.

The US is different. According to the American Psychological Association’s 2025 “Work in America” survey, 47% of workers use AI on the job at least once a month; 30% feel they’ll be left behind if they don’t, and 38% worry their role could become unnecessary. AI has already seeped deep into daily work, bringing expectations of efficiency along with anxiety and organizational divides. The report’s conclusion: AI adoption is inseparable from institutional design that starts from people’s anxieties and expectations.

So what’s behind Japan’s slow adoption? What Mima has seen through lectures and conversations with companies is not a shortfall of skill or technology but structural problems. First, the purpose of DX is not sufficiently shared within the organization before AI is introduced. Second, frontline staff resist changes to their routines, and vendors follow suit under the principle of “don’t propose what the client doesn’t want.” Evaluation systems that make the status quo the rational choice reinforce both.

Examples of stopping at the non-essential, such as publishing statistics as PDFs or digitizing patient records, are endless. The root cause, Mima argues, is “design philosophy.” Take Tesla, the EV-only carmaker: software updates arrive constantly, and it’s not unusual for the screen layout to change overnight. Having driven a Tesla while at UC Berkeley’s AI research lab, Mima calls it “a specification that wouldn’t be easily accepted in Japan.” If the design philosophy of Japan’s auto industry is certainty, safety, and a high degree of finish, Tesla’s, built on speed of evolution, continuous improvement, and fixing things after shipping, is its opposite. “This isn’t a question of which is better or worse; it’s a difference in design philosophy,” Mima explains.

Seen this way, AI’s technical character and Japan are a poor match. “AI is inherently incomplete. It keeps being updated, and it gets more accurate every time you use it. That doesn’t fit the Japanese mindset of deploying only what is finished,” says Mima, adding that Japan’s structures make AI adoption hard.

In the AI era, technical decisions are value decisions: the CIO as ‘value designer’

CIOs must introduce AI under those conditions, and in doing so, Mima believes, they take on a new role.

Technical judgment has always been one of the CIO’s core duties. But AI adoption also brings decisions about what to automate, which roles to redefine, and which skills to count as valuable. These are value judgments, not purely technical ones.

Amazon is the emblematic case of why value judgments matter. When the company tried to bring AI into hiring, it trained the system on past hiring data, which skewed heavily male, and the result unfairly downgraded women (the rollout was ultimately scrapped). “The very choice of training data is part of the AI adoption decision,” says Mima. What is technically possible and what is organizationally appropriate don’t always coincide, and the premises behind those judgments are set, ultimately, by IT engineers and by the CIO making the management call.

Mima puts it flatly: “The CIO is a technology manager and, at the same time, the organization’s value designer.”

In thinking about value, Mima first points to the gap between optimization and desirability. Quantifiable metrics such as efficiency and cost reduction take priority, but “the moment you make the quantifiable your metric, the kinds of excellence that aren’t quantifiable slip through,” she says.

What counts as value is itself the question. That’s why Mima argues for putting ethics at the core. Ethics here is not a set of rules to obey but a framework for judgment when values collide and no settled answer exists. Grounding the organization in the question “what do we consider good?” is, she says, what’s now asked of CIOs.

A three-layer model of engineering ethics

From this, Mima proposes a “three-layer model of engineering ethics” as the framework a CIO-as-value-designer should apply when adopting AI. The first layer is the responsibility of foresight: mapping the impact on jobs before adoption and making transition plans visible. The second is accountability: sharing the deployment’s purpose, scope of impact, and limits across the organization. The third is the organizational responsibility of care.

On that third layer in particular, Mima says “the responsibility of care is not emotional consideration; it’s the management responsibility to prevent the erosion of human capital.” Guaranteeing, as a matter of institutional design, the skills transition of people affected by AI is not “kindness” but a duty to protect talent as a management resource. “With the birthrate falling, the population aging, and labor shortages deepening, moving people affected by AI adoption into other work is a management obligation the organization must fulfill,” she says. Simply ordering adoption from the top down invites backlash from employees afraid their jobs may disappear. Facing the anxiety of those affected and laying out a transition path as an organization is the essence of care responsibility as institutional design, and it cannot be realized without close collaboration with HR.

Mima also argues that engineers’ ethics needs one more perspective. Traditionally it has been discussed from three angles: technology’s impact on society, responsibility as a member of an organization, and professional accountability. In the AI era, she says, the perspective of an “ethics of everyday life” is also needed. With people around the world marrying AI companions, and subscription services appearing that let you “meet” deceased family members, those who develop and deploy technology are themselves among the people it affects. “Judgment as a user, as someone affected, is now needed as a fourth ethics.”

Designing talent development for the AI era

The AI era demands a rethink on the talent front too. As the US survey showed, employees feel anxious about AI adoption, not only about employment itself, but because changes to how and what they work on bring career discontinuity. Talent development, likewise, must be designed on an ethical foundation.

“The traditional metrics of efficiency and cost reduction alone cannot capture the impact on people,” Mima says. As metrics for the AI era she cites the skills-transition completion rate, the redeployment success rate, organizational AI maturity, and psychological safety: whether employees can feel it’s safe even when AI is introduced. From efficiency to sustainability: that shift is the heart of redesigning talent-development KPIs.

“Talent development in the AI era is not about teaching tool operation; it’s about designing a structure for continuous learning that assumes change,” Mima says. Cultivating an ethic of accepting change also matters. For that reason she argues that “upskilling,” adding new skills, is a more important lens than “reskilling.”

Keep asking WHY

As noted at the outset, behind Japan’s slow AI adoption lie structural problems of culture, institutions, and design philosophy. But not adopting AI is not an option. Where to land the compromise is, Mima says, “the crux.” She isn’t pessimistic: “hope lies within a hair’s breadth.”

“Reading the air,” “setting the stage,” “sensing the unspoken”: the relationship-centered sensibility rooted in Japanese society resonates deeply with Mima’s “Humane Learning Design (HLD),” a philosophy of learning built on posing questions, engaging in dialogue, and making responsible judgments attuned to context and relationships. The word “humane” carries a sense of compassion that goes beyond merely “human.” An ethical sensibility that tends the space between people, rather than optimizing individuals, overlaps with the core of the ethics of care.

But Mima cautions that this same sensibility is a hair’s breadth from mere deference. “If you read the air too much and failure becomes unforgivable, you end up changing nothing.” The frontline understands the problems best, yet its voice doesn’t reach the top: a pathology common to organizations everywhere. The task is to draw on the relational sensibility while tolerating change and staying open. “We need an open culture in that sense, one that permits change and accepts that relationships will change,” she says.

If both can be achieved, Mima believes Japan can bring a distinctive strength to the AI era. “Japan’s relational knowledge could be offered to the world as something originally Japanese.”

Mima’s final emphasis for CIOs is the question “why use it?” “The How and What discussions, how to use AI, are well underway. But the WHY discussions, why use it, what is better left unused, what should stay on the human side, are not happening.”

Mima calls the capacity to face that question “AI Readiness”: “not the ability to master the tools, but a state of preparedness to face the judgment.” Embedding it in the organization, she says, is the most important role of the CIO as value designer. “The meaning and impact of using it, and the judgment not to use it. Those judgments lead to building a society in which people can thrive.”

AI is spreading decision-making, but not accountability

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears and the question of who’s responsible is immediately raised.

As AI moves from experimentation into production, accountability is no longer a technical concern, it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, understands that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is that there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.


Jessica Eaves Mathews, founder, Leverage Legal Group

LLG

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”


Chris Drumgoole, president, global infrastructure services, DXC Technology

DXC

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.


Ojas Rege, SVP and GM, privacy and data governance, OneTrust

OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”


Simon Elcham, CAIO, Trustpair

Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”


Jonathan Tushman, CTO, Hi Marley

Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision escalates often to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, scrutiny often points at enterprise leadership and, in operational terms, at the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.

How UKG puts AI to work for frontline employees

As organizations rebrand themselves as AI companies, most of the conversation focuses on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota is CIO of UKG, one of the largest HR tech platforms in the market, whose workforce operating platform is used by 80,000 organizations in 150 countries. He explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, who account for 80% of the world’s workforce. This matters because when companies rebrand themselves as AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on benefit from AI capabilities just as much.

So the richness of our data sets, and our long history with the frontline workforce, position us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs the necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowd sourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for particular purposes. Before AI, we gave everyone the same generic packaged tools. AI lets us build capabilities that make a specific job more effective. Even our generic AI tools are delivered by persona. That role-specific impact is why personas matter so much right now. Our focus is on the actual jobs, the people who do them, the skills and tasks involved, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. IT today is very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user owned the business context, and the developer, who owned the technology, brought that business idea to life. Going forward, the builder has the business context to create the right prompts that let AI do the building, and the human in the loop is no longer the technology builder but the provider of context, prompts, and validation. So the engineer doesn’t go away; they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about as the workforce changes with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.

Countdown to submit nominations in Spain for the CIO 50 Awards

Once again this year, the benchmark awards return to recognize the best chief information officers (CIOs) in Spain and the country’s most innovative IT projects. The initiative, known as the ‘Oscars of the IT industry,’ is part of the global CIO Awards project through which the international publication CIO, from the publishing group Foundry, highlights the work of top executives capable of driving valuable business results through digital leadership, strategic vision, and technological innovation.

This time the awards arrive in Spain under the name CIO 50 Awards. The nomination period for the 2026 edition is open until May 29, and the awards ceremony will take place on October 8 in Madrid, as part of a major parallel conference focused on ‘Responsible technology leadership, resilience, and digital governance in the Spanish context.’ During the event, winners from past editions and this year’s nominees will be able to share their success stories with other IT leaders, creating an invaluable peer-learning experience.

Who can participate

The CIO 50 Awards are open to CIOs and other technology executives and managers at companies, public administrations, and nonprofit organizations (NGOs).

Executives who enter must operate at the highest level of technology and transformation strategy and execution, since the CIO 50 awards recognize leaders who set the organization’s direction, contribute to board-level decisions, and influence large-scale technology investments. One requirement is that CIOs must have been at their current organization for at least one year.

Consultants; IT, software, or hardware vendors; and market research or information services firms are not eligible for the CIO 50.

How the winners are chosen

As in previous editions, nominations will be evaluated by an independent jury weighing aspects such as the challenges faced in the projects and the solutions implemented; the benefits and improvements achieved; the business impact (cost optimization, margin improvement, revenue growth); and the productivity gains and business-process transformation delivered through IT.

The jury comprises Fernando Muñoz, director of CIO Executive by Foundry; Esther Macías, editorial director of CIO and COMPUTERWORLD in Spain; the now-retired veteran CIOs José María Tavera, who led IT strategy at giants such as Telefónica and Acciona, and José María Fuster, who headed IT at Banco Santander and is now a trustee of the Fundación Real Academia de Ciencias de España; Dimitris Bountolos, CIIO of Ferrovial and winner of the CIO of the Year category at the 2025 edition of the CIO 100 Awards Spain; Gracia Sánchez-Vizcaíno, CIO for Iberia & Latin America at Securitas Group; Mar Hurtado de Mendoza, global VP of recruiting at IE University and adjunct professor at the business school; and Patricia Arboleda, president of Women in Tech – Spain.

A local distinction with a global soul

The history of the CIO 100 and CIO 50 awards for excellence in enterprise IT goes back more than three decades, when they were first given to executives in the United States, later expanding to markets such as Germany, the UK, Spain, Singapore, Australia, South Korea, and India.

It is a key initiative for recognizing achievements, sharing knowledge, and connecting an influential community of IT decision-makers.

CIO, from the Foundry group, is currently accepting nominations for the CIO 100 and CIO 50 awards in the following countries and regions:

  • CIO 100 USA (August 2026): nomination phase closed; conference registration is open here. More information
  • CIO of the Year Germany (October 2026): nomination deadline May 15, 2026. More information
  • CIO 100 UK (September 2026): nomination deadline May 21, 2026. More information
  • CIO 50 Spain (October 2026): nomination deadline May 29, 2026. More information
  • CIO 100 India (September 2026): nomination deadline June 5, 2026. More information
  • CIO 100 Australia (September 2026): nomination deadline June 19, 2026. More information
  • CIO 100 ASEAN (November 2026): nomination deadline July 27, 2026. More information
  • CIO 50 Japan (December 2026): nomination deadline mid-August 2026.

Antonio Cobos, new CIO of Andersen in Spain

Andersen Iberia has just hired Antonio Cobos as chief information officer (CIO); for nearly the past seven years he was technology director at the construction group OHLA Group. Cobos brings broad technology experience: beyond leading corporate technology at that multinational, where he drove the stabilization of the technology model after a period of high organizational complexity, he was previously (between 2016 and 2019) an IT manager at Ferrovial for nearly three years, responsible for corporate modernization and digitalization projects across business areas, and before that, for more than 12 years (between 2004 and 2016), he worked as a manager at the consultancy Indra, leading technology projects in public and private sectors.

Andersen Spain, Cobos tells CIO ESPAÑA by email, is at a “real inflection point.” “From 22 million in 2020 to more than 110 million projected in 2026, with more than 700 professionals already operating as a full Iberian firm. That growth demands technology that doesn’t follow the business but anticipates it,” he explains.

Andersen’s new CIO in Spain reveals that his agenda has three priorities: “First, turn the technology platform into a real competitive advantage for the firm and for its clients. Second, keep raising cybersecurity standards in an environment where law firms are a high-value target: protecting our clients’ information isn’t a compliance cost, it’s a condition of trust. And third, accelerate the responsible adoption of artificial intelligence (we’ve already deployed Legora across the organization) and turn that internal experience into an external advisory capability on AI governance, where the European AI Act opens an enormous window of opportunity.”

He notes that “the CIO of a professional services firm holds a singular position.” “Technology directly affects the quality of legal work, professional productivity, and client trust. Having technology on the business agenda isn’t an aspiration: it’s a condition,” he explains.

Andersen Iberia closed 2025 with revenue of 80 million euros, up 13.3% on the previous year. The company recently announced (this March) the addition of the Portuguese firm PRA (Raposo, Sá Miranda & Associados), with which it expects to exceed 110 million euros in Iberian revenue by the end of 2026.

On the technology modernization front, highlights include the adoption of the generative AI legal tool Legora and an agreement with One Million Bot to offer artificial intelligence services to companies, institutions, and public administrations.

Agentic AI is rewiring the SDLC

The next wave of AI in software development goes beyond better code generation: agents are starting to take accountability throughout planning, design, build, test, release and operations. In the teams I work with, this is already changing team dynamics, leadership priorities and what CIOs must do to maintain quality, security and control.  

The biggest shift I see is genuine delegation: AI can now draft backlog items, inspect codebases, propose implementation paths, create tests, summarize reviews and prep releases before teams fully agree on ‘done.’ This marks a shift from AI as an assistant to AI as an active participant. That is why this topic matters for CIOs right now. With Google I/O on May 19–20 and Microsoft Build on June 2–3, attention will continue to rise around AI coding models, agentic development workflows and the platforms that now span planning through operations. Microsoft and GitHub are embedding agents more deeply into the engineering workflow.

Gemini Code Assist, GitHub Copilot’s coding agent, OpenAI Codex and Claude Code all reflect the same direction: AI is beginning to participate across planning, building, testing, reviewing and operations, not just within the editor. Google is extending coding assistance into broader lifecycle support. Amazon is leaning into operationalization. OpenAI and Anthropic are pushing agentic coding and repository reasoning. Newer prompt-to-app platforms such as Lovable and Replit are compressing the path from idea to working application. The market signal is clear: AI is moving beyond code suggestion and into software delivery itself.

For business and technology executives, the strategic question is no longer whether AI can generate output. It is whether the organization can use AI to improve delivery without creating faster paths to weak requirements, inconsistent standards, poor testing and vague governance. That is why I frame this conversation around software delivery rather than relying too heavily on the older SDLC label. SDLC still makes sense, but it sounds too procedural for what is actually happening. Agentic AI is not just accelerating tasks inside a fixed lifecycle. It is rewiring the operating model of delivery. Recent DORA research reinforces what I see in practice: AI tends to amplify an organization’s existing strengths and weaknesses, and the biggest returns come not from the tool alone but from improving the delivery system around it.

Where agentic AI is creating the most value

The first place CIOs should focus is where agentic AI is creating measurable value across the lifecycle. In planning and requirements, AI can already do meaningful first-pass work. Teams can ask it to inspect an existing codebase, summarize dependencies, suggest implementation paths, draft user stories, refine acceptance criteria and surface tradeoffs before engineers begin building. Used well, that reduces administrative drag and improves consistency. It also changes where the bottleneck appears. What I see most often is that teams adopt agentic tools expecting a boost, but the first real bottleneck appears upstream when acceptance criteria are too loose for the agent to interpret safely. The teams that struggle most are not the ones with weak prompts. They are the ones with vague intent. AI amplifies ambiguity as efficiently as it amplifies insight. OpenAI’s guidance for AI-native engineering teams describes agents contributing to scoping, ticket creation and other lifecycle work well before code is merged.
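Since loose acceptance criteria are the first bottleneck, some teams add a mechanical pre-flight check before a ticket reaches an agent. The sketch below is a hypothetical illustration, not a real tool; the vague-term list and the minimum-length heuristic are assumptions:

```python
# Hypothetical pre-flight check: flag acceptance criteria that are too
# vague for an agent to interpret safely. The vague-term list and the
# minimum-length heuristic are illustrative, not from any real tool.

VAGUE_TERMS = {"fast", "intuitive", "robust", "user-friendly", "appropriate", "etc"}

def flag_vague_criteria(criteria: list[str]) -> list[str]:
    """Return the criteria that should go back to the product owner."""
    flagged = []
    for criterion in criteria:
        words = {w.strip(".,").lower() for w in criterion.split()}
        if words & VAGUE_TERMS or len(words) < 4:
            flagged.append(criterion)
    return flagged

criteria = [
    "Response time under 200 ms at p95 for the /search endpoint",
    "The UI should feel fast and intuitive",
]
print(flag_vague_criteria(criteria))  # only the second criterion is flagged
```

A check like this does not replace a conversation with the product owner; it only makes the ambiguity visible before an agent executes it literally.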

A practical model of agentic AI across the software delivery lifecycle.

Vipin Jain

In architecture and design, the real gain is not that AI can produce more diagrams. It can help teams compare options faster, trace dependencies, expose inconsistencies and document decisions with less manual effort. But architecture is not just pattern matching. It is a judgment about resilience, security, compliance, integration, cost and long-term business fit. The strongest teams use AI to explore options while architects define the guardrails, review points and non-functional requirements that the system must adhere to. In an agentic environment, architecture becomes more important, not less, because someone still has to define what the system is allowed to do. What I see in the strongest teams also matches Anthropic’s experience: simpler, well-bounded agent patterns usually outperform elaborate multi-agent complexity when the goal is reliable software delivery.

Build, test and review are changing even faster. GitHub Copilot’s coding agent, Claude Code, Amazon Q Developer, OpenAI Codex and Google’s broader agentic tooling all point in the same direction: the market is moving from AI-assisted coding to AI-assisted flow. In practice, that means agents can decompose work, generate code, create tests, run checks, summarize failures and prepare work for human review. The important metric is no longer lines of code per developer. It is the amount of safe, reviewable work the team can move through the pipeline without increasing rework. That is a more executive-relevant measure because it ties AI to throughput and quality rather than just speed. Benchmarks such as SWE-bench matter here because they test models against real repository-level software tasks, rather than isolated code snippets, which is much closer to the work CIOs are actually trying to improve.

Deployment, operations and maintenance are where the enterprise’s stakes become highest. This is the point that many organizations underestimate. Writing code is visible. Governing agent behavior in production is harder, less glamorous and much more important. In the teams I see gaining the most value, leaders are using AI to support release readiness, detect anomalies, summarize incidents, draft remediation steps and improve documentation around recurring issues. I have also seen teams pilot agents successfully in build, then stall at release because no one had clearly defined what the agent could change on its own, what required approval or who owned rollback when something went wrong. The organizations that make progress are the ones that answer those questions early. That is where trust is built. That is also why the market is shifting toward governed runtime and operations support, not just coding help; Amazon Bedrock AgentCore is one example of that broader move toward secure deployment, monitoring and controlled agent operation at scale.
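The questions that stalled those release pilots (what the agent can change on its own, what requires approval, who owns rollback) can be answered as executable policy rather than a wiki page. This is a minimal sketch under assumptions; the action names and tiers are illustrative, not any vendor’s API:

```python
# Hypothetical autonomy policy: encode "what the agent may do on its own"
# as rules the pipeline can enforce. Action names and tiers are assumed
# for illustration only.

AUTONOMY_POLICY = {
    "open_pull_request": "autonomous",
    "deploy_to_staging": "autonomous",
    "merge_to_main": "needs_approval",
    "deploy_to_production": "needs_approval",
    "rollback_production": "needs_approval",
}

def gate(action: str) -> str:
    """Default-deny: any action not explicitly listed escalates to a human."""
    return AUTONOMY_POLICY.get(action, "needs_approval")

print(gate("deploy_to_staging"))  # autonomous
print(gate("drop_database"))      # needs_approval (unknown, so escalate)
```

The design choice worth noting is the default: unknown actions escalate, so expanding agent autonomy is always an explicit decision rather than an accident.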

How roles and teams are evolving

Agentic AI changes agile teams by shifting what roles contribute. Developers spend less time on first drafts and more time steering AI, validating diffs, hardening edge cases and managing exceptions. Their leverage shifts from typing speed to judgment—knowing what to trust, challenge or escalate. Leaders should recognize this meaningful change in role identity.

Architects also move up the value chain. In traditional environments, they often spend too much time creating static documentation that teams interpret unevenly. In agentic environments, the more valuable work is defining executable guardrails: approved patterns, tool boundaries, policy controls, integration rules and quality gates that both humans and agents can follow. That makes architecture more operational and more consequential.

QA, platform and SRE teams also gain influence. Testing becomes less about writing every case manually and more about building evaluation strategies, validating behavior, instrumenting pipelines and preserving rollback discipline. The closer AI moves to release and operations, the more essential traceability, observability and control become.

Product owners and business analysts also need to raise their game. When requirements are fuzzy, human teams usually compensate through conversation. Agents often execute fuzziness literally. In practice, that means the teams that benefit most from agentic AI are the ones that improve intent, edge-case thinking and acceptance discipline.

One more shift deserves attention: pro-code and low-code are converging. Microsoft’s Copilot Studio, IBM WatsonX Orchestrate, Lovable and Replit are lowering the barrier between idea and execution for a broader set of contributors. That is good news for experimentation and business alignment, but it also raises the risk of software sprawl outside shared architecture and security controls. CIOs should not dismiss these tools as toys, nor let them float free of governance. The most effective organizations will connect pro-code and low-code through common guardrails rather than force a false choice between them.

How agentic AI is shifting the center of gravity for core delivery roles.

Vipin Jain

What should CIOs do now?

As roles and delivery processes evolve, what concrete actions should CIOs consider now? The organizations I see getting the most from agentic AI are not treating it as a coding-assistant bakeoff. They are redesigning the delivery system around it. That starts with intent. Leaders should raise the quality of requirements before work enters agentic pipelines. If the business outcome, constraints and acceptance criteria are unclear, the AI will often produce technically plausible but strategically wrong work.

Next comes guardrails and autonomy. Leaders should define what agents can do on their own, what requires approval, what systems and data they can touch and what evidence the pipeline must capture. This is not bureaucracy for its own sake. It is the difference between acceleration and avoidable damage. Teams need clear security rules, architecture patterns, approval boundaries and rollback paths before they scale autonomy. Google Research offers a useful counterweight to the hype here: more agents do not automatically produce better outcomes, especially when the task design, coordination model and workflow are weak.

The management system leaders need for agentic software delivery.

Vipin Jain

Then comes observability. If an agent drafts code, generates tests, touches data, triggers a workflow or influences a release decision, leaders should be able to see that activity, evaluate it and audit it later. This is where many pilots remain weak. They prove that AI can do something. They do not prove that the organization can repeatedly trust it. That is why more formal evaluation matters. Microsoft’s guidance on agent evaluators is useful here because it focuses on operational signals leaders actually need: task completion, task adherence, intent resolution and tool-call accuracy.
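To make one of those signals concrete, here is a hedged sketch of computing tool-call accuracy from a structured log of agent tool invocations. The log schema is an assumption for illustration; real evaluators define their own:

```python
# Sketch of one operational signal: tool-call accuracy, computed from an
# assumed structured log of agent tool invocations. The schema (fields
# "tool_permitted" and "args_valid") is hypothetical.

def tool_call_accuracy(log: list[dict]) -> float:
    """Fraction of tool calls that used a permitted tool with valid args."""
    if not log:
        return 0.0
    ok = sum(1 for entry in log if entry["tool_permitted"] and entry["args_valid"])
    return ok / len(log)

log = [
    {"tool": "run_tests",  "tool_permitted": True,  "args_valid": True},
    {"tool": "write_file", "tool_permitted": True,  "args_valid": False},
    {"tool": "shell",      "tool_permitted": False, "args_valid": True},
    {"tool": "open_pr",    "tool_permitted": True,  "args_valid": True},
]
print(tool_call_accuracy(log))  # 0.5
```

The value of a number like this is trend, not snapshot: a declining accuracy rate is an early warning that an agent is drifting outside its guardrails.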

Finally, leaders should change how they measure success. Code volume and demo velocity are weak proxies. Better measures include defect escape, rework, release confidence, cycle time for work that reaches production safely and the percentage of work that moves through the pipeline with clear evidence and human accountability. Start with bounded use cases such as maintenance tasks, test generation, documentation, technical debt reduction and lower-risk feature work with strong review. Build supervision muscle before you try to scale autonomy.
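As a minimal illustration of those measures, the helpers below compute defect escape and rework rates from pipeline counts. The function and field names are assumptions; the point is that the inputs come from delivery data, not code volume:

```python
# Illustrative delivery metrics from the measures above. Names and
# signatures are assumptions; real pipelines would pull these counts
# from issue trackers and deployment records.

def defect_escape_rate(found_in_prod: int, found_total: int) -> float:
    """Share of defects that escaped to production instead of being caught earlier."""
    return found_in_prod / found_total if found_total else 0.0

def rework_rate(reworked_changes: int, total_changes: int) -> float:
    """Share of changes that had to be revised after review or release."""
    return reworked_changes / total_changes if total_changes else 0.0

print(defect_escape_rate(3, 25))  # 0.12
print(rework_rate(8, 40))         # 0.2
```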

The executive takeaway

The strategic mistake I see most often is treating this moment as a tool refresh or a beauty contest among AI coding platforms. Google, Microsoft, Amazon, OpenAI, Anthropic and the next wave of prompt-to-app players matter because they signal where the market is going. But the winning question for leaders is not which demo looks smartest. It is whether the organization is redesigning software delivery so AI can contribute without weakening quality, security or control.

More generated code is not the prize. Better software delivery is. The enterprises that win will connect business intent to engineering execution more tightly, instrument agent behavior more rigorously and redesign team roles around judgment, supervision and accountability. They will make AI part of the team, not just another tab in the IDE.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

The rise of the double agent CIO

CIOs of B2B SaaS companies are just as responsible for representing technology as they are for running it. In an environment where the buyer is often another CIO, however, the role becomes something fundamentally different. It’s no longer confined to internal execution. It extends into the market, customer conversations, and the moments that ultimately shape revenue, trust, and long-term relationships. So the modern SaaS CIO operates as a true double agent, running the business from within while representing it to the market.

Box CIO Ravi Malick sits squarely in that duality. After serving as CIO of Vistra Energy, a company defined by legacy systems and industrial scale, he stepped into a digitally native, founder-led SaaS business in 2021 where technology is inseparable from the business itself. He now leads internal tech while engaging directly with CIOs of companies evaluating Box, bringing a perspective shaped by both worlds. What stands out in Malick’s perspective isn’t how different the role is, but how much more expansive it’s become.

What stays the same, what evolves

The core tension of the CIO role hasn’t changed. “There’s always more demand than you have the capacity or funding for,” Malick says. Prioritization, alignment to business strategy, and the constant need to modernize while operating at scale still define the job. The difference, however, is the environment in which those challenges now exist.

At Box, Malick operates inside a workforce where technology fluency is high and expectations are even higher. “I partner with 3,000 technologists who love to solve problems with technology,” he says. That creates a powerful advantage, but also a new kind of pressure. Demand for tools, platforms, and innovation is constant, and AI has only accelerated it.

That dynamic is further shaped by Box’s leadership. As a founder-led company, technology conversations extend well beyond the CIO’s organization. “It’s a different dynamic when your CEO is a founder and a technologist,” Malick says. “You’re as much a steward of incoming ideas as you are a generator of them.” That relationship creates both pace and perspective, requiring the CIO to operate as both orchestrator and partner in shaping how technology evolves across the business.

In that context, the CIO is leading within a highly informed, highly engaged organization where expectations for speed and innovation are constant. The challenge isn’t modernization as a one-time effort, but ensuring the tech stack continuously evolves and scales with the business.

Balancing the internal mandate with external pull

What truly differentiates the role in SaaS is what happens outside the enterprise, and the pressure that comes with it. The CIO is still accountable for running IT, ensuring security, and maintaining operational excellence. At the same time, there’s growing expectation to show up externally, engage customers, and directly support revenue.

Malick doesn’t present that balance as seamless. “It’s a daily challenge,” he says. “But sometimes not balanced so well.” There’s a constant push and pull between internal priorities and external demands, and in many cases, revenue pulls hard. The opportunity to influence deals, build relationships, and contribute to growth elevates the strategic importance of the role, but it doesn’t remove the responsibility for the day job.

What allows Malick to operate effectively in both worlds is the strength of the foundation behind him. He points to the maturity of his leadership team, operating model, and internal processes as critical enablers. With clear structures, strong leaders, and disciplined execution in place, he has the bandwidth to spend meaningful time externally. It isn’t always a perfect balance, but it’s a deliberate one.

From operator to peer in the market

Through Box’s customer zero program, Box on Box, Malick operates as both CIO and practitioner, bringing firsthand experience into customer conversations. “I can take how we build at Box to customer conversations,” he says. That perspective shifts the dialogue away from product positioning, and toward the realities of execution.

In a market where CIOs are constantly being pitched, that distinction carries weight. “They want to know how it works from the perspective of someone managing it,” he says, adding he leans into that by being transparent about both successes and missteps. “We share the challenges and false starts we’ve managed through.”

That candor builds credibility, and credibility builds trust. After all, people buy from people they trust, and in enterprise technology, says Malick, peer-to-peer conversations are a faster path to trust than demos. 

The external dimension of the role also holds a symbiotic relationship with internal responsibilities. Malick brings customer conversations back into Box, using them to inform how he thinks about technology decisions and broader strategy. He describes the CIO community as uniquely open, even therapeutic, where leaders candidly share challenges and exchange ideas. That openness creates a feedback loop where external insights sharpen internal execution, and internal experience strengthens external credibility.

What this means for the CIO role

What makes Malick’s perspective especially relevant is that the lesson isn’t limited to SaaS. As technology becomes more central to growth, customer experience, and business model change, CIOs in every industry are being pulled closer to the front office. The shift is about becoming more fluent in how technology translates into trust, speed, and commercial impact, not just becoming more visible.

For Malick, one of the biggest lessons is the role now demands a different kind of leadership than many CIOs were originally trained for. “Don’t make assumptions, and don’t assume something’s easy or intuitive,” Malick says. In a world where technology is reshaping how people work in real time, communication becomes a strategic discipline. CIOs have to explain change, absorb feedback, and keep translating between technical possibility and business reality.

The rise of AI adds another dimension to the double agent role. CIOs are building the content foundation that AI needs to be effective, and ensuring the organization can experiment with AI without sacrificing compliance or control. In a fast-paced technology company, ideas, opinions, and new technologies come from every direction. So the CIO isn’t simply the expert with the answers but often the one managing velocity itself, deciding where to push and where to hold.

“You have to figure out when you need to be in the fast lane and when you don’t,” Malick says. That kind of judgment is becoming more critical as technology moves to the center of the business, and it’s another reason CIOs are stepping into CEO and COO roles.

As AI accelerates the pace of change and creates the potential to decouple revenue growth from headcount growth, that ability to manage speed, scale, and tradeoffs becomes a defining leadership capability. That’s why the SaaS CIO should matter to leaders far beyond software. With AI transforming every industry, the role is becoming a preview of where the profession is headed — not just to run technology, but help shape how the company grows, how it shows up in the market, and how it earns trust. The double agent CIO may sound like a SaaS phenomenon. Increasingly, though, it looks more like the future of the job.

AI won’t fix tech talent gaps — but YOU can

Every CIO I talk to — and I talk to a lot of them — agrees that skills-first hiring makes sense. And with AI now embedded in nearly every stage of the hiring process, from resume screening to candidate matching, many assume the technology will finally make it happen at scale.

It won’t. AI can accelerate hiring decisions, but it can’t fix the underlying systems that power those decisions.

Despite initial progress in removing college degree requirements from job postings, many organizations are still getting it wrong — and AI is giving them new ways to get it wrong faster. Agreeing on a principle isn’t the same as operationalizing one. Even when there’s a skills-first hiring strategy in place, execution fails if IT, HR and business leaders aren’t aligned on outcomes, accountability and measurement. When AI screening tools are layered on top of misaligned systems, the result isn’t smarter hiring; it’s automated bias with a veneer of objectivity.

Why skills-first hiring became a buzzword

The idea of prioritizing skills in hiring decisions isn’t new. Competency-based hiring has been discussed in talent management circles since the 1970s. Over the last two decades, the growing technology skills gap, the explosion of non-traditional learning pathways and the broader recognition of “degree inflation” pushed skills-based hiring into the mainstream. Large employers publicly dropped degree requirements. States followed. Everyone was buzzing about skills-first hiring.

But buzz doesn’t change systems. Data from Indeed showed a decrease in job postings with college degree requirements between 2019 and 2024, but by November 2025, the number swung back up, nearly erasing the gains of the previous five years.

Meanwhile, the skills that matter most now — prompt engineering, AI tool fluency, the ability to scope and complete AI-augmented projects — are being developed outside traditional degree programs. That makes degrees an even worse proxy for career readiness.

Announcing that an organization is “skills-first” without redesigning the infrastructure that surrounds hiring — job descriptions, applicant screening, recruiter training, interview rubrics and onboarding frameworks — doesn’t change practices. A recent survey by the University of Phoenix found that 69% of hiring decision makers believe there’s still too much focus on college degrees, with little clarity on what they should evaluate instead.

The 3 most common failure points

From my experience in working with CIOs on their entry-level talent pipelines, skills-first initiatives tend to break down in one or more of these places: job descriptions, screening tools and internal skepticism.

First, take job descriptions. Hiring managers tend to default to historical templates — pasting in degree requirements and years-of-experience thresholds that were never validated against actual job performance and are even less relevant with AI in the mix.

Second, screening tools. Nearly 90 percent of companies are using some form of AI to screen candidates, expecting greater efficiency and less bias. But AI screening tools learn from existing hiring data — which, if biased, just means that bias is now automated. Patterns in successful candidates’ backgrounds get baked into future decisions, except now, these decisions appear “data-driven” and neutral instead of more obviously predicated on certain hiring managers’ preference for college graduates.
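A toy example can make the failure mode concrete. In the sketch below, a naive rule "learned" from biased historical decisions reproduces the degree preference while looking data-driven; the candidate fields and the hardcoded rule are illustrative assumptions, not a real screening model:

```python
# Toy illustration, not a real screening tool: a naive rule derived from
# biased historical hires reproduces the degree preference. The fields
# and the hardcoded rule are assumptions for the sketch.

history = [
    {"degree": True,  "skills_score": 55, "hired": True},
    {"degree": True,  "skills_score": 60, "hired": True},
    {"degree": False, "skills_score": 90, "hired": False},
    {"degree": False, "skills_score": 85, "hired": False},
]

def learned_rule(candidate: dict) -> bool:
    # In this history, "degree" perfectly separates hired from rejected,
    # so a naive learner keys on it and never looks at the skills score.
    return candidate["degree"]

strong_candidate = {"degree": False, "skills_score": 95}
print(learned_rule(strong_candidate))  # False: screened out despite skills
```

Nothing in the pipeline looks biased to a casual observer; the rule simply mirrors the past, which is exactly the problem.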

The third failure point is internal skepticism — and the training gap that feeds it. According to the survey by the University of Phoenix, one in four non-HR managers receives no training before interviewing job candidates, even when the final hiring decision is theirs. Without shared definitions of what “skills-first” means and clear accountability, the initiative collapses under the weight of individual discretion.

What CIOs see that others miss

CIOs are often closer to the consequences of a bad hire than anyone else in the C-suite. When a cybersecurity analyst freezes during an incident response, the CIO gets a front-row view of the damage a skills gap can cause.

CIOs are also watching AI redefine what “qualified” looks like in real time. The engineer who deploys an agentic AI system to automate monitoring, or the analyst who chains multiple AI tools into a custom workflow — these people deliver outsized value, and their qualifications often look nothing like what a traditional job description demands.

And CIOs have a keen understanding of issues with tech talent pipelines. Projects slip while niche technical roles sit open for six months or more, even as candidates from rigorous programs — people with the specific skills for the job — are filtered out.

How successful CIOs operationalize skills-first hiring

Successful CIOs get specific. They work with their teams to define exactly which skills matter for each role — and they validate those definitions against the performance of current, thriving employees.

That taxonomy should include demonstrated experience with AI tools and platforms: Has the candidate built or deployed an AI agent? Can they work across multiple AI tools? Have they completed projects requiring AI-augmented problem-solving? These concrete, observable skills predict performance far more than a degree ever could.

Second, they establish shared metrics across IT and HR. Organizations that get this right track 90-day performance reviews, first-year retention and promotion velocity alongside traditional recruiting metrics. In its New Collar Program with Sentara Healthcare, TEKsystems worked with company leaders to fill open big data positions through a skills-based cohort model and achieved 80% retention one-year post-training.

Third, these CIOs build direct relationships with employer-aligned training pipelines before a role opens. Bank of America invested nearly $40 million in workforce development initiatives in 2025 alone and partnered with more than 600 nonprofits across the US.

At Per Scholas, our head of IT, Tyrone Washington, makes it clear that while technical skills might get you through the door, it’s “smart skills” — discernment, emotional intelligence, complex problem-solving and agility — that build a career in an AI-driven landscape.

What the data shows

Skills-first hiring, when paired with structured onboarding and development pathways, is not just a talent acquisition strategy — it’s a retention strategy. And higher retention contributes directly to the bottom line, as the fully loaded cost of replacing a technical employee ranges from one to two times their annual salary.

In one employer partnership deploying skills-trained talent, a TEKsystems client came out $238,000 ahead in the first year after accounting for program costs, with a projected annual return of over $1.2 million. IT leaders reported that skills-trained talent becomes productive measurably faster than early-career hires from conventional pipelines.
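A back-of-the-envelope sketch shows how that retention math compounds. All inputs here (salary, attrition rates, the 1.5x replacement multiplier) are made-up illustrations, not figures from the programs above:

```python
# Hypothetical retention arithmetic. The 1.5x multiplier sits in the
# article's stated 1x-2x range; all other inputs are invented examples.

def replacement_cost(salary: float, multiplier: float = 1.5) -> float:
    """Fully loaded cost of replacing one technical employee."""
    return salary * multiplier

def retention_savings(hires: int, baseline_attrition: float,
                      improved_attrition: float, salary: float) -> float:
    """Savings from the replacements avoided at the improved retention rate."""
    avoided = hires * (baseline_attrition - improved_attrition)
    return avoided * replacement_cost(salary)

# 20 hires, attrition falling from 40% to 20%, $100k salary:
print(retention_savings(20, 0.40, 0.20, 100_000))  # 600000.0
```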

How CIOs can lead the shift — even without owning HR

CIOs who are moving the needle are piloting skills-based hiring for one or two roles, tracking outcomes rigorously and using that data to make the case for broader adoption. They’re building external partnerships before they need them. Bank of America, a Per Scholas long-term partner, knows that our graduates are team players, lifelong learners and motivated employees; our graduates’ certifications (through CompTIA and Google) validate that they have the technical know-how.

Every quarter that technical roles sit open has a measurable impact on project delivery, team capacity and competitive positioning. Surfacing these costs — backed by data — is something CIOs are uniquely positioned to do.

The bottom line

Skills-first hiring will remain a well-intentioned abstraction unless CIOs treat it as an operating model — one that reflects how AI is reshaping the skills organizations need.

The candidates who can demonstrate hands-on experience in building and deploying AI agents, integrating multiple AI tools into a workflow or evaluating when AI can help are the ones who will drive value. But they’ll keep getting filtered out unless CIOs get specific about skill definitions, align IT and HR around shared metrics, and build employer-aligned pipelines. Bank of America and TEKsystems didn’t achieve their results by endorsing a principle — they achieved them by building systems.

Luckily, building systems is something that CIOs know how to do well.


Agentic AI is reshaping business ecosystems — CIOs must choose their role carefully

From systems to ecosystems to agents

A shift has been underway for some time as value creation moves from slow, firm-centric processes to rapid, co-created outcomes across a network of participants. Customers don’t experience systems; they experience outcomes. Those outcomes are assembled across a network of partners, platforms and capabilities that must work together as one.

Consider NVIDIA. Its Blackwell platform is not simply a product; it is an ecosystem. Chips, software frameworks, developer tools and partner innovations combine to deliver AI capability at scale. What appears seamless to the customer is a highly coordinated system of interdependent contributors.

The CIO’s responsibility is to ensure alignment among technology, agents and the ecosystem’s role.

That requires the agentic AI strategy to shift from static alignment to continuous alignment, in which architecture, governance and intelligent systems evolve in real time.

This shift is at the core of Digital Momentum: Architecture that actively shapes how value is created, adapted and delivered in an outcome-oriented world.

Not all agents are created equal

One of the biggest mistakes organizations are making right now is treating agentic AI as a plug-and-play solution, assuming all agents, whether internal or ecosystem-facing, can be designed the same way. Context defines the agent, and context determines how it must be designed.

However, there’s a fundamental difference between internal agents, which operate inside the enterprise’s own systems and controls, and ecosystem-facing agents, which act across organizational boundaries.

These ecosystem agents don’t just execute tasks; they negotiate, coordinate and influence results in environments that are partially controlled and partially influenced by stakeholders.

Ecosystem agents must be designed with precision. They cannot be general-purpose actors with broad autonomy or poorly defined functionality. To function effectively, an agent’s design must reflect its role in the ecosystem:

  • Limited, purpose-built context so they can act quickly without being overwhelmed or unpredictable.
  • Clearly defined responsibilities, tightly aligned to a specific mission.
  • Bounded authority, ensuring decisions stay within acceptable risk thresholds.
  • Embedded governance, built into how they operate and not layered on afterward.

Research into AI-driven organizations consistently shows that intelligent systems perform well only when aligned with operating models and value delivery. The same principle applies to agentic systems. Without alignment, autonomy doesn’t create value; it creates instability.
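One way to picture "bounded authority" with "embedded governance" is as a runtime check rather than a policy document. The sketch below is hypothetical; the role names, monetary limits and the Decision shape are all assumptions for illustration:

```python
# Hypothetical sketch of bounded authority: an ecosystem agent may only
# commit to decisions below a limit set by its role. Roles, limits and
# the Decision shape are assumptions, not a real framework.

from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    value_at_risk: float  # e.g. contract value the agent would commit

ROLE_LIMITS = {"orchestrator": 50_000, "complementor": 10_000,
               "supplier": 5_000, "consumer": 1_000}

def within_authority(role: str, decision: Decision) -> bool:
    """Embedded governance: decisions above the role's limit escalate."""
    return decision.value_at_risk <= ROLE_LIMITS.get(role, 0)

d = Decision("auto-renew data subscription", 2_500)
print(within_authority("supplier", d))  # True
print(within_authority("consumer", d))  # False
```

Because the governance lives inside the dispatch path, it cannot be "layered on afterward" or bypassed by an agent acting outside its mission.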

4 agentic role types that define agentic strategy

To operate effectively in an agent-driven ecosystem, CIOs must be explicit about the role their organization is playing and how agents fill those roles:

1. Orchestrator agent: Designing the system

Orchestrators define how value is assembled across the ecosystem. They control integration points, set standards and often own the customer relationship.

What it requires

  • Strong architectural control over interfaces and workflows
  • Coordination of agent behavior at scale
  • Governance embedded directly into runtime execution

CIO decision lens

  • Where to enforce control vs. allow flexibility
  • How agents interact, trigger actions and make decisions
  • What governance must be codified into the system

2. Complementor agent: Differentiating at the edge

Complementors extend the ecosystem with specialized capabilities, providing the differentiated experiences and domain expertise that matter most.

What it requires

  • Deep, defensible domain expertise.
  • Context-aware agents that operate within orchestrated workflows.
  • Rapid adaptability as the ecosystem’s needs evolve.

CIO decision lens

  • Where to differentiate vs. conform.
  • How much autonomy agents should have within external systems.
  • How to expose capabilities to remain indispensable.

3. Supplier agent: Powering the solution

Suppliers provide the infrastructure and core services that ecosystems depend on.

What it requires

  • High reliability and scalability
  • Standardized, consumable services
  • Consistent performance at ecosystem scale

CIO decision lens

  • Where to compete on cost, performance or specialization
  • How to expose services for reuse
  • Where to invest to avoid commoditization

4. The consumer agent: Using the solution

Consumer agents act as customer proxies, selecting and presenting orchestrated solutions on the customer’s behalf.

What it requires

  • Flexibility across providers and platforms
  • Strong governance over external dependencies
  • Trust frameworks for reliable outcomes

CIO decision lens

  • How much control to retain vs. delegate
  • How to govern external agents
  • How to ensure predictable outcomes

The bottom line for CIOs

The mistake many organizations make is designing agents generically. Agent behavior, authority and governance must be shaped by the role you play in the ecosystem.

Get that alignment right, and agentic AI becomes a force multiplier.
Get it wrong, and you introduce instability at the very point where value is created.

Agentic strategy: Aligning AI to your evolving role in the ecosystem

With deployed agents, CIOs need to ask the following question: How will those agents remain aligned to our evolving role in the ecosystem as strategic priorities shift?

AI is a continuous expression of how your organization creates value. As markets shift, partnerships evolve and strategy changes, your role in the ecosystem must evolve as well, and your agents must adapt to it. Figure 1 illustrates these roles and how they interact dynamically across the ecosystem.

The agent ecosystem

Brice Ominski

When organizations fail to realign agent behavior as their role evolves, misalignment sets in and the consequences compound quickly:

  • Orchestrators lose control over increasingly complex ecosystems
  • Complementors become interchangeable as differentiation erodes
  • Suppliers are pushed toward utility status, competing primarily on cost
  • Consumers lose predictability in outcomes they depend on

In an agentic world, competitive advantage doesn’t come from deploying agents; it comes from continuously realigning them.

Control value and risk in agentic systems

As ecosystems become agent-driven, risk doesn’t disappear; it changes shape. CIOs should watch for the following risks:

  • Platform dependency. Your operating model becomes tied to another organization’s ecosystem
  • Value imbalance. Orchestrators capture disproportionate value
  • Architectural lock-in. Integration and agent decisions limit future flexibility
  • Capability absorption. Differentiated capabilities get embedded into the platform
  • Trust gaps. Autonomous agents require stronger identity, auditability and policy enforcement
  • Intelligence displacement. Control over data and learning loops shifts elsewhere

Ignoring these risks doesn’t reduce them; it only delays the point at which they become constraints.

What CIOs should do next to prepare their agentic strategy

The organizations that succeed won’t be the fastest adopters. They will be the most deliberate.

  1. Make your ecosystem role explicit. Define whether you are orchestrating, extending, supplying or assembling value.
  2. Map control vs. dependency early. Understand where decision authority resides and where it may erode.
  3. Design agents as bounded actors. Scope, authority and decision rights must be clearly defined.
  4. Embed governance and trust into execution. Governance must be codified, enforced at runtime and continuously observable.

Closing perspective

In this environment:

  • Orchestrators shape how value is assembled
  • Complementors differentiate where it matters most
  • Suppliers provide the foundation
  • Consumers determine how value is composed

The CIO’s responsibility is to ensure alignment across all three: technology, agents and ecosystem position.

This shift toward continuous, outcome-driven alignment is at the core of what I explore in Digital Momentum — how architecture evolves from supporting the business to actively shaping value creation in real time.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

How NOV is moving from FOMO to calculated scaling

For decades, the industrial sector has operated on a simple mantra: live by automation, die by automation. In the oil and gas industry, where precision is measured in millimeters and safety in lives, automation is a necessity, not just a nice-to-have. But as gen AI sweeps through the enterprise, a new challenge has emerged: how should a global leader in energy services transition from experimental chatbots to industrial-grade AI without compromising safety or security?

Here, Alex Philips, CIO of NOV, formerly National Oilwell Varco, discusses implementing OpenAI and securing it with zero trust for 25,000 employees, and why the next phase of agentic AI requires a fundamental shift in how to view human expertise and digital safeguards.

From FOMO to ROI

Like many global companies, NOV’s initial move into gen AI was driven by executive pressure fueled by fear of missing out. Philips remembers the early talks with his CEO about the investment.

“I said we have this opportunity, and it costs this much,” he says. “He asked about the ROI, and I replied that I couldn’t calculate it, nor say what it’d replace or displace in cost. But I couldn’t say any of that for email either.”

Just as no modern business can function without email, even without a direct line-item ROI, Philips argues that LLMs will soon become the standard for employee productivity. Currently, NOV reports about 50% of its workforce actively use the tool to enhance productivity.

The results, though qualitative, are profound. Philips says that response times for urgent customer requests, for instance, have plummeted, language barriers are crumbling, and employees are tackling complex analyses once considered out of reach.

The six-month validation lesson

One example Philips details involves an engineer who spent six months mastering a highly specialized skill. With ChatGPT, the engineer was able to replicate that six-month learning process in just 10 minutes.

And while the engineer’s initial reaction was to think he had wasted six months of his life, Philips’ response was to show him that those six months were what enabled him to validate what the AI told him. “This is a great example of why humans are still needed in the AI loop,” says Philips. “AI execution without human validation can lead to errors that cost companies significant time and money.”

This underscores a crucial pillar of NOV’s AI strategy: human accountability. In an industrial setting, “the AI told me to” is never an acceptable excuse. Whether designing a drill bit or automating a workflow, the end user remains responsible for the output.

Securing the Wild West of shadow AI

As AI becomes more widespread, shadow AI poses a significant security risk. To address this, NOV uses Zscaler to route all traffic, ensuring visibility and control. By doing so, the company can:

  • Redirect users: If an employee tries to use a non-approved LLM, they’re redirected to a page that explains NOV’s policy, and directed to the approved enterprise OpenAI instance.
  • Monitor SaaS evolution: Many authorized SaaS applications are now adding agentic features during contract periods. Zscaler provides the visibility needed to identify these changes before sensitive IP is fed into an unvetted model.
  • Enforce data privacy: Preventing intellectual property from leaking into public training sets is the first step in any industrial AI deployment.
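The redirect behavior described above can be illustrated with a toy gateway rule. This is a simplified sketch, not Zscaler’s actual API; the hostnames and policy URL are hypothetical placeholders.

```python
# Hypothetical hostnames and policy URL, for illustration only.
APPROVED_LLM_HOSTS = {"enterprise-openai.nov.example"}
KNOWN_LLM_HOSTS = {"enterprise-openai.nov.example",
                   "public-llm.example",
                   "chat.other-llm.example"}
POLICY_PAGE = "https://intranet.nov.example/ai-policy"

def route_request(host: str) -> str:
    """Toy shadow-AI gateway rule: approved LLM traffic passes,
    unapproved LLM traffic is redirected to the policy page,
    and all other traffic flows normally."""
    if host in APPROVED_LLM_HOSTS:
        return "allow"
    if host in KNOWN_LLM_HOSTS:          # known but unapproved LLM endpoint
        return "redirect:" + POLICY_PAGE
    return "allow"                       # non-LLM traffic is unaffected

assert route_request("enterprise-openai.nov.example") == "allow"
assert route_request("public-llm.example").startswith("redirect:")
assert route_request("ordinary-saas.example") == "allow"
```

The design choice worth noting is that the gateway blocks nothing outright; it converts a policy violation into a teachable redirect toward the sanctioned tool.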

The shift to agentic AI

In software development, NOV already benefits from AI-assisted coding, where AI works alongside developers who accept about 32% of AI suggestions. “We’re now beginning to explore the next evolution of full agentic coding,” says Philips, adding that this next stage truly supercharges teams, enabling them to move faster and better meet customer demand for innovation.

However, this efficiency feeds the dilemma of a widening talent gap. The challenge moving forward: if all the low-level, entry-level tasks can be automated, what’s the best way to develop skilled workers? “I don’t know how we’ll adapt to it, but we’ll figure it out,” he says.

Safety first

In the oil field, some processes are too critical to be left entirely to a black-box algorithm. Philips is adamant that for safety issues, AI remains an advisor, not a decider. NOV uses AI-powered vision to monitor red zones, or dangerous areas on a drilling rig. If the AI detects a person in a restricted area, it can trigger an emergency stop. However, for actual drilling operations, the final call remains with an onsite human operator. “You can’t have a hallucination,” he says. “You can’t say it’s right 90% of the time. It has to be all the time.”
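The split of authority Philips describes, in which AI may trigger a safety stop on its own but drilling decisions always rest with a human operator, can be sketched as a simple gating function. This is a hypothetical simplification of the logic, not NOV’s actual control system.

```python
def handle_event(event: str, human_approved: bool = False) -> str:
    """Simplified sketch of the described division of authority:
    the vision system may act unilaterally to stop work for safety,
    but operational actions always require a human operator's call."""
    if event == "person_in_red_zone":
        return "emergency_stop"      # AI is allowed to act alone: fail safe
    if event == "start_drilling":
        # AI never initiates drilling; it only proceeds on human approval.
        return "proceed" if human_approved else "await_operator"
    return "no_action"

assert handle_event("person_in_red_zone") == "emergency_stop"
assert handle_event("start_drilling") == "await_operator"
assert handle_event("start_drilling", human_approved=True) == "proceed"
```

Note the asymmetry: the AI’s autonomous authority covers only the action that removes risk, never the action that creates it.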

NOV’s journey shows that transitioning to industrial-grade AI isn’t just about choosing the best model but building a framework of trust, transparency, and responsibility. By using Zscaler for governance and GitHub Advanced Security for code validation, NOV is moving toward a future where AI becomes more essential to the oil industry.

“Development teams should produce twice the output with half the people in half the time,” he says. “The only remaining question is how do we train the next generation of developer experts to control the machines that do the work.”

Developing people who can drive both business and digital: Inside SGH Group’s approach to DX talent development

Structural change is shaking the logistics industry: Why “DX talent development,” and why now

Japan’s logistics industry has reached a point where business as usual is no longer viable. Stricter working-hour regulations symbolized by the “2024 problem,” chronic labor shortages, and rising shipment volumes driven by the growth of e-commerce are converging, and the traditional logistics model that relied on frontline workers’ extra effort is clearly reaching its limits.

Japanese logistics has long been sustained by the seasoned judgment and flexibility of frontline workers. But as labor shortages deepen, processes that depend on particular individuals have become a major risk, making standardization and visibility of operations urgent. Frontline workloads grow, capacity for improvement shrinks, and the pace of transformation slows. Unless this vicious cycle is broken, a sustainable logistics operation cannot be built.

SGH Group has confronted this reality head-on. The group embraced DX early, rolling out one initiative after another to raise frontline productivity, including delivery-route optimization, fully digitized delivery slips, and warehouse automation. The technology deployments themselves produced results, but the process surfaced a different problem.

That problem is a shortage of “DX talent.” Deploying a system achieves nothing if the frontline doesn’t use it; the reform doesn’t take hold. The group was critically short of people who could accurately grasp frontline problems, design digital solutions, and guide implementation in ways suited to the field. Technology alone doesn’t change the frontline, and frontline knowledge alone doesn’t advance DX. The importance of people who bridge the two became increasingly clear.

Against this backdrop, SGH Group concluded that talent development is indispensable to DX and decided to tackle it group-wide. What determines DX success is not technology but people who understand the frontline and drive business transformation with digital tools; that recognition became the starting point of the group’s DX talent development.

Strategy, planning and implementation in lockstep: SGH Group’s distinctive “three-in-one” DX structure

Behind the group’s DX making real progress on the ground is its distinctive “three-in-one DX structure.” Many companies proclaim DX, but strategy and frontline, planning and systems often end up disconnected, and initiatives stall at the concept stage. To avoid that trap, the group designed DX from the outset as a cross-organizational effort.

The three-in-one structure consists of SGHD, the operating companies, and SG Systems. SGHD draws the group-wide DX strategy. As the command center that sets the direction of DX in light of social issues, customer needs, and changes in the business environment, it steers from the perspective of group-wide optimization.

Receiving that strategy, each operating company plans concrete transformation themes. Frontline issues differ by business, whether parcel delivery, logistics, or international transport. With operating companies that know the frontline taking the lead, DX emerges not as armchair theory but as reform rooted in daily operations. Defining problems from the frontline and charting the direction of improvement is what makes DX effective.

SG Systems then turns those plans into actual systems. With roughly 1,000 IT professionals, it is the core player handling application development, data-platform construction, and AI implementation. What’s distinctive is that it isn’t called in after plans are fixed: its people are seconded or rotated into group companies and join projects from the earliest stages. By refining concepts with technical constraints and possibilities in mind, it has realized “frontline-driven DX.”

With the three working as one rather than in silos, the distance between planning and implementation has shrunk dramatically. Frontline voices feed directly into systems, and improvement cycles accelerate. The group also actively collaborates with outside partners, including Google Cloud Japan (GCJ), flexibly adopting the best technology rather than insisting on building everything in-house.

Under this structure, initiatives such as fully digitizing delivery slips through AI-OCR development, and AI-driven route optimization using the digitized data, have raised visibility for both the frontline and management. This three-in-one structure is why SGH Group’s DX has earned strong external recognition.

Toward people who can move between business and digital: The full picture of DX core talent development

As the three-in-one structure began to function, SGH Group’s next challenge became clear: the people who keep the machinery running. Even with a strategy in place, frontline issues organized, and the technology platform ready, DX doesn’t accelerate without people who connect them and push projects forward. The shortage of hybrid talent who understand both the frontline and digital was becoming a bottleneck to growth.

SGH Group classifies the human capital underpinning its competitive advantage into three broad categories: “core business talent,” “solution talent,” and “group management talent.” Among these, “solution talent” is defined as the people driving growth engines such as the advancement of total logistics. DX talent is positioned as one type of solution talent, charged with using technology to raise productivity and upgrade services.

DX talent is further developed along two tracks: “DX planning talent” and “DX implementation talent.” DX planners, drawn mainly from the operating companies, organize frontline issues, capture customer needs, and formulate DX themes. DX implementers, developed chiefly within SG Systems, support transformation on the technical side: application development, data platforms, and AI. Close collaboration between the two is what keeps business and digital from splitting apart.

“We still have only a few dozen DX planners, so over the three years to 2027 we want to build that to 150. Of the 1,000 people at SG Systems who make up our DX implementation talent, about 100 are certified as specialists. Ultimately, from among our DX planners and implementers, we want to develop DX core talent with deep knowledge of both business and digital.”

So explains Kazuki Nambu, general manager of the corporate planning department at SGHD (title at time of interview).

The development program is designed in stages and layers. First, basic DX training for all employees raises the floor of digital literacy. In the subsequent advanced DX training, people selected by self-nomination or recommendation learn customer interviewing, problem framing, and proposal development in workshop format. A distinctive feature is that, rather than pure classroom study, it uses the company’s own operations as the material, so learning translates immediately into practice.

Mechanisms such as an accelerator program and internal business contests then carry ideas born in training toward commercialization. The emphasis on not letting development end with training, but connecting it to real work and results, expresses the philosophy behind SGH Group’s approach to DX talent.

Logistics DX creates value only when accumulated frontline knowledge is combined with digital. What’s needed for that is not generic “DX talent” but DX core talent rooted in SGH Group’s own business structure.

When people move, DX accelerates: The next stage opened by talent rotation

DX core talent development is already producing steady results. In surveys of advanced DX training participants, many employees report applying what they learned on the job, and many say their customer interviewing and problem framing have sharpened. On the ground, data-backed improvement proposals and cross-departmental discussions are increasing, and DX is coming to sit on the natural extension of daily work.

The challenges are equally clear. The biggest is that producing people with deep knowledge of both business and digital takes time. Frontline understanding and digital skills each take years to acquire; neither comes overnight. Frontlines accustomed to established ways of working can also show psychological resistance to change. Overcoming these walls requires not just programs but continuous mindset change.

SGH Group’s next move is strategic talent rotation. SG Systems employees have already been seconded to operating companies, returning to digital implementation with frontline experience. Going forward, the plan is to expand this flow more systematically and institutionalize it as a mechanism for moving back and forth between operating companies and digital functions.

The aim of rotation is clear: apply knowledge gained on the frontline to DX planning, and return the perspective gained in digital functions to frontline improvement. Creating that cycle develops people who can drive both business and digital on their own. It is positioned not as routine personnel transfers but as a strategic development measure to accelerate DX.

“When people move, the organization changes,” says Nambu. “We want to develop people who can drive both business and digital, and accelerate DX across the whole group.”

Tatsuya Ichishi, vice president and team manager at Gartner Japan, puts it this way:

“Some envision a future where AI agents do the work in place of human employees. Certainly, some jobs will be replaced by AI, but there is plenty of work only humans can do. How many augmented workers a company can secure, people who leverage AI, raise productivity, and deliver more high-value results, will determine corporate competitiveness from here on. Companies need to create environments where employees can learn and apply AI, and develop augmented talent that works alongside it.”

DX ultimately means something only when people drive it. People grow, people move, and organizations change; that accumulation becomes the force shaping the future of logistics. SGH Group’s DX talent development does more than strengthen its own competitiveness; it offers lessons for the logistics industry as a whole.

Column | Most multi-vendor project failures begin with governance

Waiting for a vendor to fix the program on its own is not a strategy. It is a quietly accumulating cost, and the burden keeps growing while everyone in the room maintains the illusion that the process is working.

I have experienced both situations: one where the client already knows something is wrong and needs the evidence and language to act, and one where the client doesn’t know yet. In the latter case, the program looks manageable, the vendor looks professional, and steering committee meetings run on time. But the warning signs are already visible everywhere, waiting for someone to name them.

That second situation matters more, because there is still time to respond. Yet most companies don’t move until that window begins to close.

The warning signs most companies miss appear in the design phase

The earliest signals are not schedule slips or failed deliverables. They show up in language. When the phrase “path to green” starts appearing in status reports or steering committee materials, the project has already admitted it is not green. It has shifted from managing execution to managing the narrative.

Watch what the steering committee is actually doing. If meetings focus on reporting last month’s results rather than next month’s outlook, leadership has been reduced from a decision-making body to a mere audience. In that case, the vendor effectively controls the agenda, the framing of the message, and the cadence of disclosure.

The most serious signal is when the program sponsor learns of major problems through internal reports rather than from the vendor. That is more than a communication problem; it is a deliberate choice about what information to share. When this pattern appears in SAP, Oracle, or Salesforce programs, the trust at the heart of governance has already collapsed.

When these signals appear, there is no reason to wait for the next steering committee. Demand data that can be independently verified, and demand forecasts from the vendor, not reports. If the vendor cannot explain where the project will be in 60 days, it is managing the client’s perception, not the project.

The master conductor role, and an unresolved conflict of interest

There is a recurring pattern in multi-vendor projects involving global consultancies such as Accenture, Deloitte, and PwC. The master conductor, that is, the program integration coordinator, is quick to point out the client’s problems, other vendors’ shortcomings, and delayed external dependencies. But it almost never speaks with the same directness about its own firm’s problems.

This is not a matter of individual disposition but a structural conflict of interest. The firm serving as master conductor also bears delivery responsibility for its own statement of work (SOW). With the information access and reporting authority its governance role provides, it is likely to act in ways that shield its own risk.

That is why the master conductor role must be structurally separated from the vendor’s delivery role. The ideal is an independent integration management organization with no stake in the outcome. In practice, it means designating a separate team or individuals within an existing vendor, with that team reporting directly to the client and accountable to the steering committee rather than to its own firm’s leadership.

There is no perfect firewall in this structure, but there is a behavioral test: watch how the role or team handles information unfavorable to its own firm. Does it share and escalate such issues with the same urgency it brings to client problems, or does it hide its own issues and highlight only everyone else’s?

A master conductor who proactively discloses and escalates even its own delivery organization’s failures is doing the job. One that only points out the client’s and other vendors’ problems is closer to defending its own contract than to defending the project.

Make this structure explicit before the next SOW is signed. Define the master conductor role separately from the delivery role, name the responsible individuals or team, set the reporting line directly to the client, and verify through the behavioral test that the role is actually being performed.

Waiting is not neutral

The cost of delaying action is far more concrete than most companies assume. In a multi-vendor environment with several system integrators engaged at once, even a single month of schedule slip can cost millions to tens of millions of dollars per firm. That is not scope expansion; it is the result of governance failing to hold the schedule.

The commercial risk appears even earlier. When scope is unclear and the integration plan unstable, vendors have no baseline to price against. The result is a large spread between time-and-materials and fixed-price quotes for the same scope. That is not a mere pricing difference; it is governance uncertainty transferred into contract terms. Either way, the client absorbs the burden.

The reason problems don’t surface easily is that delivery teams are generally professional and diligent. But the issue is not effort; it is authority and incentives. The program manager running the project has no authority to approve additional resources or spending across organizational lines. Their job is to maintain the relationship and manage profitability, which is not the same as fundamentally fixing the client’s project.

Effective intervention starts with leadership

Once the problem is clear and the client has decided to act, what drives real change is not governance documents or regular meetings but direct conversations between the client’s and the vendor’s most senior executives: the people who don’t run day-to-day operations but are accountable for outcomes.

These conversations work because they change the incentive structure. The vendor’s industry leader or partner needs the project to become a success story; failure becomes a burden within their organization. They also hold authority the delivery organization lacks: assigning top talent, absorbing costs beyond the contract scope, and rapidly redeploying people, decisions that can change a project’s trajectory.

Intervention must also be structured. Senior executives from both sides should participate, and the vendor’s commitment should be demonstrated through new resource assignments. At the same time, reviews should continue at a set cadence until the project is back on track, with time-based accountability on both sides. Throughout the process, it should be clear that not just the project but the partnership itself is being evaluated.

This is not a one-off meeting but sustained leadership engagement. It does not replace existing governance; it makes governance actually work.

The only recovery signal you can trust

If the leadership conversation worked, the result shows in how the vendor responds: not with optimistic plans or promises, but with a concrete account of failure points, remediation plans, and performance gaps on the client’s side as well.

A vendor that acknowledges only its own responsibilities is still doing relationship management. A vendor that clearly lays out both its own failures and what the client needs to fix is building a structure of shared accountability. That is the real test of trust.

Most project failures arise from factors on both sides: delayed decisions, insufficient internal resources, and requirements changed after design. A vendor that presents these alongside its own problems is not dodging responsibility; it is investing in a better outcome.

If the executive meeting produces nothing more than promises and formal statements of intent, keep the pressure on. Real intervention shows itself in specific admissions of problems, clear resource commitments, and a sober assessment of both sides.

Final responsibility rests with the client: the need for a “judge in the middle”

Throughout all of this, final responsibility rests with the client. The master conductor is accountable for execution and integration, but judgment itself cannot be outsourced. That is not a mere division of roles; it is the essence of governance.

A vendor’s status report is not neutral data. It is a deliberately constructed narrative assembled by people whose compensation, future contracts, and personal reputations are on the line. What the report contains matters, but what it omits says even more.

The client therefore has to act as the “judge in the middle”: validate the data, cross-check it, and ask the questions the report doesn’t answer. Look not only at what is included but at what is missing.

If the steering committee hears only good news, that doesn’t mean the project is going well; it may mean someone is selecting only what leadership wants to hear.

Clients should demand forecasts, not status reports. Secure independently verifiable evidence, and when a vendor presents its own problems alongside the client’s, take it seriously. That is not blame-shifting; it is a sign governance is working.

Warning signs will not always be obvious, but the time to respond is limited. Waiting is never a strategy.

‘Career is over’? IT still has a lot to offer, despite uncertainties

Sometime between the dotcom meltdown in the late 1990s and the May 2003 publishing of “IT Doesn’t Matter” in Harvard Business Review, the acronym CIO was snarkily reinterpreted by tech-skeptics as meaning “Career Is Over.” With the rapidly expanding capability of AI, some are wondering whether the entire IT career category is doomed to disappear.

IT, for a variety of reasons, is no longer universally perceived as a preferred place to work. As a futurist I believe IT is a great occupation today and will continue to be if one persistently and patiently continues to construct career-enhancing levees.

A levee is a human-made embankment, usually constructed of compacted soil or concrete, built along a river or coast to prevent flooding. A career levee is a self-developed barrier to professional obsolescence.

In the 1967 film The Graduate, a cocktail-swilling suburbanite gave collegiate Dustin Hoffman one word of career advice, “Plastics.” Fifty-one years later, Apple CEO Tim Cook in 2018 suggested, “Coding is an essential skill … [that] should be offered in every school in the world.” Now, a mere eight years since that decree, the AI commentariat warns that computer programming jobs are the No. 1 “most exposed to AI.”

It’s not the skills, but the people your skills benefit

Our industry has long labored under the erroneous belief that a certain skill set or certification will create a moat protecting our employability. Skills — particularly technical skills — are subject to rapid obsolescence. What lasts and what is perhaps your strongest career levee are your relationships.

Though this may be a growth area for many of us, the most successful IT professionals are adept at socializing. This alone indicates that we need to transcend our perpetual feeling of being an Ishmael, an outsider.

The folks making hiring, firing, and compensation decisions are not particularly interested in the specific skill sets you bring to the table; they are obsessed with the benefits those skill sets deliver. Comedian Steve Martin, when he was working his way up through the comedy club circuit, adhered to the mantra, “Be so good they can’t ignore you.”

Attention management

Canadian media scholar Marshall McLuhan was one of the first to understand the importance of attention in society. IT professionals have to exponentially increase their sensitivity to what key stakeholders are paying attention to. Attention is a tricky area: What people think is important and what is actually important are not always the same.

For example, the No. 1 cause of death in America in 2023 was heart disease (29%).

However, media coverage — a proxy for what we are paying attention to — essentially ignored heart disease (2.8%, 2.9%, and 2.3% of coverage in The New York Times, The Washington Post, and Fox News, respectively).

A perpetual challenge of IT professionals has been how to get the muggles to focus on what really matters. This is an aspirational career levee. We need to understand what stakeholders are paying attention to and why.

Uncertainty requires conversation

Being uncertain is a part of being human. In 1624, a secretary of Cardinal Richelieu, Léonard Marandet, concluded that, “There is nothing more certain than doubt.”

As a futurist it pains me to admit that none of us knows with decimal-point precision what is going to happen next. IT professionals blessed with the conscious awareness of our ignorance of the future need to sponsor conversations that render explicit our cognitive (how we think), emotional (how we feel), and behavioral (what we do) responses to not knowing.

Legitimacy increasingly flows to whoever processes uncertainty first. IT professionals need to avoid the hubris of technocratic expertise (i.e., “this is what the future looks like”), choosing rather the “technologies of humility” — institutional mechanisms, including greater stakeholder participation, for incorporating a wider range of experience and views regarding what comes next.

Uncertainty can be exhausting. But it doesn’t have to be. IT professionals can help stakeholders suffering from feeling they are perpetually running up life’s down escalator by generating conversations that eliminate ambiguity regarding what we want to happen in the future. The future may not be certain but our hopes and dreams have shape and substance we can aim for.

Uncertainty needs to be reframed not as a threat but rather as an opportunity. Uncertainty offers the “opportunity for life to go in different directions,” says Stephanie Gorka of Ohio State University’s College of Medicine, “and that is exciting.”

Hope

Hope is the oxygen for IT. IT needs to be honest but can avoid dystopic and tenebroso (“dark/gloomy”) language when describing the path forward. IT needs to be a vessel for stakeholder agency. As Wes Nisker suggested in his closing salutation on radio station KSAN in the 1970s, “If you don’t like the news, go out and make some of your own.”


Believing in human potential and designing the future with technology: DNP’s executive officer and head of information systems on the core of the company’s AI and DX strategy

From engineer to a management perspective: a career turning point in my mid-thirties

I started my working life as a software engineer, and spent many years on the development of IC card operating systems. Through that experience, I learned thoroughly how much quality and security matter in a system.

I built my experience as an engineer until my mid-thirties, then gradually shifted toward management. After roles close to executive IT leadership at two group companies, in 2023 I took up my current position as head of the information systems division in the corporate function.

Since taking the role, my top priority has been building out our DX foundation. DNP pursues “P&I Innovation,” combining printing technology and information technology to create new value. With IT now indispensable to the business, I considered it essential to put the foundation supporting DX on solid ground, and I have driven that work.

There are three main pillars: first, data utilization; second, AI utilization; and third, modernization. The foundation itself is largely in place, and we are now working toward a state of “democratization,” where every employee can use IT and technology without friction and drive DX autonomously.

From R&D to commercialization: contributing to the growth of the IC card business

Looking back on my career, what stands out most is the development of the MULTOS OS and SIM OS. In both cases, I was involved in developing the early prototypes and saw them through to release, which has been a great asset.

Although I was involved only in the early stages, the experience of being part of a release carried real meaning when thinking later about how the business would develop.

“Human potential is unlimited”: the experience behind my present view of management

The challenge that left the strongest impression on me was launching a web service for identity verification in credit card payments.

The service demanded high security, and it was also a large-scale, mission-critical, consumer-facing web service. At the time, our business centered on B2B; even where our work reached society, most of our development practice was B2B-style software development. With little experience in my organization or across the company, I frankly judged the risk too high at first and opposed the project.

I spoke directly with the head of the business division, intending to persuade him that it was too difficult and should be shelved. Instead, I heard his passion for the service and his vision for the business, and in the end it was I who was pushed to say, “Let’s do it.”

We had accumulated security expertise, but figuring out how to guarantee quality, and to what standard, for a mission-critical web service was uncharted territory. Even so, the team at the time persevered through trial and error, knocking down problems one by one and moving forward.

We ultimately launched successfully, and I am told the service became profitable. Through this experience, I learned deeply that even when something looks impossible, human capability cannot be measured by first impressions alone. The conviction that “human potential is unlimited” remains the foundation of my management today, and I am still proud of the members who taught it to me.

Between an engineer’s passion and a manager’s perspective

Two pieces of career advice have stayed with me.

One is “change the ring you fight in”; the other is “pride in contribution.”

“Change the ring you fight in” came from the head of the IC card R&D division when I was appointed a department manager.

“From now on, you must not fight in the same ring as your staff, as just another engineer.” Those words carried great weight for me.

A manager’s role is not to step forward and fight as an individual, but to create an environment where staff can perform at their best and to deliver results as an organization. Even now, there are moments I should reflect on, and each time I recall those words, correct myself, and move on.

“Pride in contribution” came from a story told by a planning and sales department manager I met at a management training program.

At a social gathering, he spoke with real heat about his strong sense of pride in how much he was contributing to the development of his clients’ industry. Hearing those words, something clicked into place for me.

Even in IT, we must be conscious of the value we create and how we connect it to the development of society and industry. And steering the organization so that staff can face their work with pride is, I strongly feel, an important role of a leader.

These two phrases remain my compass when I make judgments between frontline sentiment, the executive perspective, and the responsibilities I must fulfill.

The reward of connecting management and the frontline, and designing the future

The reward of my current role lies in being able to design the company’s future with technology as a weapon, and to put it into practice. Until a few years ago, even when we envisioned what should be or what we wanted to do, the technology and environment often could not keep up, and concepts frequently ended as concepts.

In recent years, however, technological advances centered on AI have made things once out of reach start to look like realistic options. With uncertainty rising and the external environment changing rapidly, expectations for IT and technology are clearly growing across corporate strategy, business strategy, and frontline innovation.

Being in a position to see the whole company, from frontline to management, and to lead transformation with IT as a weapon is a great responsibility and, at the same time, the greatest reward. Envisioning the future, executing, and embedding it as working reality: being part of that whole process is the appeal of this job.

Toward an era where diverse management styles seize opportunities and create new value

If I frame what successful management requires in an IT context, it starts with the “resolve to transform.”

At the same time, our era is uncertain and the environment changes fast. That is exactly why, rather than mass-producing uniform leaders, it is important to cultivate many leaders who lead in their own way. Use your strengths; exercise the leadership that suits you. That diversity is what will seize opportunities and lead to new value creation.

For those who find it hard to recognize their own strengths, my advice is to center not on “overcoming weaknesses” but on the now-mainstream approach of “building on strengths.”

Being told “this is your strength” does not change behavior unless you feel it yourself. What you sense as “I love this” or “in this area, my ability flows naturally” can become the core of your values. Putting that into words makes it easier to become aware of your strengths.

Some of the “value of a challenge” becomes visible only with time

What I want to tell young IT leaders is that IT is ultimately a means, a tool. Means easily become ends in themselves. That is exactly why I want them to fix their bearings: “For whom do I want to change what?” and “Why am I doing this?”

Also, a project’s success or failure often cannot be judged the moment it ends. Sometimes it is only on looking back after time has passed that you see, “That challenge led to this value.” Don’t close the book on your challenge based on the evaluation of the moment; keep at it with persistence.

I believe that continuing to take on challenges is itself the power that opens up the future.

It is the frontline that wields IT: making every employee an agent of transformation

Looking ahead, my area of responsibility will continue to focus on democratizing the DX foundation. We aim for a state where every employee uses technology without friction, practices DX autonomously, and results keep circulating.

Our businesses are diversified, and conditions differ greatly by business. To keep pace with changes in the external environment and convert them into transformation and value creation, I believe the most effective approach is for each employee to make technology their own weapon.

Over the medium to long term, looking three to five years out and watching AI developments in particular, I want to rebuild our core business systems in a new form. Today’s business processes were designed around the presence of people, with systems built on top of them. As AI advances further, I want to redesign our business processes from a new, AI-centered perspective, not by excluding people.

Balancing governance and transformation, starting with a generative AI rollout to 30,000 employees

On May 31, 2023, DNP released an environment that makes generative AI available to its 30,000 employees. I believe the key to AI utilization is balancing “governance” with “use for transformation.”

Moving seriously toward agent-based AI presupposes the right environment, and we are in the midst of that work now, redesigning our business processes themselves from scratch as we go.

You selected the right vendors. Now govern them like you mean it.

Waiting for your vendor to fix a program isn’t a strategy. It’s a cost, accumulating quietly while everyone in the room maintains the fiction that the process is working.

I’ve been in both rooms. The room where the client already knows something is wrong and needs the language and the evidence to act, and the room where the client doesn’t know yet. The program feels manageable, the vendor is professional, the steering committee meetings run on time, and the warning signs are sitting in plain sight waiting for someone to name them.

That second room is the more important one. Because the window to act is still open. And most clients don’t move until it’s started to close.

Warning signs most clients miss appear in design

The earliest signal is rarely a missed milestone or a failed deliverable. It appears in language. When the phrase “path to green” starts appearing in status reports and steering committee decks, the program has already accepted it’s not green. It’s shifted from managing execution to managing the narrative.

Watch what the steering committee is actually doing. If it’s consistently hearing about what happened last month rather than what’s forecast for next month, leadership has been converted from a decision-making body into an audience. The vendor controls the agenda, the framing, and the cadence of what gets surfaced.

The most serious signal is when a program sponsor hears about material issues from their own direct reports that the vendor hasn’t raised in the room. That’s not a communication gap but a calculated decision about what leadership is ready to hear. When that pattern appears in SAP, Oracle, or Salesforce programs, the trust that makes the governance model function has already eroded.

When you see these signals, don’t wait for the next steering committee. Start demanding data that can be independently corroborated. Ask the vendor to forecast, not report. If they can’t tell you where the program will be in 60 days, they’re managing your perception, not your program.

Your master conductor has a conflict of interest you’re not addressing

A pattern I’ve seen consistently across multi-vendor programs involving Accenture, Deloitte, PwC, and others: the master conductor, or program integration coordinator, is quick to name the client’s gaps, other vendors’ shortcomings, and third-party dependencies running behind. What they almost never do is name their own firm’s failures with the same directness in the same room.

That’s not a personality issue but a structural conflict. The firm serving as master conductor is delivering against its own statement of work (SOW), and the governance position gives them access to information, reporting authority, and narrative control they’ll use to, consciously or not, protect their own delivery track.

This is why I advise clients to treat the master conductor and program integration coordinator role as structurally separate from the vendor delivery role. Ideally, that means an entirely separate firm, an independent integrator with no delivery stake in the outcome. In practice, it’s more often a designated individual or group within the project management or transformation office, carved from one of the existing vendors, reporting directly to the client and accountable to the steering committee, not to their own firm’s engagement leadership.

There’s no true firewall in that model, but there’s a behavioral test. Watch what that role or team does with information that reflects badly on their own firm. Do they surface it or escalate it with the same urgency they bring to client gaps? Do they forecast problems on their own track, or only on everyone else’s?

A master conductor who’ll escalate failures that implicate their own delivery team is doing the job. One who only calls out the client and the other vendors is protecting the engagement.

Before the next SOW is signed, make it structural. Define the master conductor role separately from the delivery role, name the individual or team, set the reporting line directly to the client, and use the behavioral test to determine whether the role is being performed or merely filled.

Waiting isn’t neutral

The financial cost of waiting is more specific than most clients realize. In a multi-vendor environment where two or three system integrators are billing against active SOWs, every month of schedule extension carries a material cost, potentially millions to tens of millions of dollars per firm, not because scope expanded, but because governance didn’t hold the timeline.

The commercial exposure appears even earlier. When scope boundaries are unclear and the integrated plan is unstable, vendors have no reliable baseline to price against. The result is predictable: a significant spread between a time-and-materials estimate and a fixed fee quote for the same scope. That spread is not a pricing difference. It’s the vendor converting your governance uncertainty into their contract protection. The client absorbs it either way.

What makes the waiting feel reasonable is that the vendor’s day-to-day team is usually professional and working hard. The problem is authority and incentive, not effort. The program manager running the engagement can’t authorize additional resources or commit spend across organizational lines. Their job is to manage the relationship, protect their firm’s margin, and keep the engagement profitable. Fixing your program isn’t the same job.

The window to act is real and short. A senior executive at the vendor can absorb costs, bring new talent, and make commitments the delivery team has no authority to make. But that authority diminishes as the program ages. The more that’s been billed and the more scope has shifted, the harder it is for even a motivated senior executive to make the client whole. Clients who act in design or early build have options that clients who wait until three months before go-live don’t.

The intervention that works is a leadership one

When the signals are clear and the client is ready to act, the intervention that moves the needle isn’t a governance document or a scorecard meeting but a top-to-top conversation between client and vendor senior leadership. That means executives who aren’t running the day-to-day program but have something personal at stake in the outcome.

That conversation works because it activates a different set of incentives. The vendor’s senior executive, the sector partner and industry leader whose name is on the relationship, needs your program to be referenceable. They don’t want a PR failure on a flagship engagement, nor do they want to explain to their firm’s leadership why a major client program collapsed. They have authority their delivery team doesn’t: power to assign their best resources, ability to absorb costs the SOW or change order doesn’t cover, and they can accelerate staffing decisions and make commitments that change what the program can do. They have skin in the game their team doesn’t.

Structure the engagement deliberately: senior executives on both sides, new talent brought in as a visible signal of vendor investment, a cadence that continues until the data shows the program is back on track, time-bound accountability on both sides, and an explicit understanding that the relationship itself is under review, not just the program.

This is sustained leadership engagement, not a one-time meeting, and it doesn’t replace the governance model. It enforces it.

The only recovery signal worth trusting

When the top-to-top works, you’ll know it by what the vendor brings back to the table. Not reassurances or a revised plan with optimistic milestone dates, but facts about where they failed, what they’re changing, and, most critically, where the client has performance gaps that also need to close.

A vendor who comes back and merely accepts blame is still managing the relationship. A vendor who says “we failed here and here, these are the specific changes we’re making, and you have a gap here we need you to address” is engaged and mutually accountable. That’s the integrity test.

It runs both ways because program failure almost always does. Slow client decisions. Unavailable business resources. Requirements that shifted after design was locked. A vendor who names those things alongside their own failures isn’t deflecting, they’re investing in an outcome. That’s the signal the recovery is real.

If the executive meeting produces only promises and general commitment, keep the pressure on. Real engagement looks like specific admissions, named resources, and a willingness to hold the mirror up to both sides of the table.

You hold the accountability. Be the human in the middle.

Through all of it, the client holds the ultimate accountability. The master conductor holds the responsibility for execution and integration across the vendor ecosystem. That distinction isn’t administrative. It means the client can’t outsource their judgment, regardless of how rigorous the governance model looks on paper.

Think of it this way: the vendor can hallucinate. Not out of malice, but because every status report is a curated narrative produced by people whose compensation, future work, and professional reputation depend on how that narrative lands. The program deck isn’t neutral data; it’s information filtered through interests. What’s present tells you something. What’s absent tells you more.

Be the human in the middle. Verify, cross-reference, ask questions the deck didn’t answer, and notice what’s missing as much as what’s there. If the steering committee is only hearing good news, that’s a sign someone is deciding what leadership is ready to hear, not that the program is running well.

Demand forecasts, not status reports. Look for hard evidence that can be independently corroborated. When the vendor names a client performance gap alongside their own, take it seriously. That’s the accountability model working the way it’s supposed to, not a deflection.

The warning signs won’t always be this apparent. The window is open, but it won’t stay that way, so waiting isn’t a strategy.
