
AI is spreading decision-making, but not accountability

May 6, 2026, 07:00

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears and the question of who’s responsible is immediately raised.

As AI moves from experimentation into production, accountability is no longer a technical concern; it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, understands that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is that there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.

Jessica Eaves Mathews, founder, Leverage Legal Group

LLG

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”

Chris Drumgoole, president, global infrastructure services, DXC Technology

DXC

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.

Ojas Rege, SVP and GM, privacy and data governance, OneTrust

OneTrust

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”

Simon Elcham, CAIO, Trustpair

Trustpair

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty in AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The CAIO is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”

Jonathan Tushman, CTO, Hi Marley

Hi Marley

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision escalates, often to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, scrutiny often points at enterprise leadership, and, in operational terms, at the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.

How UKG puts AI to work for frontline employees

May 6, 2026, 07:00

As organizations rebrand themselves as AI companies, most of the conversation is focused on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota, CIO of UKG, one of the largest HR tech platforms in the market, which delivers a workforce operating platform utilized by 80,000 organizations in 150 countries, explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, who account for 80% of the world’s workforce. This is important because when companies rebrand themselves as AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets, and our long history with the frontline workforce, positions us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowdsourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. AI’s impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. It’s IT that’s very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user would own the business context, and the developer, who owned the technology, brought that business idea to life. Going forward, the builder has the business context to create the right prompts to let AI do the building, and the human in the loop is no longer the technology builder, but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away; they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about how the workforce is changing with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.

The AI assessment gap: Why your hiring process can’t find the talent you need

May 6, 2026, 07:00

The next time someone on your team says, ‘hire an AI engineer,’ stop the conversation.

That title is too vague because it fails to account for critical differences in engineering strengths. Instead, companies need to decide specifically what they need. Is it someone to rapidly prototype AI solutions? Or someone to build the solution that makes it ready for production? Or someone to design the supporting capabilities and infrastructure to scale it? These are all different skills and require different assessments during the hiring process.

But here’s where companies also fall short. Assessing skills is hard and assessments, as we know them, are broken when it comes to AI. They’re misaligned with what AI roles actually demand. That misalignment is what I call the AI assessment gap.

Where the gap lives

Most technical assessments were built for a pre-AI world: Coding proficiency, algorithms, deterministic system design. These are skills tests. They confirm that an engineer can do the work. But they don’t tell you whether that engineer has the technical taste to make the right decisions when building, scaling or deploying AI systems in production.

In conversations with enterprise engineering leaders, we’re hearing that candidates are now running AI agents during live interviews, getting textbook-perfect answers fed to them in real time. If your assessment can be passed by an AI whispering in someone’s ear, it was never testing for the right thing. Skills can be faked or augmented. Taste can’t.

To see what this looks like, consider this scenario: An enterprise needs someone with deep experience in a specific data platform. A candidate passes the data engineering assessment. They get to the client interview, and the hiring manager says: ‘Tell me about a time you had to make a hard tradeoff in designing a streaming architecture.’ The candidate defines every concept involved. They don’t have the taste to explain why one approach would be dramatically better than another for a specific context. They’re out.

This happens because most assessment pipelines only test for skills: Can they code, understand the fundamentals? Nobody is systematically testing for technical taste: Can this person make better-than-default decisions about architecture, tooling and approach? That question only surfaces once someone with real experience asks it. By then, everyone has wasted time and the role is still open.

Traditional job postings compound the problem by filtering for ‘5+ years of AI experience,’ which screens out strong candidates because the category itself is only a few years old. What matters at the AI layer isn’t tenure. It’s the depth and specificity of what someone has built, deployed or scaled in production. Meanwhile, seniority at the foundational role level still matters in the ways it always has: A senior engineer brings architectural judgment that can’t be shortcut. The mistake is applying years-of-experience filters to the AI layers, where the work hasn’t existed long enough for tenure to be a meaningful signal.

One of the most telling signals of a broken assessment process: When stakeholders simultaneously complain that assessments are too hard and too easy. That’s not a calibration problem. It means the assessments aren’t measuring the right things in the first place. They’re testing for skills when they should be testing for taste.

Start with the work, not the title

To close the AI assessment gap, decompose the problem before you assess and decompose the need across the dimensions that actually determine whether someone can do the job. For example:

Dimension: Role
The question: What technical domain does the work live in?
What it determines: Foundational skills and stack (e.g., backend engineer, Python, PostgreSQL)
How you evaluate: Skills assessment (a project-based or simulation-based filter that confirms core engineering competency)

Dimension: Seniority
The question: What level of judgment and autonomy does this work require?
What it determines: Engineering maturity, depth of technical taste, ability to operate under ambiguity
How you evaluate: Experience depth at the role level (years of practice in the domain, complexity of systems designed and shipped)

Dimension: AI Engagement Pattern
The question: How will this person engage with AI systems?
What it determines: The specific technical taste required (e.g., a Prototyper needs taste for sensing value; a Builder needs taste for architecture and integration decisions; a Scaler needs taste for performance, governance, and risk tradeoffs)
How you evaluate: Applied assessments (not ‘define RAG’ but ‘given this use case, which approach would you choose and why?’), testing for tradeoff reasoning, not terminology

This decomposition replaces the single job description with a structured picture of what you actually need. It also immediately reveals whether you’re looking for one person or a team. If the project requires rapid prototyping to find value and then a production build, you probably need engineers with different profiles, not one ‘AI engineer’ who’s supposed to do both.
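To make the decomposition concrete, the three dimensions can be sketched as a small data structure. This is a hypothetical illustration, not a real tool: the names `RoleSpec` and `needs_team` are invented here, and the point is only that a mix of engagement patterns in one project signals a team, not a single hire.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-dimension decomposition:
# role (technical domain), seniority (judgment), AI engagement pattern.
@dataclass(frozen=True)
class RoleSpec:
    role: str        # e.g., "backend engineer"
    seniority: str   # level of judgment and autonomy required
    engagement: str  # "prototyper", "builder", or "scaler"

def needs_team(specs: list[RoleSpec]) -> bool:
    """A project mixing engagement patterns usually calls for
    more than one hire, not a single 'AI engineer'."""
    return len({s.engagement for s in specs}) > 1

prototype = RoleSpec("backend engineer", "senior", "prototyper")
production = RoleSpec("backend engineer", "senior", "builder")
print(needs_team([prototype, production]))  # True: two profiles, not one
```

Writing the need down this explicitly also keeps the later assessment honest: the role dimension maps to a skills test, while the engagement pattern maps to an applied, tradeoff-reasoning assessment.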

Three things most enterprises get wrong

  1. They test for skills when they should test for taste. Most assessments confirm that an engineer can write code and define concepts. They don’t test whether that engineer can make the architectural and tooling decisions that actually determine project success. An engineer who knows what agentic search is and an engineer who knows when agentic search is the right choice for a specific problem are two completely different hires. The first passes your skills test. The second delivers in production.
  2. They conflate skills with experience. A skills assessment tells you if someone can do the work. An experience validation tells you if someone has done the work in the specific context the job demands. These require completely different evaluation methods. When companies try to test both with a single instrument, they get the ‘too hard and too easy’ paradox: The assessment is simultaneously screening out competent people and letting through candidates who can’t perform. Seniority and years of experience are meaningful at the role level, where 10 years of backend engineering builds real architectural judgment and compounds technical taste. They’re much less meaningful at the AI engagement layer, where the work itself is only a few years old and depth of hands-on exposure matters more than calendar time.
  3. They treat assessment as a snapshot. The traditional model is a one-time gate: Pass or fail, in or out. In an AI world where skills are evolving monthly, that approach breaks down fast. Six months ago, almost nobody was shipping production code with agentic tools like Claude Code. Model Context Protocol, which lets AI systems plug into enterprise tools and data sources, was barely on anyone’s radar. Now enterprises are hiring for these skills specifically. Six months from now, the list will change again.

That means an assessment built in January is already partially stale by June. Companies that treat assessment as a living system, continuously updated by performance signals from real engagements, will consistently field better talent than those running the same tests they built a year ago.

The reskilling imperative

The reality is, there is no way to close this gap through hiring alone. The supply of engineers who already have the technical taste for AI work is a tiny fraction of what the market demands. For example, since the launch of ChatGPT in 2022, demand for roles that require more analytical, technical or creative work has increased by 20%.

Which means enterprises have to reskill and upskill existing workforces. And without a targeted approach mapped to actual needs, AI upskilling efforts often fail, leaving employees unsupported and initiatives stalled.

This is where the multi-dimensional model pays off beyond hiring. The same framework that powers your talent acquisition also powers your training strategy. Assessment results don’t just filter candidates in or out. They generate a heat map of where your workforce is strong and where it’s thin, across every dimension: Role competency, seniority depth and the specific technical taste required for prototyping, building or scaling AI systems. That heat map becomes your training roadmap.
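As a rough sketch of how such a heat map could be derived, the snippet below averages assessment scores per (dimension, area) cell. It is an illustration under stated assumptions: the 0-to-1 score scale, the cell keys, and the `heat_map` function are all hypothetical, not a description of any specific assessment product.

```python
from collections import defaultdict

def heat_map(results):
    """Aggregate assessment results into a (dimension, area) -> mean score map.

    results: iterable of (dimension, area, score) tuples, score in [0, 1].
    Low-scoring cells mark where the workforce is thin, i.e., training priorities.
    """
    cells = defaultdict(list)
    for dimension, area, score in results:
        cells[(dimension, area)].append(score)
    return {cell: sum(scores) / len(scores) for cell, scores in cells.items()}

scores = [
    ("role", "backend", 0.9),
    ("role", "backend", 0.7),
    ("taste", "scaling", 0.3),  # thin spot: a candidate for targeted upskilling
]
hm = heat_map(scores)
# ("role", "backend") averages to 0.8; ("taste", "scaling") stays at 0.3
```

Sorting the resulting map by score, lowest first, yields the training roadmap the article describes: the weakest cells are where upskilling investment goes first.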

Most companies skip this entirely and jump straight to ‘let’s buy an AI training program.’ Without that foundation, even the best training program is solving the wrong problem.

Ever ready

In the world of AI, the most critical skill might be knowing that you don’t and can’t possibly know everything. Or even what’s coming next. The roles we need today will look different in six months. The skills taxonomies we build now will need constant revision. The assessments we deploy this quarter will need recalibration by next quarter.

Companies that accept this reality and build nimble, multi-dimensional approaches to talent assessment will find something valuable: The technical taste they need already exists in their workforce, hiding behind outdated job descriptions and misaligned tests. CIOs must actively audit these descriptions to eliminate the traditional experience filters that mask the latent talent already sitting on their teams. The others will keep posting for ‘AI engineers’ and wondering why nobody who gets hired can actually do the job.

This article is published as part of the Foundry Expert Contributor Network.

The immutable mountain: Understanding distributed ledgers through the lens of alpine climbing

May 5, 2026, 11:00

In modern enterprises, we often default to centralized command-and-control structures. But in high-stakes environments — whether a whiteout on an Andean peak or a volatile global supply chain — centralization is a single point of failure. To manage complexity and risk, we must look to the architecture of the decentralized network.

A storm at high camp

The stone walls of the refuge did little to settle our hearts against the pounding storm outside. Wind whistled through cracks in the masonry as frozen rain pelted the windows like handfuls of marbles. I lay on my back, bundled in a 10-degree sleeping bag, staring at the bottom of the bunk above me. My pack stood upright beside me, boots and gear stacked with obsessive neatness for maximum efficiency at go-time. My journal, filled with the week’s entries, sat atop the pack next to my headlamp.

I focused on regulating my breathing to acclimate, taking full inhales and exhales to extract every bit of oxygen from the thin air. At 1:30 AM, our head guide entered the room to announce the weather was challenging; we would hold. Then came 2:00, 2:30, 2:45, 2:46… if there were any climbs I wanted to skip, this was it.

At 2:48, the light flickered on. Damn, I thought.

Our lead guide announced that while the weather was horrible, we would make a go of it. The eight of us moved with sudden purpose. We rallied outside in our four rope teams, confirmed the route and left the safety of the refuge for our summit attempt on Cayambe.

Our guides did not dictate every footstep as in a traditional hierarchical construct, where information must travel to the top for a decision to be made and then back down to be executed. Instead, the expedition operated as a series of nodes (rope teams). Each guide was authoritative within their specific context, having the autonomy to make real-time decisions based on the immediate terrain.

On a mountain, that round-trip latency is fatal. By distributing authority, the expedition becomes composable. Each team operates independently but remains synchronized through a shared “state” of the mountain.

The relentless scramble

Our first segment required a difficult scramble: 1,500 feet of exposed rock while we were pounded by the elements. It was relentless. Our headlamps were nearly useless against the whiteout. Frozen rain crusted my face, crystals formed on my brows and my goggles iced over. I kept my head down to protect my face, my lamp illuminating only a few feet of black volcanic ash and ice.

We rose slowly. Each step ended with a deliberate straightening of the trailing leg — the “rest step” — to grant a moment of relief. I lost sight of two teams; the lights from the third glimmered like dying sparks several hundred feet away. The roar of the wind was broken only by short “blips” from the radio. Through the static, I heard the muffled voices of guides discussing locations, hazards and routes. Even in the isolation of the storm, I knew we were connected.

The “blip” of truth

We reached the glacier independently. I stepped into my harness, strapped on crampons and tied off on the rope. Once a team was double-checked for safety, they vanished into the dark. In short order, the distance between us grew until I had no visual reference for the others. My guide and I settled into a rhythm, the rope kept taut between us.

It is in these moments — when no one else can be seen on the mountain — that time slows. The challenge becomes internal, and you begin to question every life choice that led you to a frozen ridge at 19,000 feet.

The radio blips continued. On this day, I was the subject of those blips. Bronchitis had settled in from our previous summit of Antisana, and my blood oxygen was dropping below 85%. My rescue inhaler was failing at the altitude. Two-thirds of the way to the summit, the coughing started.

I pushed until I simply couldn’t. I bent over, coughing hard, my lungs burning and wheezing as fluid began to move. Suddenly, my Apple Watch buzzed — it was dialing an emergency. My mind shifted into a strange, analytical gear: I wonder how the signal even propagates from here? Is it connecting on GPS? Where’s the satellite? How would a rescue team even get here? What would they even do? Does an emergency line actually connect here? Apparently, my life as an INTP had reached a new level: I analyze my own demise even while it’s happening.

I disabled the watch, stood straight, ate nacho-flavored chips and drank water. We moved on; the “blips” continued, more frequent now. Eventually, reality caught up. I was bent over again, moving more fluid from my lungs. In that moment of clarity, I remembered: I’m on vacation. Is this my vacation? What is wrong with me? Am I qualified to make my own life choices? We turned around for a descent that was anything but graceful.

The distributed journal

That afternoon, we met for lunch. Each team member highlighted their journey. Stories were reconciled, and a complete picture of the mountain emerged. We checked into our “cybercast” to recount the story to the world.

The decision to turn back was recorded, not just in my mind, but across the collective memory of the team. This is the essence of immutability. In a distributed ledger, once an event is verified and added to the “block” or the day’s journey, it cannot be altered or erased. It becomes part of a permanent, auditable chain of events that provides a “single source of truth” for the entire organization.

To this day, I am still amazed at the architecture of an expedition. Each guide is authoritative with their rope team, working autonomously yet connected. The head guide doesn’t make every micro-decision; they delegate that to the nodes — the guides on the ground — to do what is best for their specific context. Together, each team’s experience, when reconciled, becomes the “truth” of the trip.

This is exactly how a distributed ledger works. In the workplace, a distributed journal or “composable authoritative source” can be split across systems and databases. Much like our rope teams, different organizations or departments — customers, suppliers, buyers, manufacturing — have ownership of their part of a dataset. They work independently, yet together they provide a singular, authoritative ledger.

Consensus mechanisms in high-entropy environments

The most critical challenge of any distributed system — digital or human — is consensus: how do multiple independent actors agree on a single version of the truth? Maintaining ongoing records of transactions is a core function of any business, and the protocol provides an opportunity to synchronize transactions between multiple systems, across internal functions or externally to business partners, providing a holistic view of a value stream.

In a distributed ledger, we find “truth” through two main methods, both of which I saw on Cayambe:

  1. Synchronous consensus: Through radios, our guides provided status updates to ensure current information was mirrored across our rope teams. Reconciling these views across the day ensures “Proof of Work” — the validation that the progress recorded actually happened.
  2. Gossip protocol: This is the alternative communication method where guides discuss routes and risks with other teams as they pass each other. Information “hops” from team to team. In a digital ledger, this isn’t a “whisper down the lane” where information degrades; it is a rapid, peer-to-peer synchronization that ensures every system eventually holds the same exact data.
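
The gossip mechanism above can be sketched in a few lines of Python. This is a toy model with hypothetical team names and observations, not any particular production protocol: each round, every node merges its view with one random peer, and all nodes converge on the same data.

```python
import random

random.seed(7)  # deterministic for this sketch

def gossip_round(nodes):
    """One gossip round: each node syncs with one random peer. Both ends
    merge, so information never degrades like 'whisper down the lane'."""
    for name in list(nodes):
        peer = random.choice([n for n in nodes if n != name])
        merged = nodes[name] | nodes[peer]
        nodes[name], nodes[peer] = set(merged), set(merged)

# Four rope teams, each starting with only its own observation.
nodes = {
    "team_a": {"rockfall at 17,500 ft"},
    "team_b": {"crevasse left of the route"},
    "team_c": {"wind gusting 60 mph"},
    "team_d": {"ice over the scramble"},
}

for rounds in range(1, 100):
    gossip_round(nodes)
    if len({frozenset(v) for v in nodes.values()}) == 1:
        break  # every node now holds the same exact data

assert all(v == nodes["team_a"] for v in nodes.values())
```

In practice convergence takes only a few rounds; information "hops" exponentially through the network rather than traveling through a central coordinator.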

In 2026, we see movement beyond consensus via “Proof of Work” toward more resilient asynchronous models. Staying with our storyline on Cayambe, the synchronous consensus can incorporate fault tolerance that allows the network to reach agreement even if some “nodes” (climbers) are offline or sending conflicting signals. Further, the gossip protocol can be extended to pass the history of who said what and when as a Directed Acyclic Graph (DAG). Unlike a linear chain, a DAG allows multiple “events” to happen simultaneously. On the mountain, this meant Team A could be navigating a rockfall while Team B was crossing a glacier, and both realities were synchronized into the master record without one waiting for the other to finish.
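
A minimal illustration of the DAG idea, using content-addressed event IDs. This is a simplified sketch (real event-DAG ledgers also carry signatures and timestamps): two branches are recorded concurrently, and a later event merges both without either waiting for the other.

```python
import hashlib

dag = {}  # event id -> (payload, parent ids)

def append(payload, parents=()):
    """Content-addressed event: its ID is derived from payload + parents,
    so rewriting history would change every downstream ID."""
    data = payload + "|" + ",".join(sorted(parents))
    eid = hashlib.sha256(data.encode()).hexdigest()[:12]
    dag[eid] = (payload, tuple(parents))
    return eid

start = append("leave high camp")
# Two teams record events simultaneously, both referencing the shared start:
a1 = append("Team A: navigating a rockfall", [start])
b1 = append("Team B: crossing the glacier", [start])
# The merge event references both branches -- neither waited for the other.
regroup = append("teams regroup at the saddle", [a1, b1])

assert set(dag[regroup][1]) == {a1, b1}
```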

Immutability: The frozen record

Our trip reports and journals “lock down” the information for the day. Movements between camps and the summit are codified as blocks of information. These are sequenced together to create a chain of events. If someone tried to change the history of Day 2, it wouldn’t align with the reality of Day 2.

In a digital blockchain, this data is encrypted and sequenced so that the history of a transaction is permanent and verifiable by any participant. That permanence means transactions cannot be deleted or modified. That said, there is a pragmatic industry approach when regulation requires a right to be forgotten: platforms such as Hyperledger Fabric provide the ability to amend or delete information.
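
The “frozen record” can be illustrated with a toy hash-linked journal. This is a sketch of the principle, not a production blockchain: each day’s entries are sealed against the previous block’s hash, so editing Day 2 no longer aligns with the recorded chain.

```python
import hashlib
import json

def make_block(day, entries, prev_hash):
    """Seal a day's journal entries against the previous block's hash."""
    body = json.dumps({"day": day, "entries": entries, "prev": prev_hash},
                      sort_keys=True)
    return {"day": day, "entries": entries, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Re-derive every hash; any edit to past history breaks the links."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != make_block(block["day"], block["entries"], prev)["hash"]:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "genesis"
for day, entries in [(1, ["reach the refuge"]),
                     (2, ["summit push", "turn back at 19,000 ft"])]:
    block = make_block(day, entries, prev)
    chain.append(block)
    prev = block["hash"]

assert verify(chain)
chain[1]["entries"][1] = "reached the summit"  # rewriting Day 2...
assert not verify(chain)                       # ...no longer aligns with the record
```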

Blockchain concepts in the alpine environment

Object | Alpine example | Distributed ledger
Node | An individual climber or rope team | Computer or system
Transaction | An occurrence of an event or fact while climbing | Data record
Consensus | Radio communications, storytelling or peer-to-peer gossip | Proof or validation of work state
Block | Completed and verified segment of trip | Bundle of verified transactions
Chain | Continuous route from basecamp through the segments to completion of the trip | Chronological link of blocks

Strategic implications for the enterprise

Why does this matter for the C-Suite? By adopting a distributed ledger mindset, businesses can achieve a distributed value stream with ledgers maintained across external business providers, customers and vendors, accelerating business. This includes:

  • Flexibility and agility:  Through distributed ledgers, organizations can shift from monolithic systems to composable systems built on microservices, orchestrated together.
  • Radical transparency: Every stakeholder has access to an identical, real-time record of truth. This may even include information across boundaries with external business partners, including customers or suppliers, creating a fully integrated, composable value stream.
  • Operational resilience: If one “node” (a supplier or a regional office) fails, the rest of the network maintains integrity of the data.
  • Reduced friction: Trust is built into the architecture of the system, rather than relying on manual audits and third-party verification.

Ultimately, a distributed ledger is less about the underlying code and more about the philosophy of collective trust. Whether navigating the “death zone” of a mountain or the complexities of a global market, the truth is most resilient when it is not owned by a single leader but held by everyone brave enough to participate in the journey.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


The AI workplace paradox: Higher productivity, higher anxiety

April 23, 2026, 23:39

Workers are facing a conundrum: They worry about the potential for their displacement by AI even as it dramatically speeds up their own productivity.

According to a new survey from Anthropic, workers in roles most likely to be taken over by AI (developers or IT workers, for instance) recognize their precarious position. Yet, perhaps naturally, they readily adopt the tools that could take their jobs, and see first-hand how well they work.

This measurement is fundamentally different from the way others are gauging AI job displacement, noted Thomas Randall, research director at Info-Tech Research Group.

While macro reports, such as those from Goldman Sachs, the International Monetary Fund (IMF), or the World Economic Forum (WEF), are asking what share of existing job tasks AI could theoretically perform in the future, “Anthropic is measuring qualitative experiences of workers in the present,” he pointed out. This “tells us how people are navigating this landscape in real time.”

The paradox of AI in the workforce

Anthropic’s survey of 81,000 Claude users gauged people’s “visions and fears” around advances in AI, and weighed these findings against the company’s own measurement of jobs most vulnerable to AI displacement. This was based on Claude usage data; jobs are identified as more exposed when associated tasks are significantly performed on the platform, in work-related contexts, and take up a larger share of a role.

Some occupations at risk include computer programmers, data entry keyers, market researchers, software quality assurance analysts and testers, information security analysts, and computer user support specialists.

Overall, one-fifth of respondents expressed concern about displacement, noting that their job, or at least aspects of it, is being taken over by automation. Those in jobs identified as most exposed readily recognized that fact, voicing worry three times as often as those in less at-risk positions. One software engineer remarked: “like anyone who has a white collar job these days, I’m 100% concerned, pretty much 24/7 concerned, about losing my job eventually to AI.”

Early-career respondents were also more nervous than others.

At the same time, those in the highest-paid occupations reported the largest productivity gains when using AI. This was most notable in their ability to perform new tasks, cited by 48% of users. In addition, 40% of workers said the technology helped speed up their work, and a little more than 10% said it improved the quality of their work.

In general, enterprise usage of AI is “actually quite consistent,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. Teams are using the technology “where information is abundant and time is limited,” such as drafting documents and code, summarizing content, responding to customer queries and navigating internal systems.

Is AI actually creating more work?

Still, not everyone thinks AI makes their jobs easier or faster. In some cases, people felt it made their work harder; for instance, project managers are assigning tickets for issues that are much more difficult to solve, Anthropic noted.

Gogia agreed that, even when tasks become easier, scope and responsibilities expand, and roles can absorb adjacent tasks. This results in a “redistribution of effort,” rather than a reduction of effort.

“Faster generation means higher expectations on quality,” he said. More output feeds into decision pipelines that are already constrained. “In some cases, the system becomes heavier, not lighter.”

Delayed impact on enterprises

The market is rewarding those who can integrate AI into complex workflows to do more, faster, and often with better outcomes, Gogia noted. However, the most exposed tasks, including documentation, basic coding, routine analysis, and structured support work, often “sit at the base of the experience ladder.”

These very tasks traditionally have given entry-level workers a way in, and the automation of them reduces the urgency for companies to hire them. “What you begin to lose is not the job,” said Gogia. “It is the path into the job.”

This can have a delayed impact; enterprises may not realize until years later that they do not have enough mid-level experts because they didn’t bring enough people in at lower levels. As AI plays a greater role in the workplace, there must be a “conscious effort” to rethink how people enter and grow, Gogia said. “New pathways need to be created, and they need to be deliberate.”

How enterprise leaders should adjust

As is often the case, sentiment moves faster than structural change, Gogia pointed out. Workers feel the shift almost immediately, but organizations take longer to adjust hiring, redesign roles, and rethink workforce structures.

“This is why expectations can become misaligned,” he noted. The reality is that most enterprises have introduced AI into existing ways of working without fundamentally changing them. Acceleration occurs in unchanged systems that still carry the same dependencies, approval chains, and coordination challenges.

Ultimately, Gogia advised, leaders must approach the shift with “intentional design.” This requires clarity, he emphasized; people need to understand how their work is expected to change. What will be enhanced? What will be reduced? Where should they focus their development?

Baselines are moving: Roles may begin to look “oversized” as what used to be considered a full day’s work begins to look like half a day’s work, or what used to be considered efficient begins to look average. “AI is changing how work is done, but more importantly, it is changing what work expects from people,” said Gogia.

As well, Info-Tech’s Randall pointed out that workers who experience AI expanding what they can do by performing tasks previously outside their competence appear to relate to AI more positively than those who experience it as doing their existing job faster. So, he advised, “tech leaders should design AI deployment around capability extensions.”

Along with goal setting, managers must have support, Gogia emphasized. They set expectations and interpret strategy, and when they’re not properly equipped, “even the best tools will fall short,” he said. Measurement must also evolve; enterprises need to look at quality, sustainability, and capability development over time.

“What we are witnessing right now is not a sudden disruption,” said Gogia. “It is a gradual shift that is becoming impossible to ignore.”

This article originally appeared on Computerworld.


Why planning structures must evolve in modern manufacturing

April 22, 2026, 06:00

Across many manufacturing organizations I have worked with, I keep seeing the same puzzling pattern.

Companies invest in better forecasting tools. They implement advanced planning systems. They improve supply chain processes.

Yet something strange still happens.

Some components are overplanned. Others are repeatedly short. Production teams start expediting parts. Suppliers are pushed to deliver faster.

Eventually, leaders ask the obvious question:

If planning systems are improving, why do these imbalances still occur — and why are teams still relying on spreadsheets and manual workarounds?

In my experience, the issue is rarely forecasting accuracy, execution capability or supplier performance. It begins with how planning parameters are defined inside enterprise systems.

Most ERP environments I have worked with still rely on static assumptions, while the real supply chain behaves dynamically. This mismatch between static planning logic and dynamic operational behavior is where structural imbalances originate.

The hidden problem: Static planning parameters

Across implementations, I consistently find that three tightly connected parameters drive planning behavior:

  • Planning Bills of Materials (Planning BOMs)
  • Lead Times
  • Safety Stock

These are typically maintained as master data, reviewed periodically and updated manually, generally once or twice a year. That approach may have worked in stable environments, but modern manufacturing operates under continuous change. Product configurations evolve, customer preferences shift and supply conditions fluctuate.

When these assumptions remain static, the system does not fail; it drifts. And that drift manifests as imbalance across components, time and availability.

Example #1: Planning BOM

In one environment I worked with, the Planning BOM assumed that 70% of orders used a standard PLC module and 30% used an advanced PLC. Over time, actual demand shifted and advanced PLC usage exceeded 50%.

However, the planning structure did not change, largely because updating it required significant manual effort and coordination across teams.

The result was not simply excess inventory — it was misalignment:

  • Overplanning of standard components
  • Underplanning of advanced components
  • Repeated substitutions and expediting

The forecast itself remained reasonably accurate. The imbalance emerged because demand was being translated through outdated structural assumptions.
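
The arithmetic of that drift can be made concrete with illustrative numbers: the 70/30 planning mix from the example, against an observed mix that has shifted past 50% advanced. The forecast total is accurate; the imbalance comes entirely from the translation layer.

```python
# Static planning mix (master data) vs. observed usage -- illustrative numbers.
planning_mix = {"standard_plc": 0.70, "advanced_plc": 0.30}
actual_mix = {"standard_plc": 0.45, "advanced_plc": 0.55}

orders = 1000  # top-level demand for the period; the forecast itself is accurate

gaps = {}
for component in planning_mix:
    planned = planning_mix[component] * orders
    needed = actual_mix[component] * orders
    gaps[component] = planned - needed  # positive = overplanned
    label = "overplanned" if gaps[component] > 0 else "underplanned"
    print(f"{component}: planned {planned:.0f}, needed {needed:.0f} "
          f"-> {label} by {abs(gaps[component]):.0f}")
```

With these numbers, the standard module is overplanned by 250 units while the advanced module is underplanned by 250: the same total, misallocated across components.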

More fundamentally, I have observed that Planning Bills of Materials, while central to ERP-driven planning, were never designed to capture the full complexity of manufacturing execution. Traditional BOM structures define what needs to be built, but not how it is built.

This limitation has been highlighted in patent US10832197B1, which introduces the concept of a “bill of work” to represent the actual activities, routing and process steps required for manufacturing. However, this type of execution-aware structural modeling is still rarely implemented in most ERP systems, which continue to rely primarily on static BOM definitions.

In my experience, this gap reinforces a broader point: Static planning structures alone are insufficient to model dynamic, real-world production environments.

Example #2: Lead time

I have seen cases where average demand remained stable at 100 units per week and lead time was assumed to be static at 10 weeks. In reality, lead time fluctuated between 8 and 14 weeks.

This did not just affect total inventory; it disrupted timing alignment:

  • Materials arriving too early for some components
  • Materials arriving too late for others

The issue was not quantity. It was synchronization across time.

Example #3: Safety stock

When shortages occur, organizations often increase safety stock. Most enterprise systems support this through simple mechanisms:

  • Fixed quantities
  • Coverage-based calculations

Safety Stock = Average Daily Demand × Days of Coverage

Both approaches assume relatively stable demand variability and supply risk.

However, real supply chains are not stable. Demand patterns shift, suppliers fluctuate and disruptions occur frequently. In this context, increasing safety stock often protects a distorted signal rather than correcting it.

In my work on inventory optimization, sometimes referred to as Garg’s Principle, I evaluate safety stock across the full forecast horizon rather than at a single point.

A simplified representation is:

Safety Stock = Target Service Inventory − Minimum Projected Inventory Across the Forecast Horizon

This approach identifies the lowest projected inventory point and ensures buffers protect that constraint. It transforms safety stock from a static buffer into a forward-looking stability mechanism.
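
The two formulas can be compared in a minimal sketch with hypothetical weekly numbers. The coverage-based buffer protects an average; the horizon-based buffer is sized against the lowest projected inventory point, which here is a mid-horizon dip.

```python
def projected_inventory(on_hand, receipts, demand):
    """Week-by-week projected balance: prior balance + receipts - demand."""
    levels, balance = [], on_hand
    for received, consumed in zip(receipts, demand):
        balance += received - consumed
        levels.append(balance)
    return levels

def coverage_safety_stock(avg_demand, periods_of_coverage):
    # Conventional static formula: protects an average, not the actual low point.
    return avg_demand * periods_of_coverage

def horizon_safety_stock(target_service_inventory, levels):
    # Simplified forward-looking approach: size the buffer against the
    # lowest projected inventory anywhere in the forecast horizon.
    return target_service_inventory - min(levels)

demand = [100, 100, 150, 100, 100]    # a mid-horizon demand spike
receipts = [100, 100, 100, 100, 100]  # steady planned supply
levels = projected_inventory(on_hand=120, receipts=receipts, demand=demand)
print(levels)  # [120, 120, 70, 70, 70] -- the constraint is the week-3 dip

print(coverage_safety_stock(avg_demand=110, periods_of_coverage=1))       # 110
print(horizon_safety_stock(target_service_inventory=100, levels=levels))  # 30
```

The static formula would carry 110 units everywhere; the horizon-based calculation shows that a buffer of 30, placed against the real constraint, meets the same target.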

In practice, I consistently see that increasing buffers alone does not resolve imbalance:

  • Some components become over-buffered
  • Others remain constrained
  • Overall inventory may increase, but instability persists

The problem is not how much safety stock exists; it is how it is aligned.

Individually, each of the above three examples (planning BOM, lead time and safety stock) introduces distortion. Together, they amplify it.

Why static planning structures break in a dynamic world

Many ERP planning systems were designed for environments where product configurations, supplier behavior and demand patterns changed slowly.

That reality no longer exists.

Today’s manufacturing environments operate in constant change. Product variants evolve rapidly, customer expectations shift quickly and supply chains face ongoing disruption. Yet many planning models still assume stable product mixes, fixed lead times and constant buffers.

This gap between dynamic markets and static planning structures is where imbalances begin.

At a broader level, this reflects a structural limitation of ERP-centric planning. ERP systems are highly effective at executing transactions and maintaining control, but they extend past data into the future using relatively fixed assumptions. As highlighted in Why ERP-Centric Planning Can’t Keep Up with Modern Supply Chains, such systems often struggle to keep pace when demand patterns, supply variability and product configurations change continuously.

In many cases, supply chains do not struggle because forecasts are wrong; they struggle because the parameters translating demand into supply decisions remain static, are updated infrequently or require huge manual effort to change.

Execution systems cannot fix planning imbalance

Planning imbalances do not remain confined to ERP systems; they propagate across the entire manufacturing stack.

Manufacturing Execution Systems (MES) and shop-floor operations depend on the plans they receive. When those plans are structurally imbalanced, execution systems cannot correct them; they simply operationalize the imbalance.

This relationship between planning and execution has been widely discussed in the context of modern MES platforms, which act as the bridge between enterprise systems and real-time production environments, as explored in Manufacturing execution systems: A comprehensive guide to selection and implementation.

I have also discussed a similar pattern in Why your ERP still can’t solve inventory drift — and the architecture that will, where ERP systems struggle not because they are broken, but because they operate on outdated assumptions.

From what I have seen, once a structural error enters the system, it flows through:

Forecast → Planning BOM → ERP → MES → Shop-floor execution

By the time production begins, the imbalance is already embedded.

From static to dynamic planning architecture

For CIOs, I do not see the solution as replacing ERP systems. Instead, I see an opportunity to modernize the intelligence layer that feeds them.

In my experience, artificial intelligence can transform static planning parameters into adaptive models that continuously learn from enterprise data.

AI-driven planning systems can incorporate:

  • Historical configurations and production data
  • Sales inputs and forward-looking programs
  • Engineering changes and substitution patterns
  • Supplier performance and variability

Using these inputs, machine learning models can estimate the probability distribution of components and dynamically generate Planning BOMs that reflect real-world behavior.

In parallel:

  • Lead times can be adjusted dynamically
  • Safety stock can be aligned with forward-looking variability

In practice, this works through four steps:

  1. Build a structural signature from early demand signals
  2. Identify comparable configurations using historical data
  3. Predict component mix probabilities
  4. Generate a dynamic Planning BOM

ERP remains the execution engine, but the structure feeding it becomes adaptive.
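
The four steps above can be sketched with a frequency-based stand-in for the machine learning model. The data and signature scheme are illustrative; a real system would use richer early demand signals and a trained classifier rather than raw historical frequencies.

```python
from collections import Counter

# Steps 1-2: historical orders tagged with a simple structural signature
# (industry segment here -- a stand-in for richer early demand signals).
history = [
    ("automation", "advanced_plc"), ("automation", "advanced_plc"),
    ("automation", "standard_plc"), ("automation", "advanced_plc"),
    ("packaging", "standard_plc"), ("packaging", "standard_plc"),
]

def dynamic_planning_bom(signature):
    """Steps 3-4: predict component-mix probabilities from comparable
    configurations and emit them as the planning percentages."""
    comparable = [component for sig, component in history if sig == signature]
    counts = Counter(comparable)
    total = sum(counts.values())
    return {component: n / total for component, n in counts.items()}

print(dynamic_planning_bom("automation"))
# The mix reflects observed behavior (75% advanced here), not a
# once-a-year master-data assumption.
```

Regenerating the Planning BOM from current data each cycle is what keeps the structure feeding ERP adaptive while ERP itself remains the execution engine.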

When I experimented with dynamic planning approaches, the impact was structural:

Behavior | Traditional static planning | Dynamic planning
Component alignment | Frequent mismatch | Improved alignment
Expediting | Frequent | Reduced by ~30–40%
Production schedules | Unstable | More predictable
ERP–MES alignment | Frequent substitutions | Improved synchronization
Safety stock behavior | Increasing without stability | Targeted and stable

These results reinforce a broader lesson:

Planning challenges are not driven by lack of inventory; they are driven by lack of alignment.

Mini case study: Resolving structural imbalance

In one manufacturing environment I worked with, forecasting accuracy was strong and supplier performance was stable. Yet planning imbalance persisted.

At a system level, inventory appeared sufficient. However:

  • Critical components were frequently unavailable
  • Non-critical components accumulated
  • Production schedules required constant adjustment

The issue was not shortage, it was misalignment.

When I analyzed the system, I found:

  • Planning BOMs reflected outdated configurations
  • Lead times were fixed despite variability
  • Safety stock was increased uniformly

This created a cycle of persistent imbalance and expediting.

We shifted to a dynamic planning approach:

  • BOM assumptions aligned with actual demand
  • Lead times adjusted based on observed variability
  • Inventory evaluated across the planning horizon

Within a few cycles:

  • Imbalance reduced significantly
  • Expediting declined
  • Production schedules stabilized

The key change was not more inventory; it was better alignment.

A strategic opportunity for CIOs and supply chain VPs

From a CIO perspective, this represents a fundamental shift.

The question is no longer: “How do we improve planning tools?”

The better question is: “How do we transform static planning parameters into adaptive planning intelligence?”

Because in modern manufacturing, planning structure is strategy.

Conclusion

Based on my experience, traditional planning systems rely on static assumptions, while modern supply chains operate in constant change.

The challenge is not about inventory levels; it is planning alignment.

When planning structures remain static, imbalances persist — even when forecasting and execution improve.

But when planning becomes dynamic, when assumptions evolve with reality, those imbalances begin to disappear.

The next era of manufacturing advantage will come not from more inventory or faster execution, but from dynamic real-time alignment between planning assumptions and real-world behavior.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


AI is scoring your job candidates. Can you explain how?

April 20, 2026, 08:00

Somewhere in your organization’s hiring stack, there is probably an AI system producing candidate scores. If you’re a leader who helped evaluate or approve that system, here’s a question worth sitting with: If one of those scores got challenged, by a candidate, an internal audit or a regulator, could your team explain how it was produced?

Not “the vendor said it’s accurate.” Not “the model was trained on historical data.” A specific, documented explanation of what criteria were evaluated, how the candidate performed against them and why those criteria are job-relevant.

For a growing number of organizations using AI video interview scoring tools, the honest answer is no. And as regulatory frameworks targeting employment AI move from guidance to enforcement, that answer is a risk.

What the system is actually optimizing for

Before asking how accurate an AI scoring system is, the right question is what it is optimizing for.

Many video interview scoring platforms evaluate tone of voice, pace, eye contact, facial expressions and fluency alongside, or in some cases instead of, the actual content of candidate responses. The underlying assumption is that these signals correlate with job performance or cultural fit. The evidence for that assumption is weak. The evidence that measuring these signals introduces systematic, legally significant bias is much stronger.

Several major players in this space removed facial analysis features after regulatory pressure and public scrutiny. That acknowledgment — that criteria advertised as objective were neither reliable nor fair — should raise a harder question. If those criteria were in production and no one caught it until outside pressure forced a change, what else is still being measured that shouldn’t be?

This is not a hypothetical risk. The EEOC has made it clear that employers are liable under Title VII for discriminatory outcomes from AI hiring tools, regardless of whether those tools were built in-house or purchased from a vendor. New York City’s Local Law 144 requires annual independent bias audits of automated employment decision tools and public disclosure of results. Illinois requires notice and consent before AI is used to evaluate video interviews. The EU AI Act, whose high-risk AI provisions take full effect this August, explicitly classifies employment AI as high-risk, with binding requirements for transparency, explainability and human oversight.

The common thread: Can you explain what your AI is measuring, and can you demonstrate that it’s measuring the right things?

The accountability problem at the executive level

For technology leaders, this is where the conversation becomes concrete.

Consider the scenario: A hiring decision gets challenged by a candidate, an internal audit or a regulator. The question is how the decision was made. “The AI scored them lower” is not a defensible answer in any of those contexts. It can’t be traced to specific job-relevant criteria. It can’t be explained to the candidate. It won’t satisfy an auditor. And if the system’s logic is proprietary and opaque, the organization has no way to produce a satisfying answer even if it wants to.

The organizations that adopt black-box scoring tools often do so with the right intentions: To reduce human bias and create a more consistent process. Those are legitimate goals. But a system whose internal logic can’t be questioned, explained or audited just obscures bias. It doesn’t reduce it. And when bias becomes difficult to see, it becomes more difficult to address.

This is a pattern you’ll recognize from other domains. When a system produces outcomes that look plausible but are wrong in ways that aren’t immediately visible, the failure compounds before it surfaces. The cost of discovering it late is almost always higher than the cost of building it right from the start.

What a defensible architecture looks like

There is a meaningful difference between AI that scores interviews and AI that scores interviews in a way that can be explained and defended. The distinction is structural.

Defensible scoring starts before any candidate records a response. It starts with the job. What competencies does this role require, and what does strong performance against each competency look like? From those answers, explicit rubrics are developed. Criteria that describe what high-quality, adequate and weak responses look like for each dimension being evaluated. Those rubrics are reviewed and approved by the hiring team before scoring begins.

When responses come in, the AI evaluates what candidates actually said against those pre-defined criteria. Not tone. Not pacing. Not facial expression. What they communicated, measured against a standard the hiring team set, and can explain. Criterion-level scores roll up to an overall assessment, and every part of that chain is visible and auditable.

This architecture has an important secondary property: The human remains meaningfully in the loop. The AI generates a starting point by identifying relevant competencies and drafting rubric criteria from the job description, but the standard is owned by the people responsible for the hire. If a hiring manager can’t look at a scoring rubric and explain what it’s evaluating and why, it should not be deployed. That is not a burden on the tool. It is the minimum condition for using it responsibly.
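The auditable chain described above — criterion-level scores rolling up to an overall assessment, with every component traceable to a named criterion and the candidate's actual words — can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation; the class names, fields, and weighting scheme are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One job-derived evaluation dimension, approved by the hiring team."""
    name: str      # e.g. "stakeholder communication"
    weight: float  # relative importance, set by the hiring team
    rubric: dict   # score level -> description of what that level means

@dataclass
class CriterionScore:
    criterion: Criterion
    score: int     # awarded against the pre-approved rubric
    evidence: str  # what the candidate actually said

def overall_assessment(scores: list[CriterionScore]) -> dict:
    """Roll criterion-level scores up to one number, keeping the chain auditable."""
    total_weight = sum(s.criterion.weight for s in scores)
    overall = sum(s.score * s.criterion.weight for s in scores) / total_weight
    # Every component of the overall score remains traceable to a named
    # criterion, a rubric level, and the candidate evidence behind it --
    # the property that lets a score be explained to a candidate or auditor.
    return {
        "overall": round(overall, 2),
        "breakdown": [
            {"criterion": s.criterion.name,
             "score": s.score,
             "rubric_level": s.criterion.rubric.get(s.score, "undefined"),
             "evidence": s.evidence}
            for s in scores
        ],
    }
```

The key design property is that nothing in the output is opaque: the "breakdown" list is the explanation a hiring manager or regulator would ask for.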

Four questions for the governance conversation

For leaders evaluating or overseeing AI video interview tools, four questions surface most of what matters.

  1. What specifically is the system scoring? Request an explicit list of evaluation criteria. If the answer includes anything beyond the content of candidate responses, ask for the validation data that connects those criteria to job performance outcomes.
  2. Are the criteria derived from job requirements? Generic rubrics applied uniformly across roles produce standardized evaluation, not structured evaluation; the two are not the same. Legitimate scoring starts from the specific competencies required for the specific role.
  3. Can the criteria be reviewed, modified and approved before scoring begins? If the rubrics are fixed and opaque, the organization is not in control of its own evaluation standard. That is a governance gap.
  4. Can any score be explained to a candidate or a regulator? This is the accountability test. If the explanation requires “the AI said so” rather than pointing to specific, documented criteria and how a candidate performed against them, the process will not withstand scrutiny.

Well-designed systems answer these questions directly. The ones that can’t are telling you something important about the tradeoffs their creators made.

Why this moment matters

The EU AI Act deadline arrives in August, forcing organizations with global operations or EU-based candidates to evaluate their tech. But getting this right isn’t just a regulatory matter; it’s a practical one.

When hiring teams can see exactly how a score was produced, they use it. When they can’t explain it, they override it or work around it, and the efficiency gains disappear. The tools that will last in enterprise hiring stacks are the ones that make decisions transparently enough that the humans responsible for those decisions trust them.

That’s not a high bar. But it requires being precise about what any given AI system is really measuring. And honest about whether that’s what you actually want to know.

This article is published as part of the Foundry Expert Contributor Network.



IBM’s government DEI settlement could increase pressure to avoid tech hiring diversity

15 de Abril de 2026, 01:01

IBM has agreed to settle a complaint from the US Justice Department around its initiatives to diversify its workforce and to encourage hiring of underrepresented groups, contrary to a presidential directive. The federal contractor also agreed to pay the government roughly $17 million.

The pressure from the Trump administration to eliminate workforce diversification efforts, typically known as DEI (Diversity, Equity, and Inclusion) programs, has persuaded many companies, including Meta, Google, Amazon, Salesforce, Intel, OpenAI, Tesla and Zoom, to publicly back away from those diversification efforts. A few companies, including Apple, Microsoft, Nvidia and Oracle, have held firm in favor of DEI, for the most part. 

The government’s official position states that age, race, sexual preference, and gender should have zero impact on hiring decisions. Diversification proponents counter that workforce composition will stay stagnant unless explicit efforts are made to diversify.

Focus of settlement

The Justice Department settlement focused mostly on IBM’s role as a government contractor.

The government filing said IBM made “false claims” and “false statements” to the government regarding hiring practices in connection with IBM’s government contract work.

“As a federal contractor, IBM was required to comply with anti-discrimination requirements as set forth in Title VII of the Civil Rights Act of 1964,” the settlement said, adding that IBM “discriminated against employees during employment and applicants for employment because of race, color, national origin, or sex, and failed to treat employees during employment without regard to race, color, national origin, or sex.”

Beyond hiring practices, the government also opposed hiring goals that encouraged diversity, including “developing race and sex demographic goals for business units and taking race, color, national origin, or sex into account when making employment decisions to achieve progress towards those demographic goals” and using those same criteria to offer “certain training, partnerships, mentoring, leadership development programs, educational opportunities or resources, and/or similar opportunities only to certain employees.”

The agreement also said that the deal “is neither an admission of liability by IBM nor a concession by the United States that its claims are not well founded” and added that IBM agreed to the settlement “to avoid the delay, uncertainty, inconvenience and expense of protracted litigation.”

Acting US Attorney General Todd Blanche issued a statement saying, “racial discrimination is illegal, and government contractors cannot evade the law by repackaging it as DEI.”

IBM did not respond to an email seeking comment.

Companies can work around biases

Bryan Howard, the CEO of recruiting strategy consulting firm Peoplyst, said he would encourage enterprises to simply move their workforce diversification efforts earlier in the recruitment process. 

“There’s a big difference between candidate pool and the selection process,” Howard said, suggesting that there are no federal rules limiting outreach choices. If, for example, a company wanted to increase workforce representation for a particular group, then the job notice should be focused on universities and other places where that group is well represented.

“Expand your pool and do not contract it. Fish in the ponds where those people are,” Howard said. “Increase diversity by simply recruiting from diverse sources.”

Howard also said the government position leverages last year’s US Supreme Court decision in Ames v. Ohio Department of Youth Services, where the court held that reverse discrimination is illegal. 

Complicating diversification efforts today are two popular recruiting/hiring tools pushed by HR: Using genAI to filter a massive number of applicants and only present a small handful to the hiring managers to choose from; and referral programs in which employees are offered cash incentives if they recommend job candidates who are eventually hired.

AI’s bias is to seek job candidates whose profiles most closely resemble that of the current workforce. In other words, AI wants to learn everything it can about who the company has hired before, to help it determine the attributes to look for. 

Referral programs, Howard said, also tend to attract people with the same characteristics as the existing workforce. Even though those referral hires tend to stay with the company longer, “if you have a population that is already skewed and that is the population recruiting, the existing bias will likely continue.”
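The dynamic described above — an AI screener that prefers candidates whose profiles most closely resemble the current workforce — can be made concrete with a toy filter that ranks applicants purely by similarity to existing employees. Everything here (the feature vectors, the cosine rule, the candidate names) is invented for illustration; this is not a reproduction of any real screening product.

```python
def similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def rank_candidates(candidates, workforce):
    """Rank candidates by average similarity to current employees.

    Qualification plays no role here: the filter simply prefers whoever
    most resembles the people already hired, so any skew in the existing
    workforce is reproduced in the ranking.
    """
    def avg_sim(cand):
        return sum(similarity(cand["features"], w) for w in workforce) / len(workforce)
    return sorted(candidates, key=avg_sim, reverse=True)
```

A screener built this way can look rigorously data-driven while doing nothing more than echoing past hiring decisions, which is exactly the bias Howard describes.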

Settlement could hurt recruitment efforts

Consultant Brian Levine, executive director of FormerGov, said it is difficult to interpret the settlement as anything other than opposing DEI efforts. 

The US Justice Department, where Levine once worked as a federal prosecutor, “has issued a multi-million-dollar penalty for company policy that seemed to be intended to encourage diversity,” he said. “As with Anthropic, in this new world, sometimes organizations may be forced to choose between ‘the law’ as it is currently being interpreted by some, and a good faith effort to positively influence society, or at least to minimize societal harm.”

Levine said some enterprises may try to overcompensate to keep the current administration happy.

“Fearing financial penalties, some companies that work with the federal government will now choose to ensure their DEI program is fully dismantled,” Levine said. “Other companies may choose to cease working with the federal government and/or may choose to keep, or even double down, on their DEI program. If Anthropic is any indication, these latter companies may ultimately be rewarded in the market.”

Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, added that this settlement might end up hurting tech recruitment efforts. 

“I think that this will force organizations to reframe their DEI programs to not upset the DOJ, which could have an impact on hiring of individuals in certain classes and could result in overall less diversity,” Villanustre said. “Diversity is an important part of building resilient, successful organizations, so this could have a broader impact than just the one at hiring time.”


CIOs reimagine business processes to reap AI benefits

13 de Abril de 2026, 07:01

Every business process reflects the constraints that existed when it was first devised, says IT exec Maria Cardow.

Those constraints, Cardow explains, typically derived from technology limitations at the time of the process’s initial implementation. As a result, many business processes to this day involve workflows that still require manual actions or cumbersome jumps between multiple computer applications to accomplish essential tasks.

But artificial intelligence and other modern technologies can knock down those old constraints, “which is why it’s incredibly important to reassess processes” before attempting to automate what may likely be an ineffective, outdated process, says Cardow, CIO of managed security services provider LevelBlue.

“You have to check the assumptions in processes before you kick off a transformation if you want to allow for greater innovation,” she emphasizes.

Executives are feeling the pressure to transform or optimize their organizational workflows with AI. J.P. Morgan’s 2026 Business Leader Outlook found process automation to be the most common AI application that midsize businesses use or plan to use, cited by 62% of executives surveyed. And EY’s CEO Outlook 2026 found that 43% of CEOs identified optimizing operations and improving productivity as a transformation priority, making it the top cited priority, with enhancing product and process innovation coming in at No. 3.

Cardow is in the thick of leading such efforts, and she’s finding that IT is well positioned to do the work.

For example, when examining business workflows, her IT team often finds that processes span “at least a dozen different systems and that the number of assumptions baked into workflows were dependencies on technologies two generations back,” she says.

Such instances show why CIOs should lead their organizations through process optimization before automating with AI, Cardow says, citing the longstanding admonition against automating bad processes.

A natural fit for optimization

Anytime a technology is up for license renewal, LevelBlue endeavors to review any business process that depends on it, Cardow says. The goal is to determine whether the process is ripe for optimization and transformation before renewing or replacing the existing technology — and before applying further automation or AI.

That timing means Cardow, as CIO, takes a lead role in optimizing enterprise processes. Moreover, she brings more in-depth knowledge than her business colleagues of what technology capabilities can optimize and transform a process. And as CIO, she has a remit to improve productivity and efficiency while driving down costs.

The CIO is a natural fit for such work, Cardow adds, pointing out that CIOs have had responsibility for optimization for most of the role’s existence.

IT also brings objectivity to process optimization, she adds. Workers are often comfortable with the status quo and the tools they’re using, whereas CIOs and IT teams don’t have such attachments.

“Being able to critically challenge how processes are executed is absolutely essential to my role now,” she says. “[The business] will still say, ‘This is how it gets done,’ but CIOs are well positioned to ask about baked-in assumptions and whether certain steps are still appropriate as they partner [with the business] to streamline processes and develop solutions.”

AI heightens the significance of process optimization

As Cardow notes, CIOs have been involved in process optimization since the role’s inception, implementing technologies to automate tasks. But the CIO’s role in process optimization has taken on heightened significance in the AI era.

“It’s vastly different now,” says Doug Gilbert, CIO and chief digital officer at Sutherland, a digital transformation services company. “There’s been an evolutionary shift.”

Gilbert says CIOs focused on improving individual tasks during prior waves of automation, such as when robotic process automation (RPA) was first rolled out. Now CIOs must optimize entire processes and workflows to transform larger swaths of their organization’s operations.

“If you’re looking at AI to be a more complex RPA to solve minute tasks and activities, you’re going to fail,” he says. “You’ve got to look at how humans work across systems and across tasks; automation today has to go across an entire process flow.”

He cites as case in point his company’s Insurance AI Hub, a project launched and completed in 2025.

“As CIO, I was responsible for the overall technology direction and architecture. This meant overseeing the implementation of a full ecosystem of domain-trained AI agents for underwriting, claims adjudication, and policyholder servicing across multiple lines of business, while ensuring clean master data foundations, embedded governance, observability, and human-in-the-loop controls were built in from the start,” he says. “The result is production-scale agentic AI that delivers up to 30% faster claims cycle times, lower leakage, higher satisfaction, and stronger compliance.”

Gilbert cites as a second example his company’s agentic AI platform for healthcare provider credentialing, saying this multiagent system “reimagines a highly complex, regulated, cross-functional process at the macro level — automating and contextualizing information across many previously siloed processes, internal systems, and external sources.”

These agents “work in tight coordination to tackle a much larger, macro-level complex process — sharing contextualized data in real-time, maintaining quality and compliance throughout, and enabling a far more comprehensive level of automation and problem-solving across the entire end-to-end credentialing workflow,” he says. “This turns what used to be a slow, manual, multiweek process into a fast, accurate, governed operation.”

A point of inflection for CIOs

IT execs say process optimization today requires more from CIOs than in the past. CIOs must now understand how work happens within the organization in more detail, as well as how those processes sit within broader workflows, how processes and workflows arrive at decisions, and what defines a good or accurate decision.

“Agentic AI — the ability for LLMs to actually take action — is raising the bar significantly,” Gilbert says. “When designed properly, these systems don’t just follow rules; they plan, decide, act across systems, and learn. That means CIOs can no longer optimize in silos or bolt intelligence onto broken processes. We must now lead the complete reimagination of workflows so they are natively designed for autonomous agents while staying inside clear guardrails.”

Those are more complex — and more significant — requirements than CIOs have faced to date, Gilbert adds.

“In the past we would receive a process from the business, map it, automate the obvious parts with RPA or traditional tools, and hand it back. Today, because AI brings reasoning, contextualization, and the ability to work across systems, the CIO must lead the charge in fundamentally reimagining how work gets done,” Gilbert says.

“We now own end-to-end process intelligence, ensuring the underlying data is clean and contextualized, governance is embedded, and the optimized process actually delivers trustworthy business outcomes,” he adds. “The CIO must sit at the strategy table and, in large part, drive it going forward.”

Organizations across industries are indeed turning to their CIOs to reimagine workflows, says Catherine Malkova, senior vice president of Kyndryl Consult and Practices US.

“There is a focus on business workflow, not just working with old processes and making them more efficient but instead remaking them from the ground up into an AI agentic workflow. It’s about building an AI-native enterprise,” she says, noting that “the ‘agentification’ of workflows is what will drive the next wave of AI adoption.”

Kyndryl’s 2025 Readiness Report highlights the amount of transformation executives are expecting, with 87% of the 3,700 business leaders across 21 countries saying that AI will completely transform job roles and responsibilities at their organizations this year.

Challenges abound

Expectations of transformation are high, but so too are the challenges that CIOs and their C-suite colleagues face.

Kyndryl’s research identifies “five specific readiness challenges ahead of them: getting solid tech foundations, managing global data, evolving workforces, pressure to scale AI pilots, and aligning leadership.”

Rigid workflows, fragmented systems, lack of trust in AI, lack of required skills, and resistance to change also present challenges to CIOs, their colleagues, and their teams as they seek to optimize and transform processes and workflows, Kyndryl’s Malkova adds.

Others list the lack of good documentation for many processes as another barrier, along with concerns about data security, privacy, and AI hallucinations.

Dom Profico, who as CTO of consultancy Bridgenext advises CIOs, says enterprise ambition itself can be problematic, as executives and teams may rush to adopt AI without putting in the work required to transform the processes AI is meant to help optimize.

“AI makes it so much easier to do that automation, so you have a higher risk of automating a bad process. And there’s a lot of pressure in the age of AI. Everyone feels like they’re behind. That adds to the likelihood and risk of going too fast,” he says.

Profico stresses that while automating existing processes may create some efficiencies, it may not lead to the optimization and transformation that delivers the significant returns that CEOs and boards now want from AI investments.

CIO as chief instigator

CIOs must also bring rigor to the discipline of process optimization “to make sure the teams are automating the right things and using the technology in the right way to do that,” Profico says.

He lists systems thinking as a must-have skill for CIOs, as it allows them to take a holistic problem-solving approach where they see how the different parts interact within a larger whole.

That, he notes, requires CIOs to deepen and broaden their knowledge of the business, how work gets done, and what adds value.

Sutherland’s Gilbert agrees, saying needed skills include deep fluency in process intelligence platforms and mining tools; strong data strategy expertise (especially master data management, lineage, and context engineering); AI governance and risk management (particularly around autonomous agents and accountability); advanced change leadership (i.e. designing new operating models where humans and agents collaborate effectively); and business translation skills — “the ability to link technical decisions directly to P&L and customer outcomes.”

“Technical depth is still essential, but the real differentiator today is the ability to think systemically about human-plus-agent workflows and to lead at the intersection of strategy, technology, and risk,” Gilbert adds.

Merim Becirovic, CIO, managing director, and partner at Boston Consulting Group, has a similar take on the process optimization work that CIOs and their teams are leading.

Like others, Becirovic says CIOs in the past focused on optimizing processes within siloes. “But what we’re doing now is not about sending people across the different systems. It’s about transforming the journey and the experiences within that journey. It’s about connecting the dots and connecting the systems that create those experiences and that journey,” he says.

That requires CIOs to focus on how processes and workflows connect to deliver outcomes.

“That’s a natural evolution for me as a CIO,” he adds. “Technology gives you a lens into that work and gives you access to products that allow you to break siloes down.”

Becirovic points to his IT department’s delivery of Deckster, a homegrown AI tool for slide deck creation, as an example of successful process optimization and transformation.

BCG workers create more than 30 million slides annually, making it a process that takes up significant work time. Rather than focus on incremental gains by using generative AI for specific tasks, such as crafting the language on individual slides, BCG teams reimagined the entire slide-creation lifecycle. Using OpenAI’s API alongside a curated library of BCG templates, Deckster produces fully formatted, client-ready slides in just three seconds instead of the 15 minutes it had taken.

BCG is now bringing agentic AI capabilities to Deckster to further optimize and transform the process.

“Those are the opportunities where CIOs can step in and say, ‘Is there a better way to work?’” Becirovic says. “I really think CIOs today and tomorrow need to be far more aggressive in looking for change and driving change. The CIO needs to be the instigator, influencer, enabler, collaborator, and the innovator to make that happen.”

40% of AI productivity gains lost to rework for errors

April 13, 2026, 06:00

While the promise of enterprise AI use hinges on productivity gains, hidden drains on productivity have emerged in practice, with your organization’s most engaged employees likely bearing the brunt of “AI workslop.”

According to a recent survey from Workday, around 40% of time saved through use of AI is offset by the extra work created fixing AI-generated content. Workday estimates that for every 10 hours of efficiency that companies gain through AI tools, approximately 4 hours are lost fixing AI outputs.
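Workday’s ratio implies a simple net-value calculation that leaders can apply to their own estimates. A minimal sketch — the function name and default rework rate are illustrative, with the 40% figure taken from the survey:

```python
def net_hours_saved(gross_hours_saved: float, rework_rate: float = 0.4) -> float:
    """Net productivity gain after subtracting time spent fixing AI output.

    rework_rate is the share of gross savings lost to rework --
    roughly 40% per Workday's survey.
    """
    return gross_hours_saved * (1 - rework_rate)

# Workday's example: 10 hours of gross efficiency yields about 6 net
# hours, with roughly 4 hours lost to fixing AI outputs.
print(net_hours_saved(10))
```

Tracking this net figure, rather than gross time saved, is what surfaces the blind spot the report describes.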

“We’re seeing teams use AI to summarize everything, from simple meeting notes to complex policy or analyst reports. It works for the first, but in the second case, experts often spend more time fixing the output than if they had written it themselves. The real challenge is getting much more granular about where AI adds value — and where it creates rework,” says Laura Stash, executive vice president at iTech AG.

Identifying lost productivity

Leaders need to step back and evaluate where AI adds value, a process that can be as simple as talking to the teams in your organization that are handling the bulk of the company’s AI-generated work, says Paul Farnsworth, president of Dice.

“Look for patterns like certain workflows consistently requiring rework or high performers spending more time editing than creating. AI shouldn’t just accelerate output; it should ultimately reduce friction. If it’s doing the opposite, that’s where leaders need to step in and adjust how this tool is being utilized within their organization,” says Farnsworth.

Lost productivity from AI can quickly become a blind spot for leaders focused on “gross efficiency,” according to Workday. Metrics that track only the time AI saves lose sight of the tools’ “net value.” Early gains in efficiency and speed may mean AI is simply generating outputs faster — not improving quality or results.

“I’d recommend leaders watch for spots where work starts bouncing back and forth for edits, multiple reviews, or manual fixes, especially from their strongest employees. If speed is up but AI-generated mistakes, revisions, or frustration are also climbing, that may be a sign AI is adding friction instead of value,” says Kareem Osman, VP and market director of technology talent solutions at Robert Half.

Engaged employees

Those most eager to adopt AI are the employees shouldering the burden of AI-related rework, Workday found, with 77% of daily users saying they “audit AI work with the same or more rigor than human work.” This added labor amounts to an extra 1.5 weeks of time “lost to fixing AI outputs per highly engaged employee, per year.”

“A company’s strongest employees often become the safety net — they’re the ones catching mistakes, fixing issues, and making sure things don’t slip through the cracks. Over time, that can feel less like high-impact work and more like constant cleanup, which is unsustainable long term,” Farnsworth says.

Stash sees the biggest issues around quality and “cleanup work” with AI arising when employees use it to generate more complex work, especially when they aren’t fully trained on AI tools.

“There’s a place for AI in low-value, repeatable work, but applying that same approach to high-expertise tasks without proper training or validation creates more problems than it solves. If people can’t refine or trust the output, productivity gains disappear — and over time, you risk eroding expertise altogether,” says Stash.

Training

According to Workday’s report, 66% of leaders cite AI skills training as a top investment priority, but only 37% of employees who use AI daily reported increased access to training. This has created an imbalance at many organizations where employees are expected to generate high-quality work with AI without the training or skillsets to do so effectively.

Leaders need to align their AI expectations to training and upskilling efforts, Dice’s Farnsworth says.

“That means investing in enablement by training people not just on how to use AI, but how to use it well. It means putting guardrails in place so outputs are reliable and aligned with business goals, and it also means continuously reassessing impact, not assuming that faster automatically means better,” he adds.

It might require redefining job roles, updating employees on the new skills they’ll need for their positions, and offering clear guidance on how and when to use AI. The Workday report found that 54% of AI users who report struggling with the technology say their required skills haven’t been updated, leaving them unsure of exactly where to start with learning AI skills.

“Employees need clear guidelines on when to use AI, how to validate output, and what success actually looks like. The companies doing this well pair AI with training, quality standards, and accountability, so it helps people do better work, and not just more work faster,” says Robert Half’s Osman.
