

Anthropic’s financial agents expose forward-deployed engineers as new AI limiting factor

May 6, 2026, 13:34

When financial tech vendor FIS announced its new AI agent for detecting financial crimes on Tuesday, it made much of its embedding of a team of forward-deployed engineers (FDEs) from Anthropic to make it happen. FIS is just one of the dozen or so companies working with Anthropic on developing agents for financial services using the new connectors and so-called “ready-to-run” templates Anthropic announced the same day.

Enterprise CIOs are increasingly paying for the services of AI vendors’ FDEs, given their own data quality issues and the complexity of working with AI models.

But how and why such teams are brought in can determine whether the enterprise is helped to the next AI level or becomes hostage to never-ending consulting costs.

FIS listed the Bank of Montreal (BMO) and Amalgamated Bank as the first two companies to deploy its agent, which it said will compress anti-money-laundering investigations from hours to minutes, assembling evidence across a bank’s core systems and surfacing the riskiest cases for review with full auditability and traceability of decisions. “Anthropic’s Applied AI team and forward-deployed engineers (FDEs) are embedded with FIS to co-design the Financial Crimes AI Agent and transfer knowledge so FIS can build and scale additional agents independently over time,” it said.

Aman Mahapatra, chief strategy officer for Tribeca Softtech, a New York City-based technology consulting firm, suggests CIOs follow the money when evaluating similar work with AI vendors. 

“The structurally interesting thing about the FIS-Anthropic model is who actually pays the FDE cost. This is the question CIOs should be asking but mostly are not,” Mahapatra said.

The cost of FDEs could put some AI projects in jeopardy, according to a recent report by Alex Coqueiro, a senior director analyst at Gartner. He predicted that by 2028, “70% of enterprises will be forced to abandon agentic AI solutions from FDE-led engagements because of high vendor costs and lack of internal skills to evolve them independently.”

Service, not software

He argued that the problem is not entirely the fault of the AI vendor. Many IT operations don’t put in the necessary preparatory work to clean their data and make it AI-friendly. Internal corporate politics and personalities are another critical factor.

“The domain experts most critical to FDE success have the strongest incentive to undermine it. An expert who perceives the FDE as capturing their expertise for agentic automation will give the official process instead of the real one, and the AI agent built on it will fail on the exact edge cases they chose not to mention,” Coqueiro said in the report. “Flat FDE effort across successive deployments is the signal that an engagement has produced a dependency, not a capability. When effort does not decrease as use cases mature, the organization is paying consulting rates for operations it should own.”

In the case of FIS’s work with Anthropic, said Mahapatra, “BMO and Amalgamated are not writing direct checks to Anthropic for forward-deployed engineers at quarterly consulting rates. FIS is absorbing the FDE engagement and amortizing it across its banking customer base.”

That approach, he said, “is meaningfully better economics than direct Anthropic engagements where each bank funds its own embedded engineering team to redesign the same context boundaries, shadow autonomy controls, and the jailbreak resistance testing in isolation.”

Mahapatra said much of this problem stems from how generative and agentic AI have been marketed. The original ROI thesis, he said, was that AI enables enterprises to do more with fewer people, but that was “a marketing pitch that was never going to survive contact with regulated banking workflows.”

Nik Kale, a member of the Coalition for Secure AI (CoSAI) and of ACM’s AI Security (AISec) program committee, said that he sees FIS’s presentation of its work with Anthropic as “a concession that frontier AI isn’t a product yet. CIOs thought they were buying software. They’re actually buying a professional services engagement. That changes the cost model, the dependency model and the governance model for every enterprise AI deployment.”

Kale said the statement’s wording gives a clue about the agentic strategy. 

“The FIS release says every agent decision is traceable and auditable. True statement, wrong sentence. The harder question isn’t auditing what the agent decided. It’s deciding which decisions are the agent’s to make in the first place. Banks have decades of decision-rights frameworks. They don’t translate cleanly to agent harnesses built by someone else’s engineers,” Kale said. “The CIO test is simple: after the forward-deployed team leaves, can your organization still operate, monitor, challenge, and safely modify the agentic workflow? If the answer is no, it’s not mature yet. It may be a successful implementation project, but it’s not yet an enterprise capability.”

Justin Greis, CEO of consulting firm Acceligence and former head of the North American cybersecurity practice at McKinsey, agreed with Kale.

Human judgment pretending to be process

“The bigger risk isn’t the cost of these engagements. It’s the dependency they can create. Spending a few hundred thousand dollars to get something into production isn’t the issue,” Greis said. “Ending up with a system that only the vendor can operate, extend, or even fully understand is where things start to break down.”

The problem with some of these consulting arrangements is not that they hide IT deficiencies so much as that they enable AI shortcuts.

Enterprises paying FDE teams “do not undermine the ROI case for agentic AI. They undermine the lazy version of the ROI case. That distinction matters,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “For the past two years, too much of the enterprise AI narrative has been sold as a tidy labor-reduction story. Buy the model. Automate the work. Reduce the people. Capture the savings. It is neat, board-friendly, and deeply incomplete. Large enterprises are not collections of clean tasks waiting to be automated. They are collections of exceptions, legacy systems, fragile integrations, access controls, undocumented workarounds, compliance obligations, and human judgement pretending to be process. Forward deployed engineers are the invoice for making AI real. That is not transformation. That is dependency with better stationery.”

Another FDE concern is the potential conflict of interest: the AI vendor being paid to fix the complexity may also be the vendor that created much of that complexity in its model.

Carmi Levy, an independent technology analyst, said the business case can undermine enterprise objectives. “If AI agents are supposed to autonomously create, deploy, and manage super-capable workflows at all levels of the organization, their very capability threatens the future viability of vendors who have long attached lucrative support contracts to those very same deployments. If the FDE is going to be engaged to work alongside customers to make their AI agents come alive, where is the incentive for AI vendors to build agentic systems that are so capable that they don’t require ongoing support? The FDE business model influences up-front model design, and it’s entirely possible that AI platforms are being deliberately designed to require persistent FDE support.”

  • ✇Security | CIO
  • AI is spreading decision-making, but not accountability
    On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears and the question of who’s responsible is immediately raised. As AI moves from experimentation into production, accountability is no longer a technical concern, it’s an executive one. And while governance frameworks sugg
     

AI is spreading decision-making, but not accountability

May 6, 2026, 07:00

On a holiday weekend, when most of a company is offline, a critical system fails. An AI-driven workflow stalls, or worse, produces flawed decisions at scale that misprice products or expose sensitive data. In that moment, organizational theory disappears and the question of who’s responsible is immediately raised.

As AI moves from experimentation into production, accountability is no longer a technical concern, it’s an executive one. And while governance frameworks suggest responsibility is shared across legal, risk, IT, and business teams, courts may ultimately find it far less evenly distributed when something goes wrong.

AI, after all, may diffuse decision-making, but not legal liability.

AI doesn’t show up in court — people do

Jessica Eaves Mathews, an AI and intellectual property attorney and founder of Leverage Legal Group, knows that when an AI system influences a consequential decision, the algorithm isn’t what will show up in court. “It’ll be the humans who developed it, deployed it, or used it,” she says. For now, however, the deeper uncertainty is that there’s very little case law to guide those decisions.

“We’re still in a phase where a lot of this is speculative,” says Mathews, comparing the moment to the early days of the internet, when courts were still figuring out how existing legal frameworks applied to new technologies. Regulators have signaled that responsibility can’t be outsourced to algorithms. But how liability will be apportioned across vendors, deployers, and executives remains unsettled — an uncertainty that’s unlikely to persist for long.

Jessica Eaves Mathews, founder, Leverage Legal Group (Image: LLG)

“There are going to be companies that become the poster children for how not to do this,” she says. “The cases working their way through the system now are going to define how this plays out.”

In most scenarios, responsibility will attach first and foremost to the deploying organization, the enterprise that chose to implement the system. “Saying that we bought it from a vendor isn’t likely to be a defense,” she adds.

The underlying legal principle is familiar, even if the technology isn’t: liability follows the party best positioned to prevent harm. In an AI context, that tends to be the organization integrating the system into real-world decision-making, so what changes isn’t who’s accountable but how difficult it becomes to demonstrate appropriate safeguards were in place.

CIO as the system’s last line of defense

If legal accountability points to the enterprise, operational accountability often converges on the CIO. While CIOs don’t formally own AI in most organizations, they do own the systems, infrastructure, and data pipelines through which AI operates.

“Whether they like it or not, CIOs are now in the AI governance and risk oversight business,” says Chris Drumgoole, president of global infrastructure services at DXC Technology and former global CIO and CTO of GE.

The pattern is becoming familiar, and increasingly predictable. Business teams experiment with AI tools, often outside formal processes, and early results are promising. Adoption accelerates but controls lag. Then something breaks. “At that moment,” Drumgoole says, “everyone looks to the CIO first to fix it, then to explain how it happened.”

Chris Drumgoole, president, global infrastructure services, DXC Technology (Image: DXC)

The dynamic is intensified by the rise of shadow AI. Unlike earlier forms of shadow IT, the risks here aren’t limited to cost or inefficiency. They extend to things like data leakage, regulatory exposure, and reputational damage.

“Everyone is an expert now,” Drumgoole says. “The tools are accessible, and the speed to proof of concept is measured in minutes.” For CIOs, this creates a structural asymmetry. They’re accountable for systems they don’t fully control, and increasingly for decisions they didn’t directly authorize.

In practice, that makes the CIO the enterprise’s last line of defense, not because governance models assign that role, but because operational reality does.

The illusion of distributed accountability

Most organizations, however, aren’t building governance structures around a single accountable executive. Instead, they’re constructing distributed models that reflect the cross-functional nature of AI.

Ojas Rege, SVP and GM, privacy and data governance, OneTrust (Image: OneTrust)

Ojas Rege, SVP and GM of privacy and data governance at OneTrust, sees this distribution as unavoidable, but also potentially misleading. “AI governance spans legal, compliance, risk, IT, and the business,” he says. “No single function can manage it end to end.”

But that doesn’t mean accountability is shared in the same way. In Rege’s view, responsibility for outcomes remains firmly with the business. “You still keep the owners of the business accountable for the outcomes,” he says. “If those outcomes rely on AI systems, they have to figure out how to own that.”

In practice, however, governance is fragmented. Legal teams interpret regulatory exposure, risk and compliance define frameworks, and IT secures and operates systems. The result is a model in which responsibility appears distributed while accountability, when tested, is not — and it often compresses to a single point of failure. “AI doesn’t replace responsibility,” says Simon Elcham, co-founder and CAIO at payment fraud platform Trustpair. “It increases the number of points where things can go wrong.”

Simon Elcham, CAIO, Trustpair (Image: Trustpair)

And those points are multiplying. Beyond traditional concerns such as security and privacy, enterprises must now manage algorithmic bias and discrimination, intellectual property infringement, trade secret exposure, and limited explainability of model outputs.

Each risk category may fall under a different function, but when they intersect, as they often do in AI systems, ownership becomes blurred. Mathews frames the issue more starkly: accountability ultimately rests with whoever could have prevented the harm. The difficulty with AI systems is that multiple actors may plausibly claim, or deny, that role. The result is a governance model that’s distributed by design, but not always coherent in execution.

The emergence and limits of the CAIO

To address this ambiguity, some organizations are beginning to formalize AI accountability through new leadership roles. The chief AI officer (CAIO) is one attempt to centralize oversight without constraining innovation.

At Hi Marley, the conversational platform for the P&C insurance industry, CTO Jonathan Tushman recently expanded his role to include CAIO responsibilities, formalizing what he describes as executive accountability for AI infrastructure and governance. In his view, effective AI governance depends on structured separation. “AI Ops owns how we build and run AI internally,” he says. “But AI in the product belongs to the CTO and product leadership, and compliance and legal act as independent checks and balances.”

The intention isn’t to eliminate tension, but to institutionalize it. “You need people pushing AI forward and people holding it back,” says Tushman. “The value is in that tension.”

Jonathan Tushman, CTO, Hi Marley (Image: Hi Marley)

This reflects a broader shift in enterprise governance away from centralized control and toward managed friction between competing priorities — speed versus safety, innovation versus compliance. Yet even this model has limits.

When disagreements inevitably arise, someone must decide whether to proceed, pause, or reverse course. “In most organizations, that decision escalates often to the CEO or CFO,” says Tushman.

The CAIO, in other words, may coordinate accountability. But ultimate responsibility still sits at the top and can’t be delegated.

The widening gap between deployment and governance

If organizational models for AI accountability are still evolving, the gap between deployment and governance is already widening. “Companies are deploying AI at production speed, but governing at committee speed,” Mathews says. “That’s where the risk lives.”

Consequences are beginning to surface as a result. Many organizations lack even a basic inventory of AI systems in use across the enterprise. Shadow AI further complicates visibility, as employees adopt tools independently, often without understanding the implications.

The risks are both immediate and systemic. Employees may input sensitive corporate data into public AI platforms, inadvertently exposing trade secrets. AI-generated content may infringe on copyrighted material, and decision systems may produce biased or discriminatory outcomes that trigger regulatory scrutiny.

At the same time, regulatory expectations are rising, even in the absence of clear legal precedent. That combination — rapid deployment, limited governance, and legal uncertainty — makes it likely that a small number of high-profile cases will shape the future of AI accountability, as Mathews describes.

Where the buck stops

For all the complexity surrounding AI governance, one pattern is becoming clear. Responsibility may be distributed, authority may be shared, and new roles may emerge to coordinate oversight, but accountability doesn’t remain diffused indefinitely.

When systems fail, or when regulators intervene, scrutiny often falls on enterprise leadership and, in operational terms, on the executives closest to the systems in question. AI may decentralize how decisions are made, obscure the pathways through which those decisions emerge, and challenge traditional notions of control, but what it doesn’t do is eliminate responsibility. If anything, it magnifies it.

AI accountability is a familiar problem, refracted through a more complex system. The difference is the system is moving faster, and the cost of getting it wrong is increasing.


The AI economy needs a new vocabulary

May 6, 2026, 06:00

Technology is evolving faster than the language we use to describe it. As a result, people are often talking past each other about what software, AI and automation are. These are treated as single categories when in reality they contain several fundamentally different disciplines and economic models. And when reality changes faster than our language, confusion follows.

That’s roughly where we are with technology right now.

This challenge is not technical; it is semantic. When different groups use the same words to mean different things, alignment becomes difficult. A software engineer, a product manager and an executive may all use the word “software,” but they are often referring to entirely different categories of work.

This lack of precision becomes more problematic as systems scale. Decisions about hiring, tooling and strategy depend on understanding what kind of work is being done. Without clear vocabulary, those decisions and the resulting actions are often based on incorrect assumptions.

Why language is falling behind technology

We need terms that clarify understanding and convey a clear concept so that we can properly express the intended meaning. Software, AI, content generation and many other tech terms are being discussed; each can now have multiple meanings. They contain several fundamentally distinct ideas, disciplines and economic models. Because we lack clearly differentiated terms, people often end up talking past each other.

So, I’m going to propose a few terms. They may not be the ones that ultimately stick, but we need to start somewhere.

Bizware

Bizware is already the dominant form of software. I’ve previously used this term to describe the class of software that exists primarily to support business infrastructure rather than advance computing itself. Tools like Docker, Kubernetes, React and Angular exist to help organizations assemble and operate the digital part of a business. They solve operational and integration problems rather than fundamental computing problems. Millions of developers now work primarily in this ecosystem. It has its own tools, expectations and culture that are distinct from traditional computer science. Concepts like sprints, deployment pipelines and infrastructure orchestration dominate bizware and arise from the intersection of software and business rather than from computing itself.

The rise of bizware can be seen in the widespread adoption of platforms like the aforementioned Docker and Kubernetes, which exist to standardize the deployment of software infrastructure at scale. Docker, for example, enables developers to package applications into consistent environments, reducing variability between systems. Kubernetes extends this by orchestrating those environments across distributed systems, allowing organizations to manage complex deployments reliably.

These tools are not advancing computing theory. They are solving operational problems that arise when software becomes infrastructure. That distinction is what defines bizware.
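To make the packaging point concrete, a minimal Dockerfile sketch (hypothetical, assuming a simple Python service with a `requirements.txt` and a `main.py`) shows what “consistent environments” means in practice: the runtime and dependencies are declared once and reproduced identically on any host.

```dockerfile
# Hypothetical sketch: package a small Python service into a
# reproducible environment. Pinning the runtime version removes
# "works on my machine" variability between systems.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer can be cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

# The same command runs identically on a laptop or in a Kubernetes pod.
CMD ["python", "main.py"]
```

Kubernetes then schedules images like this one across a cluster. Neither step advances computing theory; both solve operational problems, which is exactly what defines bizware.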

Usage example: Our company builds bizware to integrate AWS datasets with high-speed data queries for front-end rendering.

AI Slop

I obviously didn’t invent the term AI Slop, but despite heavy use it still lacks a precise definition. And not all AI output has the same value, so the term should separate content that serves some purpose from content that is fundamentally useless. Therefore, AI Slop is content that exists, or seems to exist, for no purpose other than existing, or content that is so fundamentally flawed it cannot be used for any intended purpose.

An example of this is the videos of Will Smith eating spaghetti. It exists because people are entertained by the fact that it can exist. Anthropic’s C compiler would fit into the latter category. It is so flawed that it has no applicable use case, nor does it do anything novel, particularly with respect to existing solutions.

One of the reasons the blanket term “AI” creates confusion is that it produces outputs across multiple categories at once. The same system generates truly useless content, while also generating content that can serve a function and generate value.

Without language to distinguish these outcomes, discussions about AI tend to become circuitous. If two people didn’t agree on what the color red is, it would be very difficult to discuss art. Right now, people don’t agree on the term “AI Slop” so we have a challenge coming to a consensus about the nature of what AI generates.

Usage example: Anthropic’s C compiler is AI Slop.

GEA (Good Enough AI)

Not everything AI produces is useless. The real divide is economic, not technical. I’ve often said that AI automates mediocrity. But in many circumstances, mediocre output is economically valuable.

I refer to this category as GEA: Good Enough AI.

GEA is AI-generated material that performs its intended function even if the quality is far from exceptional. The output may require small corrections or modifications, but it is good enough to complete the task. In a business context, “working” is often far more valuable than “excellent.” If someone needs a simple Android app to track gym workouts, AI can generate code that isn’t elegant but still does the job. In that situation, perfection has little economic value.
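To illustrate the category, here is a deliberately plain sketch of such a tracker’s core logic (a hypothetical illustration, not output from any particular AI system): no validation, no database, nothing clever, yet it completes the task, which is what makes it GEA rather than slop.

```python
# Hypothetical "good enough" workout tracker: no input validation,
# no persistence, no elegance -- but it does the job.
class WorkoutLog:
    def __init__(self):
        self.entries = []  # each entry is a plain dict

    def add(self, exercise, sets, reps):
        # Record one exercise with its set and rep counts.
        self.entries.append({"exercise": exercise, "sets": sets, "reps": reps})

    def total_sets(self, exercise):
        # Sum the sets recorded for a single exercise.
        return sum(e["sets"] for e in self.entries if e["exercise"] == exercise)


log = WorkoutLog()
log.add("squat", 3, 5)
log.add("bench", 3, 8)
log.add("squat", 2, 5)
print(log.total_sets("squat"))  # prints 5
```

A reviewer could find plenty to criticize here, but for the person who just wants their gym sessions counted, the marginal value of a more elegant version is close to zero.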

The important distinction here is, as mentioned above, mostly economic, not technical. GEA is generated content that has value, whereas AI slop does not. It doesn’t imply a quality of the output, only that the quality is high enough that it represents value to the prompter.

This is where many organizations struggle. They attempt to apply a single standard of quality across all outputs, rather than recognizing that different categories of work require different thresholds. In many business contexts, speed and cost efficiency outweigh perfection. In others, precision and originality are critical. Treating all outputs as if they should meet the same standard leads to inefficiency and misaligned expectations.

Usage example: With the right prompts, Claude produced GEA SQL queries roughly 75% of the time.

HRC (Human Required Content)

Some work will remain human by definition, and some categories will require human expertise. I propose we refer to these as HRC: Human Required Content. Even when AI produces higher-quality output, that output is instantly accessible to everyone. As a result, it tends to redefine the baseline for mediocrity rather than the ceiling for excellence. Since the best work will always command an economic premium, there will always be economic value in humans who outperform AI.

This class of work is not going away. If anything, it is probably going to demand a higher premium as companies decide what about their business should be “industry-leading” versus what part of their business can merely function. 

Usage example: Our clients demand high-quality HRC for their customer-facing frontend products.

Why this matters

For companies, adopting this vocabulary has practical implications. It allows leaders to better define roles, set expectations and allocate resources. It also helps clarify where AI can be effectively deployed and where human expertise remains essential.

More importantly, it reduces confusion. When teams can clearly distinguish between different types of work, they can make better decisions about how to approach each one.

Technological change always outpaces language. When a new technology emerges, we initially try to describe it using the vocabulary we already have. Eventually, that stops working. New terms appear to describe new categories of work, new economic realities and new technical disciplines.

We are currently in that transitional moment with AI and modern software.

Bizware represents one new category of software work. AI Slop, GEA and HRC describe different tiers of AI-generated output and the economic roles they play.

These terms may not be the ones that ultimately stick, but the categories they describe already exist. As AI capabilities stabilize and genuine business models emerge, our language will evolve to reflect how these systems are used.

When that happens, the conversation around AI and software will become a lot clearer.

This article is published as part of the Foundry Expert Contributor Network.


“Generic AI isn’t enough for manufacturing and defense sites”: MakinaRocks CEO Yoon Sung-ho unveils industry-specific AI strategy

May 6, 2026, 04:52

Yoon Sung-ho, CEO of MakinaRocks, told a press briefing on May 6: “The era of physical AI has already begun, but the places where it will materialize first are not humanoids; they are manufacturing floors and battlefields. Industrial sites demand far higher precision, reliability and security than typical cloud environments, so general-purpose AI alone cannot keep up.”

MakinaRocks pointed to closed-network (air-gapped) environments and field data as the core reasons AI adoption is hard in industrial settings. Manufacturing plants and defense facilities often restrict data from leaving the premises, which makes standard cloud-based AI services hard to apply as-is, and because data structures and operating practices differ across industrial equipment, the accuracy enterprises expect is difficult to achieve without training on sufficient real field data. The company added that in environments such as automotive production lines, where robots from multiple manufacturers run side by side, unified management is hard to achieve with any single vendor’s solution.

To address these conditions, MakinaRocks said it has developed its own AI operating system, Runway. Runway is designed to operate even in closed-network environments, running on in-plant servers and industrial equipment, and it supports the full lifecycle of data collection, storage, training, deployment and operations.

“An AI OS is the foundational software for running AI, much as Windows was for the PC era or ERP is for the enterprise,” Yoon said. “On top of Runway, a company can operate hundreds or thousands of AI applications.”

What MakinaRocks emphasized most is its record of field-proven references. Stressing that “only AI that actually works in the field matters,” Yoon said the company has focused from the beginning on AI that can be used in real industrial environments such as factories and defense.

According to MakinaRocks, it has deployed more than 6,000 AI models to real industrial sites to date, accumulating over 25 terabytes (TB) of operational data in the process. The company said this data is widening its lead over latecomers.

Yoon noted that references are among the factors enterprise decision-makers weigh most heavily when adopting AI solutions, and he expressed confidence that this reference base will underpin future growth. “Manufacturing and defense companies rarely replace a solution once it has been proven,” he said. “Latecomers face the structural limitation of having to deliver high AI performance without real data of their own.”

Global big-tech companies have also been expanding into the manufacturing and industrial AI market, but MakinaRocks plans to compete on field-centered technical capability. “Global players are currently focused on cloud-based decision support and on ERP and finance,” Yoon said. “MakinaRocks is different in that it has concentrated on AI used in real operating environments such as factories and industrial equipment.”

He also cited security and governance, whose importance is growing as AI agents spread, as a competitive factor. “The more autonomous an agent becomes, the greater the risk to the enterprise,” Yoon said. “Runway is designed so AI can be operated reliably within a strong security and governance framework.”

MakinaRocks plans to invest the proceeds of its IPO in advancing the AI OS and expanding its global business. The company aims to develop a manufacturing-focused “Dark Factory OS” and a defense-focused “Defense OS” and to establish itself as the standard-setter in the global physical-AI operating system market.

Japan and Europe are its priority overseas markets. The company established a Japanese subsidiary last year and says it has already secured customers there, including a Japanese automaker and industrial equipment makers. In Europe, it is expanding on the basis of a partnership with Device Insight, a subsidiary of robotics company KUKA.

Finally, MakinaRocks said it is targeting a swing to profit in 2027, with goals of 100 billion won in revenue and a 20-30% share of revenue from overseas by 2030.
jihyun.lee@foundryco.com


The triple squeeze: Why the SaaSpocalypse story you’re hearing is missing the most dangerous part

May 5, 2026, 07:00

In early February 2026, nearly $285 billion in market value evaporated from software and related sectors in 48 hours. Atlassian dropped 36% for the month. The iShares Software ETF fell more than 30% from its September 2025 highs. Traders called it the “SaaSpocalypse.”

The popular narrative goes like this. AI coding tools have gotten so good that customers can build their own software, so why pay for a SaaS subscription when an engineer can vibe-code a replacement over a weekend?

That’s the least interesting version of what’s happening. The real story involves three forces converging on SaaS simultaneously, creating a structural trap that puts hundreds of thousands of white-collar jobs at risk. The force that will decide their fate isn’t AI. It’s a spreadsheet in a private equity office.

Force #1: AI isn’t replacing your product. It’s replacing the problem your product solves

Most enterprises won’t rebuild their tech stack with vibe coding, because that’s not how large organizations work. The bigger threat is that AI agents are making entire workflow categories obsolete. Take a SaaS ticketing product. The threat isn’t a competing ticketing system built in-house, it’s that customers are deploying AI agents to handle support directly, rethinking the pipeline from scratch. The old system isn’t replaced by a better one. It’s replaced by a fundamentally different approach to the job.

Satya Nadella telegraphed this on the BG2 podcast in December 2024, saying business applications would “probably collapse” in the agent era because they’re “CRUD databases with a bunch of business logic.” “All the logic will be in the AI tier.”

The data backs him up. Gartner forecasts worldwide AI spending will hit $2.5T in 2026, up 44% YoY, while overall IT budgets grew ~10%. That money is coming from other budgets. Average SaaS apps per company dropped 18% between 2022 and 2024 (BetterCloud). Among large enterprises, 82% are actively reducing vendor count (NPI Financial). Even companies not directly losing customers face fewer new purchases, slower expansions and harder renewals, because buyers are looking somewhere else.

Force #2: The $440 billion leverage trap

Between 2015 and 2025, private equity acquired more than 1,900 software companies in deals worth over $440 billion. The thesis was elegant. Sticky recurring revenue, high margins, predictable cash flows and high switching costs, all perfect for leveraged buyouts. It worked brilliantly for a decade. Then it stopped.

  • The setup (2020-2022). Public SaaS traded at a median 18x revenue in 2021 (Asana touched 89x). PE paid premium multiples with enormous debt. Anaplan went to Thoma Bravo for $10.4B. Coupa sold for $8B with $4.5-5B in leverage. Zendesk went private for $10.2B backed by ~$5B in private credit.
  • The collapse. By late 2025, the median public SaaS revenue multiple had fallen to 5.1x, over 70% below peak. Private software M&A multiples dropped below 3x in 2024.

Here’s the math. A PE firm buys a $100M-revenue SaaS company in 2021 at 8x ($800M), financing 40% with floating-rate debt, a $320M loan at SOFR plus 500 bps. The initial rate runs 5-6%. After Fed hikes, about 10%, or $32M annual interest. Then the multiple collapses. Even if revenue grows to $120M, at 2-3x the business is worth $240-360M. The loan is $320M. Equity sits somewhere between negative and barely positive.
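The arithmetic in this example can be checked line by line. A minimal sketch using the figures above (the deal terms are the article's illustrative example, not a real transaction):

```python
revenue_2021 = 100e6                       # $100M revenue at purchase
price = 8.0 * revenue_2021                 # bought at 8x revenue -> $800M
loan = 0.40 * price                        # 40% floating-rate debt -> $320M

rate_after_hikes = 0.10                    # ~SOFR + 500 bps after Fed hikes
annual_interest = loan * rate_after_hikes  # ~$32M a year in interest

revenue_2025 = 120e6                       # revenue grows to $120M
for exit_multiple in (2.0, 3.0):
    enterprise_value = exit_multiple * revenue_2025
    equity = enterprise_value - loan       # what the PE firm's stake is worth
    print(f"{exit_multiple:.0f}x exit: EV ${enterprise_value/1e6:.0f}M, "
          f"equity ${equity/1e6:+.0f}M")
```

At a 2x exit the equity is $80M underwater; at 3x it is barely positive, which is the "between negative and barely positive" range described above.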

This isn’t hypothetical. Wells Fargo now uses “keys handover” for cases where PE hands underwater portfolio companies to lenders. A record $25B of software leveraged loans trade below 80 cents on the dollar. Total tech distressed debt sits near $46.9B. Apollo cut its software exposure nearly in half during 2025.

When equity is underwater, PE has two choices. Walk away or shift into margin-maximization mode by cutting headcount, consolidating and extracting cash.

Force #3: AI is the cost-cutting weapon PE has been waiting for

Here’s the cruel irony. AI is killing revenue, the debt still needs servicing and AI is also the most powerful cost-cutting tool ever handed to a PE operating partner.

Most SaaS employees are white-collar knowledge workers, including engineers, PMs, marketers, CS, sales, support and analysts. Precisely where AI is making fastest inroads. Anthropic’s research found AI-exposed workers earn 47% more on average and are nearly 4x as likely to hold a graduate degree. Stanford Digital Economy Lab and Dallas Fed research shows employment among 22-25-year-olds in AI-exposed roles fell 13-16% between late 2022 and mid-2025, nearly 20% among young software developers.

Wall Street has picked its side. When Atlassian announced 1,600 layoffs (10% of workforce) to fund AI investment, the stock rose. When Block cut 4,000 jobs and Jack Dorsey said, “a significantly smaller team, using the tools we’re building, can do more and do it better,” the stock surged over 20%.

PE is moving too. Anthropic is reportedly in talks with Blackstone, Hellman & Friedman and Permira on a JV to embed Claude across portfolio companies. OpenAI is in parallel talks with Advent, Bain, Brookfield and TPG. Blackstone alone manages $1.3T+ across manufacturing, healthcare, real estate and financial services. Many licenses those companies cancel will belong to SaaS firms in other PE portfolios. As CNBC put it, “Private equity built the SaaS installed base. It may also be the one that rips it out.”

The loop closes. AI slows revenue, valuation collapses, debt becomes unsustainable and PE uses AI to cut headcount to service it. That’s the Triple Squeeze.

So, what can you actually do?

  • Assess exposure across three dimensions. First, your company. Is it PE-owned, and what vintage? Deals done at peak 2021-2022 valuations with heavy leverage are most precarious, and PitchBook or Crunchbase will tell you. Second, your role. Cost center or revenue engine? When growth stalls, PE defaults to margin maximization, and G&A, parts of marketing, internal tools and legacy product teams are vulnerable. Third, AI itself. How automatable is your day-to-day? If your core workflow is routing information, synthesizing documents or managing processes, the timeline is shorter than you think.
  • Supersize your T-shape. AI’s Achilles’ heel is scarce context. It doesn’t know your customers, your industry or why that one integration keeps breaking. Widen across adjacent roles while deepening your core with AI. Engineers can learn PM, UX and AI-assisted QA. Marketers can automate operational work with agents and build AI creative pipelines. Become an AI multiplier, someone who directs these tools with cross-functional judgment they can’t generate alone. If your employer isn’t giving you enough exposure, don’t wait. Vibe-code a side project. Pressure-test a financial model against your usual approach.
  • Build reputation while you still have a platform. Write publicly, contribute to communities, ship open source. Individual brand is a hedge against rising company-level risk, and far easier to build while employed than while competing with thousands of displaced workers.
  • If exposure is real, move early and deliberately. A wave of PE-backed SaaS layoffs would flood the market with experienced workers chasing a shrinking pool of roles. Those who fare best move while they can still be selective. But “move” doesn’t mean jumping to the first company with AI in its pitch deck. Apply the same structural thinking. Look for durable revenue, a real plan for AI-native competition, and profitability or a credible path.

The bottom line

The SaaSpocalypse narrative everyone’s debating, whether AI coding will kill SaaS, is a sideshow. The real story is financial, structural and already in motion.

Private equity spent a decade and $440 billion buying up software on a thesis that just broke. The debt doesn’t care about AI timelines or market sentiment. It comes due regardless. The only variable PE can control now is cost, and AI just made that variable dramatically easier to cut.

If you work in this industry, especially at a PE-backed company, it’s time for clear-eyed assessment of your exposure before the math makes the decision for you.

This article is published as part of the Foundry Expert Contributor Network.


The $570K canary: What AI coding agents reveal about enterprise AI’s real gaps

May 4, 2026, 07:00

Boris Cherny, creator of Anthropic’s Claude Code, says he hasn’t written a line of code by hand in months. He shipped 22 pull requests one day, 27 the next, all AI-generated. Company-wide, Anthropic reports that 70 to 90% of its code is now written by AI. CEO Dario Amodei has predicted that AI could handle “most, maybe all” of what software engineers do within months.

And yet Anthropic typically has dozens of software engineering openings, one reportedly carrying $570K in total compensation. As one observer noted, the company is simultaneously predicting the end of the profession and paying top dollar to hire into it.

Meanwhile, during his GTC 2026 keynote, NVIDIA CEO Jensen Huang said that 100% of NVIDIA now uses AI coding tools, including Claude Code, Codex and Cursor, often all three. Then, in a conversation on the All-In Podcast during GTC week, Huang sharpened the point: A $500,000 engineer who doesn’t consume at least $250,000 in AI tokens annually is like “one of our chip designers who says, guess what, I’m just going to use paper and pencil.”

This isn’t cognitive dissonance. It’s a signal. And CIOs who look past the headlines will find a pattern that explains not just where AI coding is going, but where all of enterprise AI is headed.

Tellers, not toll booth workers

The instinct is to see this as an extinction event. AI writes all the code; engineers become toll booth workers, replaced entirely by automation with no complementary role left behind. But the data tells a different story, one I explored in a recent CIO.com article on AGI skepticism.

When ATMs rolled out, bank teller employment didn’t collapse. It doubled, from 268,000 in 1970 to 608,000 in 2006. The machines eliminated the routine transaction. But cheaper branch operations meant banks opened more locations, which created demand for tellers who could handle complex financial conversations. Economists call this Jevons Paradox: When technology makes something more efficient, demand expands rather than contracts.

Software engineers are bank tellers, not toll booth workers. AI agents are eliminating routine implementation: The boilerplate, the CRUD endpoints, the standard test scaffolding. But that efficiency is expanding the total surface area of what “engineering” means. Anthropic isn’t paying $570K for someone to type code. They’re paying for the judgment to orchestrate AI agents that type code: Deciding what to build, evaluating whether the output is correct, governing what gets deployed and maintaining systems that are increasingly written by machines.

Cherny confirmed this shift directly. His team now hires generalists over specialists, because traditional programming specialties are less relevant when AI handles implementation details. The skill premium has moved from writing code to supervising it, from production to orchestration.

The reason AI coding agents work

Here’s the question CIOs should be asking: Why are AI agents succeeding in software development faster than in any other enterprise function?

It’s not because coding models are better than models for customer service, legal review or financial analysis. The underlying LLMs are the same. The difference is that software development already had the infrastructure that every other enterprise function lacks.

Developers didn’t build this infrastructure for AI. They built it for themselves, over decades. But it maps almost perfectly to the six infrastructure gaps that are currently blocking AI agents from moving beyond employee-facing pilots into customer-facing production.

6 gaps the SDLC already solved

1. Governance: Right data, right users, right permissions

In software development, governance is built into the workflow. Branch protection, code review policies and role-based access controls create a clear chain of permission from draft to deploy, whether the author is human or agent.

Most enterprise functions have nothing equivalent. When an AI agent drafts a customer response, accesses a patient record or modifies a financial model, the governance layer (who approved this action, what data was it allowed to see, which policies constrain its output) is either ad hoc or absent. Microsoft’s 2026 Cyber Pulse survey found that while 80% of Fortune 500 companies have deployed AI agents, only 47% have agent-specific security policies in place.
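By analogy with branch protection, a governance layer for agents can be as explicit as an allow-list: no action runs unless a policy rule grants it. A minimal sketch (the roles, actions and resources are hypothetical, not any vendor's API):

```python
# Explicit allow-list policy: nothing runs unless a rule grants it,
# mirroring branch protection's "no merge without an approved review".
POLICY = {
    "support-agent": {"read": {"tickets", "kb_articles"}, "write": {"draft_reply"}},
    "finance-agent": {"read": {"ledger"}, "write": set()},
}

def is_allowed(role: str, action: str, resource: str) -> bool:
    """True only if the policy explicitly grants this (role, action, resource)."""
    return resource in POLICY.get(role, {}).get(action, set())

def perform(role: str, action: str, resource: str) -> str:
    """Gate every agent action through the policy before it executes."""
    if not is_allowed(role, action, resource):
        return f"DENIED: {role} may not {action} {resource}; escalate to a human"
    return f"OK: {role} {action} {resource}"

print(perform("support-agent", "read", "tickets"))
print(perform("finance-agent", "write", "ledger"))
```

The point is the default: an unknown role or an unlisted resource is denied, so the chain of permission is inspectable rather than ad hoc.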

2. Observability: Trace and audit the decision trail

Every line of AI-generated code has a paper trail. Git blame shows who (or what) wrote it. CI/CD pipelines log every build, test and deployment. When something breaks in production, engineers can trace the failure from alert to commit to the specific agent session that produced the change.

Outside of engineering, AI agent decisions are largely opaque. A customer-facing agent that denies a claim or escalates a complaint leaves no audit trail. Without observability, enterprises can’t debug bad outcomes, satisfy regulators or build the trust necessary to expand agent autonomy.
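What such a trail could look like for a non-engineering agent: one append-only record per decision, queryable by session, much as git blame and CI logs are queryable by commit. A sketch with an invented record schema:

```python
import time
import uuid

AUDIT_LOG: list = []  # stand-in for an append-only audit store

def record_decision(session: str, agent: str, action: str,
                    inputs: dict, outcome: str) -> dict:
    """Append one auditable record per agent decision."""
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "session": session,
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "outcome": outcome,
    }
    AUDIT_LOG.append(entry)
    return entry

def trace(session: str) -> list:
    """Reconstruct one session's decision trail, oldest first."""
    return sorted((e for e in AUDIT_LOG if e["session"] == session),
                  key=lambda e: e["ts"])

record_decision("claim-1042", "claims-agent", "request_docs",
                {"claim": "1042"}, "sent")
record_decision("claim-1042", "claims-agent", "deny_claim",
                {"claim": "1042"}, "denied")
print([e["action"] for e in trace("claim-1042")])
```

With a trail like this, a denied claim can be debugged back to the specific inputs and agent session that produced it.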

3. Evaluation: Measure correctness at scale

Unit tests, integration tests, type checking, linting and automated QA give software engineering something no other enterprise function has: Continuous, objective measurement of whether AI-generated output is correct. That provides a foundation for proving an agent gets it right.

This is the gap other enterprise functions feel most acutely. DigitalOcean’s 2026 survey of 1,100 technology leaders found that 41% cite reliability as their number one barrier to scaling AI agents. Reliability is an evaluation problem: Without automated, continuous measurement of agent output quality, organizations can’t trust agents enough to put them in front of customers.
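What the unit-test analogue might look like for, say, a support agent: a battery of programmatic checks scored continuously over agent replies. The checks below are invented placeholders, not a real policy:

```python
# Each check is a named predicate over a reply, like one unit test.
CHECKS = [
    ("greets_customer", lambda r: r.lower().startswith(("hi", "hello"))),
    ("no_ssn_mention",  lambda r: "ssn" not in r.lower()),
    ("within_length",   lambda r: len(r) <= 500),
]

def evaluate(reply: str) -> dict:
    """Run every check against one reply: a test report for one output."""
    return {name: bool(check(reply)) for name, check in CHECKS}

def pass_rate(replies: list) -> float:
    """Share of replies that pass every check -- the agent's 'CI' signal."""
    return sum(all(evaluate(r).values()) for r in replies) / len(replies)

sample = ["Hello, your refund was issued today.", "Your SSN is on file with us."]
print(evaluate(sample[1]))
print(pass_rate(sample))
```

Tracking `pass_rate` over time is the reliability measurement most functions lack: it turns "do we trust the agent?" into a number that can gate deployment.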

4. Memory: Persistent context beyond the context window

Developers take persistent context for granted. Version control, documentation and architectural decision records provide context that survives across sessions, teams and years. An AI coding agent can read the commit history, understand why a design choice was made in 2019, and factor it into today’s implementation.

Most enterprise AI agents operate in a memoryless state. Each customer interaction starts from scratch. Each agent session has no awareness of prior decisions, escalations or context beyond what fits in the context window. This is why employee-facing agents (IT help desks, NOC ticketing) succeed where customer-facing agents stall: Internal users tolerate repeating context. Customers do not.
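A toy version of the missing layer: durable per-customer notes that survive across sessions and can be folded into the next prompt. The file-backed store and customer IDs are illustrative; a real system would use a database:

```python
import json
import pathlib
import tempfile

# Stand-in for a durable memory store that outlives any one session.
STORE = pathlib.Path(tempfile.gettempdir()) / "agent_memory_demo.json"
STORE.unlink(missing_ok=True)  # start clean for the demo

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def remember(customer: str, note: str) -> None:
    """Persist one note so later sessions can see it."""
    data = _load()
    data.setdefault(customer, []).append(note)
    STORE.write_text(json.dumps(data))

def recall(customer: str) -> list:
    """Everything prior sessions recorded about this customer."""
    return _load().get(customer, [])

remember("cust-17", "escalated billing dispute on 2026-03-02")
remember("cust-17", "prefers email over phone")
print(recall("cust-17"))  # context a memoryless agent would have lost
```

The contrast with the context window is the point: `recall` works in a fresh process, so the customer never has to repeat themselves.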

5. Cost controls: Manage LLM spend across providers

Jensen Huang’s $250K-per-engineer token budget isn’t an abstraction. It’s a real cost management challenge that engineering teams are already navigating. Smart teams route differently depending on the task: Use a lightweight model for boilerplate generation, a reasoning model for architectural decisions and a code-specific model for refactoring. They set token budgets per agent session. They measure cost-per-PR and cost-per-feature, not just cost-per-token.

Enterprises deploying AI agents in other functions rarely have this granularity. When Goldman Sachs reported near-zero GDP impact from AI in 2025, the missing variable was cost discipline at the workflow level. Without the ability to route, throttle and measure LLM spend per agent task, scaling agents means scaling costs linearly, which eventually kills ROI.
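The routing-and-budget discipline described above fits in a few lines. In this sketch the model tiers, per-token prices and task types are made-up placeholders, not any provider's price list:

```python
# Per-1K-token prices and task-to-tier routes are illustrative only.
PRICE_PER_1K = {"light": 0.0002, "reasoning": 0.015, "code": 0.003}
ROUTES = {"boilerplate": "light", "architecture": "reasoning", "refactor": "code"}

def route(task: str) -> str:
    """Cheapest model tier that fits the task; default to the light tier."""
    return ROUTES.get(task, "light")

def spend(task: str, tokens: int, budget_usd: float = 1.0) -> float:
    """Price a task and enforce a per-task budget before it runs."""
    cost = tokens / 1000 * PRICE_PER_1K[route(task)]
    if cost > budget_usd:
        raise RuntimeError(f"{task}: ${cost:.2f} exceeds ${budget_usd:.2f} budget")
    return cost

print(route("architecture"))        # heavyweight tier for design decisions
print(spend("boilerplate", 50_000)) # 50K tokens on the cheap tier
```

Aggregating `spend` per workflow is what turns cost-per-token into cost-per-task, the unit that actually matters for ROI.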

6. Deployment flexibility: Any cloud, on-prem, no lock-in

In software development, the runtime has always been portable. Code that runs on AWS today can run on Azure tomorrow, or on bare metal in your own data center. Containerization, Kubernetes and infrastructure-as-code tools like Terraform mean that engineering teams can change their minds about where workloads run without rewriting the application. Software has had this mindset for decades.

We’re early enough in this agentic development game that it’s tempting to take shortcuts. Organizations that build on a single hyperscaler’s agent framework find themselves locked into that provider’s model ecosystem, observability tooling and pricing structure. As agentic AI matures, deployment flexibility (the ability to run agents on any cloud, on-prem or across hybrid environments without vendor lock-in) will separate organizations that scale from those that stall.

Sometimes you’ll want agents to run close to your data. Other times, you’ll want agents close to the users. And you’ll want your developers to be able to move back and forth between different agent code bases without having to learn a different framework between them.

What CIOs should watch at Build and I/O

Google I/O and Microsoft Build will dominate May with dueling AI coding announcements. The temptation will be to compare model benchmarks. That’s the wrong lens. The models are converging. The real competition is one layer down, in the infrastructure that makes AI agents viable outside of software development.

CIOs watching these conferences should evaluate each announcement against the six gaps: Is Microsoft closing the governance gap with Azure AI Foundry? Is Google advancing observability through Vertex AI? Which platform is making it easier to evaluate agent output at scale, maintain persistent memory across sessions, control costs at the workflow level and deploy without lock-in?

The company that wins the AI coding war will be the one that builds the infrastructure layer that transfers to every other enterprise function. That’s the real stakes of May’s developer conferences, and it’s the real reason CIOs should be paying attention.

The canary’s message

Software engineers are the first knowledge workers to live inside a fully agentic workflow. They’re the canary in the coal mine for every other enterprise function. And right now, the canary is singing, not dying.

The lesson isn’t that AI coding agents have made engineers obsolete. It’s that AI coding agents work because engineers already built the infrastructure that makes agents trustworthy. Governance, observability, evaluation, memory, cost controls and deployment flexibility: These aren’t nice-to-haves. They’re the reason Anthropic can ship 27 AI-generated pull requests in a day and sleep at night.

Every other enterprise function will need to build its own version of that infrastructure before AI agents can move from employee-facing pilots to customer-facing production. The models aren’t the bottleneck. The scaffolding around them is.

Anthropic paying $570K for a software engineer whose job might not exist in a year isn’t a contradiction. It’s Jevons Paradox. And it’s the most expensive leading indicator in enterprise AI.

This article is published as part of the Foundry Expert Contributor Network.


How NOV is moving from FOMO to calculated scaling

April 30, 2026, 07:00

For decades, the industrial sector has operated on a simple mantra: live by automation, die by automation. In the oil and gas industry, where precision is measured in millimeters and safety in lives, automation is a necessity, not just a nice-to-have. But as gen AI sweeps through the enterprise, a new challenge has emerged: how should a global leader in energy services transition from experimental chatbots to industrial-grade AI without compromising safety or security?

Here, Alex Philips, CIO of NOV, formerly National Oilwell Varco, discusses implementing OpenAI and securing it with zero trust for 25,000 employees, and why the next phase of agentic AI requires a fundamental shift in how to view human expertise and digital safeguards.

From FOMO to ROI

Like many global companies, NOV’s initial move into gen AI was driven by executive pressure fueled by fear of missing out. Philips remembers the early talks with his CEO about the investment.

“I said we have this opportunity, and it costs this much,” he says. “He asked about the ROI and I replied that’s something I couldn’t calculate, nor what it’d replace or what it’d displace in cost, but I couldn’t say any of that for email either.”

Just as no modern business can function without email, even without a direct line-item ROI, Philips argues that LLMs will soon become the standard for employee productivity. Currently, NOV reports about 50% of its workforce actively use the tool to enhance productivity.

The results, though qualitative, are profound. Philips says that response times for urgent customer requests, for instance, have plummeted, language barriers are crumbling, and employees are tackling complex analyses once considered out of reach.

The six-month validation lesson

One example Philips details involves an engineer who spent six months mastering a highly specialized skill. With ChatGPT, the engineer was able to replicate that six-month learning process in just 10 minutes.

And while the engineer’s initial reaction was to think he had wasted six months of his life, Philips’s response was to point out that those six months were what enabled him to validate what the AI told him. “This is a great example of why humans are still needed in the AI loop,” says Philips. “AI execution without human validation can lead to errors that cost companies significant time and money.”

This underscores a crucial pillar of NOV’s AI strategy: human accountability. In an industrial setting, “the AI decided” is never an acceptable excuse. Whether designing a drill bit or automating a workflow, the end user remains responsible for the output.

Securing the Wild West of shadow AI

As AI becomes more widespread, shadow AI poses a significant security risk. To address this, NOV uses Zscaler to route all traffic and ensure visibility and control. By doing so, the company can:

  • Redirect users: If an employee tries to use a non-approved LLM, they’re redirected to a page that explains NOV’s policy, and directed to the approved enterprise OpenAI instance.
  • Monitor SaaS evolution: Many authorized SaaS applications are now adding agentic features during contract periods. Zscaler provides the visibility needed to identify these changes before sensitive IP is fed into an unvetted model.
  • Enforce data privacy: Preventing intellectual property from leaking into public training sets is the first step in any industrial AI deployment.

The shift to agentic AI

In software development, NOV already benefits from AI-assisted coding, where AI works alongside developers who accept about 32% of AI suggestions. “We’re now beginning to explore the next evolution of full agentic coding,” says Philips, adding that this next stage truly supercharges teams, enabling them to move faster and better meet customer demand for innovation.

However, this efficiency feeds the dilemma of a widening talent gap. The open question is: if all the low-level, entry-level tasks can be automated, what is the best way to develop skilled workers? “I don’t know how we’ll adapt to it, but we’ll figure it out,” he says.

Safety first

In the oil field, some processes are too critical to be left entirely to a black-box algorithm. Philips is adamant that for safety issues, AI remains an advisor, not a decider. NOV uses AI-powered vision to monitor red zones, or dangerous areas on a drilling rig. If the AI detects a person in a restricted area, it can trigger an emergency stop. However, for actual drilling operations, the final call remains with an onsite human operator. “You can’t have a hallucination,” he says. “You can’t say it’s right 90% of the time. It has to be all the time.”
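The advisor-not-decider split Philips describes can be made concrete: vision detections may trigger a protective stop, but operational commands only execute with a human decision attached. A hedged sketch (the detections and commands are stand-ins, not NOV's actual system):

```python
def red_zone_breach(detections: list) -> bool:
    """True if the vision model reports a person inside a red zone."""
    return any(d["label"] == "person" and d["zone"] == "red" for d in detections)

def control_step(detections: list, human_command: str = None) -> str:
    """AI may stop the rig; it never originates a drilling action."""
    if red_zone_breach(detections):
        return "EMERGENCY_STOP"   # safety override, always allowed
    if human_command is not None:
        return human_command      # operations require a human call
    return "HOLD"                 # no human decision -> do nothing

print(control_step([{"label": "person", "zone": "red"}]))
print(control_step([], human_command="START_DRILL"))
```

The asymmetry is the design choice: the AI has authority to make things safer (stop) but none to make them riskier (start), so a hallucination can cost uptime but not lives.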

NOV’s journey shows that transitioning to industrial-grade AI isn’t just about choosing the best model but building a framework of trust, transparency, and responsibility. By using Zscaler for governance and GitHub Advanced Security for code validation, NOV is moving toward a future where AI becomes more essential to the oil industry.

“Development teams should produce twice the output with half the people in half the time,” he says. “The only remaining question is how do we train the next generation of developer experts to control the machines that do the work.”


Subscription model: How AI is reshaping corporate education

April 29, 2026, 10:00

In the world of EdTech, where I have had the fortune to design and scale numerous platforms for higher education and enterprise environments alike, one shift is accelerating in 2026 for corporate learning.

Corporate learning is moving from discrete, programmatic courses to a continuous, AI-driven system that identifies the needs of the organization and its workforce and delivers the training required to build the necessary skills.

Many enterprises are adopting AI-powered learning ecosystems to address the needs of their organizations in real time, according to CIO analysis. However, what is emerging now goes a step further to address those needs with subscription-based learning environments that adapt to the needs of the organizations themselves.

Architecting the subscription learning economy

Through my experience working with executive education and enterprise platforms, I have found that traditional learning models fail not because of content quality but because of delivery architecture.

The subscription model for learning emphasizes continuous learning, modular content and regular skill updates rather than traditional fixed courses.

Diagram: Subscription principles

Vishal Shukla

This model enables organizations to deploy micro-credentials that align directly with evolving business priorities such as AI adoption, digital transformation and data-driven decision making.

More importantly, beyond changing how learning content is delivered, this shift changes how companies and platforms view revenue and growth. For instance, subscription-based models help keep learners engaged and create recurring engagement loops that support both learner motivation and organizational value.

Engineering personalized learning pathways with AI

One of the most significant contributions I have seen in this space is the application of AI to dynamically orchestrate learning journeys.

In platforms I have led, AI systems do not simply recommend courses. They also deliver:

  • Outcome-driven learning: mapping skills directly to business outcomes.
  • Adaptive learning paths: adjusting learning sequences based on performance signals.
  • Aligned skill growth: connecting individual development with enterprise capability frameworks.

Diagram: Adaptive learning paths (credit: Vishal Shukla)

This transition represents moving from content recommendation engines to capability orchestration systems.
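A minimal sketch of what one step of such capability orchestration might look like in practice; every name, threshold and catalog field here is an illustrative assumption, not any platform's actual API:

```python
# Hypothetical sketch of one adaptive learning-path step: pick the next
# module by the learner's largest open skill gap. Thresholds are assumed.

def next_module(skill_gaps, performance, catalog):
    """Choose a module for the skill with the largest business-aligned gap,
    skipping skills the learner has already mastered (score >= 0.8)."""
    open_gaps = {s: gap for s, gap in skill_gaps.items()
                 if performance.get(s, 0.0) < 0.8}   # mastery cutoff (assumed)
    if not open_gaps:
        return None  # pathway complete
    target_skill = max(open_gaps, key=open_gaps.get)
    # Among modules that develop the target skill, prefer the shortest.
    candidates = [m for m in catalog if target_skill in m["skills"]]
    return min(candidates, key=lambda m: m["duration_h"]) if candidates else None

catalog = [
    {"name": "Intro to GenAI", "skills": {"genai"}, "duration_h": 2},
    {"name": "Data-driven decisions", "skills": {"analytics"}, "duration_h": 3},
]
module = next_module({"genai": 0.6, "analytics": 0.9},
                     {"genai": 0.3, "analytics": 0.9}, catalog)
print(module["name"])  # the open gap is "genai", so "Intro to GenAI"
```

The point of the sketch is the shape of the loop, not the scoring rule: a capability orchestrator repeatedly re-evaluates gaps against performance signals instead of serving a fixed syllabus.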

Further, there’s also the benefit of increased efficiency. While the ability to drastically improve skills and capabilities is reason enough for workers to employ AI tools in their educational endeavors, evidence shows that learning can also become 57 percent more efficient, according to Training Providers Statistics 2025. From a CIO perspective, this repositions learning as a strategic lever.

Activating learning through private cohort networks

While AI enables personalization, I have consistently observed that behavioral transformation happens in groups.

This is where private cohort learning is emerging as a critical layer in B2B education. In enterprise implementations I have supported, curated cohorts of leaders create high-impact learning environments where knowledge is contextualized through peer interaction.

The growing adoption of cohort models is driven by measurable outcomes:

  • Organizations are prioritizing learning tied to real business results, not just completion metrics.
  • Cohort structures close the gap between knowing and doing — faster than self-paced formats ever could.
  • Engagement and retention hold significantly stronger when learners move through content together.
  • Learning ecosystems are evolving into a hybrid model — Netflix-style accessibility with the accountability of a cohort.

In my view, the convergence of subscription access and cohort-based engagement represents a hybrid learning architecture that balances scale with depth.

From content platforms to capability engines

The table below maps each learning dimension to the platform features involved, and shows what those changes mean for workforce outcomes.

| Dimension | Legacy learning model | AI-driven subscription model | Strategic outcome |
| --- | --- | --- | --- |
| Learning architecture | Program-based delivery | Continuous subscription ecosystem | Sustained workforce readiness |
| Content strategy | Static curriculum | Modular micro-credentials | Rapid skill deployment |
| Personalization | Role-based segmentation | AI-orchestrated pathways | Precision learning at scale |
| Engagement model | Individual consumption | Cohort-driven collaboration | Behavioral and performance change |

Case in point: Scaling AI learning in executive education

In one of the executive education platforms I helped design, we introduced AI-driven subscription learning paths focused on digital and AI transformation.

What we observed was not just increased participation, but a shift in how leaders engaged with learning:

  • They moved from passive consumption to active problem solving
  • Learning cycles aligned directly with business initiatives
  • Peer cohorts reinforced accountability and execution

This model enabled organizations to translate learning investments into measurable outcomes, bridging a gap that has historically limited the impact of corporate training.

Research on what actually motivates learners to commit to online courses backs this up — people engage when the learning feels relevant to their real work, not when it’s assigned to them.

Defining the next generation of corporate education

Based on my work across enterprise and higher education ecosystems, I believe the future of corporate learning will be defined by three foundational shifts:

  • From learning delivery to capability engineering
  • From static programs to adaptive ecosystems
  • From completion metrics to business impact measurement

Organizations that operationalize these principles will not only upskill their workforce but also build resilient, future-ready talent systems.

Conclusion: Designing learning systems that learn

The most important realization from my work in this space is that modern learning platforms must themselves become intelligent systems.

They must learn from users, adapt to organizational needs and continuously evolve. Subscription models, powered by AI and reinforced through cohort dynamics, are making this possible.

In 2026, corporate education is no longer about providing access to knowledge. It is about designing systems that enable organizations to continuously generate capability — at scale, in real time and aligned with business transformation.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Aleph Alpha, Germany’s sovereign AI standard-bearer, joins hands with Cohere in a global alliance

April 29, 2026, 06:40

As Europe seeks to reduce its dependence on US technology companies and strengthen its technological sovereignty, Aleph Alpha, once seen as Germany’s “sovereign AI” hope, has become the target of a foreign acquisition.

Aleph Alpha is set to merge with Canadian AI company Cohere. The deal focuses on combining Cohere’s global AI competitiveness with Aleph Alpha’s research foundation. The two companies plan to grow into a powerful AI firm built on the Canadian and German industrial ecosystems.

“Organizations around the world are demanding uncompromising control over their AI stack,” said Cohere CEO Aidan Gomez. “This partnership will secure the massive scale, robust infrastructure, and world-class R&D talent required to meet that demand.”

Although the official press release published on the 24th describes the deal as a “merger of equals,” a footnote indicates that it can be completed with the approval of the German company’s shareholders alone, suggesting that it is in part an acquisition.

After the merger, the two companies plan to offer customized AI services focused on heavily regulated industries such as finance, defense, and healthcare. The strategy is to combine their technologies and products to deliver AI solutions that fit local laws, cultural contexts, and institutional requirements.

The move comes amid a global trend of companies seeking alternatives outside the US. Uncertainty stemming from the Trump administration’s tariff policy and the conflict with Iran has been cited as the backdrop for this shift.

Within Europe, too, various efforts are under way to counter US-centric technological dominance. The European Union has pursued its “Eurostack” strategy, which makes it possible to choose European companies for major projects, and Aleph Alpha has been named as one of the key companies in that plan. The EU has also launched the “Open Euro LLM” project to check the AI leadership of the US and China.
dl-ciokorea@foundryco.com


South Korean government and Google DeepMind join hands: AI campus and expanded talent exchange

April 27, 2026, 06:15

The Ministry of Science and ICT announced on the 27th that it met with Demis Hassabis, co-founder and CEO of Google DeepMind, during his visit to Korea and signed a memorandum of understanding (MOU) covering joint research on AI for science and technology, AI talent development, and responsible AI use as key areas of cooperation.

The agreement was signed on the 10th anniversary of the AlphaGo matches. The venue was the Four Seasons Hotel Seoul, where the 2016 “Google DeepMind Challenge Match” between AlphaGo and world Go champion Lee Sedol was held. The ministry expects the agreement to serve as an opportunity to apply the AI achievements accumulated over the past decade to scientific and technological innovation, and to expand global cooperation for advancing the domestic AI ecosystem and responsible AI use.

The ministry is pursuing the “K-Moonshot” project to raise research productivity and solve national challenges through AI-driven scientific and technological innovation. Through the MOU, it plans to explore a foundation for cooperation with Google DeepMind, which holds world-class AI-for-science capabilities, spanning technology, infrastructure, and researcher exchanges.

In particular, the two organizations will cooperate across scientific fields such as life sciences, weather and climate, and AI scientists, and plan to expand joint research and researcher exchanges centered on the National Science AI Research Center, which is set to open in May this year.

They will also pursue the development and validation of AI models and tools and the use of scientific data to accelerate scientific discovery, while exploring cooperation centered on AI-bio innovation research hubs. In addition, internship opportunities will be developed so that top domestic AI talent can experience Google DeepMind’s research environment firsthand. To support this, Google plans to establish an AI campus in Korea and expand cooperation with academia, researchers, and startups. The AI campus is expected to serve as a hub for AI-based science and technology cooperation with Google DeepMind in connection with K-Moonshot.

The two organizations will also cooperate on AI safety and governance. They plan to pursue joint research on safety frameworks and safeguards for AI models to support responsible AI development. In particular, to respond to AI risks, they will continue discussions on building safety frameworks and testing methodologies in cooperation with the AI Safety Institute, which conducts AI safety evaluation and research. They also plan to keep discussing the creation of a global AI hub and ways to cooperate on solving problems facing humanity and promoting shared prosperity.

Bae Kyung-hoon, Minister of Science and ICT and Deputy Prime Minister, said, “If AlphaGo was the starting point of the AI era ten years ago, AI is now advancing to a stage where it solves difficult problems in science and technology and has a tangible impact on people’s lives. This MOU will accelerate AI innovation in science and technology centered on K-Moonshot and serve as a foundation for cooperation on safe and responsible AI research and the spread of best practices.”

Hassabis said, “Since the AlphaGo matches, Korea has become a very meaningful place for Google. Building on this experience, we will explore new possibilities in bio innovation and weather forecasting, and cooperate in building safeguards so that AI can develop responsibly.”

To ensure substantive implementation of the agreement, the two organizations plan to form a joint working group and continue discussing detailed cooperation tasks and implementation plans through quarterly video conferences and an annual in-person meeting.

Separately, Hassabis met with South Korean President Lee Jae-myung on the afternoon of the 27th, where he announced Google’s plan to open an “AI campus” in Seoul within the year and expand cooperation with researchers and startups. The presidential office said the AI campus is significant as the first in the world to be established in Korea. The office added that Hassabis agreed to actively consider dispatching Google researchers to Korea, consenting to the president’s request for at least 10 researchers.
jihyun.lee@foundryco.com


“Press releases should be AI-friendly too”: Korea’s Ministry of the Interior and Safety to adopt Markdown

April 27, 2026, 06:08

The Ministry of the Interior and Safety announced on the 27th that it will overhaul the press release board on its website and improve how documents are provided, in order to guarantee the public’s right to know and to let AI services and search engines use public information more accurately.

Until now, the ministry’s press releases have mainly been provided as attachments in Hangul word processor (HWPX) or PDF format. These formats make it hard to read the text directly in mobile environments such as smartphones and often require separate viewer programs, which limits usability. Accessibility has also been flagged as insufficient for assistive technologies such as screen readers for the visually impaired.

On the previous press release board, it was difficult to view the text directly, and constraints sometimes arose from required software installation or the user’s operating system. To fix this, the ministry has overhauled the board so that the body of each press release can be viewed and used directly on the web. With this improvement, the public can read the full text of press releases as posts, without installing separate programs or facing operating system constraints.

In particular, the ministry plans to also support downloads in “Markdown,” a document format that both people and AI can easily understand. Markdown expresses document structure such as headings, body text, and lists in a consistent, explicit way, making it well suited for search engines and AI services to systematically collect and learn from data. It is used on platforms such as Wikipedia, Notion, and GitHub, and is also applied in the answer formatting of commercial AI services such as ChatGPT, Claude, and Gemini.

Previously, AI services faced constraints in directly using the attachments on the ministry’s press release board, raising concerns that when citizens asked AI about policy, the answers could draw on sources of unclear origin or unverified accuracy. Going forward, more accurate answers based on the ministry’s official press release data are expected to improve the reliability and consistency of policy information.

Ministry spokesperson Cho Young-jin said, “Press releases should now be provided in a form that everyone, including AI services, can use conveniently. We will work to make the ministry’s policies easily accessible to anyone through communication and outreach that meets the public’s expectations.”
jihyun.lee@foundryco.com


Germany’s sovereign AI hope changes hands

April 24, 2026, 15:31

As Europe seeks to assert its technological independence from US vendors, Aleph Alpha, once seen as Germany’s sovereign AI hope, is the target of a transatlantic takeover.

Aleph Alpha is set to merge with Canada’s Cohere in a deal that will bring together Cohere’s global AI clout and Aleph Alpha’s background in research. The two companies hope they will be able to develop an AI powerhouse, with backing from their Canadian and German ecosystems.

“Organizations globally are demanding uncompromising control over their AI stack. This transatlantic partnership unlocks the massive scale, robust infrastructure, and world-class R&D talent required to meet that demand,” said Cohere CEO Aidan Gomez in a news release that artfully presents the deal as a merger of equals but that, according to a footnote, only requires the approval of the German company’s shareholders, a sure sign of a one-sided takeover.

The combined companies will be looking to offer customized AI in highly regulated sectors including finance, defense and healthcare. By pooling their talents and offerings, they hope to offer AI solutions to organizations according to local laws, cultural contexts, and institutional requirements.

The move comes at a time when businesses across the world are looking at non-US options as a reaction to the Trump administration’s policy on tariffs and the uncertainty caused by the war with Iran.

There have been several initiatives within Europe to counteract US dominance. The EU’s Eurostack plan looked to make sure that major projects had a European option, and Aleph Alpha was one of the companies highlighted within the scheme. The EU also launched Open Euro LLM, an attempt to close the US and Chinese lead in AI.


Smart factories are here — but is your team ready to use them?

April 23, 2026, 09:00

Since the emergence of Industry 4.0 in 2011, manufacturing has undergone a digital transformation. Industrial Internet of Things (IIoT) sensors now allow machines and assets to communicate seamlessly, while artificial intelligence has become a core business enabler. Cloud computing provides virtually limitless processing power and storage, and big data analytics has become essential for strategic decision-making. By integrating data from ERP systems with real-time machine data — via SCADA, PLCs, and other automated tools — manufacturing execution systems (MES) have paved the way for the modern smart factory.

Smart factories are not limited to MES alone; they also cover other areas such as energy management systems (EMS), video analytics-based plant safety, digital quality inspection using vision-based cameras, immersive technology-based shopfloor training, operational technology (OT) networks, firewalls and other related tools.

Going up the value chain, factories today are designed using digital twins with full process simulations, and products are designed on product lifecycle management (PLM) platforms. Smart factory maturity is an evolution, tightly linked to the enterprise’s digital transformation plan. Still, 49% of enterprises lack confidence in their future manufacturing strategy.

While visiting various plants, the disparity in digital maturity is often striking. In many business units, specific digital initiatives take precedence because they are driven by the immediate priorities or critical requirements of the end customer. In other instances, regulatory compliance dictates the roadmap. Ultimately, delaying a plant’s digital transformation can be a strategic choice; these are complex business decisions managed by CXOs based on broader organizational goals.

Having said this, based on Gartner’s Top 10 Strategic Technology Trends for 2026, digital and AI technologies will remain fundamental to driving smart factory maturity. And according to IDC’s 2026 Manufacturing FutureScape, by 2027, 40% of factories’ operational data will be integrated across applications and platforms autonomously, due to increased standardization and the use of AI agents purpose-built for specific data.

In fact, I envision an agentic AI mesh in smart factories, working under an AI orchestrator layer, collecting and sharing data in a multi-agent environment, with a human in the loop (HITL) for critical business decisions.
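As a toy sketch of how such an orchestrator layer might route work, the following routes shopfloor events to specialized agents and escalates to a human for critical decisions; the agent names and the criticality rule are assumptions for illustration only:

```python
# Illustrative orchestrator for an agentic mesh: route events to agents,
# with a human-in-the-loop (HITL) gate for critical decisions.

AGENTS = {
    # Each "agent" is stubbed as a function from event to an action string.
    "quality": lambda e: f"vision recheck scheduled for {e['asset']}",
    "maintenance": lambda e: f"work order raised for {e['asset']}",
}

def orchestrate(event):
    """Dispatch an event to the matching agent, or escalate if critical."""
    if event.get("critical"):                 # HITL gate (assumed rule)
        return f"escalated to supervisor: {event['type']} on {event['asset']}"
    agent = AGENTS.get(event["type"])
    return agent(event) if agent else "no agent registered"

print(orchestrate({"type": "maintenance", "asset": "press-7", "critical": False}))
# → work order raised for press-7
print(orchestrate({"type": "maintenance", "asset": "press-7", "critical": True}))
# → escalated to supervisor: maintenance on press-7
```

In a real deployment the agents would be AI services exchanging data over the mesh; the sketch only shows the routing and escalation pattern.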

Impact on the workforce skillset

In terms of coping with the impact of digital transformation, the world of the factory shopfloor workforce is changing at an ever-faster pace. The tasks and activities done by operators, supervisors, maintenance technicians, quality inspectors, material handlers and others need to be seen through digital, AI and smart factory lenses.

There is a growing realization within the workforce that the convergence of automation, AI, cloud/edge computing, and IIoT is fundamentally reshaping every manufacturing process. AI-driven shopfloor assistants have become increasingly common, guiding workers through machine maintenance, process automation, and quality checks. These digital tools are particularly vital during night shifts or off-hours, when fewer human experts are available on-site to provide support.

Over the last few years, I have observed manual quality inspections being steadily replaced or augmented by advanced vision systems. In fact, many modern machines now come with these cameras factory-installed. From robots performing thousands of precise welds on vehicle seating to the automated painting and injection molding of automotive parts, the shift is undeniable. Consequently, the workforce skillset required to drive digital transformation in these smart factories needs a comprehensive revisit. The sentiment of reskilling is well captured in the book “What Got You Here, Won’t Get You There,” though it’s more pertinent to managers or senior leaders.

Bridging the skillset gap

Through AI innovation, by 2031 over 30 million jobs per year will be redesigned, not eliminated. So learning and development (L&D) leaders need to shape talent development and retention strategies that will stay relevant in the smart factory era and beyond.

Successful initiatives often involve learning and development (L&D) leaders collaborating with business unit heads and digital stakeholders to build a comprehensive transformation matrix. This matrix maps out the manufacturing processes most affected by AI and digital tools, identifies the relevant job roles, and aligns them with the necessary technologies—such as IIoT, cloud computing, Gen AI, agentic AI and computer vision.

From this matrix, the skillset gaps for the impacted roles because of process and technology changes are tracked and fed into the L&D talent development plan. This plan is developed at the BU/plant-level and the requisite investments on training and infrastructure are approved by the business head in conjunction with the digital head.

From my perspective, immersive technology-based training is quite effective in smart factories. Virtual reality (VR) and augmented reality (AR) solutions have helped cut training time by 20-50%, with full tracking of talent proficiency. This information is fed into the learning management system (LMS).

One of the most effective features is that the workforce skillset matrix is generated directly from the learning management system (LMS). This integration enables plant managers to assign operators to specific machinery based on their verified proficiency and skill levels. This automated allocation of production line personnel is becoming increasingly standard, effectively eliminating the risk of unqualified staff operating sensitive equipment. By ensuring the right person is at the right machine, organizations can significantly improve safety, ‘first-time-right’ rates and overall product quality.
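The allocation logic described above can be sketched roughly as follows; the operators, machine types and minimum proficiency levels are invented for illustration, not drawn from any particular LMS:

```python
# Sketch of allocating operators to machines from an LMS-derived skill
# matrix: only operators at or above each machine's minimum level qualify.

skill_matrix = {                 # operator -> verified level per machine type
    "kim": {"welder": 3, "press": 1},
    "lee": {"welder": 1, "press": 4},
}
machines = [("welder", 2), ("press", 3)]   # (machine type, minimum level)

def allocate(machines, matrix):
    """Assign each machine the highest-proficiency unassigned operator,
    blocking anyone below the machine's minimum level entirely."""
    assignment, used = {}, set()
    for machine, min_level in machines:
        qualified = [(skills[machine], op) for op, skills in matrix.items()
                     if op not in used and skills.get(machine, 0) >= min_level]
        if qualified:
            _, op = max(qualified)          # best verified level wins
            assignment[machine] = op
            used.add(op)
    return assignment

print(allocate(machines, skill_matrix))  # {'welder': 'kim', 'press': 'lee'}
```

The safety property the article describes falls out of the qualification filter: an operator with no verified proficiency for a machine simply never appears among its candidates.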

Keeping the workforce AI-ready

The digitalization of manufacturing generates vast quantities of data. While IT and digital teams are responsible for ensuring this data is captured securely on scalable platforms like the cloud, it is equally vital that the shopfloor workforce understands the underlying dataflow. When operators grasp how information moves through the system, they can better support the integrity and efficiency of the smart factory.

Furthermore, the workforce must recognize that data quality is the foundation of any effective AI solution — whether it involves shopfloor assistants or predictive forecasting. Because AI models are trained on specific datasets for specific use cases, their output is only as reliable as the input. Enterprises must strategically determine whether these models should be trained exclusively on internal enterprise data or supplemented with broader industry and internet-based information.

The bottom line is that AI-based solutions help organizations to stay ahead of the curve in terms of differentiation, competitive edge, business decisioning, growth and so on. The upskilling and cross-skilling of the workforce, as per the talent development plan, should be updated and tracked from AI lens, especially as this technology is changing at a rapid speed.

The best practice I have seen being followed in the industry is when the digital/AI team works with the HR and BU teams to identify training for different sets of employee groups. Shop floor training on digital and AI, for instance, will be a lot more hands-on and manufacturing-focused compared to training for mid/senior level executives, where the focus will be about the technology, its impact on the business and how to stay abreast of it.

Industry-specific certifications in digital and AI technologies can significantly enhance workforce productivity and efficiency. To complement formal training, many organizations now partner with startup ecosystems on relevant business projects, giving employees first-hand experience with emerging tools. Furthermore, ‘AI playgrounds’ allow business units to democratize these technologies by applying them to live use cases. Ultimately, bridging the skills gap requires more than just academic instruction; practical, hands-on exercises are essential to ensuring an AI-ready workforce.



Why planning structures must evolve in modern manufacturing

April 22, 2026, 06:00

Across many manufacturing organizations I have worked with, I keep seeing the same puzzling pattern.

Companies invest in better forecasting tools. They implement advanced planning systems. They improve supply chain processes.

Yet something strange still happens.

Some components are overplanned. Others are repeatedly short. Production teams start expediting parts. Suppliers are pushed to deliver faster.

Eventually, leaders ask the obvious question:

If planning systems are improving, why do these imbalances still occur — and why are teams still relying on spreadsheets and manual workarounds?

In my experience, the issue is rarely forecasting accuracy, execution capability or supplier performance. It begins with how planning parameters are defined inside enterprise systems.

Most ERP environments I have worked with still rely on static assumptions, while the real supply chain behaves dynamically. This mismatch between static planning logic and dynamic operational behavior is where structural imbalances originate.

The hidden problem: Static planning parameters

Across implementations, I consistently find that three tightly connected parameters drive planning behavior:

  • Planning Bills of Materials (Planning BOMs)
  • Lead Times
  • Safety Stock

These are typically maintained as master data, reviewed periodically and updated manually, generally once or twice a year. That approach may have worked in stable environments, but modern manufacturing operates under continuous change. Product configurations evolve, customer preferences shift and supply conditions fluctuate.

When these assumptions remain static, the system does not fail; it drifts. And that drift manifests as imbalance across components, time and availability.

Example #1: Planning BOM

In one environment I worked with, the Planning BOM assumed that 70% of orders used a standard PLC module and 30% used an advanced PLC. Over time, actual demand shifted and advanced PLC usage exceeded 50%.

However, the planning structure did not change, largely because updating it required significant manual effort and coordination across teams.

The result was not simply excess inventory — it was misalignment:

  • Overplanning of standard components
  • Underplanning of advanced components
  • Repeated substitutions and expediting

The forecast itself remained reasonably accurate. The imbalance emerged because demand was being translated through outdated structural assumptions.

More fundamentally, I have observed that Planning Bills of Materials, while central to ERP-driven planning, were never designed to capture the full complexity of manufacturing execution. Traditional BOM structures define what needs to be built, but not how it is built.

This limitation has been highlighted in patent US10832197B1, which introduces the concept of a “bill of work” to represent the actual activities, routing and process steps required for manufacturing. However, this type of execution-aware structural modeling is still rarely implemented in most ERP systems, which continue to rely primarily on static BOM definitions.

In my experience, this gap reinforces a broader point: Static planning structures alone are insufficient to model dynamic, real-world production environments.

Example #2: Lead time

I have seen cases where average demand remained stable at 100 units per week and lead time was assumed to be static at 10 weeks. In reality, lead time fluctuated between 8 and 14 weeks.

This did not just affect total inventory; it disrupted timing alignment:

  • Materials arriving too early for some components
  • Materials arriving too late for others

The issue was not quantity. It was synchronization across time.
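A small numeric sketch of this timing drift, using illustrative lead times in the 8-14 week range from the example above:

```python
# Orders are released against a static 10-week lead-time assumption,
# while actual lead times fluctuate. Drift < 0 means material arrives
# early (tying up inventory); drift > 0 means it arrives late (shortage).

STATIC_LEAD_TIME = 10
actual_lead_times = [8, 10, 14, 9, 13]     # weeks, per order (illustrative)

for order, actual in enumerate(actual_lead_times, start=1):
    planned_arrival = order + STATIC_LEAD_TIME
    actual_arrival = order + actual
    drift = actual_arrival - planned_arrival
    print(f"order {order}: planned wk {planned_arrival}, "
          f"actual wk {actual_arrival}, drift {drift:+d}")
```

Total quantity over the horizon is unchanged; only the timing is wrong, which is exactly the synchronization failure the example describes.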

Example #3: Safety stock

When shortages occur, organizations often increase safety stock. Most enterprise systems support this through simple mechanisms:

  • Fixed quantities
  • Coverage-based calculations

Safety Stock = Average Daily Demand × Days of Coverage

Both approaches assume relatively stable demand variability and supply risk.

However, real supply chains are not stable. Demand patterns shift, suppliers fluctuate and disruptions occur frequently. In this context, increasing safety stock often protects a distorted signal rather than correcting it.

In my work on inventory optimization, sometimes referred to as Garg’s Principle, I evaluate safety stock across the full forecast horizon rather than at a single point.

A simplified representation is:

Safety Stock = Target Service Inventory − Minimum Projected Inventory Across the Forecast Horizon

This approach identifies the lowest projected inventory point and ensures buffers protect that constraint. It transforms safety stock from a static buffer into a forward-looking stability mechanism.
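A simplified numeric sketch contrasting the two formulas; the demand, coverage and projected inventory figures are illustrative:

```python
# Coverage-based safety stock: a static buffer from average demand.
avg_daily_demand = 10
days_of_coverage = 5
coverage_ss = avg_daily_demand * days_of_coverage          # 10 * 5 = 50

# Horizon-based safety stock (per the formula above): protect the lowest
# projected inventory point across the forecast horizon, not one period.
# Projected on-hand inventory per period, already netted for planned
# receipts and forecast demand (illustrative numbers).
projected_inventory = [120, 80, 35, 60, 90]
target_service_inventory = 50

horizon_ss = max(0, target_service_inventory - min(projected_inventory))
print(coverage_ss, horizon_ss)   # 50 15
```

The horizon-based figure is smaller here because the buffer targets the actual constraint (the week-3 dip to 35) instead of padding every period uniformly; with a deeper dip it could just as well come out larger than the coverage-based number.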

In practice, I consistently see that increasing buffers alone does not resolve imbalance:

  • Some components become over-buffered
  • Others remain constrained
  • Overall inventory may increase, but instability persists

The problem is not how much safety stock exists; it is how it is aligned.

Individually, each of the above three examples (planning BOM, lead time and safety stock) introduces distortion. Together, they amplify it.

Why static planning structures break in a dynamic world

Many ERP planning systems were designed for environments where product configurations, supplier behavior and demand patterns changed slowly.

That reality no longer exists.

Today’s manufacturing environments operate in constant change. Product variants evolve rapidly, customer expectations shift quickly and supply chains face ongoing disruption. Yet many planning models still assume stable product mixes, fixed lead times and constant buffers.

This gap between dynamic markets and static planning structures is where imbalances begin.

At a broader level, this reflects a structural limitation of ERP-centric planning. ERP systems are highly effective at executing transactions and maintaining control, but they project past data into the future using relatively fixed assumptions. As highlighted in Why ERP-Centric Planning Can’t Keep Up with Modern Supply Chains, such systems often struggle to keep pace when demand patterns, supply variability and product configurations change continuously.

In many cases, supply chains do not struggle because forecasts are wrong; they struggle because the parameters translating demand into supply decisions remain static, are updated infrequently, or require substantial manual effort to change.

Execution systems cannot fix planning imbalance

Planning imbalances do not remain confined to ERP systems; they propagate across the entire manufacturing stack.

Manufacturing Execution Systems (MES) and shop-floor operations depend on the plans they receive. When those plans are structurally imbalanced, execution systems cannot correct them; they simply operationalize the imbalance.

This relationship between planning and execution has been widely discussed in the context of modern MES platforms, which act as the bridge between enterprise systems and real-time production environments, as explored in Manufacturing execution systems: A comprehensive guide to selection and implementation.

I have also discussed a similar pattern in Why your ERP still can’t solve inventory drift — and the architecture that will, where ERP systems struggle not because they are broken, but because they operate on outdated assumptions.

From what I have seen, once a structural error enters the system, it flows through:

Forecast → Planning BOM → ERP → MES → Shop-floor execution

By the time production begins, the imbalance is already embedded.

From static to dynamic planning architecture

For CIOs, I do not see the solution as replacing ERP systems. Instead, I see an opportunity to modernize the intelligence layer that feeds them.

In my experience, artificial intelligence can transform static planning parameters into adaptive models that continuously learn from enterprise data.

AI-driven planning systems can incorporate:

  • Historical configurations and production data
  • Sales inputs and forward-looking programs
  • Engineering changes and substitution patterns
  • Supplier performance and variability

Using these inputs, machine learning models can estimate the probability distribution of components and dynamically generate Planning BOMs that reflect real-world behavior.

In parallel:

  • Lead times can be adjusted dynamically
  • Safety stock can be aligned with forward-looking variability

In practice, this works through four steps:

  1. Build a structural signature from early demand signals
  2. Identify comparable configurations using historical data
  3. Predict component mix probabilities
  4. Generate a dynamic Planning BOM

ERP remains the execution engine, but the structure feeding it becomes adaptive.
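The four steps above can be sketched in miniature as follows; the order history and the 70/30 static split are illustrative assumptions carried over from the Planning BOM example earlier:

```python
# Sketch of a dynamic Planning BOM: replace a stale static usage split
# with component-mix probabilities estimated from recent order history.
from collections import Counter

# Step 1-2: the structural signal here is simply which PLC variant each
# comparable recent order used (illustrative data).
recent_orders = ["advanced"] * 55 + ["standard"] * 45

def dynamic_planning_bom(orders):
    """Step 3-4: estimate component-mix probabilities and emit them as
    the usage ratios of a dynamic Planning BOM."""
    counts = Counter(orders)
    total = sum(counts.values())
    return {variant: counts[variant] / total for variant in counts}

static_bom = {"standard": 0.70, "advanced": 0.30}   # stale assumption
dynamic_bom = dynamic_planning_bom(recent_orders)
print(dynamic_bom)   # {'advanced': 0.55, 'standard': 0.45}
```

A production system would condition these probabilities on configuration features, engineering changes and supplier signals rather than a flat frequency count, but the structural move is the same: the BOM ratio becomes an output of the data instead of a manually maintained input.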

When I experimented with dynamic planning approaches, the impact was structural:

| Behavior | Traditional static planning | Dynamic planning |
| --- | --- | --- |
| Component alignment | Frequent mismatch | Improved alignment |
| Expediting | Frequent | Reduced by ~30–40% |
| Production schedules | Unstable | More predictable |
| ERP–MES alignment | Frequent substitutions | Improved synchronization |
| Safety stock behavior | Increasing without stability | Targeted and stable |

These results reinforce a broader lesson:

Planning challenges are not driven by lack of inventory; they are driven by lack of alignment.

Mini case study: Resolving structural imbalance

In one manufacturing environment I worked with, forecasting accuracy was strong and supplier performance was stable. Yet planning imbalance persisted.

At a system level, inventory appeared sufficient. However:

  • Critical components were frequently unavailable
  • Non-critical components accumulated
  • Production schedules required constant adjustment

The issue was not shortage; it was misalignment.

When I analyzed the system, I found:

  • Planning BOMs reflected outdated configurations
  • Lead times were fixed despite variability
  • Safety stock was increased uniformly

This created a cycle of persistent imbalance and expediting.

We shifted to a dynamic planning approach:

  • BOM assumptions aligned with actual demand
  • Lead times adjusted based on observed variability
  • Inventory evaluated across the planning horizon

Within a few cycles:

  • Imbalance reduced significantly
  • Expediting declined
  • Production schedules stabilized

The key change was not more inventory; it was better alignment.

A strategic opportunity for CIOs and supply chain VPs

From a CIO perspective, this represents a fundamental shift.

The question is no longer: “How do we improve planning tools?”

The better question is: “How do we transform static planning parameters into adaptive planning intelligence?”

Because in modern manufacturing, planning structure is strategy.

Conclusion

Based on my experience, traditional planning systems rely on static assumptions, while modern supply chains operate in constant change.

The challenge is not about inventory levels; it is about planning alignment.

When planning structures remain static, imbalances persist — even when forecasting and execution improve.

But when planning becomes dynamic, when assumptions evolve with reality, those imbalances begin to disappear.

The next era of manufacturing advantage will come not from more inventory or faster execution, but from dynamic real-time alignment between planning assumptions and real-world behavior.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Cybersecurity in the pharmaceutical sector: the experience of Faes Farma

21 April 2026, 06:39

Cybersecurity in the pharmaceutical sector is a matter of public health and operational continuity, not merely of data protection. In an environment of industrial digitalization, geopolitical pressure, post-quantum risk, disruptive artificial intelligence and growing regulation, the only viable response is a comprehensive cyber-resilience strategy based on prevention, detection, response and recovery, backed by senior management and embedded in the organizational culture. So explained Jaime López Ostio, global IT director of Faes Farma, at the CIO ForwardTech & ThreatScape Spain event, held on 16 April in Madrid. More than preventing attacks, the goal in this sector is a different one: ensuring that drug production never stops.

The executive explained that, in recent years, the attack surface has expanded exponentially and now extends to the supply chain. Along these lines, he recounted how an incident at a paper mill can affect the commercialization of a medicine, since it cannot go to market without its package leaflet.

He also spoke about the importance of incorporating cybersecurity by design. “That is what we have done at a new factory in Vizcaya,” he said, after noting the contrast with older facilities whose technology architecture reflects paradigms from decades ago.

López discussed the specific characteristics of a sector that, while its threats are not entirely different, does present conditions that amplify their impact. He pointed to the high value of the clinical and personal data handled, the operational criticality, and the regulatory restrictions.

“What we do have are special attackers, such as the APT29 group, which specializes in targeting the pharmaceutical industry,” he said. He also gave examples of specific attacks. “They can change the production conditions of medicines; for example, changing the required temperature from 12 to 18 degrees. That does not affect the end user, because medicines are tested before they are sold, but it ruins an entire production run,” he said.

The expert also described new risk vectors affecting the pharma sector. One is the uncontrolled use of artificial intelligence tools by employees, or “shadow AI”, which entails exposure of sensitive data, loss of control over strategic information and regulatory risks. Another is the growing interdependence with suppliers, which inevitably enlarges the attack surface. Supplier assessment becomes key not only for industrial continuity, but also because of indirect cyber exposure.

Jaime López Ostio, global IT director of Faes Farma

Garpress | Foundry

The global IT director of Faes Farma noted that some cyberattacks “can change the production conditions of medicines, for example, the temperature required from 12 to 18 degrees. That does not affect the end user, because medicines are tested before they are sold, but it ruins an entire production run”

Post-quantum risk

Faes Farma’s global IT director was equally frank about a danger of growing concern in the sector: post-quantum risk. “A quantum computer able to break encrypted information is a real risk. Some say China already has one,” he warned. “It is information theft carried out on our encrypted data. ‘Harvest now, decrypt later’ seems to be quite close,” he said.

López Ostio structured the cyber-resilience strategy around four fundamental pillars: proactive prevention and protection; continuous detection and analysis; effective response and containment; and recovery and operational continuity. On this last point he stressed the value of applying the 3-2-1 backup rule and running annual drills.

On regulation, he argued that it is an opportunity to justify investments, align business and technology, impose internal standards and generate a cascade effect on suppliers (NIS2). Regulatory pressure, he said, helps overcome organizational resistance.

Finally, he summed up the main cybersecurity lessons of recent times: the importance of continuous improvement, the critical role of the human factor, third-party management and incident preparedness. “You have to assume there will be incidents, practice scenarios and integrate cybersecurity into the corporate strategy,” he concluded.


Data centers are costing local governments billions

17 April 2026, 14:46

Tax benefits for hyperscalers and other data center operators are costing local administrations billions of dollars. In the US, three states are already giving away more than $1 billion in potential tax revenue, while 14 are failing to declare how much data center subsidies are costing taxpayers, according to Good Jobs First.

The campaign group said the failure to declare the tax subsidies goes against US Generally Accepted Accounting Principles (GAAP) and that they should, since 2017, be declared as lost revenue.

“Tax-abatement laws written long ago for much smaller data centers, predating massive artificial intelligence (AI) facilities, are now unexpectedly costing governments billions of dollars in lost tax revenue,” Good Jobs First said. “Three states, Georgia, Virginia, and Texas, already lose $1 billion or more per year,” it reported in its new study, “Data Center Tax Abatements: Why States and Localities Must Disclose These Soaring Revenue Losses.”

While taxpayers may be aggrieved at the tax advantages being dished out to these corporations and the loss of revenue, enterprises looking to run data centers are being offered a lot of favorable terms and are in a good position to benefit from the incentives. Management consultant PwC has pointed out that companies can reap the rewards of a variety of tax breaks for data centers.

Outside the US, other countries are happy to provide financial breaks to data center operators too: the UK can offer 100% tax relief on energy saving technology while Brazil also provides an element of relief for the operation of data centers.

This article first appeared on Network World.


UK wants to build sovereign AI — with just 0.08% of OpenAI’s market cap

17 April 2026, 14:23

The UK government has created a Sovereign AI investment fund with up to £500 million (US$675 million) to spend on turning UK startups into national AI champions.

Its support could involve investments of up to £20 million per startup, or provision of up to 1 million GPU-hours of AI compute, and fast-tracking of visas to bring skilled workers to the UK.

The multi-million-pound budget sounds impressive, but it’s just 0.08% of OpenAI’s recent $852 billion valuation. That company just received fresh investment of $122 billion, dwarfing the UK’s sovereign fund.

Closer to home, that £500 million would buy about 5% of French AI startup Mistral, which has achieved its success by offering a European alternative for businesses that do not want to use American or Chinese AI providers.

The UK government does not have a great record when it comes to investing in national IT champions. In the 1960s and 1970s, the government ran the National Enterprise Board, which provided funding to new technology companies, but even the biggest names helped in this way have slipped out of UK ownership: ICL, a mainframe challenger to IBM, eventually became part of Japan’s Fujitsu, while Inmos, an early innovator in parallel computing, is now part of Franco-Italian chip giant STMicroelectronics.

This article first appeared on Computerworld.


AI, market whiplash and the case for a force multiplier

17 April 2026, 09:00

In early February 2026, markets delivered a powerful reminder of how sensitive industries have become to new developments in artificial intelligence. Software stocks slid after investors reacted to new AI capabilities, and days later, insurance intermediary stocks dropped sharply following news that OpenAI approved a self-service insurance broker application. In less than a week, the software and insurance sectors saw material valuation swings tied to AI sentiment rather than proven outcomes.

These moves aren’t isolated either. Headlines about generative AI tools targeting legal research and workflow automation have also coincided with investor doubt across parts of financial services and other knowledge-work fields. The market reaction was severe and swift — a clear signal that AI can reshape perceptions of value and risk.

Defining AI for your business and your workforce

Strategic AI integration starts with definition. AI today is powerful at pattern recognition, prediction and automation of structured tasks. But it does not think, reason or exercise professional judgment in the human sense. It simulates responses based on training data and statistical relationships, and it can be wrong or “hallucinate” plausible but incorrect results, a risk well documented in research on generative models.

This has real implications in regulated fields such as law, insurance and health care. Only licensed professionals can provide legal or medical advice. AI can augment those professionals — speeding up research or analysis — but it cannot assume responsibility, hold a license or stand in court. The liability and ethical stakes are high.

Instead of viewing AI as a replacement for expertise, CIOs should position it as a force multiplier. AI:

  • Accelerates research
  • Surfaces patterns in data faster than traditional tools
  • Supports decision workflows

But it should not replace professional judgment where outcomes matter. Organizations that treat AI as a co-pilot, not a substitute, protect both quality and trust within their organizations and externally with their customers, vendors and partners.

Building a deliberate AI strategy

To navigate AI disruption effectively, businesses need a clear, offensive strategy that aligns technology with core value propositions. Here are five key priorities:

1. Define AI in business terms

Too often, organizations adopt tools without understanding how they advance strategic objectives. AI is a set of capabilities, not a one-size-fits-all solution. Clarify which problems AI will solve, which outcomes you seek and which risks you must mitigate both with and alongside its use.

2. Reinforce your value proposition

When markets assume an entire industry might be “done” because of AI headlines, it’s usually because the industry’s value has not been sufficiently articulated. Complex commercial insurance advice, nuanced legal counsel and consultative enterprise relationships cannot be fully commoditized. Leaders must articulate and defend these differentiators to both internal and external audiences.

3. Invest in talent, not just tools

AI’s value is directly tied to the humans who deploy it. Firms must maintain a pipeline of entry-level and mid-career talent who understand both domain context and AI literacy so that future entry-level organizational work is not dependent exclusively on AI. This dual fluency is what separates AI-enabled advantage from tool-driven mediocrity.

4. Communicate team value and vision

Headlines drive fear. Clear, consistent messaging about how AI enhances, not replaces, human expertise strengthens morale and aligns teams with strategic direction.

5. Shift from defensive to offensive

Defensive strategies focus on risk avoidance; offensive strategies focus on growth. Leaders must identify where AI can unlock new service models, improve customer experience, streamline operations and create new revenue streams. Redesigning workflows around AI requires intent, not reaction.

The real impact on work

The debate about AI’s impact on jobs often overlooks a more practical reality: AI is more likely to reshape work than eliminate entire professions and industries.

Workforce projections consistently show that automation will affect significant portions of routine and structured work, but there is no broad consensus that employment will disappear wholesale. Many estimates suggest that AI will both displace and create roles, leading to workforce evolution rather than collapse.

In fact, AI’s measurable productivity and employment effects across industries have yet to emerge. A survey of roughly 6,000 executives across the U.S. and Europe found that nearly nine in 10 firms report no significant productivity gains from AI over the past three years, despite broad adoption of the technology. Similarly, most respondents reported minimal impacts on employment to date, underscoring that early AI usage has been more experimental than transformative.

McKinsey’s most recent global survey supports this mixed picture: Around 88% of organizations say they use AI in at least one business function, but only a minority have scaled AI programs across the enterprise or seen material enterprise-wide financial impact.

Talent constraints are slowing AI progress. Industry surveys consistently show that organizations struggle to find professionals with the technical and governance expertise needed to scale AI beyond pilot programs. That imbalance has implications beyond staffing numbers. It affects how all organizations grow, innovate and compete.

The more useful question for CIOs is not how many jobs AI will remove, but how work will be redesigned. AI is not a one-time disruption; it is an ongoing shift in how technology integrates with business strategy. Markets will react to headlines, sentiment will fluctuate and new capabilities will spark fresh waves of optimism and anxiety. Over time, however, AI will simply become part of the enterprise operating environment.

Organizations that navigate this well will not treat AI as either a threat or a cure-all. They will define how it fits their model, invest in talent alongside tools and strengthen the human expertise that sets them apart.

The real question is not whether disruption will continue, because it will. Instead, ask yourself how deliberately you choose to deploy AI when it is in your control.

