
ServiceNow continues its AI transformation with an integrated experience

May 5, 2026, 14:57

ServiceNow has unveiled updates to its workflow management platform advancing its redefinition of itself as the “AI control tower for business reinvention” at its Knowledge customer event this week.

The AI Control Tower product itself, introduced at last year’s event, gets new integrations with Microsoft Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP) and other LLM providers to extend governance and observability of enterprise infrastructure, adding to its existing links with OpenAI and Anthropic. The integrations also span applications such as SAP, Oracle, and Workday. In addition, Control Tower can now discover non-human identities and connected devices to bring OT and IoT under the same governance as AI agents and cloud services.

All this ties in to the ServiceNow Action Fabric, which opens the platform to any AI agent, whether built on ServiceNow or from another source, via a Model Context Protocol (MCP) server, the company said.
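ServiceNow has not published the wire-level details of the Action Fabric MCP server, but the open MCP specification itself is a thin layer over JSON-RPC 2.0. The Python sketch below shows the generic shape of an MCP exchange in which a client discovers a server's tools and invokes one; the tool name and its arguments are invented for illustration and are not ServiceNow specifics.

```python
# Illustrative only: the generic shape of an MCP exchange (MCP frames messages as
# JSON-RPC 2.0). The tool name and arguments below are invented for the example;
# they are not details ServiceNow has published about its Action Fabric server.
import json

def jsonrpc(method: str, params: dict, msg_id: int) -> str:
    """Build a JSON-RPC 2.0 request envelope, the framing MCP uses."""
    return json.dumps({"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params})

# 1. Ask the server which tools it exposes.
list_tools = jsonrpc("tools/list", {}, msg_id=1)

# 2. Invoke one of the advertised tools with structured arguments.
call_tool = jsonrpc(
    "tools/call",
    {"name": "create_incident",
     "arguments": {"short_description": "VPN outage in EU region", "urgency": "high"}},
    msg_id=2,
)

print(list_tools)
print(call_tool)
```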

And thanks to the recent acquisition of Traceloop, Control Tower now provides more extensive observability into agent behavior at runtime. Five new risk frameworks aligned with NIST and EU AI Act standards offer compliance controls.

Autonomous workforce

To expand the reach of what ServiceNow calls the Autonomous Workforce, a group of specialist AI agents announced in February that began with a single L1 IT service desk agent, it has added “AI teammates” that work alongside humans in CRM, IT, employee services, and security and risk management.

The autonomous IT cohort includes an AIOps agent that detects anomalies, correlates events, and triggers remediation, and a specialist for site reliability engineering (SRE) that performs incident triage and postmortem documentation. Other new agents assist with asset lifecycle management and portfolio planning.

Autonomous CRM offers specialist agents for sales qualification and quoting, order fulfillment, managing invoice disputes, and service and renewal, and in the world of employee services, AI specialists act as digital employees with role-specific skills in HR, workplace services, legal, finance, procurement, supplier management, and health and safety.

To round out the offerings, ServiceNow announced Autonomous Security & Risk, designed to span the entire threat landscape, from finding and remediating vulnerabilities to examining third-party vendor risk.

Employee experience

ServiceNow EmployeeWorks, the previously announced “conversational front door for the enterprise”, is now generally available. In addition, ServiceNow announced Otto, an AI assistant that unifies Now Assist, Moveworks, and AI Experience, and operates across the enterprise.

“Rather than living inside a single application, ServiceNow Otto sits across the entire enterprise, understanding intent, routing work to the right agent, and executing it to completion,” the company said. “Employees, customers, and support teams talk, chat, search, browse, analyze, and build. ServiceNow Otto is designed to handle the rest, adapting to each employee’s role and location without requiring them to know which system handles their request. Actions are governed by AI Control Tower, which can log each AI interaction, enforce enterprise policies, and provide explainability for every decision.”

Otto is already available in EmployeeWorks and the AI Control Tower, and will be rolled out in all other products “in the year ahead.”

According to Nenshad Bardoliwalla, ServiceNow’s group VP of AI products, all this means that “together with a new commercial model that bundles everything customers need to deploy AI quickly, we’ve made it clear the era of sidecar AI is over.”

What technology analyst Carmi Levy finds most interesting in these announcements is how quickly we’re seeing AI-enabled workflows extend beyond their initial entry point in IT.

“What was once the exclusive domain of senior IT leaders and planners is now filtering across all operational areas of the typical organization, including CRM, HR, IT operations, security and risk,” he said. “AI is also deeply embedded in the average worker’s desktop and is rewriting their work experiences in the process. Likewise, it puts highly autonomous tools in the hands of organizations intent on improving productivity, sharpening customer responsiveness, and driving operational efficiencies.”

Stephen Elliot, group VP at IDC, added, “The agentic focus is critical as the company continues to expand its specialist agent library. Customers can adopt these across core workflows to realize business value and increase productivity. The recent commercial pricing model complements the agentic capabilities. It meets customers where they are in their AI maturity journey enabling a pragmatic approach to adoption.”

But, he added, “Customers should consider the combination of workflows, AI, data, governance, and security as they deploy AI capabilities. No one model can do it all.”

Indeed, he said, “We are hearing from some CIOs that they are pausing some AI use cases because of the security and governance risks.”

Charles Betz, VP principal analyst at Forrester, said that ServiceNow is on the right track, especially with its continued focus on data. “The data governance, provenance, and currency issues are not trivial. Agents reasoning at machine speed over a stale graph are going to produce wrong outputs, and it’ll be data-quality-based hallucination,” he said. In addition, “documenting decision traces within the AI domain is super important.”

Levy agreed. “ServiceNow’s offerings reflect a keen understanding of where AI can drive optimal benefit throughout all areas of the business, what those workflows might look like, and how the tools and supports need to evolve,” he said.


Oracle will patch more often to counter AI cybersecurity threat

May 5, 2026, 12:38

Oracle plans to issue security patches for its ERP, database, and other software on a monthly cycle, rather than quarterly, to respond to the increased pace of AI-enabled software vulnerability discovery.

Other software vendors, notably Microsoft, SAP, and Adobe, already release patches on a monthly beat, always on the second Tuesday of each month.

Oracle, though, is taking an off-beat approach: It will release the first of its monthly Critical Security Patch Updates (CSPUs) on May 28, the fourth Thursday, and after that, it will release its patches on the third Tuesday of each month — a week after the other vendors — with the next batches arriving on June 16, July 21, and August 18, it said earlier this week.
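Because the new cadence follows a simple calendar rule (a one-off fourth Thursday, then the third Tuesday of each month), the published dates can be reproduced with a few lines of Python. This is only a convenience sketch for planning patch windows, not anything Oracle provides.

```python
import calendar
from datetime import date

def nth_weekday(year: int, month: int, weekday: int, n: int) -> date:
    """Return the n-th occurrence of a weekday (0=Mon ... 6=Sun) in a given month."""
    first = date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return date(year, month, 1 + offset + 7 * (n - 1))

# One-off first release: fourth Thursday of May 2026
print(nth_weekday(2026, 5, calendar.THURSDAY, 4))        # 2026-05-28

# Then the third Tuesday of each following month
for month in (6, 7, 8):
    print(nth_weekday(2026, month, calendar.TUESDAY, 3))  # 2026-06-16, 2026-07-21, 2026-08-18
```

The output matches the June, July, and August dates Oracle has announced.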

The new CSPUs “provide targeted fixes for critical vulnerabilities in a smaller, more focused format, allowing customers to address high-priority issues without waiting for the next quarterly release,” Oracle said.

It will still issue a cumulative Critical Patch Update each quarter, on the same schedule as before. The first one this year came in January.

Oracle initially announced the switch to a monthly patching schedule last week, but did not provide the dates.

The new patching rhythm will primarily interest customers running Oracle applications on premises or in their own or third-party hosting environments. For customers using the software in an Oracle-managed cloud, Oracle applies the patches automatically.

Oracle is using artificial intelligence to identify and fix the vulnerabilities faster than before. It said it has access to OpenAI’s latest models through that company’s Trusted Access for Cyber program, and to Anthropic’s Claude Mythos Preview.

Mythos has contributed greatly to concerns that AI will uncover thousands of zero-day flaws in software, but as of mid-April, only one vulnerability report had been tied directly to it.

This article first appeared on CSO.


SAP to acquire data lakehouse vendor Dremio

May 4, 2026, 23:56

SAP on Monday announced plans to acquire Dremio, which bills itself as an agentic lakehouse company, for an unspecified price. The move is complicated by similar offerings from existing SAP partners Snowflake and Databricks, but analysts point to key differences with Dremio, especially in its ability to work with data while it sits in the enterprise’s environment, rather than having to live externally.

One of SAP’s justifications for the acquisition is that it will theoretically make it easier for IT executives to combine SAP data with non-SAP data. But its strongest rationale involves Dremio’s ability to make complex data more AI-friendly, so that it can more quickly and cost-effectively be made usable. 

“Most enterprise AI projects fail to deliver value not because of the AI itself, but because the underlying data is fragmented, locked in proprietary formats and stripped of the business context that makes it meaningful,” the SAP announcement said. “The result is a familiar and costly pattern: pilots that cannot scale, slow integration of new data sources, duplicated engineering work and compliance risk when organizations cannot explain how an AI-driven decision was reached. Dremio helps eliminate that data fragmentation and integration friction.”

While SAP is citing the data quality argument, there are many elements of enterprise data quality that Dremio doesn’t address, including data that is outdated, comes from unreliable sources, or lacks meaningful context.

However, SAP said, “With Dremio, SAP Business Data Cloud will become an Apache Iceberg-native enterprise lakehouse that unifies SAP and non-SAP data to power agentic AI at enterprise scale. Apache Iceberg is the industry-standard open table format, and SAP Business Data Cloud will natively support it as its foundation.” This means that there need be no data movement or format conversion; SAP and non-SAP data “can coexist on the same open foundation, with federated analytical reach across every enterprise data source.”
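To make the "open foundation" point concrete: because Iceberg is an open table format, a table written by one Iceberg-capable engine can be read by another. The sketch below uses the open-source pyiceberg client; the catalog URI, token, and table name are placeholders for illustration, not anything SAP or Dremio ships.

```python
# Minimal sketch, assuming a REST-style Iceberg catalog; endpoint, token, and table
# name are placeholders for illustration only.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lakehouse",                                   # local alias for the catalog
    uri="https://iceberg-catalog.example.com",     # hypothetical REST catalog endpoint
    token="REDACTED",
)

# A table written by one engine (for example, an SAP pipeline) can be scanned by any
# other, because the metadata and data files follow the Iceberg specification.
table = catalog.load_table("finance.billing_documents")
df = table.scan().to_pandas()                      # read without copying or converting the table
print(df.head())
```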

Complicated comparison

Analysts and consultants said that any comparison of Dremio to existing SAP partners Snowflake and Databricks is complicated. For example, Dremio is younger and less established than either Snowflake or Databricks, which suggests that it is a less ideal match for enterprises. 

SAP strategy specialist Harikishore Sreenivasalu, CEO of Aarini Consulting in the Netherlands, said that both Snowflake and Databricks would have been ideal acquisition targets many years ago, but they would be far too expensive today. 

“Databricks and Snowflake are better [for enterprise IT] for sure because they have a mature platform, they do multi cloud” whereas Dremio “is the new entrant in the market and they have to mature more to be enterprise ready. Their security aspects need to mature,” Sreenivasalu told CIO.

But Sreenivasalu added that the situation could easily change after SAP invests and works with the Dremio team. He advised CIOs to “stick with where you are today but watch how technologies get integrated. Listen to the SAP roadmap.”

In a LinkedIn post, Sreenivasalu said the move still is very positive for SAP: “This is the missing piece. SAP has Joule. SAP has BTP. SAP has the business processes. Now it has the open data fabric to feed AI agents the context they need to act, not just answer. For those of us building on SAP BTP + Databricks + SAP BDC, this is a signal: the lakehouse and the ERP world are converging, fast. The future of enterprise AI just got a whole lot clearer.” 

Addresses LLM limitations

During a news conference Monday morning, SAP executives focused on how this move potentially addresses some of the key large language model (LLM) limitations with enterprise data, especially with predictive analytics.

Philipp Herzig, SAP’s chief technology officer, said that LLMs have various limitations, noting, “LLMs don’t deal really well with numbers” and that they struggle with structured data “where we have a lot of differentiation.” 

The practical difference is when systems try to predict the future as opposed to analyzing the past, such as when asking how well a retailer’s product will sell over the next 10 months, or predicting likely payment delays and their impacts on projected cashflow. “This is where LLMs struggle a lot,” Herzig said. He also stressed that Dremio’s ability to work with enterprise data while it still resides in that organization’s on-prem systems is critical for highly-regulated enterprises. 

Local data difference

Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, also sees the ability to handle data locally as the big draw.

Databricks and Snowflake both offer strong functionality, he pointed out, but users must move the data to their platform and reformat it. After this is complete, the result is a central data lake to address data access needs. “Dremio, on the other hand, provides easy decentralized data access, allowing users to access their data in place,” he said. “Of course, this could be at the expense of data processing performance, but the ease of use and flexibility could outweigh the performance loss.” Implementation speed in days versus weeks or months is another plus, he added. “There is a significant benefit to that.”
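A rough illustration of the difference Villanustre describes: with a federated engine, a single query can join a table that stays in the SAP estate with one sitting in a data lake, instead of first copying both into a central warehouse. The snippet below is a conceptual sketch only; the schema names are invented, and the connection object is assumed to be any DB-API-style handle to such an engine, not Dremio's actual client library.

```python
# Conceptual sketch: the point is the single federated SQL statement, not the client API.
FEDERATED_JOIN = """
SELECT o.order_id, o.net_amount, c.churn_risk_score
FROM   sap_s4.sales.orders          AS o   -- stays in the SAP-side source
JOIN   lake.analytics.churn_scores  AS c   -- stays in the object-store data lake
  ON   o.customer_id = c.customer_id
WHERE  o.order_date >= DATE '2026-01-01'
"""

def top_at_risk_orders(conn, limit: int = 20):
    """Run the join in place via a hypothetical DB-API connection; no upfront data movement."""
    cur = conn.cursor()
    cur.execute(FEDERATED_JOIN)
    rows = cur.fetchall()
    # Sort by churn risk (third column) and keep the most exposed orders.
    return sorted(rows, key=lambda r: r[2], reverse=True)[:limit]
```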

Sanchit Vir Gogia, chief analyst at Greyhound Research, agreed with Villanustre, but only to a limited extent. 

“The distinction is not as clean as ‘Dremio lets data stay in place, while Snowflake and Databricks require everything to move,’” he noted. “Snowflake and Databricks have both invested significantly in external data access, sharing, open formats, governance layers, and interoperability. So it would be unfair to describe either as old-style ‘move everything first’ platforms.” But, he added, the broader argument is correct. “[Dremio] starts from the assumption that enterprise data is already distributed and that the first problem is often access, context, federation, and governance, not wholesale relocation. For SAP customers, that matters a great deal,” he said.

That’s because of the nature of many of SAP enterprise customers’ datasets. 

“Most large SAP estates are not clean, centralized data environments,” he pointed out. “They are brownfield landscapes: SAP data, non-SAP data, legacy warehouses, departmental lakes, regional repositories, acquired systems, partner data, and industry-specific platforms.” While telling these customers that AI-readiness begins with moving everything into one central platform may be good for the vendor, it’s a lot of work for the buyer.

Dremio gives SAP “a more pragmatic story,” Gogia said. “It allows SAP to say: keep more of your data where it is, access it faster, apply more consistent catalogue and semantic controls, and bring it into Business Data Cloud and AI workflows without forcing a major migration program upfront.”

Aman Mahapatra, chief strategy officer for Tribeca Softtech, a New York City-based technology consulting firm, noted that an acquisition of either Snowflake or Databricks would obliterate SAP’s marketing message/sales pitch.

“SAP did not buy a data warehouse. They bought a position in the open table format wars, and the timing tells you exactly why Snowflake and Databricks were never realistic targets,” he said. “Acquiring either would have collapsed SAP Business Data Cloud’s neutrality story overnight and alienated half the customer base in either direction. SAP’s strategic position depends on sitting above the warehouse layer rather than inside it, and Dremio is the federated layer that talks to both Snowflake and Databricks without requiring SAP to pick a side.”

Assume things will change

Mahapatra urges enterprise CIOs to be extra cautious. 

“For IT executives with active Snowflake and Databricks contracts this morning, nothing changes in the next two quarters, but by the first half of 2027, expect SAP to steer net-new AI workloads toward Business Data Cloud regardless of what the partnership press releases say today. The CIOs who plan for that trajectory now will negotiate from strength,” Mahapatra said.

The compute and storage that data warehouse vendors provide are rapidly becoming a commodity, he said, and the “defensible value” in enterprise AI is migrating up the stack to the semantic layer, the catalog, the lineage graph, and the business context that lets an agent know what ‘active customer’ means within an organization.

“SAP just bought the toolkit to own that layer for any company running SAP at the core,” he said. “If you are an SAP-heavy shop running analytics on Snowflake or Databricks, your warehouse vendors are about to feel less strategic and more like high-performance compute backends.”

Corrects a strategic error

Jason Andersen, principal analyst for Moor Insights & Strategy, noted that for quite some time, SAP has been relentlessly encouraging enterprises to host all of their data within SAP systems. SAP can’t reverse that position even if it wanted to. 

What the Dremio deal does, Andersen opines, is to instead address the pockets of data that many enterprise CIOs, especially in manufacturing and highly-regulated verticals, have refused to turn over to SAP. The Dremio deal gives SAP a face-saving way to get an even higher percentage of its customers’ data, he said. 

“Manufacturing is loath to put things in the cloud and [manufacturing CIOs] put up a violent protest [against] going into the cloud,” Andersen said. “This [acquisition] lets SAP access a lot of data that hasn’t yet moved to SAP.”

Shashi Bellamkonda, principal research director at Info-Tech Research Group, said he sees the SAP Dremio move as fixing a strategic error that SAP made years ago, when it did not develop its own Apache Iceberg capabilities. 

“Apache Iceberg is an open-source table format designed for large-scale analytical datasets stored in data lakes, a kind of bridge between raw data files and analytical tools,” Bellamkonda said. “[SAP] should have done this earlier rather than waiting till 2026.”


SAP’s new API policy restricts AI access, draws customer criticism

May 4, 2026, 13:29

With the rise of AI, APIs have once again become increasingly vital tools for fueling transformation. Enterprise software APIs, in particular, provide a critical link for CIOs’ AI strategies, enabling them to extract data from core business systems and feed it into their AI models of choice, for analysis, decision-making, and action.

In response to the rapidly increasing use of APIs by non-SAP systems, enterprise software giant SAP has introduced a new API policy limiting access to the data housed in its systems. According to an official statement, the policy stipulates that only those interfaces listed in the SAP Business Accelerator Hub or in the respective product documentation are considered published APIs.

“Customer and third-party applications must not access, invoke, or interact in any manner with APIs that are not Published APIs,” the policy states.
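For contrast, this is the kind of access the policy still allows: a call to a published, documented OData interface of the sort listed in the SAP Business Accelerator Hub. The sketch below is illustrative only; the hostname, credentials, and field names are placeholders, and the exact path and authentication depend on what a given system exposes.

```python
# Hedged sketch of reading business partners via a published OData v2 API.
# Host, user, and password are placeholders; consult the Business Accelerator Hub
# entry for the interface actually released on your system.
import requests

BASE = "https://my-s4-system.example.com/sap/opu/odata/sap/API_BUSINESS_PARTNER"

resp = requests.get(
    f"{BASE}/A_BusinessPartner",
    params={"$top": "5", "$format": "json"},
    auth=("TECHNICAL_USER", "REDACTED"),
    timeout=30,
)
resp.raise_for_status()

for bp in resp.json()["d"]["results"]:            # OData v2 wraps entities in d.results
    print(bp["BusinessPartner"], bp.get("BusinessPartnerFullName", ""))
```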

‘This is unacceptable.’

While SAP justifies its new API policy as “designed to safeguard solution health” and as a necessary guarantee of technical stability, the policy could jeopardize the security of customers’ strategic plans as well as their innovation capabilities, the German-speaking SAP User Group (DSAG) warns.

“For SAP-to-non-SAP scenarios, this means: They will only be reliably supported where SAP has explicitly published and documented the underlying interfaces,” DSAG Chairman Jens Hungershausen explained in a statement.

Furthermore, the DSAG believes that the SAP Business Accelerator Hub and the vaguely defined product documentation have not yet been clearly established as contractual components. From the customer’s perspective, this necessitates the creation of clear and reliable framework conditions to enable early assessment of the impact of changes, Hungershausen stated.

“The DSAG has long been demanding absolutely reliable contract documents. However, SAP has taken a contrary position, for example with the SAP Business Data Cloud and now with its API Policy,” says Michael Bloch, DSAG board member for licenses, contracts, and support. Customers currently have questions regarding the interpretation of the documentation, and from DSAG’s perspective, there is a need for clarification regarding their contractual classification. “This is unacceptable,” Bloch states.

Cutting off AI system access?

The DSAG points out that potential new pricing models or usage regulations surrounding APIs must be communicated transparently — and early — to ensure planning certainty for customers and partners. SAP, for example, has already developed a pricing model with its Digital Access model for creating certain document types in indirect usage.

“According to SAP information, there will be a fair-use model. However, the specific details are currently unclear and should be transparently documented in the API policy,” Bloch says.

Another critical point is that SAP links API usage to technical and organizational requirements. Moreover, use of APIs is restricted for certain scenarios, including:

  • Undocumented purposes
  • Systematic or large-scale data extractions
  • In conjunction with use of (semi-)autonomous or generative AI systems

Here, API usage is permitted only if it explicitly takes place within architectures or services provided by SAP.

“Except through and within the limits of SAP-endorsed architectures, data services, or service-specific pathways expressly identified and intended for such purposes, SAP prohibits API use for: (a) interaction or integration with (semi-)autonomous or generative AI systems that plan, select, or execute sequences of API calls, and (b) scraping, harvesting, or systematic and/or large-scale data extraction or replication,” the policy states.

“According to the information available to us, existing customer integrations and authorized partner solutions are not affected,” says DSAG CTO Stefan Nogly. However, he believes this important protection for existing integrations should be explicitly stated in SAP’s API policy.

Nogly points out that many user companies are already working on proofs of concept (PoC) and pilot projects based on the current interpretation of API usage. “From a customer perspective, we see a significant need for clarification and adaptation — especially to avoid disrupting existing business-critical end-to-end processes or making them legally vulnerable,” he says.

width="1024" height="576" sizes="auto, (max-width: 1024px) 100vw, 1024px">
Stefan Nogly, DSAG Executive Board Member for Technology: “In an era of increasingly heterogeneous architectures and intensive AI experiments, APIs are a key driver of innovation.”

DSAG

More transparency and transition periods needed

The SAP user group is particularly critical of SAP’s lack of transparency. Its members point out that the new API policy does not clearly document which specific APIs are affected, nor is the extent of the impact clearly defined. “The question is which interfaces are used in the partner solutions,” says DSAG Chairman Hungershausen.

According to DSAG’s understanding, those using official APIs don’t need to take any action, although the lack of contractual safeguards doesn’t guarantee absolute security. For some partner companies, however, the effort involved could be significant, and business models could collapse.

“Therefore, it is essential that SAP grants customers more time for the transition,” Hungershausen says. Customers and partners also need concrete technical and organizational support for switching to SAP-supported interfaces.

From DSAG’s perspective, it is crucial that customers are not forced to resort to other solution providers due to a lack of viable alternatives when existing scenarios are limited.


The rise of the double agent CIO

May 4, 2026, 07:00

CIOs of B2B SaaS companies are just as responsible for representing technology as they are for running it. In an environment where the buyer is often another CIO, however, the role becomes something fundamentally different. It’s no longer confined to internal execution. It extends into the market, customer conversations, and the moments that ultimately shape revenue, trust, and long-term relationships. So the modern SaaS CIO operates as a true double agent, running the business from within while representing it to the market.

Box CIO Ravi Malick sits squarely in that duality. After serving as CIO of Vistra Energy, a company defined by legacy systems and industrial scale, he stepped into a digitally native, founder-led SaaS business in 2021 where technology is inseparable from the business itself. He now leads internal tech while engaging directly with CIOs of companies evaluating Box, bringing a perspective shaped by both worlds. What stands out in Malick’s perspective isn’t how different the role is, but how much more expansive it’s become.

What stays the same, what evolves

The core tension of the CIO role hasn’t changed. “There’s always more demand than you have the capacity or funding for,” Malick says. Prioritization, alignment to business strategy, and the constant need to modernize while operating at scale still define the job. The difference, however, is the environment in which those challenges now exist.

At Box, Malick operates inside a workforce where technology fluency is high and expectations are even higher. “I partner with 3,000 technologists who love to solve problems with technology,” he says. That creates a powerful advantage, but also a new kind of pressure. Demand for tools, platforms, and innovation is constant, and AI has only accelerated it.

That dynamic is further shaped by Box’s leadership. As a founder-led company, technology conversations extend well beyond the CIO’s organization. “It’s a different dynamic when your CEO is a founder and a technologist,” Malick says. “You’re as much a steward of incoming ideas as you are a generator of them.” That relationship creates both pace and perspective, requiring the CIO to operate as both orchestrator and partner in shaping how technology evolves across the business.

In that context, the CIO is leading within a highly informed, highly engaged organization where expectations for speed and innovation are constant. The challenge isn’t modernization as a one-time effort, but ensuring the tech stack continuously evolves and scales with the business.

Balancing the internal mandate with external pull

What truly differentiates the role in SaaS is what happens outside the enterprise, and the pressure that comes with it. The CIO is still accountable for running IT, ensuring security, and maintaining operational excellence. At the same time, there’s growing expectation to show up externally, engage customers, and directly support revenue.

Malick doesn’t present that balance as seamless. “It’s a daily challenge,” he says. “But sometimes not balanced so well.” There’s a constant push and pull between internal priorities and external demands, and in many cases, revenue pulls hard. The opportunity to influence deals, build relationships, and contribute to growth elevates the strategic importance of the role, but it doesn’t remove the responsibility for the day job.

What allows Malick to operate effectively in both worlds is the strength of the foundation behind him. He points to the maturity of his leadership team, operating model, and internal processes as critical enablers. With clear structures, strong leaders, and disciplined execution in place, he has the bandwidth to spend meaningful time externally. It isn’t always a perfect balance, but it’s a deliberate one.

From operator to peer in the market

Through Box’s customer zero program, Box on Box, Malick operates as both CIO and practitioner, bringing firsthand experience into customer conversations. “I can take how we build at Box to customer conversations,” he says. That perspective shifts the dialogue away from product positioning, and toward the realities of execution.

In a market where CIOs are constantly being pitched, that distinction carries weight. “They want to know how it works from the perspective of someone managing it,” he says, adding he leans into that by being transparent about both successes and missteps. “We share the challenges and false starts we’ve managed through.”

That candor builds credibility, and credibility builds trust. After all, people buy from people they trust, and in enterprise technology, says Malick, peer-to-peer conversations are a faster path to trust than demos. 

The external dimension of the role also holds a symbiotic relationship with internal responsibilities. Malick brings customer conversations back into Box, using them to inform how he thinks about technology decisions and broader strategy. He describes the CIO community as uniquely open, even therapeutic, where leaders candidly share challenges and exchange ideas. That openness creates a feedback loop where external insights sharpen internal execution, and internal experience strengthens external credibility.

What this means for the CIO role

What makes Malick’s perspective especially relevant is that the lesson isn’t limited to SaaS. As technology becomes more central to growth, customer experience, and business model change, CIOs in every industry are being pulled closer to the front office. The shift is about becoming more fluent in how technology translates into trust, speed, and commercial impact, not just becoming more visible.

For Malick, one of the biggest lessons is that the role now demands a different kind of leadership than many CIOs were originally trained for. “Don’t make assumptions, and don’t assume something’s easy or intuitive,” Malick says. In a world where technology is reshaping how people work in real time, communication becomes a strategic discipline. CIOs have to explain change, absorb feedback, and keep translating between technical possibility and business reality.

The rise of AI adds another dimension to the double agent role. CIOs are building the content foundation that AI needs to be effective, and ensuring the organization can experiment with AI without sacrificing compliance or control. In a fast-paced technology company, ideas, opinions, and new technologies come from every direction. So the CIO isn’t simply the expert with the answers but often the one managing velocity itself, deciding where to push and where to hold.

“You have to figure out when you need to be in the fast lane and when you don’t,” Malick says. That kind of judgment is becoming more critical as technology moves to the center of the business, and it’s another reason CIOs are stepping into CEO and COO roles.

As AI accelerates the pace of change and creates the potential to decouple revenue growth from headcount growth, that ability to manage speed, scale, and tradeoffs becomes a defining leadership capability. That’s why the SaaS CIO should matter to leaders far beyond software. With AI transforming every industry, the role is becoming a preview of where the profession is headed — not just to run technology, but help shape how the company grows, how it shows up in the market, and how it earns trust. The double agent CIO may sound like a SaaS phenomenon. Increasingly, though, it looks more like the future of the job.


Q&A | “S/4HANA migration unlikely to meet 2027 deadline… ERP remains core”

April 30, 2026, 04:21

Migrating to SAP S/4HANA remains a central item on the agenda in the IT landscape of large enterprises. So where do companies actually stand, and which strategies are proving effective?

To get a picture of the current state of SAP migrations, the main challenges CIOs face in adopting S/4HANA, and the latest ERP trends, we interviewed Holger Scheel, managing director of enterprise consulting firm CBS (Corporate Business Solutions).

Q: How strongly is the S/4HANA transformation currently shaping your project business?
A (Holger Scheel): It’s a key driver. About 98% of the solutions we implement are SAP-based, and a large share of our projects stem from the need to modernize ERP systems and move to S/4HANA.

Q: Where do companies currently stand in this transformation?
A: Looking at our core sectors, large and midsize manufacturing companies, a generous assessment is that about half have started the transformation. Very few, however, have fully completed it.

Q: Many companies are facing SAP’s 2027 deadline. Is that realistic?
A: Not for many of them. Large companies with numerous ERP systems, in particular, are unlikely to reach extended maintenance by 2027. 2030 is a more realistic target.

Q: Which migration strategy dominates today, brownfield or greenfield?
A: Neither in its pure form. The classic greenfield approach of rebuilding everything from scratch has produced very few success stories. At the same time, brownfield, the purely technical conversion, has not become the standard among manufacturing companies either.

Q: What approach is actually preferred, then?
A: Most companies are adopting a hybrid strategy that selectively combines innovation and conversion, often called “smart brownfield” or “mix and match.”

The reason is clear. A pure brownfield approach costs money and time but creates no additional business value. Companies want more than just meeting the support deadline; they want real results: improving processes, introducing innovation, and having a clear value story that explains to management why the transformation is worthwhile.

Q: SAP strongly promotes its “clean core” approach. How realistic is it in practice?
A: It’s not a concept to take at face value. Clean core does not mean using only standard software and eliminating all customization. It is closer to a new design paradigm: keep the ERP core, that is S/4, as close to the standard as possible, and implement company-specific functionality outside it, on platforms such as SAP BTP or through new development approaches.

Q: Critics argue that excessive standardization can weaken competitiveness. What is your view?
A: Differentiation remains extremely important. Companies must continue to demonstrate their unique capabilities. Clean core only changes how that is implemented technically; it does not remove the need for differentiation.

That is why we use the term “cleaner core.” The simpler and tidier the ERP core, the easier upgrades and new functionality become, and the more agile the company becomes. But this is a gradual evolution, not a radical break.

Q: Where do companies most underestimate the complexity of S/4HANA projects?
A: The most important factor is the starting point. In manufacturing especially, many ERP systems have grown over 20 years or more, leaving complex data structures and historically accumulated processes. Breaking these down and reorganizing them into a standardized, consolidated target structure is an enormous task, and one that often has to run in parallel with introducing innovations.

Q: What does this mean for actual projects?
A: The more legacy systems a company has, and the less prepared its process, system, and data landscape is, the longer the transformation takes. The path to a future-ready ERP platform becomes correspondingly more complex.

Q: Many companies are currently using external AI solutions rather than SAP’s. Do you see SAP under pressure?
A: To some extent, yes. A software vendor has to deliver. Historically, though, SAP has rarely been a first mover with new technologies. Its strength has been integrating business context into its systems.

Looking at the market today, AI clearly matters a great deal to customers. Companies expect it to give them a competitive edge and improve efficiency. As a result, many AI solutions have been implemented outside SAP in the past.

Going forward, the crucial point is the “Business AI” that SAP emphasizes: how to use the vast amounts of data generated by business processes effectively. That data lives in the ERP system, the core of the company. With its Business Suite and ERP-centric architecture, SAP is very well positioned here.

Q: There has been lively debate recently about AI and the future of traditional ERP systems. Is the business model under threat?
A: That is largely a misconception. Many people overestimate the so-called “ChatGPT moment” and assume an ERP foundation is no longer needed. But AI systems require a reliable, semantically clean data foundation, and that is where SAP has a clear advantage thanks to its installed system landscape, its customer base, and its experience with manufacturing customers.

Q: So you don’t believe ERP vendors’ business model is structurally threatened?
A: We expect the current overheated mood to calm down and the importance of a solid ERP backbone architecture to be recognized again. ERP is the foundation for implementing new concepts such as the “agent-driven enterprise,” meaning a system of action built on top of an existing transaction-centric system such as ERP.

Q: Is this similar to platform models such as ServiceNow’s?
A: Yes. That is the likely direction: an agent layer placed on top of existing systems. SAP is following this trend as well, integrating the corresponding functions and solutions into its architecture.

Q: How should the overall architecture be understood?
A: There are essentially two layers: the system of record and the system of action, with a data layer in between. With Business Data Cloud, SAP offers a way to give data context and merge the analytical and transactional levels, across systems and vendors.

Q: What does this architecture imply?
A: The real value still lies in the data a company holds, not raw data but clearly understood, contextualized business data. Only with that foundation can an “agentic enterprise” operate reliably and, ultimately, be trusted.

Q: AI providers have significantly reduced error rates. Even so, the reliability companies require is said to still be out of reach. What is your assessment?
A: The current level is not yet sufficient. It falls short of the 100% reliability needed for core business decisions.

Even when technically well implemented, today’s AI still works largely by estimation. That can be improved further, but by its nature it never entirely stops being estimation.

What matters in the end is the reliability of a semantically clean data foundation. That requires a stable, trustworthy data core that can stand up even to audit standards, and that is exactly the foundation ERP systems provide.

Q: So ERP systems remain core infrastructure?
A: Yes. ERP is still the company’s core backbone. Manufacturing companies in particular have invested enormously in their ERP systems, and simply replacing them with agent-based systems is, in my view, not realistic.

Q: How far along are AI projects in the SAP environment today?
A: We set a clear guideline more than a year ago: when we design target scenarios for customers’ digital transformation in the SAP environment, AI is an essential element from the very beginning, applied wherever possible to create maximum value.

AI is changing the consulting business itself and is becoming a core element of future SAP-based solutions. We continue to invest actively and consistently in this area.

Q: How does SAP categorize AI?
A: SAP distinguishes between embedded AI and custom AI. Embedded AI comprises the capabilities included in the standard software; custom AI refers to solutions built for specific use cases on SAP technologies, for example the Business Technology Platform.

Until recently, we worked mainly with custom AI because relatively little was available in the core. That has changed considerably.

Q: What role does AI play in your projects today?
A: Expectations have risen sharply. The question of AI potential is now a standard item in workshops with business departments.

As we analyze processes, we look together at where repetitive tasks occur, where automation is possible, and where AI can reduce complexity in analysis or control. The number of practical application areas keeps growing, and customer interest is rising fast.

Q: Could AI become a new growth area for you? S/4HANA migrations drive your business today; what happens afterward?
A: Eventually, companies will complete their S/4 migrations. For providers focused solely on technical migrations, that can be a risk. But the market is not that simple.

Demand for S/4 will not disappear abruptly. Purely migration-driven projects may decline, but follow-up projects will take their place.

Q: What do you mean by follow-up projects?
A: The concept of the “post-S/4 transformation” is already being discussed. Many companies moved to S/4HANA via a brownfield approach without fundamentally changing their processes. The real substantive change, namely innovation and the advancement of business processes, is only now beginning.

Externally, companies’ transformation journeys are often perceived as simpler than they really are. This is not just about introducing a new system; it is a gradual evolution toward a data-driven organization and, ultimately, toward the “agentic enterprise.”

Q: So the transformation is a long-term process?
A: Yes. Companies expand their capabilities systematically over time. S/4HANA is closer to a starting point than a destination. Companies today are pursuing broader agendas in parallel: business transformation, technical conversion, process innovation, and digital transformation.

Q: As SAP expands its own capabilities, won’t the role of consulting partners shrink?
A: The key is how consultants lead companies through change: how business processes are designed and how the path to the future is laid out. That will remain the heart of consulting.

In fact, the consulting field is expanding, not shrinking. The complexity of systems and processes is growing sharply, and concepts like the “agentic enterprise” add a new semantic layer. As a result, IT and process landscapes are becoming more sophisticated and more demanding.

Companies increasingly need support in understanding and structuring this complexity and in deriving practical business solutions from it. That is precisely the role of consulting.

By contrast, purely implementation-oriented services such as traditional development, testing, and configuration are under pressure. That trend existed before, but AI and SAP’s strategic shift are likely to accelerate it. Those services will become easier to replace, and demand for them is likely to decline.
dl-ciokorea@foundryco.com



Salesforce expands beyond the front office with Agentforce Operations

April 29, 2026, 09:05

Enterprises have been fixated on AI agents for front office workflows, but there’s still a lot of operational drag behind the scenes. Many back office tasks, such as returns processing, inventory reconciliation, and supply chain oversight, are still performed manually, leading to inefficiencies.

Today, Salesforce is turning its attention to that problem with Agentforce Operations, which tasks AI agents with the drudgery of the back office. The company claims agents can cut cycle times by up to 70% for processes like auditing and onboarding, and eliminate 80% of manual chores like data entry.

Compared to Salesforce’s Agentforce, an early-entrant agent builder platform, the new Agentforce Operations “is tackling a completely different problem,” said Sanjna Parulekar, the company’s VP of AI. “There’s so much time spent on these back office processes, to no avail.”

Autonomous agents handle ‘busy work’

Agentforce Operations builds on Salesforce’s acquisition of Regrello, an AI-powered operating system for manufacturing and supply chains.

Agentforce Operations coordinates AI agents that handle “busy work,” based on business process blueprints. These guidance documents can be loaded into the system for company-specific workflows, or users can access 30-plus out-of-the-box blueprints for common tasks like onboarding, invoice auditing, or rescheduling. Either way, users don’t need to build models from scratch.

“You have a Lucidchart, a Word doc, a drawing, you upload it into Agentforce Operations, and it’ll digitize that process into a multi step workflow,” Parulekar explained. “It’ll split up the work into several minion agents that can take action on different steps.”

For instance, agents extract data from documents, run computations, or identify gaps in compliance. They can work across typically disconnected systems like email or enterprise resource planning (ERP) platforms.

Human users can continue working with existing tools and interact via email, Slack, or Microsoft Teams, tweaking AI activities as needed, and updating agent operations with plain language. The system automatically flags delays (such as lags in required approvals) or suggests fixes. Every agent action is recorded and mapped back to the digital blueprint.

“You can build in steps for review, for humans to be in the loop wherever you want,” said Parulekar. “That combination of non deterministic and deterministic, when it comes to this agentic AI world, it’s so critical.”
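To make the blueprint idea concrete, here is a deliberately simplified sketch, not Salesforce's Agentforce Operations API: a deterministic sequence of steps, most executed by agent functions, with an explicit human-review gate where the process owner wants one, and every action logged against the blueprint.

```python
# Simplified conceptual sketch (not Salesforce's API): a blueprint is an ordered list of
# steps, each handled by an agent function, with optional human-review gates.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]        # what the "minion agent" does with the case data
    needs_human_review: bool = False   # insert a review gate wherever the owner wants one

def extract_invoice_fields(case: dict) -> dict:
    case["fields"] = {"vendor": "ACME GmbH", "amount": 1250.00}   # stand-in for document AI
    return case

def check_compliance(case: dict) -> dict:
    case["compliant"] = case["fields"]["amount"] < 10_000         # toy policy rule
    return case

BLUEPRINT = [
    Step("extract", extract_invoice_fields),
    Step("compliance_check", check_compliance),
    Step("approve_payment", lambda c: {**c, "approved": True}, needs_human_review=True),
]

def run_blueprint(case: dict) -> dict:
    for step in BLUEPRINT:
        if step.needs_human_review:
            print(f"[review gate] a human signs off before '{step.name}' runs")
        case = step.run(case)
        print(f"[audit log] {step.name} -> {case}")               # every action maps back to the blueprint
    return case

run_blueprint({"invoice_id": "INV-0042"})
```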

Salesforce claims that a single AI agent can perform an audit within 60 seconds; normally, this would take a team of human auditors four hours to complete.

A new area for Salesforce

Matt Mullen, lead analyst for AI applications at consultancy firm Deep Analysis, noted that the ability to rapidly create a diagram of a process, ingest it, and have a workable starting point for an automated version is indeed a potential time-saver.

When combined with technology such as task mining via Salesforce Apromore that determines process details up front, it offers “real potential to organizations looking to modernize their key processes,” he said.

The biggest hurdles enterprises face when handling backend workflows are “complexity and criticality,” Mullen noted. But these processes define primary operations: How things are made, how materials are ordered, how products are shipped.

“Those processes have a lot of moving parts, and typically are integrated into a whole raft of line-of-business systems at various points in their execution,” he said. Processes evolve, and in some cases they’re only understood in totality by a small number of people.

Thus, enterprises that already have partial or complete automation established for various processes will likely see Salesforce as a cost-effective tool, Mullen said, particularly in areas like banking, insurance, healthcare, or in heavy industries like construction that still rely on manual, paper-heavy tasks.

That said, this is a new area for Salesforce. The company will need to enable its vast array of integration partners to have conversations around job titles and organizational areas they’ve not typically engaged with.

“Salesforce has been front-office focused from its inception, and making sure that it can articulate the value and sell into back office operations will be an ongoing challenge,” said Mullen.

A hard problem to solve 

When orchestrating agents for backend systems, there’s a lot to consider, Parulekar pointed out, including issues around ERPs, customer relationship management (CRM) platforms, and external data lakes. Some of these systems are so old they may not even have application programming interfaces (APIs).

“It’s a minefield for customers,” she said. “It kind of feels obvious: [They ask] ‘why didn’t someone do this already?’ [The answer:] Because it’s a really hard problem to solve.”

What’s different about Agentforce Operations, Parulekar said, is that agents look at processes first, rather than people, and assess how those processes can be managed accurately and with high performance. That might even mean adding more steps to the process.

“It’s such a knee jerk reaction if you’re optimizing for humans to say, ‘let’s just give [a human] one thing to review instead of five,’” she said. With agents, a workflow may go from five steps to 50, but 48 of those are completed by an agent. Humans are only brought into the loop when they’re most useful, and can focus attention elsewhere otherwise.

“The most trite thing people say right now is ‘AI is going to free up your work,’” Parulekar acknowledged. But it’s true, she said: “I think it’s really bringing some creativity back to the work. Enterprises can focus on more critical decisions that have less to do with the minutia of a process and more to do with strategy.”

More capacity without more headcount

Melanie Kalia, director of product management at Equinox Group, said that, like many organizations, the fitness company was dealing with back office and operations workflows that were slow and labor-intensive. Particularly in the fitness industry, she explained, there’s an “enormous amount” of administrative work around managing sales pipelines, following up, explaining promotions, and scheduling tours.

“We agreed automation had to be part of the answer, but generic workflow tools weren’t cutting it, or felt like luxury,” she said. Her team looked at other options like Sierra and Netomi, but Agentforce Operations was a natural extension to its existing Salesforce infrastructure.

The company’s primary focus with agents is lead generation and “nurturing,” Kalia explained. Leads can fall through the cracks because back office teams are too stretched to follow up consistently. Agentforce Operations is helping automate outreach sequences, qualify inbound leads, and move prospects through the funnel without requiring manual intervention at every step.

“It’s essentially giving us the capacity of a much larger team without adding headcount,” she said.

Being an early adopter ‘a challenge’

Being an early platform adopter was a challenge, however; as Salesforce iterated its approach, her team had to follow suit. Early CRM cleanup was also required to ensure more reliable outputs. Then there was the internal change management piece; getting sales and ops teams comfortable handing off tasks to AI took some trust, Kalia said.

However, “once we ran a few pilots and people saw the agents actually working accurately, adoption has picked up and is gaining momentum,” she noted.

Ultimately, Equinox is seeing “encouraging signals,” with faster response times to inbound leads and more consistent follow up on those that otherwise might have gone cold.

“We haven’t fully quantified the full ROI yet,” Kalia noted. But the results are trending positive, leading to sales increases, more quality messaging, and “a sense that our solution is finally working for and with us rather than replacing us.”


SAP 2027 deadline for S/4HANA out of reach for most customers

April 29, 2026, 06:00

Migrating to SAP S/4HANA remains a dominant topic in the IT landscape of large corporations. But where do these companies really stand — and which strategies are proving successful?

Computerwoche spoke with Holger Scheel, managing director of cbs (Corporate Business Solutions), to get the SAP consultant’s insights into the current state of SAP migrations, the challenges CIOs face in shifting to S/4HANA, and the latest ERP-related trends.

Here is that interview, edited for clarity and length.

Computerwoche: How strongly is the topic of S/4HANA transformation currently shaping your project business?

Holger Scheel: It’s a key driver — around 98% of the solutions we implement are based on SAP, with a large number of projects triggered by the need to modernize ERP systems and implement the S/4HANA transformation.

Where do companies currently stand in this transformation?

If we look at our core sectors — large and midsize industrial companies — then I would say: To put it positively, about half of the companies have started the transformation. But very few have completely finished it.

Many companies are facing the SAP deadline of 2027. Is that realistic?

Not for many. Larger companies with numerous ERP systems, in particular, are unlikely to make it to extended maintenance by 2027. The 2030 timeline is a more realistic target.

Which migration strategy currently dominates — brownfield or greenfield?

Neither in its purest form. The classic greenfield approach — developing everything from scratch — has proved successful and feasible for very few companies. But even the purely technical conversion, i.e., brownfield, is not the dominant standard among manufacturing customers.

What is the preferred approach instead?

The majority of companies are pursuing a hybrid approach — a selective combination of innovation and transformation. This is often called “smart brownfield” or “mix & match.”

The reason: A purely brownfield approach costs money and time but delivers no added value. Companies want more than just to stay within the release window — they want to create real added value. They want to improve processes, introduce innovations, and be able to explain to their management why the transformation is worthwhile.

SAP strongly promotes the “clean core” approach. How realistic is that in practice?

You have to look at it in a nuanced way. Clean Core doesn’t mean I only use standard software and all custom developments disappear. It’s more about a new design paradigm. The idea is that the ERP core — i.e., S/4 — remains as close to the standard as possible. Everything a company needs beyond that in terms of customization for its business is then organized externally — for example, via platforms like SAP BTP — or through new development approaches.

Critics argue that excessive standardization jeopardizes competitive advantages. What is your view?

Differentiation remains absolutely crucial. Companies must continue to showcase their specific capabilities. Clean core only changes how this is implemented technically — not the need for differentiation.

We therefore speak more of a “cleaner core”: The cleaner the ERP core, the easier upgrades and the use of new functionality become, and the more agile the company becomes. But it remains an evolutionary process — not a radical break.

Where do companies most underestimate the complexity of S/4HANA projects?

A key point is the initial state. Especially in industry, many companies have ERP systems that have grown organically over 20 years — with correspondingly complex data structures and historically evolved processes. Breaking all of this down and achieving a more standardized, harmonized, and consolidated target state is an enormous task that often has to be mastered in conjunction with the establishment of innovations.

What does this mean specifically for the projects?

The more legacy systems a company has and the less prepared its process, system, and data landscape is, the longer the transformation will take. The path to a future-proof ERP platform then becomes correspondingly more complex.

Many users are currently relying on external AI solutions rather than SAP offerings. Do you see the company under pressure in this regard?

Of course, SAP is under pressure — a software manufacturer has to deliver. But historically, SAP has rarely been a first mover with new technologies. Its strength has always lain in the business context that SAP integrates into its systems.

Looking around, AI is clearly of enormous importance to customers. Companies expect it to give them a competitive edge and increase efficiency. Consequently, many solutions have been implemented outside of SAP in the past.

Looking ahead, the crucial point is that SAP is talking about “Business AI”: How can I effectively utilize the wealth of data from my business processes? And this data resides in the ERP system — the “heart and soul” of the company. SAP is, of course, exceptionally well-positioned here thanks to its Business Suite and ERP-driven architecture.

There is currently a lot of discussion about AI and the future of traditional ERP systems. Is the business model threatened?

That’s a misconception. Many people are overestimating this “ChatGPT moment” and think an ERP foundation is no longer needed. AI systems require a reliable, semantically clean data foundation. And that’s where SAP has a significant advantage with its environments, installed base, and experience with industrial customers.

So you do not believe the business model of ERP providers is under sustained threat.

We believe that the current hysteria will subside and that the crucial importance of a solid ERP backbone architecture will be recognized. It forms the basis for implementing new concepts such as an agent-driven company — that is, a system of action built on a classic transactional system of record, such as ERP.

Is this similar to platform approaches — such as ServiceNow’s — that function as a higher-level platform on which agents work and retrieve the necessary data from various systems?

Exactly. That’s how it will be. This is essentially the agent layer that’s placed on top of the existing systems. SAP is also adding this and integrating corresponding solutions and functions into its architecture.

Ultimately, we’re talking about different levels: the system of record on the one hand and the system of action on the other. The data layer lies in between. SAP addresses this, for example, with its Business Data Cloud, to contextualize data and merge analytical and transactional levels — even across systems and vendors.

What does that mean?

My conviction is that the real value still lies in the data treasure trove of companies — that is, in clearly understood and contextualized business data. This remains the crucial foundation for an “agentic company” to function in the future and ultimately be trustworthy.

AI providers have significantly reduced error rates. Nevertheless, we are probably still far from the 100% reliability that companies need for business-critical decisions.

What AI systems do today is, in many cases, educated guesswork, even when it’s technically well implemented. You can refine it further, but it ultimately remains guesswork.

The reliability of the semantic foundation is crucial. This requires a stable, trustworthy data core — a foundation that even auditors will accept. And that is precisely the foundation that ERP systems provide.

The ERP system therefore remains the backbone.

I am convinced this foundation remains indispensable. Manufacturing companies in particular have invested heavily in their ERP systems. They will not simply abandon them and replace them with purely agent-based systems. I consider that impossible.

Are you already managing many AI projects in the SAP environment for your customers?

We issued a clear guideline over a year ago: When we develop target scenarios for our clients’ digital transformation in the SAP environment, AI is an integral part of the process from the very beginning. We strive to consistently incorporate it and use it to create as much added value as possible. AI shapes our consulting business; AI shapes the SAP-based solutions of the future. AI is a driving force for us — we are investing heavily and sustainably in this area.

It’s important to understand that the possibilities within SAP itself have only developed gradually. SAP distinguishes between embedded AI and custom AI. Embedded AI is what the standard software already provides. Custom AI, on the other hand, refers to specific solutions developed for individual use cases based on SAP technologies — such as the Business Technology Platform.

In recent years, we’ve primarily worked in the custom AI environment because there simply weren’t that many ready-made features available in the core area. However, that has changed significantly since then.

What role does AI play in your projects today?

The expectations have risen significantly. In our workshops with the specialist departments, the question of AI potential is now a standard part of the discussion. We examine processes and consider: Where are there repetitive tasks, where can automation help, where can complexity — for example in analysis or control — be reduced through AI?

And we are finding there are more and more meaningful areas of application and a growing interest on the part of customers to actively address such topics.

Will AI become a new growth area for you? Currently, your business is primarily driven by S/4HANA migrations. What will happen when this wave subsides?

Eventually, companies will have completed their S/4 migration. For providers who have focused exclusively on technical migrations, this may be a risk. But that’s not how we see the market.

Demand for S/4 will not disappear abruptly. While purely migration-driven projects will decrease in the long term, they will be replaced by follow-up projects.

What do you mean by follow-up projects?

We’re already talking about “post-S/4 transformations.” Many companies, for example, migrated to S/4HANA using a brownfield approach, without fundamentally changing their processes. This means that the real substantive transformation — innovation and further development of business processes — is still to come.

The public perception often underestimates the true extent of a company’s transformation journey. It’s not just about a new system, but about a gradual evolution towards a data-driven organization — ultimately, an “agentic enterprise.”

So it’s a longer process?

Companies systematically expand their capabilities over time — and S/4HANA is more of a starting point than the destination. Today, companies are increasingly pursuing a broader agenda: business transformation, technology transformation, process innovation, and digital transformation.

Will there be enough left for consulting partners if SAP increasingly provides its own functionality?

The essential thing is the question: How do I, as a consultant, bring a company along? How do I design business processes? And how do I convey the path to the future? That will continue to be the core task of consulting — and that’s how we are positioned.

We see very clearly that the field of consulting is not shrinking, but expanding. Complexity is increasing massively. Concepts like the “agentic enterprise” add a new semantic layer. This makes IT and process landscapes even more demanding.

Companies increasingly need support to understand and structure this complexity and derive meaningful business solutions from it. That is precisely where our role lies. Services that are very close to pure implementation — classic development, testing, or configuration tasks — are under greater pressure. Although this has always been the case, AI and SAP’s strategic direction will likely intensify this pressure. Such tasks will become easier to replace and will tend to be in less demand.

  • ✇Security | CIO
  • You selected the right vendors. Now govern them like you mean it.
    Waiting for your vendor to fix a program isn’t a strategy. It’s a cost, accumulating quietly while everyone in the room maintains the fiction that the process is working. I’ve been in both rooms. The room where the client already knows something is wrong and needs the language and the evidence to act, and the room where the client doesn’t know yet. The program feels manageable, the vendor is professional, the steering committee meetings run on time, and the warning sign
     

You selected the right vendors. Now govern them like you mean it.

27 de Abril de 2026, 07:00

Waiting for your vendor to fix a program isn’t a strategy. It’s a cost, accumulating quietly while everyone in the room maintains the fiction that the process is working.

I’ve been in both rooms. The room where the client already knows something is wrong and needs the language and the evidence to act, and the room where the client doesn’t know yet. The program feels manageable, the vendor is professional, the steering committee meetings run on time, and the warning signs are sitting in plain sight waiting for someone to name them.

That second room is the more important one. Because the window to act is still open. And most clients don’t move until it’s started to close.

Warning signs most clients miss appear in design

The earliest signal is rarely a missed milestone or a failed deliverable. It appears in language. When the phrase “path to green” starts appearing in status reports and steering committee decks, the program has already accepted it’s not green. It’s shifted from managing execution to managing the narrative.

Watch what the steering committee is actually doing. If it’s consistently hearing about what happened last month rather than what’s forecast for next month, leadership has been converted from a decision-making body into an audience. The vendor controls the agenda, the framing, and the cadence of what gets surfaced.

The most serious signal is when a program sponsor hears about material issues from their own direct reports that the vendor hasn’t raised in the room. That’s not a communication gap but a calculated decision about what leadership is ready to hear. When that pattern appears in SAP, Oracle, or Salesforce programs, the trust that makes the governance model function has already eroded.

When you see these signals, don’t wait for the next steering committee. Start demanding data that can be independently corroborated. Ask the vendor to forecast, not report. If they can’t tell you where the program will be in 60 days, they’re managing your perception, not your program.

Your master conductor has a conflict of interest you’re not addressing

A pattern I’ve seen consistently across multi-vendor programs involving Accenture, Deloitte, PwC, and others is that the master conductor, or program integration coordinator, is quick to name the client’s gaps, other vendors’ shortcomings, and third-party dependencies running behind. What they almost never do is name their own firm’s failures with the same directness in the same room.

That’s not a personality issue but a structural conflict. The firm serving as master conductor is delivering against its own statement of work (SOW), and the governance position gives them access to information, reporting authority, and narrative control they’ll use, consciously or not, to protect their own delivery track.

This is why I advise clients to treat the master conductor and program integration coordinator role as structurally separate from the vendor delivery role. Ideally, that means an entirely separate firm: an independent integrator with no delivery stake in the outcome. In practice, it’s more often a designated individual or a group within the project management or transformation office, carved out from one of the existing vendors, reporting directly to the client and accountable to the steering committee, not to their own firm’s engagement leadership.

There’s no true firewall in that model, but there’s a behavioral test. Watch what that role or team does with information that reflects badly on their own firm. Do they surface it or escalate it with the same urgency they bring to client gaps? Do they forecast problems on their own track, or only on everyone else’s?

A master conductor who’ll escalate failures that implicate their own delivery team is doing the job. One who only calls out the client and the other vendors is protecting the engagement.

Before the next SOW is signed, make it structural. Define the master conductor role separately from the delivery role, name the individual or team, set the reporting line directly to the client, and use the behavioral test to determine whether the role is being performed or merely filled.

Waiting isn’t neutral

The financial cost of waiting is more specific than most clients realize. In a multi-vendor environment where two or three system integrators are billing against active SOWs, every month of schedule extension carries a material cost, potentially millions to tens of millions of dollars per firm, not because scope expanded, but because governance didn’t hold the timeline.

The commercial exposure appears even earlier. When scope boundaries are unclear and the integrated plan is unstable, vendors have no reliable baseline to price against. The result is predictable: a significant spread between a time-and-materials estimate and a fixed fee quote for the same scope. That spread is not a pricing difference. It’s the vendor converting your governance uncertainty into their contract protection. The client absorbs it either way.

What makes the waiting feel reasonable is that the vendor’s day-to-day team is usually professional and working hard. So the problem is authority and incentive, not effort. The program manager running the engagement can’t authorize additional resources or commit spend across organizational lines. Their job is to manage the relationship, protect their firm’s margin, and keep the engagement profitable. Fixing your program isn’t the same job.

The window to act is real and short. A senior executive at the vendor can absorb costs, bring new talent, and make commitments the delivery team has no authority to make. But that authority diminishes as the program ages. The more that’s been billed and the more scope has shifted, the harder it is for even a motivated senior executive to make the client whole. Clients who act in design or early build have options that clients who wait until three months before go-live don’t.

The intervention that works is a leadership one

When the signals are clear and the client is ready to act, the intervention that moves the needle isn’t a governance document or a scorecard meeting but a top-to-top conversation between client and vendor senior leadership. This includes execs who aren’t running the day-to-day program but have something personal at stake in the outcome.

That conversation works because it activates a different set of incentives. The vendor’s senior executive, the sector partner and industry leader whose name is on the relationship, needs your program to be referenceable. They don’t want a PR failure on a flagship engagement, nor do they want to explain to their firm’s leadership why a major client program collapsed. They also have authority their delivery team doesn’t: the power to assign their best resources, the ability to absorb costs the SOW or change order doesn’t cover, and the standing to accelerate staffing decisions and make commitments that change what the program can do. They have skin in the game their team doesn’t.

Also, structure the engagement deliberately: senior executives on both sides, new talent brought in as a visible signal of vendor investment, a cadence that continues until the data shows the program is back on track, time-bound accountability on both sides, and an explicit understanding that the relationship itself is under review, not just the program.

This is sustained leadership engagement, not a one-time meeting, and it doesn’t replace the governance model. It enforces it.

The only recovery signal worth trusting

When the top-to-top works, you’ll know it by what the vendor brings back to the table. Not reassurances or a revised plan with optimistic milestone dates, but facts about where they failed, what they’re changing, and, most critically, where the client has performance gaps that also need to close.

A vendor who comes back and simply accepts blame is still managing the relationship. A vendor who says we failed here and here, these are the specific changes we’re making, and you have a gap here we need you to address, that vendor is engaged and mutually accountable. That’s the integrity test.

It runs both ways because program failure almost always does. Slow client decisions. Unavailable business resources. Requirements that shifted after design was locked. A vendor who names those things alongside their own failures isn’t deflecting, they’re investing in an outcome. That’s the signal the recovery is real.

If the executive meeting produces only promises and general commitment, keep the pressure on. Real engagement looks like specific admissions, named resources, and a willingness to hold the mirror up to both sides of the table.

You hold the accountability. Be the human in the middle.

Through all of it, the client holds the ultimate accountability. The master conductor holds the responsibility for execution and integration across the vendor ecosystem. That distinction isn’t administrative. It means the client can’t outsource their judgment, regardless of how rigorous the governance model looks on paper.

Think of it this way: the vendor can hallucinate. Not out of malice, but because every status report is a curated narrative produced by people whose compensation, future work, and professional reputation depend on how that narrative lands. The program deck isn’t neutral data, it’s information filtered through interests. What’s present tells you something. What’s absent, however, tells you more.

Be the human in the middle. Verify, cross-reference, ask questions the deck didn’t answer, and notice what’s missing as much as what’s there. If the steering committee is only hearing good news, that’s a sign someone is deciding what leadership is ready to hear, not that the program is running well.

Demand forecasts, not status reports. Look for hard evidence that can be independently corroborated. When the vendor names a client performance gap alongside their own, take it seriously. That’s the accountability model working the way it’s supposed to, not a deflection.

The warning signs may not always be apparent, though. The window is open, but won’t stay that way, so waiting isn’t a strategy.

  • ✇Security | CIO
  • IBM shareholder proposal demands IBM defend AI bias protocols
    CIOs have long struggled with AI reliability issues, given problems with training data, model interpretations, and inconsistent data weighting delivering various levels of bias. IBM officials at next week’s shareholder meeting will have to address those concerns directly, as they face a shareholder motion demanding increased visibility into how it manages AI bias, a thorny issue that also affects all of the other major AI players.  The shareholder resolution is demandin
     

IBM shareholder proposal demands IBM defend AI bias protocols

23 de Abril de 2026, 22:53

CIOs have long struggled with AI reliability issues, given problems with training data, model interpretations, and inconsistent data weighting delivering various levels of bias. IBM officials at next week’s shareholder meeting will have to address those concerns directly, as they face a shareholder motion demanding increased visibility into how it manages AI bias, a thorny issue that also affects all of the other major AI players. 

The shareholder resolution is demanding that IBM “issue a report, within the next year, on the methods used to eliminate bias from the Company’s artificial intelligence (AI) models, Including an assessment of the risk that seeking to avoid disparate impact in outputs will undermine the accuracy of, and trust in, those outputs.”

IBM’s official response to the resolution asks shareholders to reject the proposal. “Since releasing its first Granite model, IBM has been transparent with its data management and training procedures via technical reports, model cards, and other model documentation,” the company said. “The IBM models are open source In order to foster transparency. Moreover, IBM publicly provides the information sought by this proposal in its submissions to Stanford University’s Foundation Model Transparency Index (FMTI).”

FMTI is a benchmarking initiative that looks at how transparent companies are about their foundation models, measuring disclosure across areas like data sources, training methods, evaluation metrics, risks, and governance practices, to help stakeholders understand how responsibly and openly these models are developed and deployed.

IBM’s response added, “information related to mitigating bias that the proponent requests is largely already publicly available for consideration by stockholders.” Therefore, it argued, preparing such a report “would not provide new meaningful information and it is not in the best interests of IBM stockholders, as it will divert management’s attention and would be an inefficient use of corporate resources.”

Beyond its argument that it already provides such AI bias transparency, the company pointed to its customers’ ability to fine-tune their models to resolve any specific bias concerns.

“IBM models are smaller and targeted towards enterprise clients and use cases,” it said. “These models are not general purpose, consumer-facing models. Therefore, our open-source models are built in a manner that allows our clients to build an AI solution that will address their specific needs. IBM developed several methods to allow clients to address bias issues that may arise as they train the AI system. In other words, IBM not only provides the building blocks for its clients’ AI solutions, but also provides the tools to help more clients address bias.”

An industry-wide problem

Analysts and consultants generally found IBM’s position correct, but most pointed to the AI bias issue as an industry-wide problem impacting all of the major AI providers and all of their enterprise users. 

Sanchit Vir Gogia, chief analyst at Greyhound Research, said, “IBM’s stance deserves to be taken seriously, but not at face value. The company is right to say that bias mitigation, fairness frameworks, and governance controls are already built into its AI systems. That is not in dispute. In fact, compared to much of the market, IBM has been more deliberate than most in turning responsible AI from a set of principles into something operational.”

But, he added, “when IBM points out that customers can and should address bias through fine-tuning and governance, it is quietly acknowledging a limitation. It is admitting that whatever happens at the model layer is not the end of the story. It is only the beginning of it.”

Manish Jain, a principal research director at Info-Tech Research Group, saw the IBM position as correct, but also as the latest example of large vendors shifting responsibility for AI accuracy onto their enterprise customers. 

“I see IBM’s board’s stance being broadly consistent with industry practice, which is doing everything to shift the responsibility of removing bias towards customers,” Jain said. “In fact, many independent software vendors (ISVs) are also taking a similar position and saying to their customers, ‘We’ll provide the compass, you chart the course.’ Unfortunately, accountability is the victim. Regulatory guidelines, independent audits, standardized benchmarks, in addition to clearer disclosures, are extremely important to ascertain this accountability.”

Noah Kenney, principal consultant for Digital 520, had similar feelings about IBM’s response. 

The shareholder demand for more transparency “is asking the right question for the wrong reason,” Kenney said. “The proponent frames disparate-impact correction as a threat to accuracy, but the real issue is that IBM, and every major model provider, is measuring bias at the output layer when most of it originates upstream. You cannot fairness-tune your way out of a training data problem.”

He noted, “IBM’s response is accurate on the facts. FMTI scores, model cards, FairIQ, Equi-tuning, FairReprogram, and the Granite transparency posture are all real, and more than most of their peers publish. The gap is not disclosure. The gap is that the industry has converged on post-hoc mitigation as the dominant paradigm, and post-hoc mitigation has diminishing returns once a model is trained.”
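
Kenney’s distinction between output-layer measurement and upstream causes is easier to see with a concrete check. Below is a minimal, hypothetical sketch of an output-layer bias audit using the common “four-fifths” screening rule; it is not an IBM tool or method, and the group labels and approval counts are invented for illustration. A check like this can flag disparate impact in a model’s decisions, but, as Kenney argues, it says nothing about where in the training data the skew originated.

```python
# Minimal sketch of an output-layer bias check using the "four-fifths" rule.
# Hypothetical data and thresholds; a real audit would use production decisions
# and proper statistical testing.
from collections import defaultdict

def disparate_impact(decisions, reference_group):
    """Selection rate of each group divided by the reference group's rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        selected[group] += int(approved)
    rates = {g: selected[g] / totals[g] for g in totals}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Invented model outputs: (group label, was the applicant approved?)
outputs = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 42 + [("B", False)] * 58

for group, ratio in disparate_impact(outputs, reference_group="A").items():
    flag = "review" if ratio < 0.8 else "ok"  # 80% rule as a screening threshold
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```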

Mike Leone, VP/principal analyst at Moor Insights & Strategy, pointed out that IBM is doing a better job than most AI vendors in terms of bias transparency, and that the industry needs to address the issue globally. 

“IBM discloses more of its AI bias and transparency work than most vendors. IBM has built specific bias mitigation methods into the stack rather than just talking about bias at a high level. A new annual report would mostly repeat what’s already out there,” Leone said.

“I truly don’t think eliminating bias is possible, and that’s not an IBM problem,” he added. “The whole market is operating the same way in that any model trained on human-generated data carries the biases of whoever made it, which is exactly the same as humans would do. What vendors can do, IBM included, as they do a bunch of this already, is measure it, disclose it, monitor it after deployment, and give customers tools to adapt. I’m in the camp that anyone promising to completely eliminate bias is telling you what you want to hear.”

‘Unbiased’ can’t be defined

Part of the answer to the AI bias problem is technological, but there is a more fundamental underlying issue: bias itself resists any single definition.

Carmi Levy, an independent technology analyst, observed, “the very definition of the word, unbiased, simply doesn’t exist, because what might seem unbiased or perfectly fair to one stakeholder might be perceived as wildly biased or unfair by another.”

Within that context, he noted, “the notion of eliminating any and all forms of bias from the AI equation is unrealistic. At best, vendors should be aiming for mitigation instead of outright elimination. They might also want to devote more resources toward transparency. Although sharing too much can compromise their competitive market position, there’s no reason why a carefully balanced and communicated messaging strategy can’t alleviate stakeholder concerns over bias without giving competitors undue advantage.”

Complete bias removal impossible

The AI bias issue is sometimes subtle, as when a model chooses which of the relevant details to use in an answer and in what order. But in other instances, such as when racial and gender prejudices are reinforced by AI working for human resources, the bias can be quite blatant. California, for example, wants to force AI vendors to prove that they have strong bias safeguards.

However, said Gartner VP analyst Nader Henein, “completely removing bias is impossible, which is why almost every piece of AI regulation focuses on AI systems that make decisions that will impact people’s lives or people’s livelihoods and introduce obligations such as human oversight.”

For example, he said, “a recruitment application that sorts applicants by the suitability to a job role should be used by a trained recruiter who understands that this is AI, that it can make mistakes, and they are responsible to oversee the AI system, much in the same way that you oversee an entry level employee taking on sensitive work, the difference being that oversight is permanent.”

Chris Hood, an independent AI strategist and former head of Google’s strategy and transformation, also said that IBM’s position is legitimate, but it’s not enough.

“IBM’s position is technically defensible and practically insufficient,” Hood said. “Publishing bias mitigation reports and giving customers fine-tuning options are reasonable steps. They are also steps that address the symptoms rather than the architecture. IBM is describing what it does to manage bias. The harder question is whether bias in foundation models is manageable at all, or whether it is structural.”

He noted that the models learned from human-generated content, which carries “every historical imbalance, cultural assumption, and factual error humans have ever produced at scale. You can audit it, weight it, and filter it. You cannot eliminate it. The data is what it is.”

Potentially more of an issue is the personal bias that every user brings to every AI interaction. “A geopolitically charged question asked by someone in the United States and the same question asked by someone in another country will be interpreted differently, evaluated differently, and potentially produce different outputs. This layer is almost impossible to govern at the model level because it lives in the interaction, not the training,” Hood noted.

He added: “IBM recommending shareholders reject this proposal while pointing to existing efforts is a reasonable governance posture. It is also a posture that treats bias as a solved problem rather than a managed one. The difference matters enormously as agents move from answering questions to making decisions.”

  • ✇Security | CIO
  • Ways CIOs can prove to boards that AI projects will deliver
    There’s been a wake-up call for CIOs. All the talk about perceived productivity boosts that have previously dominated conversations about AI has been replaced with a demand for measurable value from investments in emerging tech. As MIT states that project failure rates are as high as 95%, executive boards are starting to question when AI will pay dividends. PWC’s Global CEO Survey shows that more than half of companies have seen neither higher revenues nor lower costs fr
     

Ways CIOs can prove to boards that AI projects will deliver

22 de Abril de 2026, 07:00

There’s been a wake-up call for CIOs. All the talk about perceived productivity boosts that have previously dominated conversations about AI has been replaced with a demand for measurable value from investments in emerging tech.

With MIT reporting AI project failure rates as high as 95%, executive boards are starting to question when AI will pay dividends. PwC’s Global CEO Survey shows that more than half of companies have seen neither higher revenues nor lower costs from AI, and only one in eight have achieved positive outcomes.

While Gartner predicts significant growth in AI spending this year, John-David Lovelock, distinguished VP analyst at the research firm, says the lack of tangible returns means digital leaders are changing tack. Rather than hoping their AI explorations will produce returns, CIOs are switching to more targeted initiatives.

“The projects growing quickly are the ones doing business, and those initiatives include AI,” he says. “CIOs are starting to de-emphasize AI and re-emphasize business. These projects are about AI enhancing existing work and moving away from moonshot transformational projects.”

Lenovo’s CIO Playbook for 2026, produced with tech analyst IDC, also suggests enterprises will get serious about AI deployments this year, with explorations replaced by production-level services that drive business transformation. With boards exerting pressure for measurable returns, Ewa Zborowska, research director at IDC, says more digital leaders want to use AI to enhance, innovate, and reinvent their organizations.

“CIOs aren’t just considering AI out of curiosity, they want to see what they can get out of it to grow the business,” she says. “AI adoption is much more about doing new things or taking a fresh approach to creating value rather than becoming more efficient at cost-cutting.”

Such is the clamor for value that Richard Corbridge, CIO at property specialist Segro, suggests that returns from AI are a main digital leadership priority: “If you discover, for example, that everyone in the organization used Copilot 10 times today, that might mean they’ve been more efficient,” he says. “But what have they actually done with the time they saved? How has saving time created value?”

CIOs will grapple with these questions during the next 12 months. With CEOs and boards becoming impatient for returns, digital leaders are working more with their bosses to define value. Successful CIOs fine-tune their arguments to ensure their projects are backed, and then demonstrate the value of their AI initiatives to the board.

Defining a valuable AI project

What’s clear is that CIOs can’t deliver outputs from AI projects without input from their enterprise peers. IDC’s Zborowska says tighter cooperation across project ownership and KPIs ensures emerging technology investments are targeted at the right places.

This increased interaction between digital and business leaders also changes project aims. As stakeholders work closely together to generate value from AI, Zborowska expects executives to seek KPIs that stretch across operational concerns.

“I’d bet we see more non-financial aims over the next few years,” she says. “Executives will consider things such as are employees more engaged, has their work improved in any way, are AI implementations impacting customer experiences, and are internal decisions being made more efficiently.”

Martin Hardy, cyber portfolio and architecture director at the UK’s Royal Mail, agrees that defining valuable AI projects is all about finding the right focus. Effective deployments target processes in distinct areas, and business stakeholders must be part of the value-defining process.

“If we’re making decisions about legal documentation, AI is probably not there yet,” he says. “But if we can use AI to approve holidays, for instance, that might be something because if you have rules that say no more than two people off at a time, you could use AI to check about booking holidays without having to ask everyone in the office.”

For CIOs seeking value-generating use cases, Gartner’s Lovelock suggests AI can deliver results in key business areas such as boosting revenue, supporting decision-making, engaging staff, and improving experiences. He says the right path to AI exploitation correlates with Gartner’s enterprise technology adoption profiles, which group companies into a range of categories.

“The folks who are furthest forward, what we call the agile leaders in technology, are much more likely to drive AI to change the business,” he says. “The laggards on the other side are more likely to take on the technology that’s given to them by incumbent software providers, and use it in a prescriptive manner.”

Fine-tuning the use case

The challenge now is for digital leaders to work with their business peers to determine a more refined approach to AI deployment. For some CIOs, the value of AI is clear but the potential risks must be considered.

Take Dan Keyworth, executive director of performance technology and systems at McLaren Racing, whose focus is operational stability and race-day reliability. While he says being aware of developments in generative and agentic AI is crucial, the priority is tried-and-tested technologies rather than innovations that put performance at risk.

“Formula One is grounded in traditional machine learning and simulation,” he says. “Developing models has been a big part of our performance journey, and since the engine already existed, gen AI is the turbo that’s bolted on with more investment in AI.”

For other digital leaders, like Barry Panayi, group chief data officer at insurance firm Howden, success depends on keeping the human in the loop. Yes, automation can improve customer service, but rather than replacing staff, he wants to use AI to ensure Howden’s professionals have the right insight when they interact with clients.

“There’s absolutely no desire to use data to drive productivity by automating what we do with our customers,” he says. “This is a business where people speak to people. Our brokers need information that can give them an edge, and prove to their clients they understand the risks and can give them the best deals.”

Nick Pearson, CIO at technology specialist Ricoh Europe, adds that the use case for AI at his firm is two-fold: boosting operational productivity and improving customer processes. So he’s established a tri-party AI council with the head of service operations and the commercial manager in Spain. This council explores opportunities to buy, build, and reuse emerging tech.

“We’ve got a strategy that looks at where AI matters, which means exploring the technology we already have to boost internal productivity,” he says. “We’ve got a lot of people who know how to code and build things in Copilot Studio and other platforms, so let’s use that to increase productivity.”

Showing returns to the board

For Gartner’s Lovelock, the key lesson for CIOs eager to generate value from AI is to work with their peers and set desired outcomes before investing. “Most people start with the idea that more is more, and if you do that, you won’t get to the idea of quality,” he says.

That sentiment resonates with Segro’s Corbridge, who encourages digital leaders to start conversations with other professionals by focusing on value. Ask people how investing in an AI implementation will create value for them personally, for the wider business, and the customers the organization serves.

He says CIOs shouldn’t try to prove that AI works, but rather concentrate on how emerging tech adds value. That definition is so critical to Segro’s way of working that the organization uses the phrase proof of value rather than proof of concept.

“Most things work, but they might be more expensive,” he says. “For example, you might be able to use AI to transform how the organization uses spreadsheets, but that project might cost you $300,000. And if you’re currently paying someone $40,000 to do that work, and they’re happy doing it, then you have to question the value.”
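
Corbridge’s spreadsheet example comes down to simple arithmetic that is worth making explicit. The sketch below is a generic proof-of-value calculation, not Segro’s method; the build cost and salary mirror the figures he quotes, while the annual running cost is an invented assumption for illustration.

```python
# Back-of-the-envelope proof-of-value check. The running cost is an assumption;
# the other figures echo the example quoted above.
build_cost = 300_000       # one-off cost of the AI project
annual_run_cost = 60_000   # assumed yearly licence, inference, and support cost
annual_saving = 40_000     # salary of the work being automated

net_annual_benefit = annual_saving - annual_run_cost
if net_annual_benefit <= 0:
    print("Never pays back: running costs exceed the saving.")
else:
    print(f"Payback in roughly {build_cost / net_annual_benefit:.1f} years.")
```

On these assumptions the project never pays back, which is exactly the kind of result a proof of value is meant to surface before the spend is committed.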

Lessons are being learned, says IDC’s Zborowska, whose firm’s research suggests that half of AI POCs now transition into production. While some people might think this success rate isn’t impressive, the figure a year ago was just 10%. After several years of AI exploration, it appears CIOs and their businesses are now firmly focused on real returns.

“These numbers speak to the fact that companies are being more mature and mindful in how they allocate budgets,” she says. “They also support the main theme that we’re on a journey to transformation and a maturing market for AI adoption.”

  • ✇Security | CIO
  • Adobe charts where SaaS is headed: “The key is agents and data”
    With the advent of AI agents fundamentally changing how consumers engage, software-as-a-service (SaaS) companies are rethinking their strategies. Creativity platform company Adobe is responding by shifting to a new approach it calls ‘Customer Experience Orchestration (CXO).’ At its Adobe Summit conference, Adobe unveiled the Adobe CX Enterprise suite, declaring a shift toward a future defined by agents rather than software alone, and emphasized that SaaS companies can stay competitive on the strength of their accumulated domain expertise and first- and third-party data assets. The platform brings together customizable and out-of-the-box AI agents, Model Context Protocol (MCP) endpoints, and new intelligence systems built on Adobe’s orchestration engine. Adobe VP Sundeep Parsa said, “SaaS is changing
     

Adobe charts where SaaS is headed: “The key is agents and data”

21 de Abril de 2026, 22:16

With the advent of AI agents fundamentally changing how consumers engage, software-as-a-service (SaaS) companies are rethinking their strategies. Creativity platform company Adobe is responding by shifting to a new approach it calls ‘Customer Experience Orchestration (CXO).’

At its Adobe Summit conference, Adobe unveiled the Adobe CX Enterprise suite, declaring a shift toward a future defined by agents rather than software alone. In that transition, the company stressed, SaaS firms can stay competitive on the strength of their accumulated domain expertise and their first- and third-party data assets.

The platform brings together customizable and out-of-the-box AI agents, Model Context Protocol (MCP) endpoints, and new intelligence systems built on Adobe’s orchestration engine.

“SaaS is changing, and we are re-architecting so that we can participate in the reimagination and redefinition of SaaS,” said Adobe VP Sundeep Parsa.

Agents executing under the direction of a ‘coach’

Adobe CX Enterprise is built on the Adobe Experience Platform (AEP) Agent Orchestrator, which integrates AI agents directly into Adobe applications. Released in 2025, AEP now handles more than 1 trillion customer experiences annually.

AEP remains the anchor of CX Enterprise. It lets companies define reusable ‘agent skills’ and build custom agents for specific purposes, and it can connect to a range of AI stacks including Anthropic’s Claude, OpenAI’s ChatGPT, Google’s Gemini, Microsoft Copilot, and Nvidia’s NemoClaw. Developers also get the infrastructure needed to build custom use cases, including Model Context Protocol (MCP) servers.

“We will make sure our applications are not trapped inside the UI layer, and that they are delivered as composable services through MCP calls or the A2A layer,” Parsa explained. “Customers can build their own processes on top of their existing assets and implement their own UI.”

Parsa also stressed the importance of customer choice. Many companies are still weighing build versus buy; some want to develop a bespoke UI themselves, while others have no interest in doing so.

With CX Enterprise, companies can compose custom workflows from predefined agent skills, or immediately use agents specialized for particular functions such as workflow optimization (coordinating and automating tasks) and brand governance (enforcing policies, managing permissions, and tracking asset rights).

In addition, the Adobe CX Enterprise Coworker, due to launch in the coming months, will orchestrate multiple agents against a stated goal and carry out multi-step tasks.

For example, if a marketing team sets a goal of lifting subscriptions by 3% next quarter, the Coworker identifies relevant audience segments, surfaces performance insights, and then helps develop the strategy, email copy, and visual assets. Once a human approves the work, it moves on to executing the campaign and monitoring the results.

“Previously, agents would build an audience and then ‘go to sleep,’” Parsa explained. “The new CX Enterprise Coworker is always on and, with persistent memory, can run workflows over weeks or even entire quarters.”

He likened the Coworker to an American football quarterback calling plays on the field, with a marketer or brand specialist acting as the coach.

“We’re doubling down on this framing of customer experience orchestration,” Parsa emphasized.

Moving to one-to-one personalization

Alongside the agentic tools, Adobe also unveiled two new intelligence systems: Adobe Brand Intelligence and Adobe Engagement Intelligence.

Brand Intelligence is built on a fine-tuned large language model (LLM) with vision-language capabilities. It learns brand context from qualitative and nuanced data such as annotations, feedback cycles, and rejected assets.

“Brand Intelligence is going after a much harder problem than a ‘brand kit,’ which is a codification of a CSS style guide,” Parsa explained. “It begins to understand brand sentiment based on data engagement signals and actual enterprise assets.”

Adobe Engagement Intelligence helps determine the most suitable offer, message, or next action for targeted customers, based on their lifetime interaction data rather than click-through or conversion rates.

“Where ‘less is more’ used to hold, we’re moving into an environment where more is better,” Parsa said, noting that the core value of generative AI lies in producing more content economically. “It’s not about volume for its own sake; what matters is precisely targeted campaigns that get close to one-to-one personalization.”

Early productivity gains are also described as substantial. Troubleshooting and early anomaly detection now take hours rather than the days or weeks they used to, Parsa stressed.

SaaS companies’ data advantage

With per-seat pricing models losing influence in an agent-centric world, Adobe is positioning its data advantage as its key differentiator.

“Over the years, more than 20,000 enterprises have built on Adobe’s platform,” Parsa said, “and that has given us vast amounts of data and domain expertise.”

Generative AI and AI agents are good at understanding the body of world knowledge and building useful capabilities, he acknowledged, “but enterprise data sits inside walled gardens, where access is restricted.”

He also pointed out that enterprise context is highly complex and scattered across many applications. “How people work is sometimes codified in documents, but some of it lives as tribal knowledge inside the organization,” Parsa said. “AI agents operating on their own don’t understand that context and quickly hit their limits in the enterprise.”

“Adobe acts as a proxy for the enterprise context that lives inside our applications and brings it into the AI layer,” he stressed, “far faster than a customer rebuilding all of that from scratch on an AI platform.”

Finally, Parsa said that as customer engagement changes dramatically in the AI era, Adobe is continuously adapting, and that what matters most is openness.

“We will work with technology partners and other SaaS companies to stay flexible and meet customers where they are,” he added.
dl-ciokorea@foundryco.com

  • ✇Security | CIO
  • Databricks names Simon Davies to lead APJ: “Expanding the regional business on 85% fourth-quarter growth”
    Databricks’ APJ region recorded year-over-year revenue growth of more than 85% in the most recent fourth quarter, emerging as one of the company’s fastest-growing regions worldwide. As part of its expanded investment in the region, Databricks now employs more than 1,500 people there and plans to quadruple its footprint by the end of the year, moving into a new APJ regional headquarters of roughly 900 pyeong (about 3,000 square meters) in Singapore’s IOI Central Boulevard Towers. According to Databricks, as AI adoption spreads across APJ, its customer base is steadily expanding across major industries including financial services, telecommunications, and the public sector. Samsung Life in Korea, Singapore Customs, and SingTel have recently joined as customers, while major Asia-Pacific companies such as LG Electronics, Atlassian, National Australia Bank (NAB), and Toyota already use Databricks
     

Databricks names Simon Davies to lead APJ: “Expanding the regional business on 85% fourth-quarter growth”

21 de Abril de 2026, 04:35

Databricks’ APJ region recorded year-over-year revenue growth of more than 85% in the most recent fourth quarter, emerging as one of the company’s fastest-growing regions worldwide. As part of its expanded investment in the region, Databricks now employs more than 1,500 people there and plans to quadruple its footprint by the end of the year, moving into a new APJ regional headquarters of roughly 900 pyeong (about 3,000 square meters) in Singapore’s IOI Central Boulevard Towers.

According to Databricks, as AI adoption spreads across APJ, its customer base is steadily expanding across major industries including financial services, telecommunications, and the public sector. Samsung Life in Korea, Singapore Customs, and SingTel have recently joined as customers, while major Asia-Pacific companies such as LG Electronics, Atlassian, National Australia Bank (NAB), and Toyota already use Databricks.

Based in Singapore, Simon Davies will oversee Databricks’ APJ business, leading strategy, operations, and growth across key markets including Korea, Japan, Australia and New Zealand, ASEAN, India, and Greater China. He brings more than 30 years of experience in enterprise technology, data, and cloud services, having held senior leadership roles at global companies including SAP, Splunk, Microsoft, Salesforce, and Oracle.

Most recently, he served as president of SAP Asia Pacific, responsible for the region’s strategy, operations, people, sales, services, partnerships, and profitability. Before joining SAP, he was a senior vice president and general manager at Splunk.

Databricks Chief Revenue Officer (CRO) Ron Gabrisko said, “Simon Davies is a leader with deep regional expertise and industry insight, as well as an outstanding track record of building performance-driven teams. His leadership will be key to driving the next phase of growth in APJ, one of Databricks’ fastest-growing regions, and to helping companies realize the full potential of data and AI to transform their businesses.”

Simon Davies, SVP and General Manager for APJ at Databricks, said, “APJ is one of the most digitally advanced and AI-ready regions in the world, and many companies here are moving quickly beyond experimentation to creating real business impact. I’m delighted to join Databricks at such an important moment.” He added, “What sets Databricks apart is the way it combines rapid innovation with strong execution, helping customers unify their data and turn AI applications and agents into tangible business outcomes. I look forward to working with Databricks’ teams, customers, and partners to accelerate this momentum and deliver meaningful results.”
dl-ciokorea@foundryco.com

  • ✇Security | CIO
  • OpenText wins in two categories at the 2026 SAP Global Partner Awards
    The SAP Partner Awards recognize global partners that have contributed to customers’ business transformation and results over the past year, with winners selected through performance metrics and data-driven evaluation. According to OpenText, its awards came in the Partner Solution Success and Human Capital Management (HCM) Solution Excellence categories, recognizing innovation built on SAP solutions and the customer value it has delivered. OpenText was cited for its technology leadership and collaboration in SAP Solution Extensions, where it integrates information management, content, data, and AI capabilities
     

OpenText wins in two categories at the 2026 SAP Global Partner Awards

20 de Abril de 2026, 23:24

The SAP Partner Awards recognize global partners that have contributed to customers’ business transformation and results over the past year. Winners are selected through performance metrics and data-driven evaluation.

According to OpenText, its awards came in the Partner Solution Success and Human Capital Management (HCM) Solution Excellence categories, recognizing innovation built on SAP solutions and the customer value it has delivered. OpenText was cited for its technology leadership and collaboration in SAP Solution Extensions; a key factor in the evaluation was how it integrates information management, content, data, and AI capabilities within SAP environments to strengthen both operational efficiency and regulatory compliance.

Through the partnership, the two companies help enterprises accelerate AI-driven content use, automation, and cloud transformation across SAP-based business processes, reducing complexity and raising productivity as they move to SAP S/4HANA in the cloud.

OpenText also supports digital document management and workflow automation on the HR side through its integration with SAP SuccessFactors, helping improve efficiency across the enterprise.

“This recognition reflects how we have tangibly supported customers’ digital transformation and AI-driven innovation through our close collaboration with SAP,” said Mark Bailey, OpenText’s vice president for the SAP partnership. “We plan to expand that support so that enterprises can respond to the AI era faster and more securely.”

OpenText said it will continue to build on its collaboration with SAP to accelerate cloud migration and AI adoption, and to support information management and business transformation for its global customers.
dl-ciokorea@foundryco.com



  • ✇Security | CIO
  • Adobe bets on agentic AI to rewrite SaaS for customer experience
    Consumer engagement has been fundamentally changing with the advent of AI agents, forcing a rethink by software-as-a-service (SaaS) companies, and creativity platform provider Adobe is responding by shifting its approach to what it calls ‘Customer Experience Orchestration (CXO).’ Announced today at Adobe Summit, the new Adobe CX Enterprise suite is a pivot to a future defined by agents rather than by software alone, where SaaS companies claim an advantage based on their
     

Adobe bets on agentic AI to rewrite SaaS for customer experience

20 de Abril de 2026, 13:59

Consumer engagement has been fundamentally changing with the advent of AI agents, forcing a rethink by software-as-a-service (SaaS) companies, and creativity platform provider Adobe is responding by shifting its approach to what it calls ‘Customer Experience Orchestration (CXO).’

Announced today at Adobe Summit, the new Adobe CX Enterprise suite is a pivot to a future defined by agents rather than by software alone, where SaaS companies claim an advantage based on their deep domain expertise and troves of first and third-party data.

The platform brings together customizable and out-of-the-box AI agents, Model Context Protocol (MCP) endpoints, and new intelligence systems built on Adobe’s orchestration engine.

“SaaS is changing, and we are re-architecting so that we can participate in the reimagination, the redefinition of SaaS,” said Adobe VP Sundeep Parsa.

Agents executing with guidance from a ‘coach’

Adobe CX Enterprise builds on the company’s Adobe Experience Platform (AEP) Agent Orchestrator, which brought AI agents directly into Adobe apps. Released in 2025, AEP now powers more than 1 trillion experiences annually, according to the company.

AEP remains the “anchor” for Adobe CX Enterprise, which now gives customers the ability to create agent skills (reusable instructions), as well as providing specialized and customizable agents. These can be incorporated into any AI tech stack, including Anthropic’s Claude, OpenAI’s ChatGPT, Google’s Gemini, Microsoft Copilot, Nvidia’s NemoClaw, and others. Developers also have access to Model Context Protocol (MCP) servers and other infrastructure required to build customized use cases.

“We’re going to make sure our applications are not trapped inside our UI layer, that they become composable services available through MCP tool calls or the A2A layer,” Parsa explained. “Customers can tap into what they have and bring that into their own unique processes, be their own UI.”
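
To make the idea of composable services exposed through MCP tool calls concrete, here is a minimal sketch using the open-source MCP Python SDK. The server and tool names are hypothetical, and this is not Adobe’s API; Adobe’s own MCP endpoints will define their own tools and schemas.

```python
# Minimal MCP server sketch using the open-source MCP Python SDK (pip install mcp).
# The tool below is hypothetical; it only illustrates the pattern of exposing a
# capability as a composable service that any MCP-aware agent host can call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("audience-tools")  # hypothetical server name

@mcp.tool()
def estimate_segment_size(region: str, min_lifetime_value: float) -> int:
    """Return a mock count of customers matching a segment definition."""
    # A real implementation would query a customer data platform; this is a stub.
    mock_counts = {"emea": 120_000, "amer": 210_000, "apj": 95_000}
    base = mock_counts.get(region.lower(), 50_000)
    return int(base * (0.5 if min_lifetime_value > 1_000 else 1.0))

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an agent can discover and call it
```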

He emphasized the importance of customer choice. Many enterprises are still grappling with the ‘build or buy’ question; some will prefer to create their own bespoke user interface (UI) layer, while others will have no interest in doing so.

With CX Enterprise, enterprises can use pre-loaded agent skills to build custom workflows, or can launch agents pre-built for specific tasks like workflow optimization (coordinating tasks or automating handoffs) and brand governance (enforcing policies, managing permissions, tracking asset rights). And, a new Adobe CX Enterprise Coworker, to be available in the coming months, will act on specified goals and orchestrate other agents to perform multi-step actions.

For instance, if a marketing team is looking to increase loyalty subscriptions by 3% in the next quarter, the CX Enterprise Coworker will work with other agents to identify relevant audience segments, surface performance insights, create a plan, and develop email copy or visual assets, Parsa noted. Once all this is approved by a human, the Coworker will then help execute the campaign and monitor results.

Whereas previously agents would build an audience, then “go to sleep,” Adobe’s new CX Enterprise Coworker is “always on,” has persistent memory, and can run workflows across weeks, or even full financial quarters if required, Parsa explained. He likened the CX Enterprise Coworker to an American football quarterback, the player who directs the activities on the field, guided by a coach on the sidelines. Coworker’s coach is a marketer or a brand specialist.

“We’re doubling down on this framing of customer experience orchestration,” Parsa said.

Moving to one-on-one personalization

Along with these agentic tools, Adobe is introducing two new intelligence systems: Adobe Brand Intelligence and Adobe Engagement Intelligence.

Brand Intelligence is built on a fine-tuned large language model (LLM) with vision-language capabilities that learns from “qualitative and nuanced inputs” like annotations, feedback cycles, or rejected assets.

“Brand intelligence is going after a much harder problem than ‘a brand kit,’ which is a codification of a CSS style guide,” Parsa explained. The LLM can begin to understand brand sentiment, informed by “data engagement signals and the actual enterprise assets.”

Adobe Engagement Intelligence helps teams decide next best offers, messages, or other actions for targeted customers. This is based on their lifetime interactions, rather than click-throughs or conversions, according to Parsa.

Whereas previously, less was more, “in this world, more is better,” he said, pointing out that the promise of generative AI is producing more material economically. “It’s not creating more for more’s sake, it’s targeted campaigns that get you much closer to one-on-one personalization.”

Early production gains are “massive,” Parsa claimed. This is because troubleshooting and early detection of problems now takes “hours, not days and weeks.”

SaaS companies’ data advantage

Like many SaaS companies grappling with an agent-driven future where pay-per-seat models are becoming less relevant, Adobe is emphasizing its data advantage. Parsa pointed out that more than 20,000 enterprises have built on Adobe’s platform over the years, giving the company enormous amounts of data alongside domain expertise.

Generative AI and AI agents do a good job of understanding the “corpus of world knowledge” and building some “useful capabilities for all of us,” Parsa acknowledged. “But these technologies stop at the enterprise walls, because those are ‘walled gardens.’”

Further, enterprise context is very complicated and spread across numerous applications, he noted. “It’s codified in documents; in some cases just tribal knowledge informs how people function on a day to day basis.” AI agents working on their own (like OpenClaw or Claude Cowork) break in the enterprise because they are “brittle” and not grounded in enterprise data, he said.

“We are a proxy for all of the enterprise context that lives inside our applications,” said Parsa. “We’re going to bring that into the AI layer much faster than a customer restarting that whole process with an AI platform.”

Ultimately, he said, Adobe is “adapting and adjusting” to customer feedback and consumer interaction with brands, as well as with the internet itself, as customer engagement undergoes a dramatic shift in the era of AI. As this unfolds, Parsa emphasized the importance of “open, open, open.”

“We absolutely are going to work with tech partners, we’re going to work with other SaaS companies to make sure that we stay flexible and meet the customer where they are,” he said.

  • ✇Security | CIO
  • Robot Zuckerberg shows how IT can free up CEOs’ time
    Mark Zuckerberg, the CEO of Meta, is building an AI version of himself. The virtual CEO is being trained on Zuckerberg’s mannerisms and will be loaded with his views on corporate strategy, the Financial Times reported. The idea is that employees will find the virtual Zuckerberg more accessible than they would the flesh and blood manifestation. There are plenty of claims that AI will lead to jobs being eliminated but, until now, the CEO job has looked safe. If Zuck
     

Robot Zuckerberg shows how IT can free up CEOs’ time

17 de Abril de 2026, 14:31

Mark Zuckerberg, the CEO of Meta, is building an AI version of himself.

The virtual CEO is being trained on Zuckerberg’s mannerisms and will be loaded with his views on corporate strategy, the Financial Times reported.

The idea is that employees will find the virtual Zuckerberg more accessible than they would the flesh and blood manifestation.

There are plenty of claims that AI will lead to jobs being eliminated but, until now, the CEO job has looked safe. If Zuckerberg’s experiment proves successful, though, even company leaders could be due for the chop.

In February, OpenAI’s Sam Altman warned that CEOs could be as vulnerable as other senior executives. “AI superintelligence at some point on its development curve would be capable of doing a better job being the CEO of a major company than any executive, certainly me,” Altman said. “On our current trajectory, we believe we may be only a couple of years away from early versions of true superintelligence.”

Klarna CEO Sebastian Siemiatkowski has already tempted fate, using an AI version of himself to present the company’s financial results to analysts, and even to take customer calls. So far, though, he’s kept his job.

This article first appeared on Computerworld.

  • ✇Security | CIO
  • “AI investment goes ahead even without ROI”: the results gap playing out inside enterprises
    Among enterprise CIOs, it is now widely accepted that the return on investment (ROI) of generative and agentic AI is hard to prove conclusively. Even so, global consulting firm KPMG reports that some companies are pressing ahead with AI adoption while fully aware of that limitation. Despite the lack of quantifiable ROI, the economic slowdown is not acting as a brake on AI investment plans. KPMG said that “three-quarters of global leaders will prioritize AI investment despite economic uncertainty.” In its Global AI Pulse Survey report, KPMG explained that “a clear gap exists between organizations still in the experimentation stage and those that have moved beyond pilots to scale AI agents across the enterprise and create real business value.” It added that while AI adoption is accelerating worldwide, only a small group of AI leaders are seeing clear
     

“AI investment goes ahead even without ROI”: the results gap playing out inside enterprises

15 de Abril de 2026, 07:24

Among enterprise CIOs, it is now widely accepted that the return on investment (ROI) of generative and agentic AI is hard to prove conclusively. Even so, global consulting firm KPMG reports that some companies are pressing ahead with AI adoption while fully aware of that limitation.

Despite the lack of quantifiable ROI, the analysis finds, the economic slowdown is not acting as a brake on AI investment plans. According to KPMG, “three-quarters of global leaders will prioritize AI investment despite economic uncertainty.”

In its Global AI Pulse Survey report, KPMG explained that “a clear gap exists between organizations still in the experimentation stage and those that have moved beyond pilots to scale AI agents across the enterprise and create real business value.” It continued: “AI adoption is accelerating worldwide, but only a small group of AI leaders are seeing clear returns. Of these, 82% say AI is already delivering meaningful business value, compared with 62% of other companies. This is about more than AI maturity; it shows a widening performance gap between organizations that approach AI as enterprise-wide transformation and those that merely bolt it onto existing models.”

A separate UK-focused analysis showed a similar pattern. “AI is no longer justified solely by traditional ROI measures,” KPMG said. “65% of UK respondents said they would continue investing in AI regardless of visible ROI. Companies are spending heavily on artificial intelligence, but recognizing the technology’s value does not necessarily require traditional ROI.”

A shift in mindset

Leanne Allen, head of AI at KPMG, said the intense enterprise-wide interest in AI has also changed how the technology is treated financially.

“It is an important milestone that business leaders’ thinking has shifted from viewing AI as a technology that must generate immediate returns to seeing it as a strategic enabler of enterprise-wide transformation and a long-term investment,” Allen said. “But that does not mean investing in AI blindly, without a clear strategy. AI is fundamentally reshaping how organizations operate, how decisions are made, and how humans and AI agents collaborate day to day.”

This shift in thinking is also partly pragmatic: many CIOs are hearing from their boards that AI investment is not optional. Still, the challenges around AI ROI continue to surface in many forms.

The compounding challenges of AI ROI

With AI experimentation and adoption moving fast, some executives are running proofs of concept (PoCs) against unrealistic ROI targets. If results are judged by standards that are technically out of reach, an unmet but ill-chosen metric says little about the limits of the large language model (LLM) itself.

Some companies are also running into unexpected costs along the way. For example, after adding AI to a customer-facing chatbot, they find users treating it like a ‘free’ generative AI tool, driving up token usage that the company has to pay for.
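
The scale of that kind of cost surprise is easy to sketch with back-of-the-envelope arithmetic. Every figure below is an invented assumption for illustration; it is not KPMG data or any vendor’s actual pricing.

```python
# Rough token-cost estimate for a customer-facing chatbot. All numbers are
# invented assumptions for illustration only.
conversations_per_day = 20_000
avg_tokens_per_conversation = 2_500   # prompt plus completion, assumed
price_per_million_tokens = 5.00       # assumed blended USD price

daily_tokens = conversations_per_day * avg_tokens_per_conversation
monthly_cost = daily_tokens / 1_000_000 * price_per_million_tokens * 30
print(f"~{daily_tokens / 1e6:.0f}M tokens/day, roughly ${monthly_cost:,.0f}/month")
# If users start treating the bot as a free general-purpose assistant and
# conversation lengths double, the bill doubles with them.
```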

What should be measured?

Some analysts and investors point out that the knowledge work AI is replacing has rarely been measured properly in the first place. As a result, finance teams have to look for new approaches to calculating AI ROI.

Ben Grant, managing partner at investment advisory firm Lampton Capital Partners, sees it as a measurement problem. “Traditional ROI demands a clear input-to-output structure, but in most companies today AI doesn’t work that way,” he said. “The value of AI shows up as time saved, faster decisions, and gaps being filled before they become problems. Those things don’t fit neatly into a spreadsheet.”

Grant added, “I don’t see investing in AI without traditional ROI as reckless. It’s closer to a pragmatic call. Companies have already seen enough potential; they just can’t quantify it in the way their finance organizations demand.”

Manish Jain, principal research director at consulting firm Info-Tech Research Group, attributed the disconnect to companies operating in two modes at once: an exploration phase, where the speed of learning matters most, and an industrialization phase, where tangible results are expected even though maturity is still developing.

Jain also noted that expectations have changed. “It’s not that companies don’t care about returns. It means they’ve learned that maturing their AI capability comes before fixating on ROI,” he said. “When a new engine appears, the smart operator doesn’t first ask how much it earns. They ask what happens if they’re the only one without it.”

Is AI becoming an everyday technology?

Gartner VP analyst Nader Henein, while noting that AI’s output is hardly trivial, observed that AI is increasingly being absorbed into everyday work functions, a shift that challenges traditional ROI calculations.

“Some investments, such as AI assistants, are becoming standard workplace tools, much like the office suite. Nobody calculates ROI by counting Word documents or presentations,” Henein said. “That doesn’t mean ROI calculations for AI projects disappear. A project that only burns cash and delivers no visible results will eventually be shut down. The income statements of listed companies, and investor expectations, haven’t changed.”

Between spending and expectations

Mike Leone, VP and principal analyst at Moor Insights & Strategy, argued that the sheer variety of ways AI is being adopted is what breaks the existing ROI framework.

“The old ROI formulas we applied to ERP or cloud migrations don’t fit AI, and every CIO I’ve met knows it,” Leone said. “They can explain the productivity gains in a specific workflow, but they struggle to say what the enterprise-wide return will look like in three years. That’s exactly where the phrase ‘investing regardless of ROI’ comes from. Personally, I think leaders are right to keep investing anyway.”

“Budget shortfalls dropped off the list of things that sink AI projects a long time ago. The money is there and so is the momentum. The real obstacles now are security, privacy, and the fact that almost nobody has the people to run this at scale,” he continued. “Most organizations are making well-informed bets. They’ve done the math on the cost of falling behind, and they didn’t like the answer.”

Leone added that only about one in ten companies has the talent, governance, and operational capability to actually compound the gains. “The rest are investing first and hoping the results follow. That’s the current reality,” he said.

Technology analyst Carmi Levy noted that “investing in cutting-edge technology without at least some ROI justification is close to financial suicide.” At the same time, he argued, “AI is evolving so fast and so broadly that traditional ROI calculations are already outdated. Organizations now have little choice but to dive in, if only out of fear of being left behind.”

In this environment, Levy argued, finance teams need to temporarily relax their rigid ROI criteria.

“Staying competitive in AI, or at least staying within sight while competitors figure it out, may mean you can’t apply the same level of financial rigor as before,” Levy said. “Companies normally cut technology spending when the economy turns shaky, but with AI now at the heart of technology roadmaps, that formula is being put to the test.”

“Many organizations will look for savings elsewhere to avoid falling behind competitors that keep spending on AI through the economic uncertainty,” he continued. “In fact, many executives are treating AI as a blanket driver of future cost savings, and amid the urgency not to fall behind in the AI race, that logic is proving persuasive enough to win C-suite approval.”
dl-ciokorea@foundryco.com

  • ✇Security | CIO
  • IBM’s government DEI settlement could increase pressure to avoid tech hiring diversity
    IBM has agreed to settle a complaint from the US Justice Department around its initiatives to diversify its workforce and to encourage hiring of underrepresented groups, contrary to a presidential directive. The federal contractor also agreed to pay the government roughly $17 million. The pressure from the Trump administration to eliminate workforce diversification efforts, typically known as DEI (Diversity, Equity, and Inclusion) programs, has persuaded many companies,
     

IBM’s government DEI settlement could increase pressure to avoid tech hiring diversity

15 de Abril de 2026, 01:01

IBM has agreed to settle a complaint from the US Justice Department around its initiatives to diversify its workforce and to encourage hiring of underrepresented groups, contrary to a presidential directive. The federal contractor also agreed to pay the government roughly $17 million.

The pressure from the Trump administration to eliminate workforce diversification efforts, typically known as DEI (Diversity, Equity, and Inclusion) programs, has persuaded many companies, including Meta, Google, Amazon, Salesforce, Intel, OpenAI, Tesla and Zoom, to publicly back away from those diversification efforts. A few companies, including Apple, Microsoft, Nvidia and Oracle, have held firm in favor of DEI, for the most part. 

The government’s official position states that age, race, sexual preference, and gender should have zero impact on hiring decisions. Diversification proponents counter that workforce composition will stay stagnant unless explicit efforts are made to diversify.

Focus of settlement

The Justice Department settlement focused mostly on IBM’s role as a government contractor.

The government filing said IBM made “false claims” and “false statements” to the government regarding hiring practices in connection with IBM’s government contract work.

“As a federal contractor, IBM was required to comply with anti-discrimination requirements as set forth in Title VII of the Civil Rights Act of 1964,” the settlement said, adding that IBM “discriminated against employees during employment and applicants for employment because of race, color, national origin, or sex, and failed to treat employees during employment without regard to race, color, national origin, or sex.”

Beyond hiring practices, the government also opposed hiring goals that encouraged diversity, including “developing race and sex demographic goals for business units and taking race, color, national origin, or sex into account when making employment decisions to achieve progress towards those demographic goals” and using those same criteria to offer “certain training, partnerships, mentoring, leadership development programs, educational opportunities or resources, and/or similar opportunities only to certain employees.”

The agreement also said that the deal “is neither an admission of liability by IBM nor a concession by the United States that its claims are not well founded” and added that IBM agreed to the settlement “to avoid the delay, uncertainty, inconvenience and expense of protracted litigation.”

Acting US Attorney General Todd Blanche issued a statement saying, “racial discrimination is illegal, and government contractors cannot evade the law by repackaging it as DEI.”

IBM did not respond to an email seeking comment.

Companies can work around biases

Bryan Howard, the CEO of recruiting strategy consulting firm Peoplyst, said he would encourage enterprises to simply move their workforce diversification efforts earlier in the recruitment process. 

“There’s a big difference between candidate pool and the selection process,” Howard said, suggesting that there are no federal rules limiting outreach choices. If, for example, a company wanted to increase workforce representation for a particular group, then the job notice should be focused on universities and other places where that group is well represented.

“Expand your pool and do not contract it. Fish in the ponds where those people are,” Howard said. “Increase diversity by simply recruiting from diverse sources.”

Howard also said the government position leverages last year’s US Supreme Court decision in Ames v. Ohio Department of Youth Services, where the court held that reverse discrimination is illegal. 

Complicating diversification efforts today are two popular recruiting/hiring tools pushed by HR: using genAI to filter a massive number of applicants and present only a small handful for hiring managers to choose from, and referral programs in which employees are offered cash incentives if they recommend job candidates who are eventually hired.

AI’s bias is to seek job candidates whose profiles most closely resemble those of the current workforce. In other words, AI wants to learn everything it can about who the company has hired before, to help it determine the attributes to look for.

Referral programs, Howard said, also tend to attract people with the same characteristics as the existing workforce. Even though those referral hires tend to stay with the company longer, “if you have a population that is already skewed and that is the population recruiting, the existing bias will likely continue.”

Settlement could hurt recruitment efforts

Consultant Brian Levine, executive director of FormerGov, said it is difficult to interpret the settlement as anything other than opposing DEI efforts. 

The US Justice Department, where Levine once worked as a federal prosecutor, “has issued a multimillion-dollar penalty for company policy that seemed to be intended to encourage diversity,” he said. “As with Anthropic, in this new world, sometimes organizations may be forced to choose between ‘the law’ as it is currently being interpreted by some, and a good faith effort to positively influence society, or at least to minimize societal harm.”

Levine said some enterprises may try to overcompensate to keep the current administration happy.

“Fearing financial penalties, some companies that work with the federal government will now choose to ensure their DEI program is fully dismantled,” Levine said. “Other companies may choose to cease working with the federal government and/or may choose to keep, or even double down, on their DEI program. If Anthropic is any indication, these latter companies may ultimately be rewarded in the market.”

Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, added that this settlement might end up hurting tech recruitment efforts. 

“I think that this will force organizations to reframe their DEI programs to not upset the DOJ, which could have an impact on hiring of individuals in certain classes and could result in overall less diversity,” Villanustre said. “Diversity is an important part of building resilient, successful organizations, so this could have a broader impact than just the one at hiring time.”

  • ✇Security | CIO
  • Nvidia announces quantum AI models
    Nvidia today unveiled a new family of open-source quantum AI models for building quantum processors. The announcement coincides with World Quantum Day, an international initiative by quantum scientists to promote public understanding of quantum science and technology. Nvidia is calling its new family of quantum AI models Nvidia Ising, named after the Lenz-Ising model of ferromagnetism in statistical mechanics. That model dramatically simplified the understanding of comp
     

Nvidia announces quantum AI models

14 de Abril de 2026, 12:00

Nvidia today unveiled a new family of open-source quantum AI models for building quantum processors. The announcement coincides with World Quantum Day, an international initiative by quantum scientists to promote public understanding of quantum science and technology.

Nvidia is calling its new family of quantum AI models Nvidia Ising, named after the Lenz-Ising model of ferromagnetism in statistical mechanics. That model dramatically simplified the understanding of complex physical systems.

Ising joins other Nvidia model families, including Nemotron for specialized agentic AI systems, Cosmos for physical AI systems, Isaac for robotics, Clara for biomedical and life sciences models, Apollo for AI physics, and Alpamayo for autonomous vehicles.

Split decision

Ising will consist of two model domains at launch: Ising Calibration and Ising Decoding.

Ising Calibration is a vision language model for interpreting and reacting to measurements from quantum processors, enabling AI agents to automate continuous calibration. Ising Decoding consists of two variants of a 3D convolutional neural network for real-time decoding in quantum error correction: one variant is optimized for speed, the other for accuracy.
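
Nvidia has not detailed the Ising Decoding architecture here, so as a purely illustrative sketch of what “a 3D convolutional neural network for decoding” means in this context, the toy model below maps a space-time volume of error-syndrome measurements to a correction label. All layer sizes, shapes, and names are hypothetical and bear no relation to the actual Ising models.

    # Illustrative only: a tiny 3D CNN over a volumetric grid of syndrome bits.
    import torch
    import torch.nn as nn

    class ToySyndromeDecoder(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1),   # local space-time error patterns
                nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),                       # collapse the volume to one vector
            )
            self.classifier = nn.Linear(32, num_classes)       # e.g. "logical flip" vs. "no flip"

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, rounds, height, width) grid of syndrome measurements over time
            return self.classifier(self.features(x).flatten(1))

    # Example: 4 random syndrome volumes, 8 measurement rounds of a 9x9 grid
    decoder = ToySyndromeDecoder()
    print(decoder(torch.randn(4, 1, 8, 9, 9)).shape)  # torch.Size([4, 2])

The design intuition is simply that error syndromes have structure in both space and time, which is what 3D convolutions are built to exploit; production decoders add the speed and accuracy optimizations the two Ising variants target.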

“Both of these are targeting the fundamental challenge in quantum computing, which is that qubits are inherently noisy,” said Sam Stanwyck, Nvidia’s director of quantum product, in a press briefing Monday. “That noise is the fundamental bottleneck standing between today’s quantum hardware and useful applications.”

A qubit, or quantum bit, is the basic unit of information in quantum computing. Where a bit in traditional computing is always in one of two states, 0 or 1, a qubit can exist in a superposition of 0 and 1 simultaneously. This allows quantum algorithms to solve certain problems in a fraction of the time it would take the fastest traditional computer systems.
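
In the standard textbook notation, a single qubit’s state is a weighted combination of the two basis states, with the weights determining the probability of each measurement outcome:

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

Measuring the qubit yields 0 with probability |α|² and 1 with probability |β|²; preserving those amplitudes against noise is precisely the problem Stanwyck describes.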

Physical qubits are noisy and error-prone, which has made machines that depend on them impractical for real-world applications. For the past several years, researchers have been developing logical qubits as a higher-level abstraction from physical qubits, which can be used in fault-tolerant quantum computing to protect against noise and errors. Nvidia says the Ising models will deliver up to 2.5 times faster performance and 3 times higher accuracy for the decoding process needed for quantum error correction for logical qubits.

“Today, the very best quantum processors make an error about once in every thousand operations, which is amazing,” Stanwyck said. “But to become useful accelerators for scientific and enterprise valuable problems, that number needs to become one in a trillion or even less.”
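
Put numerically, the gap Stanwyck describes spans nine orders of magnitude:

    10^{-3} \ \text{(one error per thousand operations)} \;\longrightarrow\; 10^{-12} \ \text{(one error per trillion operations)}

Closing it is the job of the error-correction decoding that models like Ising Decoding are meant to accelerate.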

AI is the key to closing that gap, he said, and it’ll be the control plane or operating system for quantum machines. To that end, he added, the models must be open so they can be customized, fine-tuned, and continuously improved upon by the quantum community.

The test kitchen heats up

Along with Ising, Stanwyck said, Nvidia is providing a cookbook of quantum computing workflows and training data, plus Nvidia NIM microservices.

“The cookbook has fine tuning, quantization, and inference workflows, recipes for how to integrate this into agentic workflows, plus open research papers and benchmark data,” he said.

He also noted that leading enterprises, academic institutions, and research labs have already adopted Ising, including Atom Computing, Fermi National Accelerator Laboratory, Harvard John A. Paulson School of Engineering and Applied Sciences, Cornell University, and others.

Quantum leaps

The global quantum technology industry took a big step forward in 2025, according to a report released today by the Quantum Economic Development Consortium (QED-C). In its State of the Global Quantum Industry 2026 report, QED-C said the global quantum market reached $1.9 billion in 2025 while the global pure-play quantum workforce grew by 14%. It forecasts the market will grow at a 30% annual rate to reach $3 billion by 2028, and Nvidia plans to be a key player in that growth.

“Our AI leadership is going to directly accelerate the path to useful quantum computers,” Stanwyck said. “The same GPUs that are running the world’s AI can run the control plane for quantum hardware.”

  • ✇Security | CIO
  • How effective are semantic hubs in moving agentic AI forward?
    The focus of enterprise AI initiatives is shifting from storing, processing, and moving data to ensuring data means the same thing wherever it’s used. This is vital if an LLM is to understand the nuances and specifics of an individual business. Walmart’s recent announcement that it’s ending its partnership with OpenAI to power shopping through ChatGPT is a case in point. Relying on the LLM to scrape Walmart’s product data and then infer meaning from that led to hallucin
     

How effective are semantic hubs in moving agentic AI forward?

13 de Abril de 2026, 07:00

The focus of enterprise AI initiatives is shifting from storing, processing, and moving data to ensuring data means the same thing wherever it’s used. This is vital if an LLM is to understand the nuances and specifics of an individual business.

Walmart’s recent announcement that it’s ending its partnership with OpenAI to power shopping through ChatGPT is a case in point. Relying on the LLM to scrape Walmart’s product data and then infer meaning from it led to hallucinations and a poor customer experience, with conversion rates three times lower than those of shoppers using Walmart’s own website. Had the AI agent been grounded in the business logic Walmart has developed over years, the results might have been very different.

Semantic hubs that provide a centralized architecture translating raw data into consistent and clear business concepts are key components in powering effective agentic AI deployments. They mitigate the risks of semantic drift where an LLM’s understanding of concepts and terms is fluid, and changes over time. However, businesses don’t operate in a vacuum and need to exchange data with suppliers, customers, regulators, and financial institutions. Semantic hubs on their own may provide a point of failure in these instances as definitions and understanding will vary across organizations. As commerce moves toward more autonomous agentic systems, this is a problem.
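
As a purely illustrative sketch of what a semantic hub centralizes (a generic shape, not the OSI specification or any vendor’s actual format), the example below defines a business metric once so that every consumer, human or agent, resolves the same term to the same logic rather than re-deriving it from raw tables:

    # Hypothetical sketch of a centralized semantic definition.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class SemanticMetric:
        name: str
        description: str
        sql_expression: str                  # the single agreed-upon business logic
        grain: str                           # the level at which the metric is valid
        synonyms: tuple = field(default_factory=tuple)  # terms agents map to this metric

    NET_REVENUE = SemanticMetric(
        name="net_revenue",
        description="Gross sales minus returns and discounts, in reporting currency.",
        sql_expression="SUM(gross_sales) - SUM(returns) - SUM(discounts)",
        grain="order_line",
        synonyms=("net sales", "revenue after returns"),
    )

    # Any agent or BI tool asking for "net sales" is routed to the same definition,
    # instead of inferring its own (and possibly drifting) interpretation.
    REGISTRY = {term: NET_REVENUE for term in (NET_REVENUE.name, *NET_REVENUE.synonyms)}
    print(REGISTRY["net sales"].sql_expression)

The value is not in the code itself but in the fact that the definition lives in one governed place; the interchange problem the rest of this article discusses is about moving such definitions between organizations and platforms without losing their meaning.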

A shared language

McKinsey predicts AI agents could handle between $3 trillion and $5 trillion of global commerce by 2030. The economic and business rationale for moving to a more agentic world is compelling and, although still in its early stages, the technology that will power this revolution is developing rapidly. MCP, Agent2Agent (A2A), and other open standards offer protocols for agents to communicate with each other and pull in data as needed. However, a missing building block is a common language that allows dispersed agents to consistently and accurately infer meaning from the data they use.

The Open Semantic Interchange (OSI), initially developed by Salesforce and Snowflake, can be seen as a universal mental model for data, ensuring every AI agent perceives business definitions with the same precision and intent as a human expert, regardless of which system it navigates. “If successful, OSI has the power to fundamentally reshape the competitive landscape by commoditizing the definition of a semantic model,” says Brad Shimmin, VP practice lead, data and analytics, at tech analyst firm The Futurum Group. “Vendors will no longer be able to lock in customers with proprietary metric languages. Instead, they’ll need to compete on the execution of semantics, differentiating on performance, caching efficiency, security, and the sophistication of their AI integrations.”

Hurdles on the road to technical sovereignty

While the OSI initiative was only launched in September last year, major vendors including Cloudera, Databricks, Instacart, and ThoughtSpot have now joined founding members Salesforce and Snowflake in developing the standard. However, many CIOs are concerned that without wider cooperation from other enterprise software providers like Microsoft (Power BI) and SAP, there’s a risk it won’t build the momentum needed for true interoperability across industries. The rapid adoption of MCP by competing vendors offers hope that more companies will join the initiative as they realize common and open standards are required to build effective agentic systems.

Also, the OSI specification is still in its early development phase, so committing live data assets to tools that are still in beta isn’t viable in the short term. This is likely to change over the coming months as leading vendors incorporate OSI and case studies emerge.

Another essential driver of adoption will be support from industry groups that have a vested interest in seeing safe and reliable agentic systems deployed across their sectors. The banking and medical sectors, for instance, rely on agreed languages and definitions to prevent financial fraud and health risks. Trevor Hall, chief architect at Salesforce’s Tableau and an OSI contributor, says that while OSI wouldn’t necessarily define specific domain models such as medicine, he’d hope that industry would lean on OSI to define its models.

OSI next steps

Snowflake is positioning itself as a primary custodian of the OSI standard, and with around 20% of the cloud data warehousing market, it has the potential to drive adoption. Salesforce is the other key founding member that sees OSI as a core element of its Agentforce fleet of AI agents. Alongside this, Phase 2 of expanding the OSI ecosystem will continue through the remainder of 2026, with plans to bring native import and export buttons for OSI models to over 50 different data platforms.

Things are moving quickly in the agentic space and the underpinning infrastructure is taking shape. Now’s the time to make sure your data is ready for this revolution, and OSI could be a key element in your planning.
