
The AI workplace paradox: Higher productivity, higher anxiety

April 23, 2026, 23:39

Workers are facing a conundrum: They worry about the potential for their displacement by AI even as it dramatically speeds up their own productivity.

According to a new survey from Anthropic, workers in roles most likely to be taken over by AI (developers or IT workers, for instance) recognize their precarious position. Yet, perhaps naturally, they readily adopt the tools that could take their jobs, and see first-hand how well they work.

This measurement is fundamentally different from the way others are gauging AI job displacement, noted Thomas Randall, research director at Info-Tech Research Group.

While macro reports, such as those from Goldman Sachs, the International Monetary Fund (IMF), or the World Economic Forum (WEF), are asking what share of existing job tasks AI could theoretically perform in the future, “Anthropic is measuring qualitative experiences of workers in the present,” he pointed out. This “tells us how people are navigating this landscape in real time.”

The paradox of AI in the workforce

Anthropic’s survey of 81,000 Claude users gauged peoples’ “visions and fears” around advances in AI, and weighed these findings against the company’s own measurement of jobs most vulnerable to AI displacement. This was based on Claude usage data; jobs are identified as more exposed when associated tasks are significantly performed on the platform, in work-related contexts, and take up a larger share of a role.
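The exposure measure described above can be pictured as a simple score: a job ranks as more exposed when its tasks show up often in platform usage, mostly in work-related contexts, and cover a large share of the role. The sketch below is a hypothetical illustration only; the function name, inputs, and multiplicative combination are assumptions, not Anthropic's published methodology.

```python
# Hypothetical sketch of an AI-exposure score along the lines described
# above. All weights and inputs are illustrative assumptions.

def exposure_score(usage_share: float, work_context_share: float,
                   role_task_share: float) -> float:
    """Each input is a fraction in [0, 1]:
    - usage_share: how often the job's tasks appear in platform usage
    - work_context_share: fraction of that usage in work-related contexts
    - role_task_share: share of the overall role those tasks represent
    """
    for v in (usage_share, work_context_share, role_task_share):
        if not 0.0 <= v <= 1.0:
            raise ValueError("inputs must be fractions in [0, 1]")
    # A job is "more exposed" when all three factors are high, so a
    # multiplicative combination is one natural (assumed) choice.
    return usage_share * work_context_share * role_task_share

# A heavily represented coding role vs. a rarely seen manual trade:
print(exposure_score(0.8, 0.9, 0.7))   # ≈ 0.504
print(exposure_score(0.05, 0.5, 0.1))  # ≈ 0.0025
```

Under this kind of scoring, occupations like programming score high on all three factors at once, which matches the list of at-risk roles below.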

Some occupations at risk include computer programmers, data entry keyers, market researchers, software quality assurance analysts and testers, information security analysts, and computer user support specialists.

Overall, one-fifth of respondents expressed concern about displacement, noting that their job, or at least aspects of it, is being taken over by automation. Those in jobs identified as most exposed readily recognized that fact, voicing worry three times as often as those in less at-risk positions. One software engineer remarked: “like anyone who has a white collar job these days, I’m 100% concerned, pretty much 24/7 concerned, about losing my job eventually to AI.”

Early-career respondents were also more nervous than others.

At the same time, those in the highest-paid occupations reported the largest productivity gains when using AI, most notably in their ability to perform new tasks, cited by 48% of users. In addition, 40% of workers said the technology helped speed up their work, and a little more than 10% said it improved the quality of their work.

In general, enterprise usage of AI is “actually quite consistent,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. Teams are using the technology “where information is abundant and time is limited,” such as in drafting documents and code, summarizing content, responding to customer queries, and navigating internal systems.

Is AI actually creating more work?

Still, not everyone thinks AI makes their jobs easier or faster. In some cases, people felt it made their work harder; for instance, project managers are assigning tickets for issues that are much more difficult to solve, Anthropic noted.

Gogia agreed that, even when tasks become easier, scope and responsibilities expand, and roles can absorb adjacent tasks. This results in a “redistribution of effort,” rather than a reduction of effort.

“Faster generation means higher expectations on quality,” he said. More output feeds into decision pipelines that are already constrained. “In some cases, the system becomes heavier, not lighter.”

Delayed impact on enterprises

The market is rewarding those who can integrate AI into complex workflows to do more, faster, and often with better outcomes, Gogia noted. However, the most exposed tasks, including documentation, basic coding, routine analysis, and structured support work, often “sit at the base of the experience ladder.”

These very tasks have traditionally given entry-level workers a way in, and automating them reduces the urgency for companies to hire at that level. “What you begin to lose is not the job,” said Gogia. “It is the path into the job.”

This can have a delayed impact; enterprises may not realize until years later that they do not have enough mid-level experts because they didn’t bring enough people in at lower levels. As AI plays a greater role in the workplace, there must be a “conscious effort” to rethink how people enter and grow, Gogia said. “New pathways need to be created, and they need to be deliberate.”

How enterprise leaders should adjust

As is often the case, sentiment moves faster than structural change, Gogia pointed out. Workers feel the shift almost immediately, but organizations take longer to adjust hiring, redesign roles, and rethink workforce structures.

“This is why expectations can become misaligned,” he noted. The reality is that most enterprises have introduced AI into existing ways of working without fundamentally changing them. Acceleration occurs in unchanged systems that still carry the same dependencies, approval chains, and coordination challenges.

Ultimately, Gogia advised, leaders must approach the shift with “intentional design.” This requires clarity, he emphasized; people need to understand how their work is expected to change. What will be enhanced? What will be reduced? Where should they focus their development?

Baselines are moving: Roles may begin to look “oversized” as what used to be considered a full day’s work begins to look like half a day’s work, or what used to be considered efficient begins to look average. “AI is changing how work is done, but more importantly, it is changing what work expects from people,” said Gogia.

As well, Info-Tech’s Randall pointed out that workers who experience AI expanding what they can do by performing tasks previously outside their competence appear to relate to AI more positively than those who experience it as doing their existing job faster. So, he advised, “tech leaders should design AI deployment around capability extensions.”

Along with goal setting, managers must have support, Gogia emphasized. They set expectations and interpret strategy, and when they’re not properly equipped, “even the best tools will fall short,” he said. Measurement must also evolve; enterprises need to look at quality, sustainability, and capability development over time.

“What we are witnessing right now is not a sudden disruption,” said Gogia. “It is a gradual shift that is becoming impossible to ignore.”

This article originally appeared on Computerworld.


5 reasons the tech industry keeps losing women

April 21, 2026, 20:00

Amid continued mass layoffs in the tech industry and the scaling back of corporate DEI (diversity, equity, and inclusion) programs, women remain in a difficult position. According to data from WomenTech Network (WTN), women make up 42% of the global workforce, but only 26–28% of the tech industry and 25% of STEM workers. That is an 8-point improvement over 1970, yet 55 years later, gender parity remains far out of reach.

STEM jobs are projected to grow more than 8% by 2034, far outpacing the 2.7% expected for non-STEM jobs. But while opportunities are expanding, the gender gap is not closing.

Here is a look at the challenges women face across five dimensions, from earning degrees to workplace conditions.

1. The higher you go, the fewer women there are

Despite public commitments to DEI, the share of women at major tech companies remains low. According to WTN data, Amazon leads at 45%, followed by Meta (37%), Apple (35%), Google (34%), and Microsoft (32%). But those numbers fall as you move up the ladder: women’s representation is highest at entry level and declines through middle management and senior roles.

The gap is especially stark in software engineering. At junior and mid levels there are roughly 25% fewer women applicants, and the gap widens further in senior roles such as ERP and UI/UX design. Data from the World Economic Forum and LinkedIn show that women hold about 24% of STEM management positions and just 12% of C-level roles. McKinsey data shows that over the past year, 93 women were promoted to manager for every 100 men, and only 74 women of color.

2. The degree gap

According to the US Bureau of Labor Statistics (BLS), STEM jobs have grown 79% over the past 30 years and are projected to grow another 11% by 2030. But the National Science Foundation reports that the gender gap persists at the academic level: women earn about 21% of bachelor’s degrees in computer and information science, 22% in engineering and engineering technology, 35% in economics, and 39% in the physical sciences.

For women of color, the gap is even wider. Black women earn just 9% of computer science bachelor’s degrees, and Hispanic women account for only 8% of master’s degrees in these fields. According to the Society of Women Engineers, women earn 30% of master’s degrees and just 24% of doctorates in engineering and computer science.

The picture after graduation is even bleaker. The National Science Foundation reports that only 38% of women who majored in computer science actually work in the field, well below the 53% of men — a persistent “leaky pipeline” in which graduates fail to stay in STEM careers.

3. The retention problem

According to Accenture data, women leave the tech industry at a rate 45% higher than men, and 50% of women leave tech by age 35 (compared with 20% in other industries).

Layoffs have also hit women disproportionately. In the mass tech layoffs of 2022, a WTN study of roughly 5,000 profiles across more than 50 companies found that over 69% of laid-off employees were women. Women are 1.6 times more likely to be laid off.

Workplace microaggressions — small, everyday acts of discrimination — are also a serious problem. A WTN report found that 64% of women have been interrupted while speaking in meetings; 19% felt their roles were defined by gender stereotypes, and 11% had been asked to handle out-of-scope tasks such as arranging food for meetings. WTN also found that 65% of tech recruiters acknowledge bias in hiring, and 66% of women feel there is no clear path for advancement within their company. BLS data puts women’s average tenure in tech at 3.1 years, below the 4.2 years for men.

4. The sponsorship and mentorship gap

Sponsorship is critical for career advancement, especially on the path to leadership. Yet McKinsey’s 2025 Women in the Workplace report finds that women have fewer opportunities to gain sponsors, and even with a sponsor, their promotion rate is 15% lower than men’s. Women hold only 25% of C-level roles, and women of color just 5%.

Because senior leaders tend to elevate people who resemble them, finding mentors and sponsors is itself harder for members of minority groups. According to McKinsey, 84% of women in senior roles want to be promoted (versus 92% of men), but many feel they have already missed their chance and see no realistic path upward. As long as this continues, the pool of women candidates for senior roles keeps shrinking, pushing parity even further away.

5. The pay gap

The gender pay gap is still far from closed. Men in STEM earn an average of $85,000 a year, compared with $60,828 for women — a difference of roughly $24,000. For Latina and Black women, the figure drops to $52,000. Latina women earn about half what white men do; closing that annual gap would take roughly two years of additional work.

AI and women: A new and widening divide

According to Deloitte data, women also account for only about a quarter of the AI field. Women make up just 12% of AI researchers worldwide and about 16% of university AI researchers and professors.

What about AI usage? Among entry- and mid-level workers, 33% of women use AI, compared with 44% of men. The main factors are ethical concerns and lower confidence in AI skills (56% of young women report low confidence, compared with 74% of men). On the positive side, senior women adopt AI up to 16% faster than men.

Deloitte also finds a gender gap in trust in AI. Only 41% of women say AI boosts productivity, far below the 61% of men. Trust rises with familiarity, but women’s initial trust is lower: among women experimenting with AI, just 18% report high trust, versus 31% of men.

There is more troubling data still. According to the WEF and LinkedIn, women are more likely than men to hold jobs that AI will displace or disrupt rather than augment. In the US, 24% of men work in jobs augmented by AI, compared with 20% of women; and 34% of women work in jobs disrupted by AI, versus 25% of men.

Full gender equality thus remains a distant goal, and the rise of AI is making the path even more complicated.


Increased AI expectations without guidance leads to employee burnout

April 21, 2026, 07:00

Burnout in the tech industry has nearly doubled in the past year, with 46% of workers expressing feeling burnt out and almost 25% saying they’re very burned out, according to recent data from Dice. Alongside that uptick, daily AI use has quadrupled, layoffs have impacted nearly two-thirds of the workforce, and overall confidence in the long-term future of tech dropped from 80% to 60%.

Tech employees most likely to experience burnout are millennials, those with 10 to 19 years of experience, and those at small companies with fewer than 250 employees who are already worried about layoffs.

These growing frustrations arrive on the heels of several years of ups and downs in the industry, so it’s critical that employers demonstrate stability for employees. That means emphasizing AI governance and transparency, financial health, clear policies, and transparency from leadership acknowledging market strains, according to Dice.

“You can identify AI burnout the same way as failed AI value, by looking at rework and outcomes,” says Laura Stash, EVP at iTech AG. “If error rates are rising, review cycles are increasing, or employees are spending more time validating outputs, that’s a sign AI is creating more work.”
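Stash's rule of thumb — rising error rates, longer review cycles, more time validating outputs — can be sketched as a simple period-over-period trend check. The metric names and the 10% tolerance below are hypothetical illustrations, not anything prescribed by iTech AG.

```python
# Illustrative sketch (assumed metrics, not from the article's sources):
# flag the rework signals Stash describes when they rise period over period.

from dataclasses import dataclass

@dataclass
class PeriodMetrics:
    error_rate: float        # defects per 100 AI-assisted outputs
    review_cycles: float     # average review rounds per deliverable
    validation_hours: float  # hours spent validating AI output per person

def rework_warning(prev: PeriodMetrics, curr: PeriodMetrics,
                   tolerance: float = 0.10) -> list:
    """Return the names of signals that rose by more than `tolerance`."""
    signals = []
    for name in ("error_rate", "review_cycles", "validation_hours"):
        before, after = getattr(prev, name), getattr(curr, name)
        if before > 0 and (after - before) / before > tolerance:
            signals.append(name)
    return signals

before = PeriodMetrics(error_rate=4.0, review_cycles=1.5, validation_hours=3.0)
after = PeriodMetrics(error_rate=5.0, review_cycles=1.6, validation_hours=4.5)
print(rework_warning(before, after))  # ['error_rate', 'validation_hours']
```

The point of the sketch is Stash's framing: burnout shows up in the same outcome data as failed AI value, so no new instrumentation is needed beyond what rework tracking already captures.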

Where AI-induced burnout crops up

Burnout surrounding AI is typically tied to friction rather than traditional overwork, as well as usage patterns, says Paul Farnsworth, president of Dice. Daily AI users are more likely to express higher levels of burnout, with over half of AI users reporting burnout compared to only a third of those who never use AI, according to Dice.

“Increased exposure to AI without the right support can amplify rather than reduce workplace stress,” says Farnsworth. “In an AI setting, burnout tends to appear as increased rework, lower confidence in outputs, and frustration tied to unclear expectations or lack of training. If employees spend more time correcting or validating work than benefiting from efficiency gains, that’s usually the earliest and clearest signal.”

AI also contributes to more subtle forms of burnout tied to the constant change and uncertainty of AI. This can create a new type of fatigue that employees experience switching between multiple tools, feeling pressure to keep up with new AI capabilities, and the need to recheck outputs.

“These challenges are compounded in environments where expectations are unclear or evolving quickly,” adds Farnsworth. “Over time, that combination can lead to disengagement if employees feel the pace is unsustainable.”

Stash agrees that a lot of AI burnout starts to show up where there isn’t clear guidance on how to use AI tools. Employees switch between different tools or reuse outputs across systems, repeating unnecessary work and potentially losing important context between applications, she says.

Companies should instead focus on embedding AI directly into the day-to-day tools, services, and software employees already use. That way, it becomes part of the workflow instead of another tool that requires constant re-prompting and context switching.

“The goal shouldn’t be to give employees more AI tools, but simplify the experience,” says Stash. “Fewer tools, clearer use cases, and AI embedded into existing workflows is what reduces friction and prevents burnout.”

Increased expectations ramp up burnout

A report from the Upwork Institute found that around 71% of full-time employees say they are burned out and 65% report struggling with employer demands on their productivity. And executives seem aware of this shift, with 81% of C-suite leaders saying they acknowledge they’ve increased their demands on employees over the past year, and 96% saying they expect AI tools will boost productivity in the organization.

However, nearly half of all employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say AI tools have decreased their productivity and added to their workload.

“A common issue is that AI is introduced faster than it’s operationalized,” says Farnsworth. “When employees are expected to navigate multiple tools without clear guidance, it adds complexity.”

Respondents in the Upwork survey say they now spend more time reviewing and moderating AI-generated content (39%), invest time in learning new AI tools (23%), and are still being asked to do more work than before (21%). Overall, 40% say they feel their company is asking too much of them when it comes to AI.

Farnsworth suggests that leaders focus on narrowing toolsets, defining specific use cases, and providing role-based training to help reduce that burden, as well as emphasizing and setting the expectation that AI is meant to improve how work gets done, not simply increase the volume or pace of output.

Expectations vs reality for AI productivity

Executives express high confidence around employee skills, with 37% of C-suite leaders at companies that use AI saying their workforce is highly skilled and comfortable with AI tools. But this perception doesn’t match the 17% of workers who say they feel skilled and comfortable using AI tools.

Additionally, 38% of employees say they feel overwhelmed about using AI at work and that it’s adding to their workload, suggesting too many leaders are moving forward implementing AI without realistic expectations of what workers can do, especially without proper training and upskilling.

And while 96% of C-suite executives say they expect AI tools to boost productivity, only 26% say they have proper AI training programs in place, and only 13% say they have a well-implemented AI strategy, according to Upwork.

Data from Upwork also reveals further imbalances between executive perception and employee experience: 69% of C-suite leaders say they are aware of the struggles employees currently face regarding productivity demands, and 84% are adamant their organizations value employee well-being over productivity. Yet only 60% of full-time employees say their employer prioritizes well-being, even though most agree their employers provide flexibility and greater clarity on strategic goals. The report also points out that employees who perceive their company to value productivity over well-being report higher rates of feeling overwhelmed by their workload.

AI burnout can quickly lead to disengagement and even trigger an exodus of talent. So leaders need to take stock of AI strategies and ensure they align with realistic training and upskilling opportunities for employees. Expectations around AI should be delivered to employees clearly and in a timely manner, leaving no room for doubt or misinterpretation.

“This kind of change management is not new, and we should use tools and techniques that have helped before to help mitigate burnout,” says Farnsworth. “Creating cross-functional working teams, highlighting best practices, reducing redundancy in tools, and understanding the goals of an organization and then applying tools on top are all ways to help tech professionals who struggle with AI burnout.”


CIOs are caught between employee AI fatigue and leadership expectations

April 15, 2026, 07:00

In 2024, when cloud-based software company BlackLine implemented its Buckie AI agent, a knowledge base that employees could ask HR- or IT-related questions, the company didn’t expect to move away from the tool within a year.

“The technology was moving so fast,” CIO Sumit Johar says, and the company needed a different system to scale for the future.

By June the following year, BlackLine had migrated to Google Gemini enterprise, and today, employees organization-wide have built nearly 300 AI agents themselves.

The rapid clip at which organizations are adopting AI is compounding challenges for CIOs. And for employees, being bombarded with new tools and processes is leading to AI fatigue, a feeling of burnout from added workflows and unmet promises of time savings.

At the same time, corporate boards are putting increased pressure on CEOs to deploy AI and deliver results. So CIOs are caught in the middle, balancing board and leadership expectations with employee reality on the ground. They’re pressured to move quickly — a strategy that, in reality, often backfires, according to Doug Gilbert, CIO and chief digital officer at global business technology consultancy Sutherland. He says AI implementations currently have up to a 90% failure rate. “Doing AI right may sound slower, but in the long run, it’s going to be faster,” he says.

Why employee fatigue happens

Riley Stricklin, founder and chief strategy officer at AI integration firm Cadre AI, agrees that AI fatigue is a growing problem across companies. It’s not necessarily because employees are anti-AI, but rather they’re overwhelmed with new tools, new expectations, and constant change, he says.

The initial steps to implement AI take time, temporarily adding to employee workloads before delivering promised time savings, a common complaint Johar hears. Then, the moment teams feel they’re settling in with a new technology, understanding how they can organize their business processes and maximize value, something new comes up that changes everything. “That’s why there’s exhaustion, because things are moving so fast,” he says.

Gilbert adds that AI fatigue most commonly arises when AI is clunky, when organizations bolt AI on top of an existing process, rather than implement it as an in-line solution. Employees could be asked, for instance, to copy and paste data from their programs into a separate LLM like ChatGPT. But the method doesn’t take. “You’re frustrating the heck out of the employee,” Gilbert says.

On top of that, he adds that when AI isn’t properly integrated with a company’s data, or it lacks broader organizational context, the LLM can hallucinate, delivering outputs that, as he puts it, are kind of crap.

Stricklin also says when AI is an added layer instead of an integrated solution, it compounds friction when the purpose is to reduce it.

So the most successful CIOs don’t simply plug AI into existing systems and expect transformation, he says. They rethink the entire workflow and build AI into operations. And in the most seamless AI integrations, Gilbert says employees don’t really think about AI; they simply use a process and get better, faster results.

CIOs pressured from all sides

Gilbert says the clunky approach often happens because of a top-down push that ripples throughout the organization. Boards and CEOs may see case studies or articles of what other companies are doing related to AI, and want to jump on the bandwagon. The AI request trickles to the CIO, who then feels pressure to deploy a solution quickly, rather than take the time to develop an in-line system.

“The reality is you’ll never meet the false expectations they have in their heads,” Gilbert says, adding that boards and CEOs often have a utopian mindset of AI capabilities. Likewise, company investors often expect AI to slash costs, which pressures leadership to demonstrate immediate ROI from AI, Johar adds.

“They don’t always understand that you have to incur the cost before you save any cost,” he says.

In fact, a recent McKinsey survey shows that of the companies that participated, only 39% reported AI‑related impact on their earnings at the enterprise level, suggesting the majority of AI programs have yet to deliver meaningful financial results.

In addition to top-down pressure, sometimes CIOs are feeling stress from the ground up. Despite employee fatigue around AI, Johar’s team at BlackLine has seen requests for AI-based tools from other departments increase by up to 25%.

The higher volume of requests creates fatigue for the IT team itself, as they evaluate myriad tools. Increasing the challenge, the fast pace of change with AI means the team’s processes to evaluate technology have to evolve, too. By the time IT makes a decision to procure a technology or select a supplier, it’s possible the tech is already obsolete, Johar says.

BlackLine has also trained employees on how to build their own agents for specific departmental functions, and to date, employees have built nearly 300 AI agents. CIOs and their teams bear the responsibility of bringing governance and structure to the flood of agents, as Johar puts it, ensuring they meet corporate policies around data privacy or security.

As tech features such as vibe coding continue to gain traction, Johar anticipates additional questions will arise for CIOs related to software oversight.

Framing the AI narrative

Delivering business value continues to be a top priority for tech leaders, and Stricklin says the most successful CIOs establish clear business objectives — whether it’s increasing revenue or margins, or reducing cycle time — before an AI deployment.

But when persuading employees to embrace AI that ultimately creates business value, CIOs may need a different tack than touting the benefits.

Johar says CIOs should frame AI’s benefits as compelling from an employee point of view, like helping employees do their jobs more effectively and building skillsets. “Once you position it that way, employees become a lot more accommodating to invest their time,” he says.

In this kind of climate, Gilbert says CIOs need to reassure employees that AI isn’t a means to headcount reduction but about flipping the narrative to how AI will work alongside employees, not replace them. Gilbert adds that humans should always be in the loop to fine-tune models and improve the accuracy of AI’s outputs over time.

Finding the right balance is key, given the gap that still exists between leader and employee sentiment around AI. Executives are 15% more likely to say AI has had a significant positive impact on their companies than their employees are, according to a survey commissioned by Google Workspace.

Stricklin also advises CIOs to have a focused strategy for how they adopt AI instead of trying to boil the ocean and immediately implement AI organization-wide. So they should pick two to three priority areas to use AI over the next six months, and get employees involved with the best course of action. “Trying to address everything simultaneously will cause more harm than wins,” Stricklin says, adding that equally important is selecting areas in which an organization won’t pursue AI.

Gilbert agrees that not every facet of a business is enhanced by gen AI. CIOs should be mindful of that and not be afraid to push back against CEOs or boards if they suspect an AI deployment is unnecessary. “Sometimes AI isn’t the answer,” Gilbert says.


Increasing AI adoption with agents built to serve ALL employees

April 13, 2026, 08:00

The pattern is remarkably consistent across industries: executive enthusiasm at the top, isolated pockets of experimentation in the middle and stalled adoption everywhere else. Despite billions in AI investment, only 5% of firms worldwide have achieved AI value at scale, according to Boston Consulting Group. A 2025 UKG global study found that just 38% of frontline workers use AI in their daily roles — a stark reminder that technology alone doesn’t drive transformation.

The real obstacle isn’t capability. It’s trust.

Adoption breaks down when employees feel like AI is happening to them, not for them. They want to know: Will this replace me? Who controls it? What’s in it for me? Until those questions are answered — through actions, not just messaging — enterprise AI will keep stalling at the pilot stage.

At UKG, we made a deliberate choice that changed everything: We started with an AI agent built to serve everyone.

A case study: The agent for all

In October 2025, UKG launched a global brand refresh — a pivotal moment to reintroduce ourselves to the market and to our more than 14,000 employees who serve 80,000+ customers worldwide. The challenge: How do you get an entire global workforce aligned on a new brand identity, quickly and consistently?

We built the UKG Brand Communicator, an AI agent designed to help every employee — not just marketing or communications — apply our new brand voice across everything they write, whether a customer email, an internal memo, or a social post.

This wasn’t a theoretical proof of concept. A cross-functional team built it, stress-tested it, broke it, refined it and deployed it. The agent was trained extensively on our new brand: what we say, how we say it and — just as critically — what we don’t. It reviews drafted messages, suggests tone adjustments and rewrites content to be clear, warm and people-focused.

The results were immediate. In the first 60 days:

  • ~7,300 employee sessions
  • ~13,000 AI-assisted rewrites
  • ~1,500 hours saved — redirected to higher-value work and customer service

Why “agents for all” work

The Brand Communicator succeeded because of deliberate design choices that most AI rollouts skip:

  • Low risk, high relevance. Brand guidance is something nearly every employee needs. It’s a safe sandbox to learn — people can experiment without fear of breaking a critical system or exposing sensitive data.
  • Built-in simplicity. Employees didn’t need to learn prompt engineering. The agent was structured and guided from day one, meeting people where they are — not where technologists wish they were.
  • Psychological safety. Pre-configured guardrails give employees permission to try, fail and try again without consequences. That experimentation loop is exactly what turns first-time AI users into daily users.

One lesson we learned the hard way: If it’s not in the workflow, it won’t stick. Sidebars, sandboxes and innovation labs are useful for discovery, but they don’t drive sustained adoption. The agent has to be the path of least resistance to getting the job done. It’s not just about launching. It’s about landing.

From one agent to an AI-native organization

The Brand Communicator was the first domino for UKG. By making AI useful — immediately, tangibly — for thousands of employees across functions, it converted curiosity into habit. It lowered the psychological barrier for every AI initiative that followed.

Today, 80% of UKG employees use AI in their daily workflows. We have more than 11,500 agents built by employees, for employees, generating approximately 155,000 agent-supported actions per month and saving 24,000 hours monthly. This is what human-AI collaboration looks like at scale.

That kind of adoption doesn’t happen by accident. Three systems made it possible:

  1. An AI hub and idea-to-implementation (I-2-I) framework. Our internal AI Hub is the centralized place where employees explore tools, submit ideas, collaborate and experiment. It prevents duplication, surfaces promising work for scale and feeds governance. Functional champions shepherd ideas from concept to production, keeping momentum without chaos.
  2. A portfolio model for experimentation. We run AI like a venture capital portfolio. Tier one is scale — prioritized use cases with clear ROI and defined outcomes. Tier two is growth — building capabilities that don’t yet exist, always with the customer at the center. Tier three is exploration — time-boxed, 90-day pilots and AI Demo Days that help us quickly decide what to grow, pivot, or archive. Velocity with discipline.
  3. Lightweight, principled governance. Guardrails are baked into the flow of work — not bolted on at the end. Security, privacy, legal checkpoints and a risk checklist are standard. We route teams toward validated enterprise tools and away from ad hoc point solutions. When teams bypass the guardrails, we restrict access — not to punish, but to protect the trust we’ve worked hard to build.

The bottom line

Trust is the invisible infrastructure of AI adoption. It’s built through transparency about intent, honest conversations about job impact, visible upskilling opportunities and letting employees see their peers genuinely benefit. When those conditions exist, adoption doesn’t need to be pushed — it pulls itself forward.

If you want AI to scale across your organization, start simple and start broad. Choose problems that are safe, relevant across roles, and immediately useful. Embed guardrails from the beginning. Measure what actually changes how work gets done — not just what gets launched.

Do that, and AI stops being a mandate. It becomes how your organization works.

This article is published as part of the Foundry Expert Contributor Network.
