How UKG puts AI to work for frontline employees

As organizations rebrand themselves as AI companies, most of the conversation focuses on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota is CIO of UKG, one of the largest HR tech platforms in the market, whose workforce operating platform is used by 80,000 organizations in 150 countries. He explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, who account for 80% of the world’s workforce. This is important because when companies rebrand themselves around AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets and our long history with the frontline workforce position us well for AI-driven workforce transformation.

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requested a shift, a manager would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate shift. With voice agents, AI works directly with the frontline worker, handling background and skills validation, communication, and even workflow execution. The worker can also ask to swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly. With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowdsourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. AI’s impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. IT today is very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user owned the business context, and the developer, who owned the technology, brought that business idea to life. Going forward, the builder has the business context to create the right prompts to let AI do the building, and the human in the loop is no longer the technology builder but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away; they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases: a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about as the workforce changes with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent — both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.

The AI assessment gap: Why your hiring process can’t find the talent you need

The next time someone on your team says, ‘hire an AI engineer,’ stop the conversation.

That title is too vague because it fails to account for critical differences in engineering strengths. Instead, companies need to decide specifically what they need. Is it someone to rapidly prototype AI solutions? Someone to turn a prototype into a production-ready solution? Or someone to design the supporting capabilities and infrastructure to scale it? These are all different skills and require different assessments during the hiring process.

But here’s where companies also fall short. Assessing skills is hard and assessments, as we know them, are broken when it comes to AI. They’re misaligned with what AI roles actually demand. That misalignment is what I call the AI assessment gap.

Where the gap lives

Most technical assessments were built for a pre-AI world: Coding proficiency, algorithms, deterministic system design. These are skills tests. They confirm that an engineer can do the work. But they don’t tell you whether that engineer has the technical taste to make the right decisions when building, scaling or deploying AI systems in production.

In conversations with enterprise engineering leaders, we’re hearing that candidates are now running AI agents during live interviews, getting textbook-perfect answers fed to them in real time. If your assessment can be passed by an AI whispering in someone’s ear, it was never testing for the right thing. Skills can be faked or augmented. Taste can’t.

To see what this looks like, consider this scenario: An enterprise needs someone with deep experience in a specific data platform. A candidate passes the data engineering assessment. They get to the client interview, and the hiring manager says: ‘Tell me about a time you had to make a hard tradeoff in designing a streaming architecture.’ The candidate can define every concept involved, but they don’t have the taste to explain why one approach would be dramatically better than another for a specific context. They’re out.

This happens because most assessment pipelines only test for skills: Can they code? Do they understand the fundamentals? Nobody is systematically testing for technical taste: Can this person make better-than-default decisions about architecture, tooling and approach? That question only surfaces once someone with real experience asks it. By then, everyone has wasted time and the role is still open.

Traditional job postings compound the problem by filtering for ‘5+ years of AI experience,’ which screens out strong candidates because the category itself is only a few years old. What matters at the AI layer isn’t tenure. It’s the depth and specificity of what someone has built, deployed or scaled in production. Meanwhile, seniority at the foundational role level still matters in the ways it always has: A senior engineer brings architectural judgment that can’t be shortcut. The mistake is applying years-of-experience filters to the AI layers, where the work hasn’t existed long enough for tenure to be a meaningful signal.

One of the most telling signals of a broken assessment process: When stakeholders simultaneously complain that assessments are too hard and too easy. That’s not a calibration problem. It means the assessments aren’t measuring the right things in the first place. They’re testing for skills when they should be testing for taste.

Start with the work, not the title

To close the AI assessment gap, decompose the problem before you assess: break the need down across the dimensions that actually determine whether someone can do the job. For example:

Dimension: Role
The question: What technical domain does the work live in?
What it determines: Foundational skills and stack (e.g., backend engineer, Python, PostgreSQL)
How you evaluate: Skills assessment, a project-based or simulation-based filter that confirms core engineering competency

Dimension: Seniority
The question: What level of judgment and autonomy does this work require?
What it determines: Engineering maturity, depth of technical taste, ability to operate under ambiguity
How you evaluate: Experience depth at the role level, i.e., years of practice in the domain and the complexity of systems designed and shipped

Dimension: AI Engagement Pattern
The question: How will this person engage with AI systems?
What it determines: The specific technical taste required (e.g., Prototyper needs taste for sensing value; Builder needs taste for architecture and integration decisions; Scaler needs taste for performance, governance and risk tradeoffs)
How you evaluate: Applied assessments. Not ‘define RAG’ but ‘given this use case, which approach would you choose and why?’ Testing for tradeoff reasoning, not terminology

This decomposition replaces the single job description with a structured picture of what you actually need. It also immediately reveals whether you’re looking for one person or a team. If the project requires rapid prototyping to find value and then a production build, you probably need engineers with different profiles, not one ‘AI engineer’ who’s supposed to do both.
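To make that structured picture concrete, here is a minimal sketch of how the decomposition might be captured as data rather than as a job title. It is illustrative only: the three dimensions and example values come from the breakdown above, while the class names, fields, and the needs_multiple_hires check are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical structures for decomposing one hiring need across the three
# dimensions described above: role, seniority, and AI engagement pattern.

@dataclass
class RoleRequirement:
    domain: str              # e.g., "backend engineer"
    stack: list[str]         # foundational skills, e.g., ["Python", "PostgreSQL"]
    assessment: str = "project-based skills simulation"

@dataclass
class SeniorityRequirement:
    level: str               # e.g., "senior"
    depth_signals: list[str] # complexity of systems designed, shipped, scaled

@dataclass
class AIEngagementRequirement:
    pattern: str             # "prototyper", "builder", or "scaler"
    taste_probes: list[str]  # applied, tradeoff-style questions, not definitions

@dataclass
class HiringNeed:
    role: RoleRequirement
    seniority: SeniorityRequirement
    ai_engagement: list[AIEngagementRequirement] = field(default_factory=list)

    def needs_multiple_hires(self) -> bool:
        # If one opening spans more than one engagement pattern (say, rapid
        # prototyping and production scaling), it is probably a team,
        # not a single "AI engineer".
        return len(self.ai_engagement) > 1


need = HiringNeed(
    role=RoleRequirement("backend engineer", ["Python", "PostgreSQL"]),
    seniority=SeniorityRequirement("senior", ["designed multi-region data pipeline"]),
    ai_engagement=[
        AIEngagementRequirement(
            "builder",
            ["Given this use case, which retrieval approach would you choose and why?"],
        ),
        AIEngagementRequirement(
            "scaler",
            ["Where would you expect this agent workflow to break at 10x load?"],
        ),
    ],
)
print(need.needs_multiple_hires())  # True: prototype-to-production likely needs two profiles
```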

Three things most enterprises get wrong

  1. They test for skills when they should test for taste. Most assessments confirm that an engineer can write code and define concepts. They don’t test whether that engineer can make the architectural and tooling decisions that actually determine project success. An engineer who knows what agentic search is and an engineer who knows when agentic search is the right choice for a specific problem are two completely different hires. The first passes your skills test. The second delivers in production.
  2. They conflate skills with experience. A skills assessment tells you if someone can do the work. An experience validation tells you if someone has done the work in the specific context the job demands. These require completely different evaluation methods. When companies try to test both with a single instrument, they get the ‘too hard and too easy’ paradox: The assessment is simultaneously screening out competent people and letting through candidates who can’t perform. Seniority and years of experience are meaningful at the role level, where 10 years of backend engineering builds real architectural judgment and compounds technical taste. They’re much less meaningful at the AI engagement layer, where the work itself is only a few years old and depth of hands-on exposure matters more than calendar time.
  3. They treat assessment as a snapshot. The traditional model is a one-time gate: Pass or fail, in or out. In an AI world where skills are evolving monthly, that approach breaks down fast. Six months ago, almost nobody was shipping production code with agentic tools like Claude Code. Model Context Protocol, which lets AI systems plug into enterprise tools and data sources, was barely on anyone’s radar. Now enterprises are hiring for these skills specifically. Six months from now, the list will change again.

That means an assessment built in January is already partially stale by June. Companies that treat assessment as a living system, continuously updated by performance signals from real engagements, will consistently field better talent than those running the same tests they built a year ago.

The reskilling imperative

The reality is, there is no way to close this gap through hiring alone. The supply of engineers who already have the technical taste for AI work is a tiny fraction of what the market demands. For example, since the launch of ChatGPT in 2022, demand for roles that require more analytical, technical or creative work has increased by 20%.

Which means enterprises have to reskill and upskill existing workforces. And without a targeted approach mapped to actual needs, AI upskilling efforts often fail, leaving employees unsupported and initiatives stalled.

This is where the multi-dimensional model pays off beyond hiring. The same framework that powers your talent acquisition also powers your training strategy. Assessment results don’t just filter candidates in or out. They generate a heat map of where your workforce is strong and where it’s thin, across every dimension: Role competency, seniority depth and the specific technical taste required for prototyping, building or scaling AI systems. That heat map becomes your training roadmap.
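As a rough illustration of that roll-up, the sketch below aggregates normalized assessment scores into a heat map by team and dimension, then lists the weakest cells as training priorities. The data shape, function names, and the 0.5 threshold are assumptions for the example, not a prescribed method.

```python
from collections import defaultdict

def build_heat_map(results: list[dict]) -> dict[tuple[str, str], float]:
    """Average normalized assessment scores (0.0-1.0) by team and dimension.

    Each result is assumed to look like:
    {"team": "payments", "dimension": "scaler_taste", "score": 0.4}
    """
    cells: dict[tuple[str, str], list[float]] = defaultdict(list)
    for r in results:
        cells[(r["team"], r["dimension"])].append(r["score"])
    return {key: sum(scores) / len(scores) for key, scores in cells.items()}

def training_priorities(heat_map: dict[tuple[str, str], float],
                        threshold: float = 0.5) -> list[tuple[str, str]]:
    # Cells below the threshold are where the workforce is thin;
    # these become the training roadmap, weakest first.
    weak = [(cell, score) for cell, score in heat_map.items() if score < threshold]
    return [cell for cell, _ in sorted(weak, key=lambda item: item[1])]

results = [
    {"team": "payments", "dimension": "builder_taste", "score": 0.7},
    {"team": "payments", "dimension": "scaler_taste", "score": 0.3},
    {"team": "data", "dimension": "scaler_taste", "score": 0.45},
]
heat = build_heat_map(results)
print(training_priorities(heat))  # [('payments', 'scaler_taste'), ('data', 'scaler_taste')]
```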

Most companies skip this entirely and jump straight to ‘let’s buy an AI training program.’ Without that foundation, even the best training program is solving the wrong problem.

Ever ready

In the world of AI, the most critical skill might be knowing that you don’t and can’t possibly know everything. Or even what’s coming next. The roles we need today will look different in six months. The skills taxonomies we build now will need constant revision. The assessments we deploy this quarter will need recalibration by next quarter.

Companies that accept this reality and build nimble, multi-dimensional approaches to talent assessment will find something valuable: The technical taste they need already exists in their workforce, hiding behind outdated job descriptions and misaligned tests. CIOs must actively audit these descriptions to eliminate the traditional experience filters that mask the latent talent already sitting on their teams. The others will keep posting for ‘AI engineers’ and wondering why nobody who gets hired can actually do the job.

This article is published as part of the Foundry Expert Contributor Network.

The AI workplace paradox: Higher productivity, higher anxiety

Workers are facing a conundrum: They worry about the potential for their displacement by AI even as it dramatically speeds up their own productivity.

According to a new survey from Anthropic, workers in roles most likely to be taken over by AI (developers or IT workers, for instance) recognize their precarious position. Yet, perhaps naturally, they readily adopt the tools that could take their jobs, and see first-hand how well they work.

This measurement is fundamentally different from the way others are gauging AI job displacement, noted Thomas Randall, research director at Info-Tech Research Group.

While macro reports, such as those from Goldman Sachs, the International Monetary Fund (IMF), or the World Economic Forum (WEF), are asking what share of existing job tasks AI could theoretically perform in the future, “Anthropic is measuring qualitative experiences of workers in the present,” he pointed out. This “tells us how people are navigating this landscape in real time.”

The paradox of AI in the workforce

Anthropic’s survey of 81,000 Claude users gauged people’s “visions and fears” around advances in AI, and weighed these findings against the company’s own measurement of jobs most vulnerable to AI displacement. This was based on Claude usage data; jobs are identified as more exposed when associated tasks are significantly performed on the platform, in work-related contexts, and take up a larger share of a role.
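The article does not disclose Anthropic’s actual scoring formula, but the description above suggests exposure is a function of how much of an occupation’s task mix is observed being done on the platform in work contexts. The sketch below is a hypothetical illustration of that idea only; the field names, weights, and example numbers are invented, not the company’s method.

```python
def exposure_score(tasks: list[dict]) -> float:
    """Hypothetical exposure estimate for one occupation.

    Each task dict carries:
      share_of_role      - fraction of the occupation's time the task represents
      platform_usage     - how often the task is observed being done with the AI tool (0-1)
      work_context_ratio - fraction of that usage that is work-related rather than personal (0-1)
    """
    score = 0.0
    for t in tasks:
        score += t["share_of_role"] * t["platform_usage"] * t["work_context_ratio"]
    return score  # 0.0 (no exposure) to 1.0 (entire role observed on the platform at work)

programmer = [
    {"share_of_role": 0.5, "platform_usage": 0.8, "work_context_ratio": 0.9},  # writing code
    {"share_of_role": 0.3, "platform_usage": 0.6, "work_context_ratio": 0.9},  # debugging
    {"share_of_role": 0.2, "platform_usage": 0.1, "work_context_ratio": 0.5},  # planning meetings
]
print(round(exposure_score(programmer), 2))  # 0.53 under these invented numbers
```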

Some occupations at risk include computer programmers, data entry keyers, market researchers, software quality assurance analysts and testers, information security analysts, and computer user support specialists.

Overall, one-fifth of respondents expressed concern about displacement, noting that their job, or at least aspects of it, is being taken over by automation. Those in jobs identified as most exposed readily recognized that fact, voicing worry three times as often as those in less at-risk positions. One software engineer remarked: “like anyone who has a white collar job these days, I’m 100% concerned, pretty much 24/7 concerned, about losing my job eventually to AI.”

Early-career respondents were also more nervous than others.

At the same time, those in the highest-paid occupations reported the largest productivity gains when using AI, most notably in their ability to perform new tasks, which 48% of users cited. In addition, 40% of workers said the technology helped speed up their work, and a little more than 10% said it improved the quality of their work.

In general, enterprise usage of AI is “actually quite consistent,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. Teams are using the technology “where information is abundant and time is limited,” such as drafting documents and code, summarizing content, responding to customer queries, and navigating internal systems.

Is AI actually creating more work?

Still, not everyone thinks AI makes their jobs easier or faster. In some cases, people felt it made their work harder; for instance, project managers are assigning tickets for issues that are much more difficult to solve, Anthropic noted.

Gogia agreed that, even when tasks become easier, scope and responsibilities expand, and roles can absorb adjacent tasks. This results in a “redistribution of effort,” rather than a reduction of effort.

“Faster generation means higher expectations on quality,” he said. More output feeds into decision pipelines that are already constrained. “In some cases, the system becomes heavier, not lighter.”

Delayed impact on enterprises

The market is rewarding those who can integrate AI into complex workflows to do more, faster, and often with better outcomes, Gogia noted. However, the most exposed tasks, including documentation, basic coding, routine analysis, and structured support work, often “sit at the base of the experience ladder.”

These very tasks have traditionally given entry-level workers a way in, and automating them reduces the urgency for companies to hire those workers. “What you begin to lose is not the job,” said Gogia. “It is the path into the job.”

This can have a delayed impact; enterprises may not realize until years later that they do not have enough mid-level experts because they didn’t bring enough people in at lower levels. As AI plays a greater role in the workplace, there must be a “conscious effort” to rethink how people enter and grow, Gogia said. “New pathways need to be created, and they need to be deliberate.”

How enterprise leaders should adjust

As is often the case, sentiment moves faster than structural change, Gogia pointed out. Workers feel the shift almost immediately, but organizations take longer to adjust hiring, redesign roles, and rethink workforce structures.

“This is why expectations can become misaligned,” he noted. The reality is that most enterprises have introduced AI into existing ways of working without fundamentally changing them. Acceleration occurs in unchanged systems that still carry the same dependencies, approval chains, and coordination challenges.

Ultimately, Gogia advised, leaders must approach the shift with “intentional design.” This requires clarity, he emphasized; people need to understand how their work is expected to change. What will be enhanced? What will be reduced? Where should they focus their development?

Baselines are moving: Roles may begin to look “oversized” as what used to be considered a full day’s work begins to look like half a day’s work, or what used to be considered efficient begins to look average. “AI is changing how work is done, but more importantly, it is changing what work expects from people,” said Gogia.

As well, Info-Tech’s Randall pointed out that workers who experience AI expanding what they can do by performing tasks previously outside their competence appear to relate to AI more positively than those who experience it as doing their existing job faster. So, he advised, “tech leaders should design AI deployment around capability extensions.”

Along with goal setting, managers must have support, Gogia emphasized. They set expectations and interpret strategy, and when they’re not properly equipped, “even the best tools will fall short,” he said. Measurement must also evolve; enterprises need to look at quality, sustainability, and capability development over time.

“What we are witnessing right now is not a sudden disruption,” said Gogia. “It is a gradual shift that is becoming impossible to ignore.”

This article originally appeared on Computerworld.

AI is scoring your job candidates. Can you explain how?

Somewhere in your organization’s hiring stack, there is probably an AI system producing candidate scores. If you’re a leader who helped evaluate or approve that system, here’s a question worth sitting with: If one of those scores got challenged, by a candidate, an internal audit or a regulator, could your team explain how it was produced?

Not “the vendor said it’s accurate.” Not “the model was trained on historical data.” A specific, documented explanation of what criteria were evaluated, how the candidate performed against them and why those criteria are job-relevant.

For a growing number of organizations using AI video interview scoring tools, the honest answer is no. And as regulatory frameworks targeting employment AI move from guidance to enforcement, that answer is a risk.

What the system is actually optimizing for

Before asking how accurate an AI scoring system is, the right question is what it is optimizing for.

Many video interview scoring platforms evaluate tone of voice, pace, eye contact, facial expressions and fluency alongside, or in some cases instead of, the actual content of candidate responses. The underlying assumption is that these signals correlate with job performance or cultural fit. The evidence for that assumption is weak. The evidence that measuring these signals introduces systematic, legally significant bias is much stronger.

Several major players in this space removed facial analysis features after regulatory pressure and public scrutiny. That acknowledgment — that criteria advertised as objective were neither reliable nor fair — should raise a harder question. If those criteria were in production and no one caught it until outside pressure forced a change, what else is still being measured that shouldn’t be?

This is not a hypothetical risk. The EEOC has made it clear that employers are liable under Title VII for discriminatory outcomes from AI hiring tools, regardless of whether those tools were built in-house or purchased from a vendor. New York City’s Local Law 144 requires annual independent bias audits of automated employment decision tools and public disclosure of results. Illinois requires notice and consent before AI is used to evaluate video interviews. The EU AI Act, whose high-risk AI provisions take full effect this August, explicitly classifies employment AI as high-risk, with binding requirements for transparency, explainability and human oversight.

The common thread: Can you explain what your AI is measuring, and can you demonstrate that it’s measuring the right things?

The accountability problem at the executive level

For technology leaders, this is where the conversation becomes concrete.

Consider the scenario: A hiring decision gets challenged by a candidate, an internal audit or a regulator. The question is how the decision was made. “The AI scored them lower” is not a defensible answer in any of those contexts. It can’t be traced to specific job-relevant criteria. It can’t be explained to the candidate. It won’t satisfy an auditor. And if the system’s logic is proprietary and opaque, the organization has no way to produce a satisfying answer even if it wants to.

The organizations that adopt black-box scoring tools often do so with the right intentions: To reduce human bias and create a more consistent process. Those are legitimate goals. But a system whose internal logic can’t be questioned, explained or audited just obscures bias. It doesn’t reduce it. And when bias becomes difficult to see, it becomes more difficult to address.

This is a pattern you’ll recognize from other domains. When a system produces outcomes that look plausible but are wrong in ways that aren’t immediately visible, the failure compounds before it surfaces. The cost of discovering it late is almost always higher than the cost of building it right from the start.

What a defensible architecture looks like

There is a meaningful difference between AI that scores interviews and AI that scores interviews in a way that can be explained and defended. The distinction is structural.

Defensible scoring starts before any candidate records a response. It starts with the job. What competencies does this role require, and what does strong performance against each competency look like? From those answers, explicit rubrics are developed: criteria that describe what high-quality, adequate and weak responses look like for each dimension being evaluated. Those rubrics are reviewed and approved by the hiring team before scoring begins.

When responses come in, the AI evaluates what candidates actually said against those pre-defined criteria. Not tone. Not pacing. Not facial expression. What they communicated, measured against a standard the hiring team set, and can explain. Criterion-level scores roll up to an overall assessment, and every part of that chain is visible and auditable.
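A minimal sketch of what that auditable chain could look like, assuming each criterion carries a hiring-team-approved rubric and a weight set before scoring begins; the class and field names are hypothetical, and how a response is scored against a rubric is left to whatever evaluation model the organization uses.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    competency: str   # e.g., "incident communication"
    rubric: str       # hiring-team-approved description of strong/adequate/weak answers
    weight: float     # relative importance, fixed before any candidate is scored
    approved_by: str  # who signed off on the rubric (part of the audit trail)

@dataclass
class CriterionScore:
    criterion: Criterion
    score: float      # 0.0-1.0, based only on what the candidate said
    evidence: str     # quoted excerpt from the transcript supporting the score

def overall_score(scores: list[CriterionScore]) -> float:
    # Weighted roll-up: every number in the final score traces back to a
    # specific criterion, a specific rubric, and a specific piece of evidence.
    total_weight = sum(s.criterion.weight for s in scores)
    return sum(s.score * s.criterion.weight for s in scores) / total_weight

def audit_report(scores: list[CriterionScore]) -> str:
    lines = [
        f"{s.criterion.competency}: {s.score:.2f} "
        f"(rubric approved by {s.criterion.approved_by}; evidence: '{s.evidence}')"
        for s in scores
    ]
    lines.append(f"Overall: {overall_score(scores):.2f}")
    return "\n".join(lines)
```

Because each criterion-level score carries the rubric, the approver, and the supporting evidence, the answer to “how was this score produced?” is a report rather than a shrug.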

This architecture has an important secondary property: The human remains meaningfully in the loop. The AI generates a starting point by identifying relevant competencies and drafting rubric criteria from the job description, but the standard is owned by the people responsible for the hire. If a hiring manager can’t look at a scoring rubric and explain what it’s evaluating and why, it should not be deployed. That is not a burden on the tool. It is the minimum condition for using it responsibly.

Four questions for the governance conversation

For leaders evaluating or overseeing AI video interview tools, four questions surface most of what matters.

  1. What specifically is the system scoring? Request an explicit list of evaluation criteria. If the answer includes anything beyond the content of candidate responses, ask for the validation data that connects those criteria to job performance outcomes.
  2. Are the criteria derived from job requirements? Generic rubrics applied uniformly across roles create standardized evaluation, not structured evaluation, which is different. Legitimate scoring starts from the specific competencies required for the specific role.
  3. Can the criteria be reviewed, modified and approved before scoring begins? If the rubrics are fixed and opaque, the organization is not in control of its own evaluation standard. That is a governance gap.
  4. Can any score be explained to a candidate or a regulator? This is the accountability test. If the explanation requires “the AI said so” rather than pointing to specific, documented criteria and how a candidate performed against them, the process will not withstand scrutiny.

Well-designed systems answer these questions directly. The ones that can’t are telling you something important about the tradeoffs their creators made.

Why this moment matters

The EU AI Act deadline is in August, forcing organizations with global operations or EU-based candidates to evaluate their tech. But getting this right isn’t just regulatory; it’s practical.

When hiring teams can see exactly how a score was produced, they use it. When they can’t explain it, they override it or work around it, and the efficiency gains disappear. The tools that will last in enterprise hiring stacks are the ones that make decisions transparently enough that the humans responsible for those decisions trust them.

That’s not a high bar. But it requires being precise about what any given AI system is really measuring. And honest about whether that’s what you actually want to know.

This article is published as part of the Foundry Expert Contributor Network.

IBM’s government DEI settlement could increase pressure to avoid tech hiring diversity

IBM has agreed to settle a complaint from the US Justice Department around its initiatives to diversify its workforce and to encourage hiring of underrepresented groups, contrary to a presidential directive. The federal contractor also agreed to pay the government roughly $17 million.

The pressure from the Trump administration to eliminate workforce diversification efforts, typically known as DEI (Diversity, Equity, and Inclusion) programs, has persuaded many companies, including Meta, Google, Amazon, Salesforce, Intel, OpenAI, Tesla and Zoom, to publicly back away from those diversification efforts. A few companies, including Apple, Microsoft, Nvidia and Oracle, have held firm in favor of DEI, for the most part. 

The government’s official position states that age, race, sexual preference, and gender should have zero impact on hiring decisions. Diversification proponents counter that workforce composition will stay stagnant unless explicit efforts are made to diversify.

Focus of settlement

The Justice Department settlement focused mostly on IBM’s role as a government contractor.

The government filing said IBM made “false claims” and “false statements” to the government regarding hiring practices in connection with IBM’s government contract work.

“As a federal contractor, IBM was required to comply with anti-discrimination requirements as set forth in Title VII of the Civil Rights Act of 1964,” the settlement said, adding that IBM “discriminated against employees during employment and applicants for employment because of race, color, national origin, or sex, and failed to treat employees during employment without regard to race, color, national origin, or sex.”

Beyond hiring practices, the government also opposed hiring goals that encouraged diversity, including “developing race and sex demographic goals for business units and taking race, color, national origin, or sex into account when making employment decisions to achieve progress towards those demographic goals” and using those same criteria to offer “certain training, partnerships, mentoring, leadership development programs, educational opportunities or resources, and/or similar opportunities only to certain employees.”

The agreement also said that the deal “is neither an admission of liability by IBM nor a concession by the United States that its claims are not well founded” and added that IBM agreed to the settlement “to avoid the delay, uncertainty, inconvenience and expense of protracted litigation.”

Acting US Attorney General Todd Blanche issued a statement saying, “racial discrimination is illegal, and government contractors cannot evade the law by repackaging it as DEI.”

IBM did not respond to an email seeking comment.

Companies can work around biases

Bryan Howard, the CEO of recruiting strategy consulting firm Peoplyst, said he would encourage enterprises to simply move their workforce diversification efforts earlier in the recruitment process. 

“There’s a big difference between candidate pool and the selection process,” Howard said, suggesting that there are no federal rules limiting outreach choices. If, for example, a company wanted to increase workforce representation for a particular group, then the job notice should be focused on universities and other places where that group is well represented.

“Expand your pool and do not contract it. Fish in the ponds where those people are,” Howard said. “Increase diversity by simply recruiting from diverse sources.”

Howard also said the government position leverages last year’s US Supreme Court decision in Ames v. Ohio Department of Youth Services, where the court held that reverse discrimination is illegal. 

Complicating diversification efforts today are two popular recruiting/hiring tools pushed by HR: Using genAI to filter a massive number of applicants and only present a small handful to the hiring managers to choose from; and referral programs in which employees are offered cash incentives if they recommend job candidates who are eventually hired.

AI’s bias is to seek job candidates whose profiles most closely resemble those of the current workforce. In other words, AI wants to learn everything it can about who the company has hired before, to help it determine the attributes to look for.

Referral programs, Howard said, also tend to attract people with the same characteristics as the existing workforce. Even though those referral hires tend to stay with the company longer, “if you have a population that is already skewed and that is the population recruiting, the existing bias will likely continue.”

Settlement could hurt recruitment efforts

Consultant Brian Levine, executive director of FormerGov, said it is difficult to interpret the settlement as anything other than opposing DEI efforts. 

The US Justice Department, where Levine once worked as a federal prosecutor, “has issued a multi-million dollar penalty for company policy that seemed to be intended to encourage diversity,” he said. “As with Anthropic, in this new world, sometimes organizations may be forced to choose between ‘the law’ as it is currently being interpreted by some, and a good faith effort to positively influence society, or at least to minimize societal harm.”

Levine said some enterprises may try to overcompensate to keep the current administration happy.

“Fearing financial penalties, some companies that work with the federal government will now choose to ensure their DEI program is fully dismantled,” Levine said. “Other companies may choose to cease working with the federal government and/or may choose to keep, or even double down, on their DEI program. If Anthropic is any indication, these latter companies may ultimately be rewarded in the market.”

Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, added that this settlement might end up hurting tech recruitment efforts. 

“I think that this will force organizations to reframe their DEI programs to not upset the DOJ, which could have an impact on hiring of individuals in certain classes and could result in overall less diversity,” Villanustre said. “Diversity is an important part of building resilient, successful organizations, so this could have a broader impact than just the one at hiring time.”
