Visualização de leitura

How UKG puts AI to work for frontline employees

As organizations rebrand themselves as AI companies, most of the conversation is focused on knowledge workers rather than the people in retail, manufacturing, and healthcare who can benefit from AI just as much. Prakash Kota, CIO of UKG, one of the largest HR tech platforms in the market, which delivers a workforce operating platform utilized by 80,000 organizations in 150 countries, explains how his company uses agentic AI, voice agents, and a democratized innovation framework to transform the frontline worker experience, and why the CIO-CHRO partnership is critical to making it stick.

How do you leverage AI for growth and transformation at UKG?

UKG is one of the largest HR, pay, and workforce management tech platforms in the market, and our expertise is in creating solutions for frontline workers, which account for 80% of the world’s workforce. This is important because when companies rebrand themselves as AI for knowledge workers, they’re not talking about frontline workers. But people in retail, manufacturing, healthcare, and so on also benefit from AI capabilities.

So the richness of our data sets, and our long history with the frontline workforce, positions us well for AI driven workforce transformation. 

What are some examples?

We use agentic AI for dynamic workforce operations, which shows us real-time labor demand. Our customers employ thousands of frontline workers, and the timely market insights and suggested actions we give them are new and valuable.

We also provide voice agents. Traditionally, when a frontline worker requests a shift, managers would review availability, fill out paperwork or update scheduling software, and eventually offer an appropriate job. With voice agents, AI works directly with the frontline worker, going through background and skills validation, communication, and even workflow execution. The worker can also ask if they can swap shifts or even get advice on how to make more money in a particular month. This is where AI changes the entire frontline worker experience.

We also launched People Assist, an autonomous employee support agent. Typically, when an employee is onboarded, IT and HR need to trigger and approve workflows. People Assist  not only tracks workflows, but also performs those necessary IT and HR onboarding activities so new employees are productive from day one.

What framework do you use to create these new capabilities?

For internal AI usage for our own employee experience, we use an idea-to-implementation framework, which involves a community of UKG power users who are subject matter experts in their area. Ideas can come from anybody, and since we started nine months ago, more than 800 ideas have been submitted. The power users set our priorities by choosing the ideas that will make the most impact.

Rather than funneling ideas through a small central team — a linear process that kills momentum — we’ve democratized innovation across the business. We give teams the governance frameworks, change models, and risk guardrails they need to move quickly.  With AI, the most important thing isn’t to launch, but to land.

But before we adopted the framework, we defined internal personas so we could collaborate with different employee groups across the company, from sales to finance.

With the personas and the framework, we can prioritize ideas by persona, which also facilitates crowd sourcing. You’re asking an entire persona which of these 10 ideas will make their lives better, rather than senior leaders making those decisions for them.

Why do so many CIOs focus on personas for their AI engine?

Across the enterprise, every function has a role to play. We hire marketing, sales, and finance for a particular purpose. Before AI, we gave generic packaged tools to everyone. AI allows us to build capabilities to make a specific job more effective. Even our generic AI tools are delivered by persona. Its impact on specific roles is the reason personas are so important right now. Our focus is on the actual jobs, the people who do them, the skills and tasks needed, and the outcomes they want to achieve.

We know our framework and persona focus work from employee data. In our most recent global employee engagement survey, 90% said they’re getting the right AI tools to be effective. For the AI tools we’ve launched broadly across the company, eight out of 10 employees use them. For me, AI isn’t about launching 10,000 tools, because if no one uses them, it’s just additional cost for the CIO and the company.

Is the build or buy question more challenging in this nascent stage of AI?

The lifecycle of technology has moved from three years to three hours, so whenever we build at UKG, we use an open architecture, which allows us to build with a commercial product if one comes on the market.

Given the speed of innovation, we lean toward augmentation rather than build. There are areas, like our own native products, where a dedicated engineering team makes sense. But for most of our AI capabilities — customer support and voice agents, for example — we work with our vendor partners. We test and learn with multiple vendors, and decide on one usually within two weeks.

This is what AI is giving all CIOs: flexibility, rapid adoption, interoperability, and the ability to quickly switch vendors. It’s IT that’s very different from what it used to be.

Given the shift to augmentation, how will the role of the software engineer change?

For software builders, business acumen — the ability to understand context — is no longer optional. In the past, the business user would own the business context, and the developer, who owns the technology, brings that business idea to life. Going forward, the builder has the business context to create the right prompts to let AI do the building, and the human in the loop is no longer the technology builder, but the provider of context, prompts, and validation of the work. So the engineer doesn’t go away, however they now finish a three-week scope of work in hours. With AI, engineers operate at a different altitude. The SDLC stays, but agility increases where a two-week concept compresses into two days.

At UKG, you’re directly connected to the CHRO community. What should they be thinking about how the workforce is changing with AI?

The best CHROs are thinking about the skills they’ll need for the future, and how to train existing talent to be ready. They’re not questioning whether we’ll need people, but how to sharpen our teams for new roles. The runbooks for both IT and HR are evolving, which is why the CIO-CHRO partnership has never been more critical to create the right culture for AI transformation.

CIOs can deliver a wealth of employee data like roles, skillsets, and how people spend their time. And as HR leaders help business leaders think through their roadmap for talent —  both human and AI — IT leaders can equip them with exactly that intelligence.

What advice would you give to CIOs driving AI adoption?

Invest in AI fluency, not just AI tools. Your people don’t need to become data scientists, but they do need a new kind of literacy — the ability to work alongside AI, question its outputs, and know when to override it. That’s a training and culture investment, not a software investment.

And redesign work before you redeploy people. Don’t just drop AI into existing workflows. Use this moment to ask what work really matters. AI is forcing us to have the job design conversations we should’ve had years ago, so it’s important to be transparent about the journey. What’s killing workforce trust now is ambiguity. Your people can handle hard truths but not silence. Leaders who communicate openly about where AI is taking the organization will retain the talent they need to get there.

Taiwan High Speed Rail Hit by Spoofing Attack That Stops Three Trains

During the recent Qingming Festival holiday, the Taiwan High Speed Rail (THSR) experienced a severe cybersecurity incident that disrupted major transit operations. Three trains were suddenly forced into emergency stops, causing a 48-minute delay for passengers. Authorities have now determined that the disruption was not a mechanical failure but a targeted radio signal spoofing attack […]

The post Taiwan High Speed Rail Hit by Spoofing Attack That Stops Three Trains appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

CISOs step up to the security workforce challenge

A robust cybersecurity program needs a range of skilled people, yet many CISOs continue to face an ongoing skills shortage — and the squeeze may only get worse as AI gains traction.

Some 95% of cybersecurity practitioners and decision-makers noted at least one security skills gap at their organization, with almost 60% citing critical or significant skills gaps, according to ISC2’s 2025 Cybersecurity Workforce Study.

AI is the most pressing skill need, followed by cloud security, risk assessment, application security, security engineering, and governance, risk, and compliance (GRC), the survey found.

There are no simple solutions for a profession that requires passion, curiosity, and a thirst for defending systems. Such professionals are a rare breed.

“You need to have a special mindset,” says Juan Gomez-Sanchez, VP of cyber resilience at McLane Company.

“While IT people are obsessed with how things work, security people are obsessed with how things break, and people who are truly effective and passionate about that can be difficult to find,” says Gomez-Sanchez.

Add to that the fact that the cyber degree studies are challenging, technology is changing rapidly, and the profession is still comparatively young, and the true extent of the problem becomes clear.

If CISOs can’t hire the skills they need, some will look toward in-house training and development to foster the expertise they need.

“Hiring certain types of security professionals can be very difficult because the skills are not held by a lot of people, so I look for someone who’s got a solid security foundation in one or more other areas and transition them,” says Keith Turpin, CISO of The Friedkin Group.

This is its own challenge, requiring time and a good deal of unlearning certain things and honing that ‘how to break’ security mindset. For example, Turpin says, upskilling “someone who’s competent in networking, server administration, or software development to the equivalent security role takes an additional two years.”

Turpin has found that just establishing the security mindset can take up to a year within that timeframe. “Instead of thinking, ‘How do I keep it going,’ as the security person it’s thinking, ‘How can it go wrong.’ It’s a different approach,” he says.

“If I can find someone who’s got the right drive, the right people skills, they’re a good cultural fit, and they have the potential, I can turn them into a good technologist,” adds Turpin, who like Gomez-Sanchez will be inducted into the CSO Hall of Fame this year.

Gomez-Sanchez and Turpin are speaking at the CSO Cybersecurity Awards & Conference, May 11-13. Reserve your place.

AI changes the equation

And then there’s AI. When it comes to security, AI may help partially offset cyber skills shortages by automating certain tasks, but it also ramps up cyberattack volumes and expands the organizational attack surface, without fixing CISOs’ ongoing talent pipeline problems. In fact, AI may end up worsening the structural skills shortage.

“You can have 100, 1,000, 10,000 instances of AI doing the work of enabling attacks at much greater scale, including against smaller, less protected targets because they’re now within reach because the barrier is lower,” says Turpin.

This increases the pressure on defenders, putting more pressure on the workforce challenge, even as AI helps automate some tasks. But it’s not going away and will only increase in importance for both attackers and defenders.

“I’m encouraging my teams to look for opportunities to leverage AI and look at how our vendors are leveraging AI,” he says.

“This is what we’re going to be dealing with five years down the road. It’s going to be the center of technology so we can’t afford not to learn this,” he adds.

Reducing the organizational risk of skills shortages

Skills shortages are more than just an inconvenience; they pose organizational risks on par with threats and malicious attacks, says Gomez-Sanchez, who views them “much the way that you think about threat actors and vulnerabilities.”

“Your ability to execute is limited by the amount of people you have to actually do the work,” he explains.

As a result, Gomez-Sanchez encourages CISOs to view the skills gaps and talent shortages as a first-class security risk that needs to be managed as a KPI for the security function. “Our ability to attract and retain good talent is a major measure of capability,” he says.

Being structural rather than temporary, skills gaps place significant pressure on CISOs’ sourcing decisions. “Security people may choose to do things differently, especially as it relates to insourcing or outsourcing because of the talent shortage,” Gomez-Sanchez notes.

By the same token, staffing constraints can shape architecture and tooling choices. For example, Gomez-Sanchez adds, a host of best-of-breed point tools instead of a more integrated platform usually requires more headcount and expertise to stitch together.

Gomez-Sanchez also gives the example of adopting a single hyperscaler versus a multicloud strategy and the increase in human workload and skills required to secure it. “Ultimately, you want to leverage native controls within the hyperscaler, and that requires you to have specialized skills in each one of those,” he says.

CISO have also looked to automation to absorb some headcount pressure, but doing so isn’t always a simple fix. Gomez-Sanchez sees agent-enabled automation as a means for providing more firepower for developers and analysts, among other roles. But the reality of agentic AI capabilities for cybersecurity remains a work in progress.

What’s clear is that persistent talent shortages are forcing CISOs to rethink hiring and training as one of numerous ways to reduce the risk that comes with the skills gap. This entrenched problem — and CISOs’ attempts to address it — will also have a significant impact on the decisions security leaders will make regarding cyber architecture, platforms, processes, and AI use ahead.

The cyber talent gap is putting increasing pressure on the cyber agenda, and your peers are already adapting. Hear Juan Gomez-Sanchez, Keith Turpin, Jen Spencer, and other leading CISOs share what’s working at the CSO Cybersecurity Awards & Conference, May 11-13. Secure your seat before it fills up.

AI won’t fix tech talent gaps — but YOU can

Every CIO I talk to — and I talk to a lot of them — agrees that skills-first hiring makes sense. And with AI now embedded in nearly every stage of the hiring process, from resume screening to candidate matching, many assume the technology will finally make it happen at scale.

It won’t. AI can accelerate hiring decisions, but it can’t fix the underlying systems that power those decisions.

Despite initial progress in removing college degree requirements from job postings, many organizations are still getting it wrong — and AI is giving them new ways to get it wrong faster. Agreeing on a principle isn’t the same as operationalizing one. Even when there’s a skills-first hiring strategy in place, execution fails if IT, HR and business leaders aren’t aligned on outcomes, accountability and measurement. When AI screening tools are layered on top of misaligned systems, the result isn’t smarter hiring; it’s automated bias with a veneer of objectivity.

Why skills-first hiring became a buzzword

The idea of prioritizing skills in hiring decisions isn’t new. Competency-based hiring has been discussed in talent management circles since the 1970s. Over the last two decades, the growing technology skills gap, the explosion of non-traditional learning pathways and the broader recognition of “degree inflation” pushed skills-based hiring into the mainstream. Large employers publicly dropped degree requirements. States followed. Everyone was buzzing about skills-first hiring.

But buzz doesn’t change systems. Data from Indeed showed a decrease in job postings with college degree requirements between 2019 to 2024, but by November 2025, the number swung back up, nearly erasing the gains of the previous five years. 

Meanwhile, the skills that matter most now — prompt engineering, AI tool fluency, the ability to scope and complete AI-augmented projects — are being developed outside traditional degree programs. That makes degrees an even worse proxy for career readiness.

Announcing that an organization is “skills-first” without redesigning the infrastructure that surrounds hiring — job descriptions, applicant screening, recruiter training, interview rubrics and onboarding frameworks — doesn’t change practices. A recent survey by the University of Phoenix found that 69% of hiring decision makers believe there’s still too much focus on college degrees, with little clarity on what they should evaluate instead.

The 3 most common failure points

From my experience in working with CIOs on their entry-level talent pipelines, skills-first initiatives tend to break down in one or more of these places: job descriptions, screening tools and internal skepticism.

First, take job descriptions. Hiring managers tend to default to historical templates — pasting in degree requirements and years-of-experience thresholds that were never validated against actual job performance and are even less relevant with AI in the mix.

Second, screening tools. Nearly 90 percent of companies are using some form of AI to screen candidates, expecting greater efficiency and less bias. But AI screening tools learn from existing hiring data — which, if biased, just means that bias is now automated. Patterns in successful candidates’ backgrounds get baked into future decisions, except now, these decisions appear “data-driven” and neutral instead of more obviously predicated on certain hiring managers’ preference for college graduates.

The third failure point is internal skepticism — and the training gap that feeds it. According to the survey by the University of Phoenix, one in four non-HR managers receives no training before interviewing job candidates, even when the final hiring decision is theirs. Without shared definitions of what “skills-first” means and clear accountability, the initiative collapses under the weight of individual discretion.

What CIOs see that others miss

CIOs are often closer to the consequences of a bad hire than anyone else in the C-suite. When a cybersecurity analyst freezes during an incident response, the CIO gets a front-row view of the damage a skills gap can cause.

CIOs are also watching AI redefine what “qualified” looks like in real time. The engineer who deploys an agentic AI system to automate monitoring, or the analyst who chains multiple AI tools into a custom workflow — these people deliver outsized value, and their qualifications often look nothing like what a traditional job description demands.

And CIOs have a keen understanding of issues with tech talent pipelines. Projects slip while niche technical roles sit open for six months or more, even as candidates from rigorous programs — people with the specific skills for the job — are filtered out.

How successful CIOs operationalize skills-first hiring

Successful CIOs get specific. They work with their teams to define exactly which skills matter for each role — and they validate those definitions against the performance of current, thriving employees.

Should that taxonomy include demonstrated experience with AI tools and platforms: Has the candidate built or deployed an AI agent? Can they work across multiple AI tools? Have they completed projects requiring AI-augmented problem-solving? These concrete, observable skills predict performance far more than a degree ever could.

Second, they establish shared metrics across IT and HR. Organizations that get this right track 90-day performance reviews, first-year retention and promotion velocity alongside traditional recruiting metrics. In its New Collar Program with Sentara Healthcare, TEKsystems worked with company leaders to fill open big data positions through a skills-based cohort model and achieved 80% retention one-year post-training.

Third, these CIOs build direct relationships with employer-aligned training pipelines before a role opens. Bank of America invested nearly $40 million in workforce development initiatives in 2025 alone and partnered with more than 600 nonprofits across the US.

At Per Scholas, our head of IT, Tyrone Washington, makes it clear that while technical skills might get you through the door, it’s “smart skills” — discernment, emotional intelligence, complex problem-solving and agility — that build a career in an AI-driven landscape.

What the data shows

Skills-first hiring, when paired with structured onboarding and development pathways, is not just a talent acquisition strategy — it’s a retention strategy. And higher retention contributes directly to the bottom line, as the fully loaded cost of replacing a technical employee ranges from one to two times their annual salary.

In one employer partnership deploying skills-trained talent, a TEKsystems client came out $238,000 ahead in the first year after accounting for program costs, with a projected annual return of over $1.2 million. IT leaders reported that skills-trained talent becomes productive measurably faster than early-career hires from conventional pipelines.

How CIOs can lead the Shift — even without owning HR

CIOs who are moving the needle are piloting skills-based hiring for one or two roles, tracking outcomes rigorously and using that data to make the case for broader adoption. They’re building external partnerships before they need them. Bank of America, a Per Scholas long-term partner, knows that our graduates are team players, lifelong learners and motivated employees; our graduates’ certifications (through CompTIA and Google) validates that they have the technical know-how.

Every quarter that technical roles sit open has a measurable impact on project delivery, team capacity and competitive positioning. Surfacing these costs — backed by data — is something CIOs are uniquely positioned to do.

The bottom line

Skills-first hiring will remain a well-intentioned abstraction unless CIOs treat it as an operating model — one that reflects how AI is reshaping the skills organizations need.

The candidates who can demonstrate hands-on experience in building and deploying AI agents, integrating multiple AI tools into a workflow or evaluating when AI can help are the ones who will drive value. But they’ll keep getting filtered out unless CIOs get specific about skill definitions, align IT and HR around shared metrics, and build employer-aligned pipelines. Bank of America and TEKsystems didn’t achieve their results by endorsing a principle — they achieved them by building systems.

Luckily, building systems is something that CIOs know how to do well.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

You can’t train your way out of the AI skills gap

Most enterprises I talk to say they have an AI skills gap. That sounds plausible right up until you look at what companies are doing. They are spending millions on copilots, launching AI academies, hiring chief AI officers and rolling out internal training at scale. Yet for all that activity, most organizations still do not move faster, decide better or operate in fundamentally new ways. That is the real tension at the center of enterprise AI right now: Companies think they have a skills problem, but what they really have is a work design problem. I have seen this pattern repeatedly. The organizations that get real value from AI are usually not the ones that train the fastest. They are the ones who redesign work sooner.

The AI skills gap is real, but it is not the whole story. In many enterprises, the bigger failure is that AI is being layered onto jobs, workflows and operating models built for a pre-AI world. People are learning new tools, then being sent back into the same meetings, approvals, handoffs and reporting structures that made work slow in the first place. Training may improve local productivity. It does not automatically redesign how the business runs.

That is why so many AI programs feel busy without becoming transformative. The organization can point to courses completed, licenses deployed and pilots launched, but the underlying system still behaves the same way. Decision latency stays high. Bottlenecks remain intact. Managers absorb more complexity, not less. Employees become faster in small ways while the enterprise remains slow in all the ways that matter.

This gap is showing up clearly in the research. Deloitte’s 2026 State of AI in the Enterprise report says insufficient worker skills are the biggest barrier to integrating AI into existing workflows. Yet the most common organizational response is education and reskilling, not role or workflow redesign. In fact, Deloitte explicitly notes that companies are much more focused on AI fluency than on re-architecting how work is done. The same tension appears in Wharton’s 2025 AI Adoption Report: Executive sponsorship is rising; chief AI officer roles are now present in 6 out of 10 enterprises and capability building is still falling short of ambition. The signal is hard to miss. Enterprises know AI matters. Many are investing. But they are still treating adoption as a learning problem when it is really an operating model problem.

Training creates users. Redesign creates advantage

Training matters. Every enterprise needs a baseline level of AI fluency. People need to understand where AI is strong, where it is weak and how to use it responsibly. They need to know how to challenge outputs, apply judgment and separate acceleration from automation. None of that is optional anymore.

But training alone does not create an enterprise advantage. At best, it creates pockets of local efficiency.

An individual contributor may draft faster. A manager may synthesize information faster. An analyst may produce a first pass in less time. Those gains are real, but they do not automatically translate into better operating performance. In many organizations, the efficiency never reaches the P&L. It gets trapped inside legacy workflows, approval layers, meeting culture and fragmented decision rights.

That is the real issue. AI may already be improving work at the individual level, while the enterprise itself remains structurally unchanged. The Writer enterprise AI adoption survey found that executives see AI super-users as at least five times more productive than their peers, yet only 29% of organizations report significant ROI from generative AI. The contrast is telling. The constraint is no longer whether employees can use AI. The constraint is whether the organization is designed to convert those gains into faster decisions, shorter cycle times, higher throughput and better business outcomes.

This is where many AI initiatives quietly stall. Leaders can point to adoption metrics, training completion rates and growing license counts. Employees can honestly say they are using the tools. But the business still does not feel materially more responsive. Revenue does not move faster. Product cycles do not compress enough. Decision latency remains high. Management complexity increases instead of falling.

That is why I believe the wrong question is, “How do we train our people on AI?”

The better question is, “Which work should humans continue to own, which work should AI accelerate and which workflows should be redesigned entirely now that AI exists?”

That is the shift that matters. It moves the conversation from individual capability to institutional performance. It moves AI from a training initiative to an operating model decision.

And that is where CIOs must lead. The organizations creating advantages with AI are not simply teaching employees new tools. They are redesigning roles, workflows and management systems so that individual productivity gains become enterprise-level outcomes. Companies do not fall behind because AI arrived. They fall behind because they kept the same work design after it did.

The real AI shift is separating judgment from execution

The most important AI transformations I have seen do not start with tools. They start with a harder leadership discipline: Separating judgment work from execution work.

Once AI can reliably handle portions of execution, the role itself must be reconsidered. Not eliminated. Reconsidered. The question is no longer just how to make people faster inside the job as it exists. The question is whether the job was designed correctly in the first place.

That is where the real work begins. Leaders must deconstruct work below the level of titles and org charts. This is a much harder challenge than deploying a copilot. It forces decisions about spans of control, management layers, performance expectations and career paths. It changes what excellence looks like across the enterprise. And it changes what companies should reward.

If AI takes on more drafting, synthesis, retrieval and coordination, then the value of human work moves up the stack. The ability to frame the problem, define quality and make accountable decisions becomes more valuable than manually producing every intermediate step.

This is also why so many employees feel uneasy even when leaders talk about AI in optimistic terms. They are being told to use new tools, but not what the organization will still need uniquely from them. They are hearing about productivity, but not about role evolution. Training without redesign does not feel like empowerment. It feels like a shot across the bow of their career.

That is why the most important workforce conversation in AI is not about tool usage. It is about role clarity. People need to understand where human judgment still creates value, where AI should accelerate execution and how their path to relevance and mastery changes as a result.

That is not an HR side discussion. It is one of the central leadership tasks of the AI era.

CIOs must lead the redesign, not just the rollout

This is where CIOs have a larger role than many companies still recognize.

AI adoption is often framed as a cross-functional initiative, and of course it is. But when AI moves from experimentation into execution, the CIO is often the only executive with a clear view of the full system: Workflows, dependencies, security, data architecture, control points, operational friction and how work moves across the enterprise. That perspective matters because the next phase of AI is not about individual productivity. It is about institutional redesign.

That means CIOs cannot limit their role to rollout, enablement and tool selection. They must help redesign how the business operates.

In my view, that starts with three questions:

  1. Where is AI simply making existing work cheaper or faster, and where could it allow the business to operate differently? Those are not the same thing.
  2. Which roles need to be rebuilt? Not renamed. Rebuilt. If analysts spend less time gathering information, what should the organization expect more of in return? If leadership cannot answer that, it is not redesigning work. It is just hoping individual productivity turns into enterprise value on its own.
  3. What new management disciplines does AI require? As AI becomes part of execution, leaders need clearer standards for validation, accountability and quality control. AI can compress execution, but it can also multiply errors at scale. That raises the premium on operating discipline, not lowers it.

This is why I think the skills-gap narrative can be misleading. It invites leaders to believe the problem is mostly educational, as if enough courses, certifications and training hours will somehow carry the organization into the future. They will not. They are necessary, but they are nowhere near sufficient.

The companies that pull ahead will treat AI as a redesign moment. They will rethink work at the level of tasks, decisions, teams and operating models. They will create roles with more judgment and less administrative drag. They will redesign career paths, so people are not just trained on AI, but advanced through the responsible use of it. And they will measure success not only through adoption, but through decision velocity, throughput, exception rates and business outcomes.

Most of all, they will stop asking employees to bolt AI onto broken systems.

That is the real opportunity in front of CIOs. Not just to deploy the tools. Not just to sponsor training. It is to help redesign the enterprise around a new division of labor between humans and machines.

The AI skills gap is real. But education alone will not close it.

Only better work design will.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

What’s holding back enterprise AI? Shortage of talent, CIOs say

A shortage of expertise has held back AI initiatives at many organizations, with shallow knowledge of the technology plaguing practitioners’ ability to make good on the promise of AI.

According to CIO.com’s 2026 State of the CIO survey, lack of in-house talent was the top challenge IT teams faced in implementing AI strategies during the past 12 months, identified by 40% of respondents.

The shortage is especially acute for roles at the intersection of AI and cybersecurity, says Ha Hoang, CIO at cyber resilience vendor Commvault. Cybersecurity companies need people who can understand data and operations and translate risk insights into business decisions, she says.

Vendors such as Commvault also need engineers and analysts who understand how to secure AI models, protect training data, and detect AI-related threats such as prompt injection and model poisoning, she adds.

“As AI-driven automation reshapes IT and security operations, CIOs and CISOs will need professionals who can interpret, tune, and govern AI systems, not just monitor alerts,” Hoang says. “We’ll need fewer siloed specialists and more AI-fluent generalists who can evolve as technology does.”

Deep expertise needed

Part of the problem is a shortage of people who understand the power of AI and can predict where AI technologies are headed, adds Anand Srinivasan, chief strategy officer of AI-powered enterprise planning platform vendor o9 Solutions.

“The challenge is not simply a shortage of AI experts, but a deeper structural gap between how enterprises are organized and what modern AI enables,” he says. “Most large organizations still operate through functionally siloed, hierarchical decision-making models designed for stability and scale, not speed and adaptability.”

The most critical expertise gap is not just in building AI systems, but also in rethinking how decisions are made and executed across the enterprise, Srinivasan says. AI can enable huge changes in agility and adaptability, but only if enterprise decision-making capabilities allow organizations to convert strategy into action faster and with less risk, he adds.

Srinivasan quotes hockey legend Wayne Gretzky to illustrate the problem: “Skate to where the puck is going, not where it has been.” The AI puck is moving very fast, he notes, and AI expertise is a moving target.

“Skills in traditional ML are being rapidly displaced by needs for generative AI, agentic AI, and AI governance,” he adds. “Workers with AI skills now command significant wage premiums over peers in the same roles without those skills.”

Beyond the challenges with a fast-evolving technology, there’s a problem with shallow AI expertise, adds AJ Sunder, CIO and chief product officer at strategic response management software vendor Responsive. There are plenty of people available who have some AI knowledge, but many lack a deeper understanding of how to deploy it to meet enterprise needs, he suggests.

“There is certainly a shortage of people that can build reliable, safe, production-scale AI systems,” he adds. “This abundance of AI-aware talent, combined with a dearth of people that can translate that into functioning AI applications, creates a massive problem sorting through the noise.​​​​​​​​​​​​​​​​”

It’s been a challenge for Responsive to find workers with that level of expertise, but the company has been fortunate to find some outside talent, Sunder says.

“The types of AI problems we solve require expertise in dealing with content at scale, with all the complexities of messy enterprise data,” he adds. “There aren’t too many people with sufficient experience solving the kind of problems we solve at the scale we do.”

Hands-on training

Responsive has prioritized internal training to build in-house expertise, with internal teams driving educational efforts, Sunder says. The AI-focused company had a bit of a head start because it had already focused on the technology before the current wave.

“We’ve been fortunate to have talented people that quickly recognized the pace of AI and the value of hands-on learning, experimentation, trial and error, and unlearning to learn new things,” he adds. “That meant all of us collectively learning, sharing, and teaching one another.”

The company also builds teams by pairing AI specialists with domain experts rather than putting them in isolated groups, Sunder says. Response has also invested aggressively in AI tools that allow a broader set of engineers contribute to AI-powered features without needing deep ML backgrounds.

“You don’t need everyone to be an AI expert right away,” he says.

Sunder questions the need for more outside AI training programs, saying there may already be too many out there.

“Some structured training to bring most if not all of your team members to a baseline level of knowledge is necessary, and it’s out there,” he says. “Beyond that, unstructured learning, hands-on exercises, building useful solutions beyond ‘hello-world’ tutorials are far more effective than any long-running training programs could do. This is mainly due to how fast things are evolving.”

Commvault is also focused on internal training methods and on reskilling current employees, Hoang says. The company is also exploring partnerships with universities and cybersecurity boot camps.

“The hardest skills to find are those that combine security fundamentals with AI model governance or automation tooling,” she says. “Many practitioners have one side of the equation, but not both.”

Companies also need to be flexible about how they view AI expertise, she says.

“Many organizations still rely on rigid job descriptions that overemphasize years of experience or specific certifications, while candidates have transferable skills but lack the exact title or tool exposure,” Hoang adds. “Forward-looking CIOs are rethinking the hiring funnel by prioritizing capability and a learning mindset over narrow experience.”

Subscription model: How AI is reshaping corporate education

In the world of EdTech, where I have had the fortune to design and scale numerous platforms for higher education and enterprise environments alike, one shift is occurring at an increasing rate in 2026 for corporate learning.

It is shifting from being programmatic to becoming a continuous system built with AI to identify the needs of the corporation and workforce and to provide the training required to ensure that the workforce develops the necessary skills.

Many enterprises are adopting AI-powered learning ecosystems to address the needs of their organizations in real time, according to CIO analysis. However, what is emerging now goes a step further to address those needs with subscription-based learning environments that adapt to the needs of the organizations themselves.

Architecting the subscription learning economy

Through my experience working with executive education and enterprise platforms, I have found that traditional learning models fail not because of content quality but because of delivery architecture.

The subscription model for learning emphasizes continuous learning, modular content and regular skill updates rather than traditional fixed courses.

Diagram: Subscription principles

Vishal Shukla

This model enables organizations to deploy micro-credentials that align directly with evolving business priorities such as AI adoption, digital transformation and data-driven decision making.

More importantly, though, and even beyond the change in the way that learning content is delivered, such changes can represent changes to the way that companies and platform view revenue and growth. For instance, subscription-based models can help to keep learner engaged with their learning and create recurring engagement loops that support both learner motivation and organizational values.

Engineering personalized learning pathways with AI

One of the most significant contributions I have seen in this space is the application of AI to dynamically orchestrate learning journeys.

In platforms I have led, AI systems do not simply recommend courses. They:

  • Outcome-driven learning: Maps skills directly to business outcomes.
  • Adaptive learning paths: Adjusts learning sequences based on performance signals.
  • Aligned skill growth: Connects individual development with enterprise capability frameworks.
Diagram: Adaptive learning paths

Vishal Shukla

This transition represents moving from content recommendation engines to capability orchestration systems.

Further, there’s also the benefit of increased efficiency. While the ability to drastically improve skills and capabilities is reason enough for workers to employ AI tools in their educational endeavors, evidence shows that learning can also become 57 percent more efficient, according to  Training Providers Statistics 2025. This CIO perspective highlights how learning is repositioned as a strategic lever in this article.

Activating learning through private cohort networks

While AI enables personalization, I have consistently observed that behavioral transformation happens in groups.

This is where private cohort learning is emerging as a critical layer in B2B education. In enterprise implementations I have supported, curated cohorts of leaders create high-impact learning environments where knowledge is contextualized through peer interaction.

The growing adoption of cohort models is driven by measurable outcomes:

  • Organizations are prioritizing learning tied to real business results, not just completion metrics.
  • Cohort structures close the gap between knowing and doing — faster than self-paced formats ever could.
  • Engagement and retention hold significantly stronger when learners move through content together.
  • Learning ecosystems are evolving into a hybrid model — Netflix-style accessibility with the accountability of a cohort.

In my view, the convergence of subscription access and cohort-based engagement represents a hybrid learning architecture that balances scale with depth.

From content platforms to capability engines

The table below provides mapping between learning dimension and platform features, and what those changes mean in relation to workforce outcomes.

DimensionLegacy learning modelAI-driven subscription modelStrategic outcome
Learning architectureProgram-based deliveryContinuous subscription ecosystemSustained workforce readiness
Content strategyStatic curriculumModular micro-credentialsRapid skill deployment
PersonalizationRole-based segmentationAI-orchestrated pathwaysPrecision learning at scale
Engagement modelIndividual consumptionCohort-driven collaborationBehavioral and performance change

Case in point: Scaling AI learning in executive education

In one of the executive education platforms I helped design, we introduced AI-driven subscription learning paths focused on digital and AI transformation.

What we observed was not just increased participation, but a shift in how leaders engaged with learning:

  • They moved from passive consumption to active problem solving
  • Learning cycles aligned directly with business initiatives
  • Peer cohorts reinforced accountability and execution

This model enabled organizations to translate learning investments into measurable outcomes, bridging a gap that has historically limited the impact of corporate training.

Research on what actually motivates learners to commit to online courses backs this up — people engage when the learning feels relevant to their real work, not when it’s assigned to them.

Defining the next generation of corporate education

Based on my work across enterprise and higher education ecosystems, I believe the future of corporate learning will be defined by three foundational shifts:

  • From learning delivery to capability engineering
  • From static programs to adaptive ecosystems
  • From completion metrics to business impact measurement

Organizations that operationalize these principles will not only upskill their workforce but also build resilient, future-ready talent systems.

Conclusion: Designing learning systems that learn

The most important realization from my work in this space is that modern learning platforms must themselves become intelligent systems.

They must learn from users, adapt to organizational needs and continuously evolve. Subscription models, powered by AI and reinforced through cohort dynamics, are making this possible.

In 2026, corporate education is no longer about providing access to knowledge. It is about designing systems that enable organizations to continuously generate capability — at scale, in real time and aligned with business transformation.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

IT reskilling: the pressing CIO imperative

Keeping up with new technologies, and recalibrating existing ones, can seem almost impossible as they seem to appear every month. But innovation breeds necessity, so knowledge updates within the enterprise are essential to successful reskilling, and CIOs are the pacesetters.

“In our profession, training is a given, like courage in soldiers,” says Gracia Sánchez-Vizcaíno, CIO of Securitas for Iberia and Latin America. “Without continuous training, teams become obsolete.

Disruption is also proportionate to the speed of change, leaving organizations usually a step behind technology. But speed isn’t the only challenge facing IT departments. The range of people themselves who need training has also changed. New technologies have become so intricate in corporate activity that staff training, as well as training of external teams, need to be more targeted.

So seeing which key knowledge areas dominate the concerns of CIOs and training experts helps visualize the scope of this challenge. Sánchez-Vizcaíno is focused on the particular speed of agentic AI. “We need a change in mindset, but also in knowledge,” she says.

Gen AI and cybersecurity are equally front of mind in developing new skills and knowledge, as are data analytics and automation. But the list also includes soft skills like communication, negotiation, and leadership since having critical thinking skills is equally important, says David González, business director of IT permanent recruitment at Hays Spain.

Develop or hire?

The need for staff to be adroit in emerging tech also raises another the question of upskilling internally or hiring externally those already skilled. While the latter can bring benefits, the general consensus among IT leaders is strengthening an established team is more advantageous to yield significant returns.

“Within the technology market, reskilling shouldn’t be an option but an advantage,” González says, adding that it’s not so much about pitting one model against the other, but seeing what each one offers and valuing it on findings. The IT job market is no longer dominated by a kind of recruitment race, however. “Attracting talent is very difficult,” Sánchez-Vizcaíno says, whereas training is another form of compensation that increases commitment, broadens the staff skill base, and reduces dependence on external resources that can be expensive and noncommittal.

“The cost of hiring a new employee can be three times that of a proper reskilling program,” says Magalí Riera, director of the master’s degree in people management, talent, and digital transformation at UNIE University. “Skills development isn’t just a simple corporate wellness option, it’s a vital strategy for staying competitive.”

Similarly, when there’s already a well-oiled, functioning team in place with diverse profiles and talents, it may be more worthwhile to update rather than try to fit new people in. 

“The team gives you something that goes beyond the technological aspect,” adds Álvaro Ontañón, CIO of Merlin Properties. But he, in turn, needs a team to deliver. “For me, within that context, trust is very important,” he says. “Once you have that, if the limitation is technology — unless it’s something very disruptive, requiring starting from scratch, or is expensive and requires hiring — we dedicate time to it.”

What reskilling should look like

Reskilling must be approached from a business perspective, says González, not just a technological one or with generic certifications. And a successful process must begin by understanding what will happen in the short, medium, and long term, promoting key market areas, and providing continuous but applied training.

And while industry experts acknowledge that video-based courses can be useful for routine tasks like safety training, they’re ineffective when it comes to developing new skills. Riera recommends project-based learning and avoiding purely passive learning methods.

Sánchez-Vizcaíno sees it firsthand as well. “The way we share and process information has also changed, and for all of this to work, it’s about moving from theoretical knowledge to adaptable, practical skills,” she says. Learning happens by sharing in Teams channels, talking with colleagues, and even listening to other companies. These are more multidirectional processes, compared to the unidirectional or bidirectional training of the past. “More than ever, it’s about learning by doing,” she adds.

Above all, it’s about creating a supportive and motivating environment, and fostering a fertile ground for learning and acquiring new skills. “If you want to benefit from learning, you must have an affinity for the training you’re going to receive,” Ontañón says. In his team, staff are involved in the preliminary process since if there aren’t interested people, there’s no training. It’s a pragmatic decision that avoids the feeling of seeing a course as a burden, and reinforces the desire to participate. But that’s almost the default state in the IT world, being constantly on the cusp of change, even outside of work hours, and needing to learn. 

Similarly, working with interests and needs in mind helps foster flexibility. “We dedicate a lot of time to this, because it’s what can guarantee its effectiveness,” says Ontañón, adding that it’s not about training for the sake of training, but responding to those concerns.

In a world where information is much more accessible than in the past, there are many more sources of knowledge. “The downside is because there’s so much, you have to find what really interests you and adapt to it,” he says.

Reskilling incorporated in the corporate vision

Equally important in the reskilling processes is monitoring its impact on the workforce. When change occurs, González says, productivity will initially drop before it rises. “Companies that fail are those that demand senior-level performance from the outset,” he says.

An adjustment period has to be factored in, which may even involve temporarily supplementing the workforce with external or temporary staff. “This learning process is a necessity,” Riera adds. “It’s not extra training or a reward for the employee, so it must be part of the work schedule.” She recommends not filling the entire workday with courses, but rather dedicate a small portion of the day to them so as not to hinder daily operations. Also, maintain clear communication with the team about what’s being done, why it’s being done, and what will be gained.

Smart factories are here — but is your team ready to use them?

Since the emergence of Industry 4.0 in 2011, manufacturing has undergone a digital transformation. Industrial Internet of Things (IIoT) sensors now allow machines and assets to communicate seamlessly, while artificial intelligence has become a core business enabler. Cloud computing provides virtually limitless processing power and storage, and big data analytics has become essential for strategic decision-making. By integrating data from ERP systems with real-time machine data — via SCADA, PLCs, and other automated tools — manufacturing execution systems (MES) have paved the way for the modern smart factory.

Smart factories are not limited to MES alone but also cover other areas like energy management systems (EMS), video analytics-based plant safety, digital quality inspection using vision-based cameras, immersive technology-based shopfloor training, operational technology (OT) network, firewalls and other related tools.

If we go up the value chain, today factories are designed using digital twins with full process simulations and products are designed using product lifecycle management (PLM) platforms. Maturity of smart factories is an evolution, tightly linked with the digital transformation plan of the enterprise. Still 49% of the enterprises lack confidence in their future manufacturing strategy.

While visiting various plants, the disparity in digital maturity is often striking. In many business units, specific digital initiatives take precedence because they are driven by the immediate priorities or critical requirements of the end customer. In other instances, regulatory compliance dictates the roadmap. Ultimately, delaying a plant’s digital transformation can be a strategic choice; these are complex business decisions managed by CXOs based on broader organizational goals.

Having said this, based on Gartner’s Top 10 Strategic Technology Trends for 2026, digital and AI technologies will continue to be the fundamental for driving smart factories maturity. And according to IDC’s 2026 Manufacturing FutureScape, by 2027, 40% of factories’ operational data will be integrated across applications and platforms autonomously, due to increased standardization and the use of AI agents purpose-built for specific data.

In fact, I envision there will be AI agentic mesh in the smart factories, working under an AI orchestrator layer, either collecting or sharing data in a multi-agent AI environment, with human-in-the-loop (HITL) for critical business decisions.

Impact on the workforce skillset

In terms of coping with the impact of digital transformation, the world of the workforce on the shopfloor of factories is changing at a faster pace. The tasks and activities done by operators, supervisors, maintenance technicians, quality inspectors, material handlers and others need to be seen through digital, AI and smart factory lenses.

There is a growing realization within the workforce that the convergence of automation, AI, cloud/edge computing, and IIoT is fundamentally reshaping every manufacturing process. AI-driven shopfloor assistants have become increasingly common, guiding workers through machine maintenance, process automation, and quality checks. These digital tools are particularly vital during night shifts or off-hours, when fewer human experts are available on-site to provide support.

Over the last few years, I have observed manual quality inspections being steadily replaced or augmented by advanced vision systems. In fact, many modern machines now come with these cameras factory-installed. From robots performing thousands of precise welds on vehicle seating to the automated painting and injection molding of automotive parts, the shift is undeniable. Consequently, the workforce skillset required to drive digital transformation in these smart factories needs a comprehensive revisit. The sentiment of reskilling is well captured in the book “What Got You Here, Won’t Get You There,” though it’s more pertinent to managers or senior leaders.

Bridging the skillset gap

Through AI innovation, by 2031, over 30 million jobs per year will be redesigned – not eliminated. So, learning and development (L&D) leaders need to look at the talent development and retention strategies, which will stay relevant in the smart factories’ era and beyond.

Successful initiatives often involve learning and development (L&D) leaders collaborating with business unit heads and digital stakeholders to build a comprehensive transformation matrix. This matrix maps out the manufacturing processes most affected by AI and digital tools, identifies the relevant job roles, and aligns them with the necessary technologies—such as IIoT, cloud computing, Gen AI, agentic AI and computer vision.

From this matrix, the skillset gaps for the impacted roles because of process and technology changes are tracked and fed into the L&D talent development plan. This plan is developed at the BU/plant-level and the requisite investments on training and infrastructure are approved by the business head in conjunction with the digital head.

From my perspective, I feel immersive technology-based training is quite effective in smart factories. Virtual reality (VR)/augmented reality (AR) solutions have helped to cut down the training time by 20-50%, with full tracking of the talent proficiency. This information is fed into learning management system (LMS).

One of the most effective features is that the workforce skillset matrix is generated directly from the learning management system (LMS). This integration enables plant managers to assign operators to specific machinery based on their verified proficiency and skill levels. This automated allocation of production line personnel is becoming increasingly standard, effectively eliminating the risk of unqualified staff operating sensitive equipment. By ensuring the right person is at the right machine, organizations can significantly improve safety, ‘first-time-right’ rates and overall product quality.

Keeping the workforce AI-ready

The digitalization of manufacturing generates vast quantities of data. While IT and digital teams are responsible for ensuring this data is captured securely on scalable platforms like the cloud, it is equally vital that the shopfloor workforce understands the underlying dataflow. When operators grasp how information moves through the system, they can better support the integrity and efficiency of the smart factory.

Furthermore, the workforce must recognize that data quality is the foundation of any effective AI solution — whether it involves shopfloor assistants or predictive forecasting. Because AI models are trained on specific datasets for specific use cases, their output is only as reliable as the input. Enterprises must strategically determine whether these models should be trained exclusively on internal enterprise data or supplemented with broader industry and internet-based information.

The bottom line is that AI-based solutions help organizations to stay ahead of the curve in terms of differentiation, competitive edge, business decisioning, growth and so on. The upskilling and cross-skilling of the workforce, as per the talent development plan, should be updated and tracked from AI lens, especially as this technology is changing at a rapid speed.

The best practice I have seen being followed in the industry is when the digital/AI team works with the HR and BU teams to identify training for different sets of employee groups. Shop floor training on digital and AI, for instance, will be a lot more hands-on and manufacturing-focused compared to training for mid/senior level executives, where the focus will be about the technology, its impact on the business and how to stay abreast of it.

Industry-specific certifications in digital and AI technologies can significantly enhance workforce productivity and efficiency. To complement formal training, many organizations now partner with startup ecosystems on relevant business projects, giving employees first-hand experience with emerging tools. Furthermore, ‘AI playgrounds’ allow business units to democratize these technologies by applying them to live use cases. Ultimately, bridging the skills gap requires more than just academic instruction; practical, hands-on exercises are essential to ensuring an AI-ready workforce.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Why hiring ‘AI engineers’ won’t work

Practically every company today is posting roles to hire an “AI engineer.”  They’re likely assuming that an “AI engineer” can handle everything from product development to infrastructure to data integration. Most of the time, though, they’re going to be disappointed.

That’s because assessing the competency of engineers has always been hard–and adding AI to the mix makes it even harder–and companies are often testing for the wrong thing. Under the umbrella of “AI engineer,” they’re collapsing at least three different types of technical work into a single job description, then wondering why the person they hired can’t do the job they need done.

At Andela, where we assess, train and vet software engineering talent as the core of our business, we’re finding that basic AI skills assessments produce an almost 75% fail rate.

My first reaction when I saw the 75% fail rate was that we had an assessment problem. But the more I dug in, the clearer it became that the problem was upstream. Those candidates weren’t failing because they lacked skills. Many of them are exceptional engineers. They were failing because the entire industry is based on assessment frameworks that can’t distinguish between the types of AI work that need to be done.

Consider this situation. I was recently reviewing assessment results for a batch of AI engineering candidates. One candidate stood out: strong resume, passed the coding assessment and defined every concept we threw at them: RAG architectures, agentic search, vector databases, prompt chaining. On paper, this person had the skills.

Then we got to design. We presented a real enterprise scenario and asked which approach they’d use and why. The candidate described a RAG implementation. The solution was technically correct and valid. But for this use case, a RAG implementation would have required significantly more engineering while producing less complete results than an agentic search approach. (The problem required dynamic reasoning across multiple data sources rather than retrieval from a fixed index.) The candidate knew the concepts but lacked the judgment to know which solution was dramatically better for the specific problem.

I’d call that a gap in technical taste: the ability to choose between valid options and find the one that’s right for a specific context. And it’s the gap our assessments, and almost every assessment pipeline in the industry, weren’t built to catch.

And it’s costing real money. Enterprises are burning months on mismatched hires, misaligned teams and AI initiatives that stall, not because the technology failed, but because the people doing the work were the wrong people for that particular work. Highlighting this difficulty is ManpowerGroup’s 2026 Talent Shortage Survey, which found that AI skills have surpassed all others to become the most difficult for employers to find globally, with 72% of employers reporting hiring difficulty.

Digging deeper

In my previous article, I spoke about how enterprises should seek to hire Forward Deployed Engineers (FDEs) who can bridge engineering, architecture and business strategy, to push AI past the ‘integration wall’ and into production. FDEs are the expedition leaders. No company has enough of them. No company can afford to hire enough of them for all the work ahead.

So, what do you do below the FDE layer? You have to dig deeper. For every one FDE, teams will need three or four engineers operating in more specific modes of work. In our experience, the AI work that enterprises need done falls along a spectrum defined by three archetypes.

  • Prototypers. These are the rapid experimenters. They are engineers, product managers or designers who use AI tools to quickly test ideas, find value and throw away what doesn’t work. In a previous era, validating a new product concept meant scoping a project, building a team and committing to a six-month build cycle. Now one person with the right tools and good instincts can shortcut that entire process, testing and discarding dozens of ideas to find the ones worth investing in. The prototyper’s technical taste is about sensing what’s valuable before an organization commits real resources.
  • Builders. The engineers who turn validated ideas into production systems. A builder needs to do more than ‘vibe code.’ They need to operate as agentic engineers: architecting the system, orchestrating the agents to build it, verifying the output and shipping with confidence. Critically, building in an AI context means building the full stack, including the data pipelines that organize content from disparate systems, the access controls that govern what the AI can reach and the integration layer that connects AI to the messy reality of enterprise data and infrastructure. Without this end-to-end capability, AI stays trapped in sandboxes. The builder’s technical taste is about choosing the right architecture and integration approach when multiple valid options exist and knowing which one will be dramatically better for a specific production context.
  • Scalers. The engineers responsible for reliability, governance, observability and production AI operations. These professionals were, in a previous era, DevOps engineers. They know how to deploy LLMs and manage the liability of model output at enterprise scale. The scaler’s technical taste is about tradeoffs: performance versus cost, governance rigor versus development velocity and risk tolerance versus time to market.

These aren’t rigid job categories. They’re patterns of AI engagement. In practice, they blend. A backend engineer on a given project might spend 60% of their time doing builder work and 40% on scaling. The point isn’t to put people into boxes. It’s to give enterprises a vocabulary for decomposing what they need, so they stop collapsing fundamentally different work into a single job posting.

These patterns have different toolchains, different skill profiles and different hiring criteria. Companies that treat them as interchangeable will end up building subpar teams. Understanding where your AI initiatives fall along this spectrum is one of the most important change-management decisions enterprises face in an AI-first world, and it’s why companies that identify their specific location on this spectrum move dramatically faster than those hiring generically.

How to think about AI talent

Here’s where it gets practical. Prototypers, builders and scalers are not job titles. They’re lenses that sit on top of the domain roles enterprises have always hired for: frontend engineers, backend engineers, data engineers, DevOps/SRE and so on. To move from the vague ‘AI engineer’ to a structured picture of what you need, you have to think across three dimensions.

  • Role is the foundation: what technical domain does this person work in? Backend, data engineering, DevOps/SRE, full stack? These are the roles enterprises have always hired for. They come with foundational skills like API design, database architecture and CI/CD pipelines. And they come in specific flavors: a Python backend engineer is not a Java backend engineer. This layer hasn’t changed because of AI.
  • Seniority determines the level of judgment and autonomy you can expect. A senior backend engineer with 10 years of experience brings architectural instincts and decision-making under ambiguity that a two-year engineer doesn’t. Seniority is also where technical taste compounds. An engineer with deep experience has seen more tradeoffs, made more wrong calls and developed the pattern recognition that allows them to make better-than-default decisions. Not every role on an AI initiative requires a senior engineer, but the roles that involve system design decisions, risk trade-offs and client-facing judgment absolutely do.
  • AI engagement pattern is how this person engages with AI systems. This is the archetype layer, and it’s what’s new. A backend engineer doing builder work (designing the orchestration logic for an agentic workflow and integrating it with enterprise data) needs fundamentally different technical tastes than that same backend engineer doing scaler work (deploying LLM infrastructure and building observability for model performance). The role is the noun. The archetype is the adjective. And it changes what you need to test for.

In practice, certain role families map naturally to certain AI engagement patterns. Prototypers can come from anywhere; engineering, product, design and are often already on your team. They’re the person who’s always building side projects and testing ideas. Builders tend to draw from full-stack, frontend, backend, data engineering and AI/ML talent. Scalers tend to draw from DevOps/SRE, security, backend and infrastructure engineering. Forward-deployed engineers span all the above with business acumen and stakeholder fluency.

Hiring with precision

This multi-dimensional view is what allows enterprises to stop hiring for a vague ‘AI engineer’ and start composing teams with precision. It’s also what makes a credible assessment strategy possible, because now you know what you’re testing for at each level.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

The 10 skills every modern integration architect must master

Enterprise integration has changed fundamentally. What was once a backend technical function is now a strategic capability that determines how quickly an enterprise can adapt, scale and innovate. With SaaS-first architectures, continuous ERP updates, event-driven systems and AI-enabled platforms, integration architects are no longer just connecting systems — they are designing the digital nervous system of the enterprise.

I have spent years implementing large-scale cloud and middleware implementations, particularly across Oracle EBS, Oracle Fusion Cloud and various SaaS ecosystems. What I’ve observed is that the gap between good and great Integration architects isn’t technical knowledge alone, it’s the breadth of skill, judgment and organizational influence an architect brings to every engagement. The following ten competencies define what separates a modern integration architect from a traditional middleware specialist.

1. Platform thinking, not project thinking

What many get wrong: Designing integrations to satisfy a single project — an ERP rollout, payroll go-live or CRM deployment — without considering reuse or long-term evolution.

Why this fails: SaaS platforms like Oracle Fusion Applications refresh weekly and quarterly. Project-based integrations break repeatedly and accumulate technical debt at a punishing rate.

Modern skill: Adopting a Cloud Integration Platform Mindset, where iPaaS (e.g., Oracle Integration Cloud) is treated as:

  • A shared enterprise platform
  • An abstraction layer between SaaS and consumers
  • A long-term capability, not a temporary solution
  • A source of reusable integrations with long-term enhancement opportunities

The skilled architect also knows that not every integration belongs on the iPaaS platform. High-volume, low-latency integrations might perform better with direct API calls or message queues. Highly complex data transformations might be more maintainable in custom code. The integration architect makes deliberate choices about which integrations belong on which platform based on technical requirements, team skills and long-term maintainability.

Modern architects need both strategic understanding of when these platforms add value and the tactical skills to use them effectively.

2. Mastery of iPaaS and cloud-native capabilities

What many get wrong: Using iPaaS as a visual mapping tool while ignoring native cloud capabilities.

Why this fails: Over-customization increases cost, reduces resilience and bypasses built-in scalability.

Modern skill: Deep understanding of integration patterns and architectures. Integration architects must understand the fundamental patterns that govern how systems communicate — these patterns represent proven solutions to recurring challenges and knowing when and how to apply them is essential. This means knowing how to leverage iPaaS features before writing custom logic:

  • Adapters vs REST endpoints
  • Lookups, packages and integration patterns
  • OCI services such as Streaming, Object Storage and Functions

As enterprises migrate to the cloud and adopt hybrid architectures, integration architects must understand cloud platforms and their unique constraints. We increasingly work in multi-cloud environments, which means designing patterns that work across providers. Rather than building cloud-specific integrations, the architect establishes cloud-agnostic interfaces. Using platform-neutral API formats like JSON for data interchange ensures portability.

On a recent HCM implementation, I replaced a polling-based integration pattern with an OIC and OCI Streaming event-driven approach for HR updates. The result was dramatically lower latency and a significant reduction in load on Oracle HCM during peak processing windows.

3. API-led and event-driven design

What many get wrong: Exposing SaaS applications directly to consumers through tightly coupled integrations.

Why this fails: Schema changes, API version updates and new consumers create cascading failures that ripple across the entire landscape.

Modern skill: Designing API-led and event-driven architectures that genuinely decouple systems. APIs have become the primary integration interface for most modern systems. Integration architects need deep expertise in designing APIs that are intuitive, efficient and maintainable.

Consider what I faced when tasked with exposing customer data from a legacy system. A naive design required multiple calls to retrieve related information, one for basic customer details, another for addresses, another for contact preferences and another for order history creating a chatty interface and increased latency. Every extra call compounds latency and couples the consumer to internal data structures. A well-designed API while utilizing mediation capabilities of Integration tools encodes resource relationships so the consumer retrieves what it needs in a predictable, minimal number of call requests. The middleware orchestrates calls to backend systems, aggregates the data and exposes a single, consumer-friendly endpoint. This approach reduced round trips, decoupled consumers from backend structures and improved performance by enabling parallel processing. I also considered trade-offs like payload size and introduced selective expansion to avoid over-fetching. Overall, the design aligns with consumer-driven API principles and leverages mediation capabilities effectively.

4. Canonical data modeling and data governance

What many get wrong: Mapping source-to-target schemas directly for every integration. A common anti-pattern is point-to-point schema mapping — directly transforming source data into target formats for every integration. At first, this seems fast. But it doesn’t scale.

Why this fails: This approach creates a fragile, tightly coupled ecosystem as A single schema change in one system triggers updates across multiple Integrations, Integrations grow from N systems → N² mappings, Inconsistent data definitions – “Customer,” “Account,” or “Contact” may mean different things across systems. Over time, teams spend more effort fixing integrations than delivering value

Every system change requires multiple downstream updates, creating a maintenance nightmare that compounds over time.

Modern skill: Integration architects increasingly need data engineering skills as the lines between integration and data platforms blur. We often serve as the primary advocates and implementers of master data management strategies. Modern integration architects don’t just move data — they define and govern it. Define System of Record (SoR) while establishing authoritative ownership for each data attribute to avoid conflicts and duplication. Defining canonical enterprise data models and enforcing governance through versioning, reusability, security, validation rules, error handling & centralized control at the middleware layer is how we solve that problem at scale. Enable Controlled Data Propagation by defining how updates flow like Event-driven (real-time sync) or Batch (scheduled reconciliation).

In modern architectures, integration architects increasingly act as data stewards, enabling scalable MDM strategies and ensuring consistency across distributed systems through centralized mediation layers like OIC

A canonical ‘Employee’ model I’ve defined for a large financial services client allowed Oracle HCM, multiple payroll providers and identity systems to evolve independently. During a significant HCM upgrade, integration breakage was near zero because the canonical model absorbed the schema changes rather than propagating them to every consumer.

5. Security-by-design in integration

What many get wrong: Treating integration security as a configuration step late in the project.

Why this fails: Integration layers handle sensitive payroll, financial and identity data — and are frequent attack vectors. Retrofitting security onto an insecure design rarely works.

Modern skill: Modern integration architects must think deeply about security, as integrations often become the weak points in enterprise security postures. Embedding Zero-Trust principles from the start means:

  • OAuth and token-based authentication
  • Least-privilege access controls at the integration level
  • Centralized secrets and certificate management

When we were building integrations for a healthcare provider, HIPAA compliance wasn’t optional — it shaped like every architectural decision. Security controls at multiple levels were non-negotiable: field-level encryption, audit logging, access controls tied to role and context rather than just credentials. A skilled architect implementing single sign-on for a corporate portal understands not just SAML and OAuth protocols but how to design attribute exchange, just-in-time provisioning and role mapping between disparate systems.

I’ve made it a rule to align all OIC integrations with OCI IAM policies from day one and enforce per-integration security policies rather than relying on shared credentials. On one engagement, that decision prevented a significant security incident when a downstream system was compromised — our integrations were isolated, not exposed.

6. Observability and business-centric monitoring

What many get wrong: Monitoring integrations only at a technical level — status, error counts and message volume.

Why this fails: Technical success does not guarantee business success. An integration that processes every message without error can still fail the business if it processes the wrong messages.

Modern skill: Implementing business-aware integration observability. This means instrumenting integrations so the operations team can answer questions like ‘Did payroll actually complete successfully?’ not just ‘Were all messages acknowledged?’

I’ve configured OIC activity streams and OCI Logging Analytics for a payroll integration to surface business-level outcomes — completion rates by pay group, exceptions by category (data issues vs system failures vs delays) and SLA tracking and Reconciliation indicators (expected vs processed employee counts). Within weeks, the finance team was reviewing dashboards themselves rather than filing tickets to ask us if the run had been completed. That shift from reactive to proactive operations was transformative while significantly reducing turnaround time, improving SLA adherence and increasing trust in integration reliability.

7. Designing for continuous change

What many get wrong: Assuming integrations should be ‘stable’ and rarely modified.

Why this fails: Cloud environments are defined by constant change — quarterly SaaS updates, API evolution and acquisitions mean no integration is ever truly done. The mistake many teams make is optimizing for initial stability instead of long-term adaptability. This leads to brittle integrations that break with every release cycle, creating fire drills and eroding business trust.

Modern skill: Building change-resilient integrations where change is expected, tested and absorbed with minimal disruption through:

  • Versioned APIs with clear deprecation policies and backward compatibility
  • Contract-first design so consumers agree on interfaces before implementation begins. schema validation at runtime and test time
  • Automated regression testing that runs before every quarterly update while validating API responses, business flows and edge cases and failure handling.

Before each Oracle ERP quarterly update, our automated test suite validated all critical OIC integrations against the new release in a pre-prod environment. We catch breaking changes weeks before they reach production ensuring seamless business continuity. The peace of mind this creates, for the integration team and for the business, cannot be overstated.

Design integrations not for stability, but for evolution — treating change as a constant and embedding resilience through versioning, contract governance, automated validation and decoupled architecture. This shifts integration from a fragile dependency to a durable, adaptable platform capability

8. DevOps and automation for integrations

What many get wrong: Treating integrations as manually deployed artifacts.

Why this fails: Manual deployments increase risk and slow delivery. They also make audit and compliance conversations unnecessarily painful.

Modern skill: Applying CI/CD and DevOps practices to integrations — automated deployment pipelines, environment standardization with traceability and version-controlled artifacts as first-class engineering outputs.

We promoted integration packages from development to test to production using automated pipelines through CI/CD tools like Flex deploy and Jenkins on a recent engagement. Deployment errors dropped to near zero and audit evidence was generated automatically with every release. The integration team stopped dreading deployments and started shipping faster.

9. Business process and domain expertise

What many get wrong: Focusing purely on technical flows without understanding business context.

Why this fails: Integrations that work technically may fail operationally. A technically perfect integration built around the wrong business process creates a well-engineered wrong answer.

Modern skill: Integration architects frequently serve as bridges between business stakeholders and technical teams. This requires translating business needs into technical requirements and explaining technical constraints in business terms — clearly and without condescension.

Armed with process understanding, the architect designs integrations that automate entire workflows rather than just moving data between systems. The difference between a data-mover and a process architect is the difference between a cable and a nervous system.

On a global HR transformation, I spent the first two weeks meeting the HR operations team gathering requirements and understanding their business processes before writing any integration specifications. By understanding the full hire-to-retire lifecycle — not just the data flows, I designed integrations that ensured consistency across HR, payroll, finance and identity systems in a way that no purely technical analysis would have produced.

10. Leadership and enterprise influence

What many get wrong: Assuming integration architects only need technical authority.

Why this fails: Integration decisions impact multiple business units and platforms. Without the ability to influence stakeholders and align cross-functional teams, and drive adoption, even the best technical design can stall or fail.

Modern skill: Acting as a strategic leader, not just a technical expert while bridging the gap between business priorities and technical execution:

  • Influencing architecture decisions across organizational boundaries
  • Establishing integration standards and governance frameworks that drive consistency
  • Guiding multiple delivery teams toward coherent, enterprise-wide outcomes

Defining enterprise-wide standards by reducing duplicated integrations while improving audit readiness and compliance.

Technical brilliance alone is insufficient if integration architects can’t effectively communicate their designs and decisions. When I document a complex integration architecture, I create multiple views targeting different audiences.

For executive stakeholders, I produce high-level diagrams showing how major systems connect and the business capabilities these integrations enable with minimal technical jargon. I focus on conveying business benefits and risk mitigation plans and strategic value of these integrations.

For development teams, I provide detailed sequence diagrams, error-handling flows and API documentation with example requests and responses while providing Clear guidance for implementing integrations between various applications.

For operations, I write runbooks for common failure scenarios and explain how to interpret log messages and metrics in the context of business outcomes. I provide guidance for proactive monitoring and incident response.

Effective architects invest in knowledge transfer — conducting workshops to explain architectural decisions, pairing with developers during implementation ensuring best practices are adopted and creating decision logs that capture why specific approaches were chosen over alternatives. Providing support during initial production rollout, ensuring confidence, reliability and operational readiness.

Modern integration architects combine deep technical expertise with enterprise influence — communicating effectively, guiding teams, enforcing standards and ensuring that integrations deliver measurable business outcomes. Leadership in this role means shaping organizational decisions, reducing redundancy and turning integrations into a strategic asset.

The evolving role: What the next five years will demand

The role of integration architects will continue to evolve as technology and business needs change. Artificial intelligence and machine learning are already beginning to influence integration, with intelligent data mapping, automated error resolution, agentic workflows and predictive scaling. Low-code and no-code integration platforms are democratizing integration development, requiring architects to shift focus toward governance, standards and architecture while empowering business users to build simpler integrations themselves.

I believe the architects who thrive will be those who treat learning as a core professional discipline, not an optional add-on. That means reading technical research, experimenting with new tools and participating in communities where ideas get challenged. Modern Integration architects design intelligent workflows, automate complex business processes and integrate AI insights into enterprise systems, empowering organizations to achieve faster, smarter and more scalable operations.

The fundamental skills that distinguish exceptional integration architects — the ability to understand complex systems, translate between business and technology, design for resilience and scale and continuously learn and adapt — will remain relevant regardless of how specific technologies evolve. Those who master this diverse skill set will continue to play a critical role in enabling enterprises to harness the full power of their technology investments.

Learning from failure: The habit that separates the best

The best integration architects treat failures as learning opportunities rather than events to be survived and forgotten. When an integration outage causes significant business disruption, we don’t just fix the immediate problem. We conduct thorough post-mortems to understand root causes, identify systemic issues that contributed to the failure and implement changes to prevent similar problems.

After an integration failure caused data corruption on a project I led, I resisted the pressure to simply restore from backup and move on. We analyzed why error handling didn’t catch the problem, why monitoring didn’t detect corruption earlier and why automated testing didn’t surface the bug and how could recovery and reconciliation be optimized to minimize business impact?

We used these insights to redesign error-handling patterns to fail safely and recover gracefully. enhance monitoring with business-aware observability and anomaly detection, expand automated test coverage across all critical integrations and Implement reconciliation and recovery procedures that minimize downtime and data loss. This approach builds resilience, reduces risk and enhances trust across business and technical teams. Six months later, that investment paid off when a similar failure mode was caught in staging rather than production.

Successful architects maintain awareness of emerging technologies and patterns. We experiment with new tools, strategies and approaches, attend conferences and webinars, participate in professional communities and read technical blogs, case studies and research papers. Staying current is not optional, it is how integration architects remain relevant, proactive and capable of driving innovation.

A rare combination

The modern integration architect is no longer just a middleware specialist. We are platform strategists, security architects, business translators and technical leaders.

Enterprises that invest in these skills build integration platforms that are resilient, secure and scalable. Those that don’t find themselves constantly reacting to failures, upgrades and missed business opportunities — fighting the same fires in every quarterly cycle.

Integration architecture is not a purely technical discipline, nor is it purely strategic. It requires a rare combination of deep technical expertise, business acumen, communication skills and the ability to navigate organizational complexity. Those who develop this multifaceted skill set find themselves uniquely positioned to drive meaningful business transformation in an increasingly interconnected digital world.

In a cloud-first world, integration excellence is enterprise excellence.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Agility is the new IT currency: A roadmap for skills, readiness and innovation

In an era of constant technological change, agility is more than a buzzword; it is the single most critical characteristic of a high-performing IT department. While C-suite leaders look to technology for a competitive edge, many CIOs find themselves wrestling with a fundamental challenge: Innovation is only as strong as a team’s ability to adapt. The most ambitious transformation roadmaps, from AI adoption to cloud migration, will inevitably stall if the workforce’s skills have not kept pace with the technology. This places a new mandate on the CIO, one focused less on managing technology and more on cultivating a culture of continuous learning.

CIOs need to think and act like chief learning officers, treating skill development as a core strategic function rather than solely an HR responsibility.

The urgency of this shift is clear in the data. The World Economic Forum’s Future of Jobs Report continues to list digital skills, cloud know-how and AI literacy among the fastest-growing capabilities. Meanwhile, research from CompTIA shows that nearly three-quarters of CIOs see skills alignment as the top barrier to realizing the value of their technology investments. This creates a dangerous gap between ambition and execution.

In its 2026 Global CEO Survey, PwC found that CEOs’ top concern is whether they are “transforming fast enough to keep up with technology, including AI”. Yet other PwC research on workforce hopes and fears reveals that only a small fraction of workers use generative AI daily. The pace of innovation is dramatically outstripping workforce readiness, creating an urgent mandate for CIOs to become agents of change.

From my perspective, AI upskilling must be treated as a strategic operating system for the entire IT department. Competitive advantage comes from a deep, holistic understanding of where AI fits, what business problems it solves and how humans and systems can work together effectively. It cannot be an afterthought or a hopeful assumption.

You can’t steer without knowing your starting point

To build an effective upskilling program, you must first understand your current capabilities. I advise CIOs to begin by mapping their existing IT, data and AI skills landscape to identify strengths and, more importantly, to expose blind spots or gaps before they become business risks. This requires a structured skills-mapping process that inventories both core technical skills and adaptive work behaviors. The former includes essentials like cloud fluency, modern software engineering, cybersecurity and data science literacy. The latter is about nurturing the human element and the skills that enable teams to thrive amidst change.

Using established competency frameworks, such as the NIST NICE Framework for cybersecurity roles, can provide a standardized language and structure to this process. These frameworks help create consistent job descriptions and clear learning pathways. This assessment should go beyond just listing skills. CIOs should understand how teams actually apply those capabilities in real delivery environments. Regular reassessment, ideally semi-annual, ensures your skills map stays current as technology evolves and business priorities shift, preventing your upskilling program from becoming misaligned.

Build a continuous learning system, not just a training program

The most effective CIOs I know treat learning like a living program, one designed to evolve as technology, roles and business priorities change. This means moving beyond sporadic, one-time training initiatives to build an ongoing capability-building system. Key elements include role-based, modular learning paths. For a cloud engineer, this might mean a path focused on advanced container orchestration and AI-powered observability tools. For a project manager, the path might focus on agile methodologies for running AI projects and data-driven reporting.

It is also critical to create safe environments for experimentation, such as internal sandboxes or pilot programs, where teams can apply new AI skills without operational risk. This fosters a culture where failure is seen as a valuable learning opportunity, not a mistake to be hidden. Furthermore, encouraging peer-led learning through internal workshops, hackathons and formal mentorship programs can accelerate skill transfer and break down the silos that so often hinder progress.

This is crucial because adoption alone does not guarantee success. For instance, new research from the Project Management Institute demonstrates this very point. In our Gen AI and Agility report, we surveyed 2,000 project professionals who use both agile practices and GenAI. We found that while adoption is growing rapidly, the actual value they realize varies widely depending on how the technology is applied and whether agile values are genuinely practiced. Simply giving teams new tools is not enough.

Governance also plays a critical role here, ensuring that learning investments stay connected to business outcomes. In practice, this could mean a small, cross-functional council that meets quarterly to review learning metrics, assess alignment with new business goals and make decisions on retiring old training modules and commissioning new ones. This keeps the program dynamic and prevents it from becoming obsolete.

Embedding this practice into your IT culture is what makes it stick. CIOs can use several tactics to weave continuous learning into the fabric of their departments. Link skill updates to project retrospectives. Tie career progressions and compensation to skills mastery in core areas like AI literacy and data integrity. I’m also a huge advocate for holding “innovation days” where teams can explore new AI tools and features, building confidence with the very technologies the organization is already investing in. Without this focus on adoption, even the best technology is wasted. A 2025 report on digital adoption from WalkMe found that enterprises wasted millions on underused tech last year alone because adoption was an afterthought.

Avoid the common detours on your upskilling roadmap

As you travel this path, be mindful of common pitfalls that can easily derail your efforts. One of the most frequent pitfalls I see is chasing a single trend, like GenAI, at the expense of foundational IT skills. I’ve seen organizations invest heavily in a single large language model API for all employees while their core network infrastructure remains outdated and vulnerable, creating a lopsided and fragile capability. Another pitfall is treating training and upskilling as a “one-and-done” event. Without continuous reinforcement and opportunities for real-world application, the natural “forgetting curve” takes over and knowledge quickly fades. Finally, a failure to apply governance leads to a “wild west” scenario. This results in one department becoming highly proficient in a specific AI tool that is incompatible with the rest of the enterprise, creating new, more complex silos instead of breaking them down. Upskilling for the AI era demands balance; you must build depth in core disciplines alongside adaptability for new technologies.

This AI era requires CIOs to be cultivators of talent, not just managers of technology. Our role is to model and encourage adaptability, continuous learning and disciplined experimentation across our entire IT workforce. To be truly agile, our teams must be empowered with the skills and the confidence to match their ambition. The time has come to move from a model of reactive training to one of intentional, strategic capability-building that becomes part of your organization’s very DNA. By leading this journey, you can ensure your teams and your organization are ready to meet the future with confidence.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Hack the AI agent: Build agentic AI security skills with the GitHub Secure Code Game

I was scrolling through my feed one evening when I came across OpenClaw, an open source personal AI assistant that people were calling everything from “Jarvis” to “a portal to a new reality.” The idea is beautiful: an AI that lives on your machine or in the cloud, talks to you over WhatsApp or Telegram, clears your inbox, manages your calendar, browses the web, runs shell commands, and even writes its own plugins. Users were having it check them in for flights, build entire websites from their phones, and automate things they never thought possible.

My first reaction was the same as everyone else’s: this is incredible.

My second reaction was…different. I started thinking about what happens when that kind of power meets a malicious prompt. What if someone tricks the agent into reading files it should not access? What if a poisoned web page rewrites the agent’s instructions? What if one agent in a multi-agent chain passes bad data to another that blindly trusts it?

Those questions became Season 4 of the Secure Code Game.

The Secure Code Game: Learn secure coding and have fun doing it

The Secure Code Game is a free, open source in-editor course where players exploit and fix intentionally vulnerable code. When I created the first season in March 2023, the goal was straightforward: make security training that developers would enjoy. Fix the vulnerable code, keep it functional, level up. That core philosophy has not changed across any season.

Season 2 expanded into multi-stack challenges with community contributions across JavaScript, Python, Go, and GitHub Actions. Season 3 took players into LLM security, where they learned to hack and then harden large language models. Along the way, over 10,000 developers across the industry, open source, and academia have played to sharpen their skills.

What has changed with each season is the landscape. When we launched Season 1, AI coding assistants were just starting to become mainstream. By Season 3, we were teaching players to craft malicious prompts and then defend against them. Now, with Season 4, we are tackling the security challenges of AI systems that can act autonomously. They can browse the web, call APIs, coordinate with other agents, and act on your behalf.

Why agentic AI security matters right now

The timing is not a coincidence. AI agents have moved from research prototypes to production tools at remarkable speed, and the security community is racing to keep up.

The OWASP Top 10 for Agentic Applications 2026, developed with input from over 100 security researchers, now catalogues risks like agent goal hijacking, tool misuse, identity abuse, and memory poisoning as critical threats. A Dark Reading poll found that 48% of cybersecurity professionals believe agentic AI will be the top attack vector by the end of 2026. And Cisco’s State of AI Security 2026 report highlighted that while 83% of organizations planned to deploy agentic AI capabilities, only 29% felt ready to do so securely.

The gap between adoption and readiness is exactly where vulnerabilities thrive. And the best way to close that gap is by learning to think like an attacker.

Meet ProdBot: your deliberately vulnerable AI assistant

Season 4 puts you inside ProdBot, your productivity bot, a deliberately vulnerable agentic coding assistant for your terminal. Inspired by tools like OpenClaw and GitHub Copilot CLI, ProdBot turns natural language into bash commands, browses a simulated web, connects to MCP (Model Context Protocol) servers, runs org-approved skills, stores persistent memory, and orchestrates multi-agent workflows.

Your mission across five progressive levels is simple: use natural language to get ProdBot to reveal a secret it should never expose. If you can read the contents of password.txt, you have found a security vulnerability.

No AI or coding experience is needed…just curiosity and willingness to experiment. Everything happens through natural language in the CLI.

Five levels, five upgrades, five vulnerabilities

Each level of the game mirrors a stage in how real AI-powered tools evolve. As ProdBot gains new capabilities, the upgrade opens a new attack surface for you to discover. Here is what ProdBot looks like as it grows:

  • Level 1 starts with the basics: ProdBot generates and executes bash commands inside a sandboxed workspace. Can you break out of the sandbox?
  • Level 2 gives ProdBot web access. It can now browse a simulated internet of news, finance, sports, and shopping sites. What could go wrong when an AI reads untrusted content?
  • Level 3 connects ProdBot to MCP servers…external tool providers for stock quotes, web browsing, and cloud backup. More tools, more power, more ways in.
  • Level 4 adds org-approved skills and persistent memory. ProdBot can now run pre-built automation plugins and remember your preferences across sessions. Trust is layered…but is it earned?
  • Level 5 is everything coming together: six specialized agents, three MCP servers, three skills, and a simulated open-source project web. The platform claims all agents are sandboxed and all data is pre-verified. Time to put that to the test.

Each level builds on the previous one, and that progression is the point.

We aren’t going to tell you exactly which vulnerabilities you will find at each level as that would ruin the fun. But we will say this: the attack patterns you will discover in Season 4 are not theoretical. They reflect the kinds of risks that security teams are grappling with right now as organizations deploy autonomous AI systems into production.

Think about CVE-2026-25253 (CVSS 8.8 – High): Known as “ClawBleed” or the one-click Remote Code Execution (RCE) vulnerability. It allowed attackers to steal authentication tokens via a malicious link and gain full control of the OpenClaw instance.

The goal is not just to learn a specific exploit. It is to build the instinct that helps you spot these patterns in the wild, whether you are reviewing an agent’s architecture, auditing a tool integration, or simply deciding how much autonomy to give the AI assistant that just landed on your team.

Get started in under 2 minutes

This entire experience runs in GitHub Codespaces, so there is nothing to install, nothing to configure, and it doesn’t cost you a penny (Codespaces offers up to 60 hours of free usage per month). You can be inside ProdBot’s terminal in under two minutes, and each season is self-contained, so you can jump straight into Season 4 without covering the earlier ones.

You may find Season 3 to be a helpful foundation since it builds the basics of AI security. But it is not required. Just bring your hacker mindset.

Special thanks to Rahul Zhade, Staff Product Security Engineer at GitHub, and Bartosz Gałek, creator of Season 3, for testing and improving Season 4.

FAQ

Do I need AI or coding experience to play Season 4?

No. Everything happens through natural language in the CLI. You type plain English, or any language, prompts and ProdBot responds. Curiosity and a willingness to experiment are all you need.

 

Do I need to complete previous seasons first?

No. Each season is self-contained. You can jump directly into Season 4 by running ProdBot and typing level <N>. That said, Season 3 builds a helpful foundation in AI security and takes about 1.5 hours.

 

How long does Season 4 take?

Approximately two hours, though it varies depending on how deeply you explore each level. Some players like to try multiple approaches per level.

 

Is this free?

Yes. The Secure Code Game is open source and free to play. It runs in GitHub Codespaces, which provides up to 60 hours of free usage per month.

 

What are the rate limits?

Season 4 uses GitHub Models, which have rate limits. If you hit a limit, wait for it to reset and resume. Learn more about responsible use of GitHub Models.

The post Hack the AI agent: Build agentic AI security skills with the GitHub Secure Code Game appeared first on The GitHub Blog.

Can AI Help “Solve” The Child Porn Problem? Magic 8 Ball Says, “Answer Hazy – Ask Again Later”

The technological trajectory is clear: Hash-based systems anchored in the National Center for Missing and Exploited Children (“NCMEC”) database remain highly effective for identifying known CSAM, but they are structurally incapable of addressing synthetic, modified, or previously unseen material. Machine learning systems—trained on large corpora of images—offer the only plausible path forward for detecting novel..

The post Can AI Help “Solve” The Child Porn Problem? Magic 8 Ball Says, “Answer Hazy – Ask Again Later” appeared first on Security Boulevard.

The Dark Web Explained with John Hammond

The dark web is often misunderstood, but it plays an important role in both privacy technology and cybercrime activity. In this episode, Tom Eston speaks with cybersecurity researcher and educator John Hammond about what the dark web actually is and how it has evolved in recent years. The discussion covers underground marketplaces, ransomware leak sites, […]

The post The Dark Web Explained with John Hammond appeared first on Shared Security Podcast.

The post The Dark Web Explained with John Hammond appeared first on Security Boulevard.

💾

❌