Normal view

Yesterday — May 8, 2026 — Main stream
  • ✇Security | CIO
  • “Hiring is now an attack path”: AI-powered fake IT workers spread as an insider threat to enterprises
    Hiring fake IT workers has become an increasingly serious problem in recent years, but few companies are willing to admit it publicly. From Fortune 500 companies down to smaller organizations, remote hiring practices have been exploited to grant trusted access to individuals who are not who they claim to be, which can turn into an insider threat. Estimates suggest thousands of fake IT workers are operating across the US, positioned to steal information, intellectual property (IP), and data, as well as outsource work offshore, disrupt systems, or funnel money to foreign governments. Steve Schmidt, chief security officer (CSO) of US company Amazon, said the company has “blocked more than 1,800 attempts by North Korea to secure IT roles, and the numbers keep rising.” Some individuals impersonate US employees for personal gain; in other cases, state-backed
     

“Hiring is now an attack path”: AI-powered fake IT workers spread as an insider threat to enterprises

May 8, 2026, 03:58

Hiring fake IT workers has become an increasingly serious problem in recent years, but few companies are willing to admit it publicly. From Fortune 500 companies down to smaller organizations, remote hiring practices have been exploited to grant trusted access to individuals who are not who they claim to be, which can turn into an insider threat.

Estimates suggest thousands of fake IT workers are operating across the US, positioned to steal information, intellectual property (IP), and data, as well as outsource work offshore, disrupt systems, or funnel money to foreign governments.

Steve Schmidt, chief security officer (CSO) of US company Amazon, said the company has “blocked more than 1,800 attempts by North Korea to secure IT roles, and the numbers keep rising.”

Some individuals impersonate US employees for personal gain; in other cases, state-level organizations such as North Korea pose as IT workers to raise funds and pursue other illicit aims.

AI technology is now escalating these threats, enabling deepfake generation, more convincing video interviews, and rapid identity changes.

Schmidt warned that attack methods are shifting as well, evolving beyond merely fabricating profiles to buying and using the identities of real Americans.

“This is not a recruiting scam in the traditional sense,” said Tom Hegel, threat researcher at cybersecurity firm SentinelOne. “It’s an insider-risk problem in which getting hired is the adversary’s first move.”

CIOs, CISOs, and other IT leaders must stay continually alert to fake and fraudulent IT workers, yet organizations often suffer damage without ever realizing it.

How fake workers get through hiring

There is no single point of failure in the recruitment process. Fake and fraudulent IT workers conceal their identities, falsify their skills and experience, and pass interviews and screening without raising suspicion.

SentinelOne has tracked roughly 360 fake personas and more than 1,000 job applications linked to North Korea-affiliated IT worker operations, and said there were genuine attempts to apply for roles at the company itself.

According to Hegel, adversaries are deploying social engineering techniques and identity-obfuscation strategies at an ever-greater scale, and the hiring process serves as their prime entry point.

They build resumes and online profiles on synthetic or stolen identities and pass interviews using scripts, stand-ins, or AI-generated responses. Background checks, which verify only the information submitted, let these fabrications straight through.

“Fake job seekers now leverage AI tools to mimic legitimate candidates,” Hegel explained. “They create synthetic identities that pass initial identity checks, falsify employment histories, and respond convincingly in interviews with real-time AI assistance.”

Investigations by security firm Flashpoint have uncovered malware-infected systems storing HR and recruiting-platform credentials, browser histories containing translated interview coaching notes, “laptop farms” used to remotely operate corporate equipment from overseas, and shell companies set up to validate fabricated work histories.

The real problem begins after the hire. Once hiring is complete, accounts and equipment are issued and system access is granted, quickly turning the impostor into a trusted insider. “The long-term risk isn’t limited to hiring a fake employee,” Hegel warned. “It can end with you opening your own company systems and sensitive data to malicious access.”

How to respond to fake IT workers

The moment a CIO suspects a fake IT worker, the issue shifts from a simple recruiting matter to insider risk management, and the response steps that follow become critical.

George Gerchow, IANS faculty advisor and CSO of Bedrock Data, who oversaw investigations and response during his time at MongoDB, shared his experience of launching an investigation after the company belatedly realized it had hired a North Korea-linked fake IT worker.

The problem began with an attempt to remove an endpoint security solution. “We detected attempts to uninstall security capabilities, including CrowdStrike Overwatch, and the laptop was then caught communicating with a North Korean IP address,” Gerchow explained.

“The combination of security tool tampering and North Korea-linked traffic was a clear signal that this was not typical new-hire behavior,” he added.

The investigation revealed that the worker had passed screening by combining a stolen identity with an AI-generated resume and scripted interview answers. The existing background checks could only confirm the information submitted and could not detect the fabrication.

“Many screening systems fail to identify fabricated work histories, synthetic identities, and recycled developer profiles,” Gerchow explained. “That’s how this person got through hiring and interviews without raising any flags.”

The subsequent investigation confirmed attempts to disable security tooling, establish persistent access on the device, and probe for elevated privileges. “Had they gone undetected, they could eventually have reached our FedRAMP environment,” Gerchow said, underscoring the risk.

Easy-to-miss warning signs: the problem with piecemeal responses

In hindsight, there had been multiple anomalies. Interview video quality was poor and visuals were unclear, the accent was inconsistent from call to call, and interview feedback was scattered, with no system for reviewing it centrally.

A last-minute change to the laptop shipping address was another key clue. “That’s a classic tactic ‘shadow workers’ use,” Gerchow said.

The problem was that none of these signs was individually damning enough to stop the hire. “Because no one was responsible for judging the anomalies together, the pattern wasn’t recognized until the endpoint alert fired,” Gerchow explained.

After discovery: immediate containment and a full investigation

Once the fake worker was confirmed, the team immediately isolated the device, revoked all credentials, conducted a forensic investigation, and notified federal authorities. The investigation confirmed there had been no data exfiltration or lateral movement.

Follow-up measures included strengthening identity verification during hiring, designating a “yellow flag” owner to aggregate early warning signs, and restricting new hires’ access until trust was established.

“Behavior over identity”: strengthen post-hire monitoring

Gerchow also stressed the importance of behavior-based monitoring after hiring: actual usage patterns, more than credentials, are the key to identifying impostors.

Accordingly, companies should designate a reviewer within security or HR to identify inconsistencies in the hiring process, such as degraded interview video quality. AI-generated LinkedIn profiles, resume mismatches, and changed equipment shipping addresses are also key items to check.

Panel interviews and project-based evaluations help identify candidates recycling stolen or fake developer identities, and new hires should initially be restricted from sensitive data and production environments.

Companies should also configure alerts for when security agents such as IAM, EDR, or VPN are disabled, and run detection and response drills that simulate hiring a fake developer.

“Off-hours access, excessively broad searching across internal systems, and attempts to clone documents and code repositories at scale are also key anomalies to watch closely,” Gerchow stressed.

What IT leaders see on the inside

The employment fraud problem is only expected to worsen. Gartner predicts that by 2028, one in four job applicants worldwide will be fake.

“The rise of fake and fraudulent job seekers is spreading to epidemic levels across organizations,” said David Weisong, CIO of Energy Solutions.

According to Weisong, attackers concentrate on high-access technical roles such as DevOps, systems administrators, data engineers, and database administrators, because hires into these roles gain deep visibility into, and control over, core systems.

“These roles effectively hold the keys to the castle,” Weisong explained. “If you’re after system access, they’re far more valuable targets than a standard developer position.”

Operating in a tightly regulated energy market, Energy Solutions is contractually required to employ a US-based workforce and keep its data within the US.

Drawing on first-hand experience identifying fake IT workers, Weisong offered a warning to other IT leaders. One of the earliest signs was an abnormal surge in applicants: hundreds of applications arrived within hours, far out of proportion to the company’s brand profile, suggesting automated or coordinated activity.

“Identity switching” was also seen at the interview stage. “The person who passed the phone screen was different from the one who appeared on the video interview, and sometimes a third person showed up later, all under the same name and resume,” Weisong said.

One root cause is that existing hiring processes validate information and skills in isolation. “Traditional background checks only verify the information provided; they don’t detect fraud,” Weisong pointed out.

An uncomfortable reality for some CIOs is that the work these individuals deliver can be of high quality; detection often comes from anomalies, not performance.

Fake IT workers, however, create business and compliance risk as much as security risk. “Beyond security, fake IT workers pose serious business and compliance risks,” Weisong stressed. “In regulated industries, that can mean contractual breaches, regulatory risk, and damaged customer trust.”

Strategies against fake IT workers

Amazon combines AI-based tools with human review to identify suspicious contact information, fake academic credentials, and fabricated company histories in resumes. Its security teams also flag suspicious LinkedIn profiles, require more in-person interviews and in-office attendance, monitor computer usage patterns and work quality, and apply physical token-based authentication.

In an interview with Fortune, Steve Schmidt stressed that close collaboration between IT and HR is the key to solving the problem. “It’s actually a lot cheaper for the HR organization if we discover the problem up front,” he said.

SentinelOne’s Hegel argued that the approach to hiring itself must change. “Treat hiring as an access control problem, not just an HR procedure,” he explained. “Don’t end with a one-time identity checkbox; treat remote hiring the way you would grant privileged access.”

Based on his experience, Energy Solutions’ Weisong introduced sweeping changes across the company’s hiring systems and internal processes.

Starting at the job posting stage, every document spells out the requirements and responsibilities so that technical candidates clearly understand them. “Removing the phrase ‘fully remote’ in particular produced a noticeable drop in fraud attempts and overseas applications,” Weisong said.

“A zero-trust approach would be ideal, but it must not impede the hiring process itself or discourage legitimate candidates,” he added. “You need sufficient countermeasures to keep automated, fraudulent applicants out of the pipeline in the first place.”

To address the applicant surge, Energy Solutions applies strict CAPTCHA settings to its job postings, has expanded network-based hiring through employee referral bonuses, and runs a 90-day performance verification period for new hires.

During screening, interviews are conducted by video rather than phone, and screen sharing is required for live exercises. Post-interview reports verify each applicant’s actual location, and connections from outside the US are classified as a yellow/red flag.

Applicants must select the office they will work from and acknowledge that using AI during interviews can result in disqualification.

To verify work history and references, at least two references are required, one of whom must be a former supervisor or manager. Past employment and previous employers are checked, and a full home address is mandatory.

To control access, role kick-off documents now record in advance whether a new position involves elevated access to sensitive information.

On their first day, new hires must come into an office to pick up equipment and complete onboarding training; all roles start onsite, with hybrid work allowed only after performance is verified.

“Solving this requires re-examining the hiring process, partnering closely with HR, and continually reviewing the effectiveness of each countermeasure,” Weisong stressed. “The hiring system itself isn’t broken; the approach must be to build trust step by step.”
dl-ciokorea@foundryco.com

The day before yesterday — Main stream
  • ✇Security | CIO
  • The AI assessment gap: Why your hiring process can’t find the talent you need
    The next time someone on your team says, ‘hire an AI engineer,’ stop the conversation. That title is too vague because it fails to account for critical differences in engineering strengths. Instead, companies need to decide specifically what they need. Is it someone to rapidly prototype AI solutions? Or someone to build the solution that makes it ready for production? Or someone to design the supporting capabilities and infrastructure to scale it? These are all different
     

The AI assessment gap: Why your hiring process can’t find the talent you need

May 6, 2026, 07:00

The next time someone on your team says, ‘hire an AI engineer,’ stop the conversation.

That title is too vague because it fails to account for critical differences in engineering strengths. Instead, companies need to decide specifically what they need. Is it someone to rapidly prototype AI solutions? Or someone to build the solution that makes it ready for production? Or someone to design the supporting capabilities and infrastructure to scale it? These are all different skills and require different assessments during the hiring process.

But here’s where companies also fall short. Assessing skills is hard and assessments, as we know them, are broken when it comes to AI. They’re misaligned with what AI roles actually demand. That misalignment is what I call the AI assessment gap.

Where the gap lives

Most technical assessments were built for a pre-AI world: Coding proficiency, algorithms, deterministic system design. These are skills tests. They confirm that an engineer can do the work. But they don’t tell you whether that engineer has the technical taste to make the right decisions when building, scaling or deploying AI systems in production.

In conversations with enterprise engineering leaders, we’re hearing that candidates are now running AI agents during live interviews, getting textbook-perfect answers fed to them in real time. If your assessment can be passed by an AI whispering in someone’s ear, it was never testing for the right thing. Skills can be faked or augmented. Taste can’t.

To see what this looks like, consider this scenario: An enterprise needs someone with deep experience in a specific data platform. A candidate passes the data engineering assessment. They get to the client interview, and the hiring manager says: ‘Tell me about a time you had to make a hard tradeoff in designing a streaming architecture.’ The candidate can define every concept involved, but they don’t have the taste to explain why one approach would be dramatically better than another for a specific context. They’re out.

This happens because most assessment pipelines only test for skills: Can they code, understand the fundamentals? Nobody is systematically testing for technical taste: Can this person make better-than-default decisions about architecture, tooling and approach? That question only surfaces once someone with real experience asks it. By then, everyone has wasted time and the role is still open.

Traditional job postings compound the problem by filtering for ‘5+ years of AI experience,’ which screens out strong candidates because the category itself is only a few years old. What matters at the AI layer isn’t tenure. It’s the depth and specificity of what someone has built, deployed or scaled in production. Meanwhile, seniority at the foundational role level still matters in the ways it always has: A senior engineer brings architectural judgment that can’t be shortcut. The mistake is applying years-of-experience filters to the AI layers, where the work hasn’t existed long enough for tenure to be a meaningful signal.

One of the most telling signals of a broken assessment process: When stakeholders simultaneously complain that assessments are too hard and too easy. That’s not a calibration problem. It means the assessments aren’t measuring the right things in the first place. They’re testing for skills when they should be testing for taste.

Start with the work, not the title

To close the AI assessment gap, decompose the problem before you assess and decompose the need across the dimensions that actually determine whether someone can do the job. For example:

Dimension: Role
  The question: What technical domain does the work live in?
  What it determines: Foundational skills and stack (e.g., backend engineer, Python, PostgreSQL)
  How you evaluate: Skills assessment: Project-based or simulation-based filter that confirms core engineering competency

Dimension: Seniority
  The question: What level of judgment and autonomy does this work require?
  What it determines: Engineering maturity, depth of technical taste, ability to operate under ambiguity
  How you evaluate: Experience depth at the role level: Years of practice in the domain, complexity of systems designed and shipped

Dimension: AI Engagement Pattern
  The question: How will this person engage with AI systems?
  What it determines: The specific technical taste required (e.g., Prototyper needs taste for sensing value; Builder needs taste for architecture and integration decisions; Scaler needs taste for performance, governance and risk tradeoffs)
  How you evaluate: Applied assessments: Not ‘define RAG’ but ‘given this use case, which approach would you choose and why?’ Testing for tradeoff reasoning, not terminology

This decomposition replaces the single job description with a structured picture of what you actually need. It also immediately reveals whether you’re looking for one person or a team. If the project requires rapid prototyping to find value and then a production build, you probably need engineers with different profiles–not one ‘AI engineer’ who’s supposed to do both.
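The decomposition described above can be represented as a small data model. The sketch below is illustrative (the class and field names are assumptions, not from the article); it makes concrete the point that a project spanning several AI engagement patterns is usually several hires, not one:

```python
from dataclasses import dataclass
from enum import Enum

class EngagementPattern(Enum):
    PROTOTYPER = "prototyper"  # taste for sensing value quickly
    BUILDER = "builder"        # taste for architecture and integration decisions
    SCALER = "scaler"          # taste for performance, governance, risk tradeoffs

@dataclass
class RoleRequirement:
    """One hiring need, decomposed along the three dimensions."""
    role: str                   # technical domain, e.g. "backend engineer"
    seniority_years: int        # meaningful at the role level only
    pattern: EngagementPattern  # how the hire engages with AI systems

def decompose(project_phases, role, seniority_years):
    """A project that needs several engagement patterns yields
    several distinct requirements, not one 'AI engineer' posting."""
    return [RoleRequirement(role, seniority_years, p) for p in project_phases]

needs = decompose([EngagementPattern.PROTOTYPER, EngagementPattern.BUILDER],
                  role="backend engineer", seniority_years=5)
print(len(needs))  # prints 2: two distinct profiles, not one posting
```

The structured requirement, rather than a job title, is then what each assessment is built against.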

Three things most enterprises get wrong

  1. They test for skills when they should test for taste. Most assessments confirm that an engineer can write code and define concepts. They don’t test whether that engineer can make the architectural and tooling decisions that actually determine project success. An engineer who knows what agentic search is and an engineer who knows when agentic search is the right choice for a specific problem are two completely different hires. The first passes your skills test. The second delivers in production.
  2. They conflate skills with experience. A skills assessment tells you if someone can do the work. An experience validation tells you if someone has done the work in the specific context the job demands. These require completely different evaluation methods. When companies try to test both with a single instrument, they get the ‘too hard and too easy’ paradox: The assessment is simultaneously screening out competent people and letting through candidates who can’t perform. Seniority and years of experience are meaningful at the role level, where 10 years of backend engineering builds real architectural judgment and compounds technical taste. They’re much less meaningful at the AI engagement layer, where the work itself is only a few years old and depth of hands-on exposure matters more than calendar time.
  3. They treat assessment as a snapshot. The traditional model is a one-time gate: Pass or fail, in or out. In an AI world where skills are evolving monthly, that approach breaks down fast. Six months ago, almost nobody was shipping production code with agentic tools like Claude Code. Model Context Protocol, which lets AI systems plug into enterprise tools and data sources, was barely on anyone’s radar. Now enterprises are hiring for these skills specifically. Six months from now, the list will change again.

That means an assessment built in January is already partially stale by June. Companies that treat assessment as a living system, continuously updated by performance signals from real engagements, will consistently field better talent than those running the same tests they built a year ago.

The reskilling imperative

The reality is, there is no way to close this gap through hiring alone. The supply of engineers who already have the technical taste for AI work is a tiny fraction of what the market demands. For example, since the launch of ChatGPT in 2022, demand for roles that require more analytical, technical or creative work has increased by 20%.

Which means enterprises have to reskill and upskill existing workforces. And without a targeted approach mapped to actual needs, AI upskilling efforts often fail, leaving employees unsupported and initiatives stalled.

This is where the multi-dimensional model pays off beyond hiring. The same framework that powers your talent acquisition also powers your training strategy. Assessment results don’t just filter candidates in or out. They generate a heat map of where your workforce is strong and where it’s thin, across every dimension: Role competency, seniority depth and the specific technical taste required for prototyping, building or scaling AI systems. That heat map becomes your training roadmap.
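As a sketch of how assessment results could roll up into such a heat map (the record format, dimension names, and threshold are assumptions for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical assessment records: (employee, dimension, score 0-100)
results = [
    ("ana",  "role_competency", 82), ("ana",  "ai_taste_builder", 45),
    ("ben",  "role_competency", 91), ("ben",  "ai_taste_builder", 38),
    ("cara", "role_competency", 60), ("cara", "ai_taste_scaler",  72),
]

def heat_map(records, weak_threshold=50):
    """Average score per dimension; dimensions below the threshold
    become the training roadmap rather than a hiring filter."""
    by_dim = defaultdict(list)
    for _, dim, score in records:
        by_dim[dim].append(score)
    averages = {dim: mean(scores) for dim, scores in by_dim.items()}
    gaps = sorted(d for d, avg in averages.items() if avg < weak_threshold)
    return averages, gaps

averages, gaps = heat_map(results)
print(gaps)  # prints ['ai_taste_builder']: where training should focus
```

The same aggregation that filters candidates in hiring doubles here as the map of where the existing workforce is thin.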

Most companies skip this entirely and jump straight to ‘let’s buy an AI training program.’ Without that foundation, even the best training program is solving the wrong problem.

Ever ready

In the world of AI, the most critical skill might be knowing that you don’t and can’t possibly know everything. Or even what’s coming next. The roles we need today will look different in six months. The skills taxonomies we build now will need constant revision. The assessments we deploy this quarter will need recalibration by next quarter.

Companies that accept this reality and build nimble, multi-dimensional approaches to talent assessment will find something valuable: The technical taste they need already exists in their workforce, hiding behind outdated job descriptions and misaligned tests. CIOs must actively audit these descriptions to eliminate the traditional experience filters that mask the latent talent already sitting on their teams. The others will keep posting for ‘AI engineers’ and wondering why nobody who gets hired can actually do the job.

This article is published as part of the Foundry Expert Contributor Network.

  • ✇Security | CIO
  • The fake IT worker problem CIOs can’t ignore
    Hiring fake IT workers has been a growing problem in recent years — but it’s often a problem very few want to admit to. From Fortune 500 companies down to smaller organizations, remote hiring practices have been exploited to grant trusted access to individuals who are not who they claim to be creating an insider threat risk. Estimates suggest there are thousands of fake IT workers operating across the US who are in a position to steal information, IP and data, outsource
     

The fake IT worker problem CIOs can’t ignore

May 5, 2026, 07:01

Hiring fake IT workers has been a growing problem in recent years — but it’s often a problem very few want to admit to. From Fortune 500 companies down to smaller organizations, remote hiring practices have been exploited to grant trusted access to individuals who are not who they claim to be, creating an insider threat risk.

Estimates suggest there are thousands of fake IT workers operating across the US who are in a position to steal information, IP and data, outsource work offshore, carry out sabotage, or funnel money to foreign governments.

Amazon has identified and blocked more than 1,800 attempts by North Korea to secure IT roles — and the numbers are rising, according to its chief security officer, Steve Schmidt.

In some cases, individuals impersonate US employees for personal gain; in others, state-based operatives such as those from North Korea pose as IT workers for state financial gain and other nefarious purposes.

AI is now enabling deepfakes, more convincing video interviews, and rapid identity cycling.

Adversary tactics are also shifting, from fabricating profiles to purchasing legitimate American identities, Schmidt has warned.

“This is not a ‘recruiting scam’ in the traditional sense. It’s an insider-risk problem, where the adversary’s first move is to get hired,” says Tom Hegel, distinguished threat researcher at SentinelOne.

CIOs, CISOs, and other IT leaders need to be continually on guard against fake and fraudulent IT workers, but organizations can fall victim without realizing it.

How fake hires get through

There’s no single point of failure in the recruitment process. Fake and fraudulent IT workers conceal their identity, falsify their skills and experience, and move through interview and screening processes undetected.

SentinelOne has tracked roughly 360 fake personas and more than 1,000 job applications linked to North Korean IT worker operations, including attempts to apply for roles within the company itself.

According to Hegel, adversaries are increasingly deploying social engineering tactics and identity obfuscation at scale, and the hiring process is a prime entry point.

Synthetic or stolen identities are used to create resumes and online profiles; interviews are passed with the help of scripts, stand-ins, or AI-assisted responses; and background checks confirm only what’s presented to them.

“Fake job seekers now leverage AI tools to mimic legitimate candidates, creating synthetic identities that pass initial background checks, falsifying employment histories and even responding convincingly in interviews using real-time AI assistance,” Hegel says.

Flashpoint investigations have found malware-infected hosts containing HR and job-board logins, browser histories showing Google-translated coaching notes, remote-access “laptop farms” used to control corporate devices from overseas, and shell companies set up to pass reference checks for fabricated resumes.

Once they’re hired, credentials are issued, equipment is shipped, and access is granted — and they become a trusted insider. “The long-term risk isn’t just hiring a fake employee — it’s unknowingly opening your systems and sensitive data to malicious access,” he says.

What to do if you suspect a fake IT worker

When a CIO suspects a fake IT worker, next steps are important as the issue shifts from recruitment to insider risk management.

During his time at MongoDB, George Gerchow, IANS faculty advisor and Bedrock Data CSO, oversaw the investigation after the company detected it had unknowingly hired a North Korean IT worker.

It was first discovered after alerts that an individual was attempting to uninstall endpoint protections, including CrowdStrike Overwatch. “Overwatch then detected the laptop communicating with a North Korean IP address,” says Gerchow.

“That combination of tool tampering plus DPRK-linked traffic immediately signaled that this was not a typical new hire,” he tells CIO.

Mongo realized the fake worker used a stolen identity, paired with AI-generated resume content and scripted interview responses, to evade background checks that verify only the information provided and do not detect fraud.


It highlights a gap in many background checks. “They don’t detect fabricated work histories, synthetic identities, or recycled developer profiles, which is how this individual passed screening and interviews without raising formal flags,” he says.

The subsequent investigation found attempts to disable security tooling, establish persistence on the device, and probe for elevated access.

“Had they remained undetected, their access would have eventually expanded into our FedRAMP environment, which makes these fraud techniques especially high-risk,” Gerchow adds.

After the discovery, several yellow flags became obvious such as poor video quality and unclear visuals during interviews, a noticeably inconsistent accent between calls, and scattered interview feedback with no centralized review.

Another tell was a last-minute change to the laptop shipping address. “That’s a common shadow-worker tactic,” notes Gerchow.

With hindsight, Gerchow joined the dots and it became clear how the person had made it through to employment because any irregularities were treated in isolation.

“None of these individually would prevent a hire. However, because no one was responsible for aggregating subtle anomalies, the pattern wasn’t recognized until the endpoint alert fired,” he says.

When they were discovered, the team quickly isolated the device, revoked all credentials, conducted a full forensic investigation, and notified federal authorities. “We verified there was no data exfiltration or lateral movement,” he says.

The mitigation steps introduced included strengthening identity fraud screening in the hiring process, assigning a Yellow Flag owner to connect early signals, and enforcing zero access until trust is earned for new hires.

Gerchow also believes that behavioral telemetry post-hire is necessary, because behavior, not credentials, reveals impostors.

Mongo recommends organizations designate a reviewer in Security or HR to identify inconsistencies in the hiring process, such as poor video quality. “Also watch for AI-generated LinkedIn profiles, mismatched resumes and questionable changes in laptop shipping addresses,” he says.

“Use panel interviews and project-based evaluations to identify candidates who recycle stolen or fake developer identities, and start new hires without access to sensitive data or production environments,” he advises.

Then employ alerts if security agents (such as IAM, EDR, or VPN) are disabled before a new hire logs in, and test detection, escalation, and device recovery by simulating the hiring of a fake developer.
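A minimal sketch of that kind of alert, assuming a hypothetical event feed of (timestamp, agent, status) tuples rather than any particular product’s API:

```python
from datetime import datetime

REQUIRED_AGENTS = {"iam", "edr", "vpn"}  # illustrative agent names

def agent_alerts(device_events, first_login):
    """Flag any required security agent disabled before the new hire's
    first login, the window the advice above targets.
    device_events: list of (timestamp, agent, status) tuples."""
    alerts = []
    for ts, agent, status in device_events:
        if agent in REQUIRED_AGENTS and status == "disabled" and ts < first_login:
            alerts.append((ts, agent))
    return alerts

events = [
    (datetime(2026, 5, 4, 22, 15), "edr", "disabled"),  # tampering pre-login
    (datetime(2026, 5, 5, 9, 0),   "vpn", "running"),
]
print(agent_alerts(events, first_login=datetime(2026, 5, 5, 8, 30)))
```

In practice the feed would come from EDR/MDM telemetry; the point is that the rule keys on the pre-login window, when no legitimate new hire should be touching security tooling.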

“And look for off-hours access, broad internal search activity and large-scale cloning of documents or code repositories,” he adds.
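Those three behaviors could be aggregated roughly as follows; the event schema and thresholds are illustrative assumptions and would be tuned per organization:

```python
def insider_risk_signals(events, work_start=8, work_end=18, clone_threshold=20):
    """Score the three behaviors called out above. `events` is a list of
    dicts with hypothetical keys: 'hour', 'action', 'repo'."""
    off_hours = sum(1 for e in events
                    if not work_start <= e["hour"] < work_end)
    searches = sum(1 for e in events if e["action"] == "search")
    cloned_repos = {e["repo"] for e in events if e["action"] == "clone"}
    return {
        "off_hours_events": off_hours,
        "broad_search": searches,
        "mass_cloning": len(cloned_repos) >= clone_threshold,
    }

events = [{"hour": 2, "action": "search", "repo": None},
          {"hour": 3, "action": "clone", "repo": "repo-1"}]
print(insider_risk_signals(events))
# {'off_hours_events': 2, 'broad_search': 1, 'mass_cloning': False}
```

None of these signals is damning alone, which is exactly why Gerchow argues for an owner who aggregates them.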

What IT leaders see on the inside

The problem of employment fraud is only expected to worsen, with Gartner predicting that one in four candidate profiles worldwide will be fake by 2028.

“The rise of fake and fraudulent job applicants has become an epidemic across organizations,” says David Weisong, CIO of Energy Solutions.

Weisong says attackers consistently target high-access technical roles such as DevOps, systems administrators, data engineers, and database administrators, where successful hires can gain deep visibility and control over core systems.

“These are the roles with the keys to the castle,” Weisong says. “If you’re trying to gain access, they’re far more valuable than a standard developer position.”

Operating in a regulated energy market, Energy Solutions is contractually required to employ a US-based workforce and keep data within US jurisdiction.

Weisong has first-hand experience with detecting fake IT workers and wants to share his advice with other IT leaders. One of the earliest warning signs was a sudden, abnormal surge in applications — hundreds arriving within hours, far out of proportion to the company’s brand profile, pointing to automated or coordinated activity.
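A surge like the one Weisong describes could be flagged with a simple comparison against a posting’s historical baseline; the factor and minimum count here are illustrative assumptions:

```python
def surge_alert(hourly_counts, baseline, factor=10, min_count=50):
    """Return the hours whose application volume is far above the
    posting's historical hourly baseline, suggesting automation."""
    return [hour for hour, n in hourly_counts.items()
            if n >= min_count and n > factor * baseline]

# Hypothetical counts for one posting; baseline is ~3 applications/hour.
counts = {"09:00": 4, "10:00": 312, "11:00": 280}
print(surge_alert(counts, baseline=3))  # prints ['10:00', '11:00']
```

Volume far out of proportion to the company’s brand profile is the signal, so the baseline matters more than any absolute number.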

During the interview stage, identity switching was observed. “We saw cases where one person passed the phone screen, a different person showed up on Zoom, and sometimes a third appeared later — all under the same name and resume,” Weisong says.

Part of the problem is that standard hiring practices validate information and skills in isolation. “Traditional background checks only verify the information provided and do not detect fraud,” Weisong also notes.

The uncomfortable reality for some CIOs is that the work may be completed to a high standard and detection comes from signals, not performance.

Fake IT workers create business and compliance risk as much as security risk, Weisong says, exposing organizations, particularly in regulated industries, to contractual breaches, regulatory scrutiny, and loss of client trust.

Combating the problem of fake IT workers

Amazon is using AI-based tools with human oversight to identify unusual contact information, as well as fake academic institutions and companies in resumes, according to Schmidt. Security teams will flag LinkedIn profiles that look suspicious, require more in-person interviews and in-office attendance, monitor computer usage and quality of work, and authenticate with a physical token.

He has also said that IT and HR need to collaborate on hiring to combat the problem.

“It’s actually a lot cheaper for the HR organization if we discover the problem up front,” Amazon’s Schmidt told Fortune.

The shift required, says SentinelOne’s Hegel, is treating hiring decisions as an access control problem rather than a recruitment task. “Stop treating identity as a one-time HR checkbox and start treating remote hiring like you would grant privileged access,” he says.

In the wake of his experience, Weisong instituted a raft of changes to the company’s applicant tracking system and across its internal systems and processes.

When advertising positions, the company makes sure candidates applying for technical roles understand the expectations and consequences outlined in all written communication. “Additionally, removing the term ‘fully remote’ from our hiring practices has significantly reduced opportunities for fraud and for applicants applying from outside the US,” he says.

“While a ‘zero-trust’ approach would be ideal for all hiring, we cannot allow it to impede the process or discourage legitimate candidates from applying. Instead, we need sufficient countermeasures to prevent automated and fraudulent applicants from reaching the pipeline in the first place,” he adds.

To control the large volume of applications, many of which are bots, Energy Solutions job listings now have strict CAPTCHA settings, referral bonuses help draw on employee networks, and there’s a 90-day satisfactory performance review for new hires.

During the screening process, interviews are conducted via video not phone, and applicants must share their screen for live challenges. A post-video interview report allows them to verify the exact location of applicants after screening and interview meetings. If a candidate is outside the US, it’s treated as a Yellow/Red flag.
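One way such a post-interview location report could feed the flagging step; the exact rule mapping below is an assumption, since the article only says non-US locations become a yellow/red flag:

```python
def location_flag(ip_country, stated_state, vpn_detected=False):
    """Map a verified applicant location onto a yellow/red flag scheme.
    Assumed rules: outside the US is the strongest signal; a US exit
    node behind a VPN still warrants review."""
    if ip_country != "US":
        return "red"
    if vpn_detected:
        return "yellow"  # anonymized origin despite a US address
    return "green" if stated_state else "yellow"

print(location_flag("KP", "CA"))        # prints red
print(location_flag("US", "CA", True))  # prints yellow
```

A flag here would route the candidate to the human reviewer rather than auto-reject, keeping legitimate applicants in the pipeline.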

Applicants must select which office they want to work from and they must acknowledge they understand use of AI during interviews will result in disqualification.

To verify references and employment history, they require two references, one of whom must be a former supervisor or manager. Employment history is checked, including previous employers, and a full home address must be provided.

To guard access, a question has been added to the job kick-off form that indicates whether a new role will have elevated access to confidential or sensitive information.

The first day on the job requires new hires to come into an office to pick up equipment and undertake training and onboarding. All roles must be onsite, with the option to go hybrid after satisfactory performance.

Combating the problem, says Weisong, requires reviewing hiring processes, partnering closely with HR, and monitoring the effectiveness of each countermeasure. For CIOs, the lesson is not that hiring is broken, but that trust must be earned progressively.

  • ✇Security | CIO
  • CISOs step up to the security workforce challenge
    A robust cybersecurity program needs a range of skilled people, yet many CISOs continue to face an ongoing skills shortage — and the squeeze may only get worse as AI gains traction. Some 95% of cybersecurity practitioners and decision-makers noted at least one security skills gap at their organization, with almost 60% citing critical or significant skills gaps, according to ISC2’s 2025 Cybersecurity Workforce Study. AI is the most pressing skill need, followed by clo
     

CISOs step up to the security workforce challenge

May 5, 2026, 06:00

A robust cybersecurity program needs a range of skilled people, yet many CISOs continue to face an ongoing skills shortage — and the squeeze may only get worse as AI gains traction.

Some 95% of cybersecurity practitioners and decision-makers noted at least one security skills gap at their organization, with almost 60% citing critical or significant skills gaps, according to ISC2’s 2025 Cybersecurity Workforce Study.

AI is the most pressing skill need, followed by cloud security, risk assessment, application security, security engineering, and governance, risk, and compliance (GRC), the survey found.

There are no simple solutions for a profession that requires passion, curiosity, and a thirst for defending systems. Such professionals are a rare breed.

“You need to have a special mindset,” says Juan Gomez-Sanchez, VP of cyber resilience at McLane Company.

“While IT people are obsessed with how things work, security people are obsessed with how things break, and people who are truly effective and passionate about that can be difficult to find,” says Gomez-Sanchez.

Add to that the fact that cybersecurity degree programs are demanding, technology is changing rapidly, and the profession is still comparatively young, and the true extent of the problem becomes clear.

When CISOs can't hire the skills they need, some look to in-house training and development to foster the expertise their teams require.

“Hiring certain types of security professionals can be very difficult because the skills are not held by a lot of people, so I look for someone who’s got a solid security foundation in one or more other areas and transition them,” says Keith Turpin, CISO of The Friedkin Group.

This is its own challenge, requiring time, a good deal of unlearning, and deliberate honing of that ‘how to break’ security mindset. For example, Turpin says, upskilling “someone who's competent in networking, server administration, or software development to the equivalent security role takes an additional two years.”

Turpin has found that just establishing the security mindset can take up to a year within that timeframe. “Instead of thinking, ‘How do I keep it going,’ as the security person it’s thinking, ‘How can it go wrong.’ It’s a different approach,” he says.

“If I can find someone who’s got the right drive, the right people skills, they’re a good cultural fit, and they have the potential, I can turn them into a good technologist,” adds Turpin, who like Gomez-Sanchez will be inducted into the CSO Hall of Fame this year.

Gomez-Sanchez and Turpin are speaking at the CSO Cybersecurity Awards & Conference, May 11-13. Reserve your place.

AI changes the equation

And then there’s AI. When it comes to security, AI may help partially offset cyber skills shortages by automating certain tasks, but it also ramps up cyberattack volumes and expands the organizational attack surface, without fixing CISOs’ ongoing talent pipeline problems. In fact, AI may end up worsening the structural skills shortage.

“You can have 100, 1,000, 10,000 instances of AI doing the work of enabling attacks at much greater scale, including against smaller, less protected targets because they’re now within reach because the barrier is lower,” says Turpin.

This intensifies the pressure on defenders and compounds the workforce challenge, even as AI helps automate some tasks. And AI isn't going away; it will only grow in importance for both attackers and defenders.

“I’m encouraging my teams to look for opportunities to leverage AI and look at how our vendors are leveraging AI,” he says.

“This is what we’re going to be dealing with five years down the road. It’s going to be the center of technology so we can’t afford not to learn this,” he adds.

Reducing the organizational risk of skills shortages

Skills shortages are more than just an inconvenience; they pose organizational risks on par with threats and malicious attacks, says Gomez-Sanchez, who views them “much the way that you think about threat actors and vulnerabilities.”

“Your ability to execute is limited by the amount of people you have to actually do the work,” he explains.

As a result, Gomez-Sanchez encourages CISOs to view the skills gaps and talent shortages as a first-class security risk that needs to be managed as a KPI for the security function. “Our ability to attract and retain good talent is a major measure of capability,” he says.

Being structural rather than temporary, skills gaps place significant pressure on CISOs’ sourcing decisions. “Security people may choose to do things differently, especially as it relates to insourcing or outsourcing because of the talent shortage,” Gomez-Sanchez notes.

By the same token, staffing constraints can shape architecture and tooling choices. For example, Gomez-Sanchez adds, a host of best-of-breed point tools instead of a more integrated platform usually requires more headcount and expertise to stitch together.

Gomez-Sanchez also gives the example of adopting a single hyperscaler versus a multicloud strategy and the increase in human workload and skills required to secure it. “Ultimately, you want to leverage native controls within the hyperscaler, and that requires you to have specialized skills in each one of those,” he says.

CISOs have also looked to automation to absorb some headcount pressure, but doing so isn't always a simple fix. Gomez-Sanchez sees agent-enabled automation as a means for providing more firepower for developers and analysts, among other roles. But the reality of agentic AI capabilities for cybersecurity remains a work in progress.

What’s clear is that persistent talent shortages are forcing CISOs to rethink hiring and training as one of numerous ways to reduce the risk that comes with the skills gap. This entrenched problem — and CISOs’ attempts to address it — will also have a significant impact on the decisions security leaders will make regarding cyber architecture, platforms, processes, and AI use ahead.

The cyber talent gap is putting increasing pressure on the cyber agenda, and your peers are already adapting. Hear Juan Gomez-Sanchez, Keith Turpin, Jen Spencer, and other leading CISOs share what’s working at the CSO Cybersecurity Awards & Conference, May 11-13. Secure your seat before it fills up.

AI won’t fix tech talent gaps — but YOU can

May 4, 2026, 06:00

Every CIO I talk to — and I talk to a lot of them — agrees that skills-first hiring makes sense. And with AI now embedded in nearly every stage of the hiring process, from resume screening to candidate matching, many assume the technology will finally make it happen at scale.

It won’t. AI can accelerate hiring decisions, but it can’t fix the underlying systems that power those decisions.

Despite initial progress in removing college degree requirements from job postings, many organizations are still getting it wrong — and AI is giving them new ways to get it wrong faster. Agreeing on a principle isn’t the same as operationalizing one. Even when there’s a skills-first hiring strategy in place, execution fails if IT, HR and business leaders aren’t aligned on outcomes, accountability and measurement. When AI screening tools are layered on top of misaligned systems, the result isn’t smarter hiring; it’s automated bias with a veneer of objectivity.

Why skills-first hiring became a buzzword

The idea of prioritizing skills in hiring decisions isn’t new. Competency-based hiring has been discussed in talent management circles since the 1970s. Over the last two decades, the growing technology skills gap, the explosion of non-traditional learning pathways and the broader recognition of “degree inflation” pushed skills-based hiring into the mainstream. Large employers publicly dropped degree requirements. States followed. Everyone was buzzing about skills-first hiring.

But buzz doesn't change systems. Data from Indeed showed a decrease in job postings with college degree requirements between 2019 and 2024, but by November 2025 the number had swung back up, nearly erasing the gains of the previous five years.

Meanwhile, the skills that matter most now — prompt engineering, AI tool fluency, the ability to scope and complete AI-augmented projects — are being developed outside traditional degree programs. That makes degrees an even worse proxy for career readiness.

Announcing that an organization is “skills-first” without redesigning the infrastructure that surrounds hiring — job descriptions, applicant screening, recruiter training, interview rubrics and onboarding frameworks — doesn’t change practices. A recent survey by the University of Phoenix found that 69% of hiring decision makers believe there’s still too much focus on college degrees, with little clarity on what they should evaluate instead.

The 3 most common failure points

From my experience in working with CIOs on their entry-level talent pipelines, skills-first initiatives tend to break down in one or more of these places: job descriptions, screening tools and internal skepticism.

First, take job descriptions. Hiring managers tend to default to historical templates — pasting in degree requirements and years-of-experience thresholds that were never validated against actual job performance and are even less relevant with AI in the mix.

Second, screening tools. Nearly 90 percent of companies are using some form of AI to screen candidates, expecting greater efficiency and less bias. But AI screening tools learn from existing hiring data — which, if biased, just means that bias is now automated. Patterns in successful candidates’ backgrounds get baked into future decisions, except now, these decisions appear “data-driven” and neutral instead of more obviously predicated on certain hiring managers’ preference for college graduates.
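The mechanism is worth making concrete. Below is a minimal sketch, using entirely hypothetical historical data, of how a screener that learns from past hiring decisions simply automates the bias encoded in them:

```python
# Minimal sketch (hypothetical data): a screener that "learns" from
# historical hiring decisions reproduces whatever bias those decisions encode.

# Historical outcomes: (has_degree, skill_score, was_hired).
# Past managers favored degree holders even at equal skill levels.
history = [
    (True, 7, True), (True, 6, True), (True, 5, True),
    (False, 7, False), (False, 8, False), (False, 6, True),
]

def hire_rate(records, has_degree):
    """Observed hire rate for one group -- this is all the model 'learns'."""
    group = [hired for degree, _, hired in records if degree == has_degree]
    return sum(group) / len(group)

def screen(candidate_has_degree):
    # A naive score mirroring historical base rates: the old preference for
    # graduates now looks like a neutral, data-driven number.
    return hire_rate(history, candidate_has_degree)

degree_score = screen(True)      # 1.0: every degree holder was hired
no_degree_score = screen(False)  # ~0.33: skilled non-graduates were passed over
assert degree_score > no_degree_score
```

A real screening model is far more complex, but the failure mode is the same: the training signal is past decisions, not job performance, so past preferences become future scores.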

The third failure point is internal skepticism — and the training gap that feeds it. According to the survey by the University of Phoenix, one in four non-HR managers receives no training before interviewing job candidates, even when the final hiring decision is theirs. Without shared definitions of what “skills-first” means and clear accountability, the initiative collapses under the weight of individual discretion.

What CIOs see that others miss

CIOs are often closer to the consequences of a bad hire than anyone else in the C-suite. When a cybersecurity analyst freezes during an incident response, the CIO gets a front-row view of the damage a skills gap can cause.

CIOs are also watching AI redefine what “qualified” looks like in real time. The engineer who deploys an agentic AI system to automate monitoring, or the analyst who chains multiple AI tools into a custom workflow — these people deliver outsized value, and their qualifications often look nothing like what a traditional job description demands.

And CIOs have a keen understanding of issues with tech talent pipelines. Projects slip while niche technical roles sit open for six months or more, even as candidates from rigorous programs — people with the specific skills for the job — are filtered out.

How successful CIOs operationalize skills-first hiring

Successful CIOs get specific. They work with their teams to define exactly which skills matter for each role — and they validate those definitions against the performance of current, thriving employees.

That taxonomy should include demonstrated experience with AI tools and platforms: Has the candidate built or deployed an AI agent? Can they work across multiple AI tools? Have they completed projects requiring AI-augmented problem-solving? These concrete, observable skills predict performance far better than a degree ever could.

Second, they establish shared metrics across IT and HR. Organizations that get this right track 90-day performance reviews, first-year retention and promotion velocity alongside traditional recruiting metrics. In its New Collar Program with Sentara Healthcare, TEKsystems worked with company leaders to fill open big data positions through a skills-based cohort model and achieved 80% retention one-year post-training.

Third, these CIOs build direct relationships with employer-aligned training pipelines before a role opens. Bank of America invested nearly $40 million in workforce development initiatives in 2025 alone and partnered with more than 600 nonprofits across the US.

At Per Scholas, our head of IT, Tyrone Washington, makes it clear that while technical skills might get you through the door, it’s “smart skills” — discernment, emotional intelligence, complex problem-solving and agility — that build a career in an AI-driven landscape.

What the data shows

Skills-first hiring, when paired with structured onboarding and development pathways, is not just a talent acquisition strategy — it’s a retention strategy. And higher retention contributes directly to the bottom line, as the fully loaded cost of replacing a technical employee ranges from one to two times their annual salary.
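The retention math is easy to run as a back-of-envelope calculation. The figures below (team size, attrition rates, salary, and the 1.5x replacement multiplier) are purely illustrative, not from the source:

```python
# Back-of-envelope retention math with hypothetical numbers: replacing a
# technical employee is commonly estimated at 1x-2x annual salary.

def replacement_cost(salary, multiplier=1.5):
    # Midpoint of the 1x-2x fully loaded replacement-cost range.
    return salary * multiplier

def annual_savings(headcount, baseline_attrition, improved_attrition, salary):
    """Departures avoided per year times the per-replacement cost."""
    avoided = headcount * (baseline_attrition - improved_attrition)
    return avoided * replacement_cost(salary)

# Example: 50-person team, attrition improves from 20% to 10%, $120k salary.
print(annual_savings(50, 0.20, 0.10, 120_000))  # 900000.0
```

Even with conservative assumptions, a modest retention improvement on a mid-sized team covers a substantial training-program budget.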

In one employer partnership deploying skills-trained talent, a TEKsystems client came out $238,000 ahead in the first year after accounting for program costs, with a projected annual return of over $1.2 million. IT leaders reported that skills-trained talent becomes productive measurably faster than early-career hires from conventional pipelines.

How CIOs can lead the shift — even without owning HR

CIOs who are moving the needle are piloting skills-based hiring for one or two roles, tracking outcomes rigorously and using that data to make the case for broader adoption. They're building external partnerships before they need them. Bank of America, a long-term Per Scholas partner, knows that our graduates are team players, lifelong learners and motivated employees; our graduates' certifications (through CompTIA and Google) validate that they have the technical know-how.

Every quarter that technical roles sit open has a measurable impact on project delivery, team capacity and competitive positioning. Surfacing these costs — backed by data — is something CIOs are uniquely positioned to do.

The bottom line

Skills-first hiring will remain a well-intentioned abstraction unless CIOs treat it as an operating model — one that reflects how AI is reshaping the skills organizations need.

The candidates who can demonstrate hands-on experience in building and deploying AI agents, integrating multiple AI tools into a workflow or evaluating when AI can help are the ones who will drive value. But they’ll keep getting filtered out unless CIOs get specific about skill definitions, align IT and HR around shared metrics, and build employer-aligned pipelines. Bank of America and TEKsystems didn’t achieve their results by endorsing a principle — they achieved them by building systems.

Luckily, building systems is something that CIOs know how to do well.

This article is published as part of the Foundry Expert Contributor Network.

Why hiring ‘AI engineers’ won’t work

April 23, 2026, 06:00

Practically every company today is posting roles to hire an “AI engineer.”  They’re likely assuming that an “AI engineer” can handle everything from product development to infrastructure to data integration. Most of the time, though, they’re going to be disappointed.

That's because assessing the competency of engineers has always been hard, and adding AI to the mix makes it even harder; companies are often testing for the wrong thing. Under the umbrella of "AI engineer," they're collapsing at least three different types of technical work into a single job description, then wondering why the person they hired can't do the job they need done.

At Andela, where we assess, train and vet software engineering talent as the core of our business, we’re finding that basic AI skills assessments produce an almost 75% fail rate.

My first reaction when I saw the 75% fail rate was that we had an assessment problem. But the more I dug in, the clearer it became that the problem was upstream. Those candidates weren’t failing because they lacked skills. Many of them are exceptional engineers. They were failing because the entire industry is based on assessment frameworks that can’t distinguish between the types of AI work that need to be done.

Consider this situation. I was recently reviewing assessment results for a batch of AI engineering candidates. One candidate stood out: strong resume, passed the coding assessment and defined every concept we threw at them: RAG architectures, agentic search, vector databases, prompt chaining. On paper, this person had the skills.

Then we got to design. We presented a real enterprise scenario and asked which approach they’d use and why. The candidate described a RAG implementation. The solution was technically correct and valid. But for this use case, a RAG implementation would have required significantly more engineering while producing less complete results than an agentic search approach. (The problem required dynamic reasoning across multiple data sources rather than retrieval from a fixed index.) The candidate knew the concepts but lacked the judgment to know which solution was dramatically better for the specific problem.

I’d call that a gap in technical taste: the ability to choose between valid options and find the one that’s right for a specific context. And it’s the gap our assessments, and almost every assessment pipeline in the industry, weren’t built to catch.

And it’s costing real money. Enterprises are burning months on mismatched hires, misaligned teams and AI initiatives that stall, not because the technology failed, but because the people doing the work were the wrong people for that particular work. Highlighting this difficulty is ManpowerGroup’s 2026 Talent Shortage Survey, which found that AI skills have surpassed all others to become the most difficult for employers to find globally, with 72% of employers reporting hiring difficulty.

Digging deeper

In my previous article, I spoke about how enterprises should seek to hire Forward Deployed Engineers (FDEs) who can bridge engineering, architecture and business strategy, to push AI past the ‘integration wall’ and into production. FDEs are the expedition leaders. No company has enough of them. No company can afford to hire enough of them for all the work ahead.

So, what do you do below the FDE layer? You have to dig deeper. For every one FDE, teams will need three or four engineers operating in more specific modes of work. In our experience, the AI work that enterprises need done falls along a spectrum defined by three archetypes.

  • Prototypers. These are the rapid experimenters. They are engineers, product managers or designers who use AI tools to quickly test ideas, find value and throw away what doesn’t work. In a previous era, validating a new product concept meant scoping a project, building a team and committing to a six-month build cycle. Now one person with the right tools and good instincts can shortcut that entire process, testing and discarding dozens of ideas to find the ones worth investing in. The prototyper’s technical taste is about sensing what’s valuable before an organization commits real resources.
  • Builders. The engineers who turn validated ideas into production systems. A builder needs to do more than ‘vibe code.’ They need to operate as agentic engineers: architecting the system, orchestrating the agents to build it, verifying the output and shipping with confidence. Critically, building in an AI context means building the full stack, including the data pipelines that organize content from disparate systems, the access controls that govern what the AI can reach and the integration layer that connects AI to the messy reality of enterprise data and infrastructure. Without this end-to-end capability, AI stays trapped in sandboxes. The builder’s technical taste is about choosing the right architecture and integration approach when multiple valid options exist and knowing which one will be dramatically better for a specific production context.
  • Scalers. The engineers responsible for reliability, governance, observability and production AI operations. These professionals were, in a previous era, DevOps engineers. They know how to deploy LLMs and manage the liability of model output at enterprise scale. The scaler’s technical taste is about tradeoffs: performance versus cost, governance rigor versus development velocity and risk tolerance versus time to market.

These aren’t rigid job categories. They’re patterns of AI engagement. In practice, they blend. A backend engineer on a given project might spend 60% of their time doing builder work and 40% on scaling. The point isn’t to put people into boxes. It’s to give enterprises a vocabulary for decomposing what they need, so they stop collapsing fundamentally different work into a single job posting.

These patterns have different toolchains, different skill profiles and different hiring criteria. Companies that treat them as interchangeable will end up building subpar teams. Understanding where your AI initiatives fall along this spectrum is one of the most important change-management decisions enterprises face in an AI-first world, and it’s why companies that identify their specific location on this spectrum move dramatically faster than those hiring generically.

How to think about AI talent

Here’s where it gets practical. Prototypers, builders and scalers are not job titles. They’re lenses that sit on top of the domain roles enterprises have always hired for: frontend engineers, backend engineers, data engineers, DevOps/SRE and so on. To move from the vague ‘AI engineer’ to a structured picture of what you need, you have to think across three dimensions.

  • Role is the foundation: what technical domain does this person work in? Backend, data engineering, DevOps/SRE, full stack? These are the roles enterprises have always hired for. They come with foundational skills like API design, database architecture and CI/CD pipelines. And they come in specific flavors: a Python backend engineer is not a Java backend engineer. This layer hasn’t changed because of AI.
  • Seniority determines the level of judgment and autonomy you can expect. A senior backend engineer with 10 years of experience brings architectural instincts and decision-making under ambiguity that a two-year engineer doesn’t. Seniority is also where technical taste compounds. An engineer with deep experience has seen more tradeoffs, made more wrong calls and developed the pattern recognition that allows them to make better-than-default decisions. Not every role on an AI initiative requires a senior engineer, but the roles that involve system design decisions, risk trade-offs and client-facing judgment absolutely do.
  • AI engagement pattern is how this person engages with AI systems. This is the archetype layer, and it’s what’s new. A backend engineer doing builder work (designing the orchestration logic for an agentic workflow and integrating it with enterprise data) needs fundamentally different technical tastes than that same backend engineer doing scaler work (deploying LLM infrastructure and building observability for model performance). The role is the noun. The archetype is the adjective. And it changes what you need to test for.

In practice, certain role families map naturally to certain AI engagement patterns. Prototypers can come from anywhere (engineering, product, design) and are often already on your team. They're the person who's always building side projects and testing ideas. Builders tend to draw from full-stack, frontend, backend, data engineering and AI/ML talent. Scalers tend to draw from DevOps/SRE, security, backend and infrastructure engineering. Forward-deployed engineers span all of the above, with business acumen and stakeholder fluency.
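The three dimensions can be expressed as a simple data model. The class and field names below are illustrative only, meant to show how a role spec composes rather than to describe any real system:

```python
# A sketch (illustrative names) of the three-dimensional spec: role is the
# noun, the AI engagement pattern is the adjective, seniority sets judgment.
from dataclasses import dataclass
from enum import Enum

class Pattern(Enum):
    PROTOTYPER = "prototyper"
    BUILDER = "builder"
    SCALER = "scaler"

@dataclass(frozen=True)
class RoleSpec:
    domain: str          # e.g. "backend", "data engineering", "devops/sre"
    seniority: str       # e.g. "mid", "senior"
    pattern: Pattern     # how this person engages with AI systems

    def describe(self):
        return f"{self.seniority} {self.domain} engineer ({self.pattern.value} work)"

# The same domain role decomposes into two very different hires:
posting = [
    RoleSpec("backend", "senior", Pattern.BUILDER),  # agentic orchestration + integration
    RoleSpec("backend", "mid", Pattern.SCALER),      # LLM deployment + observability
]
for spec in posting:
    print(spec.describe())
```

Decomposing a vague "AI engineer" posting this way makes it obvious when one listing is actually two or three distinct hires with different assessment criteria.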

Hiring with precision

This multi-dimensional view is what allows enterprises to stop hiring for a vague ‘AI engineer’ and start composing teams with precision. It’s also what makes a credible assessment strategy possible, because now you know what you’re testing for at each level.

This article is published as part of the Foundry Expert Contributor Network.

AI is scoring your job candidates. Can you explain how?

April 20, 2026, 08:00

Somewhere in your organization’s hiring stack, there is probably an AI system producing candidate scores. If you’re a leader who helped evaluate or approve that system, here’s a question worth sitting with: If one of those scores got challenged, by a candidate, an internal audit or a regulator, could your team explain how it was produced?

Not “the vendor said it’s accurate.” Not “the model was trained on historical data.” A specific, documented explanation of what criteria were evaluated, how the candidate performed against them and why those criteria are job-relevant.

For a growing number of organizations using AI video interview scoring tools, the honest answer is no. And as regulatory frameworks targeting employment AI move from guidance to enforcement, that answer is a risk.

What the system is actually optimizing for

Before asking how accurate an AI scoring system is, the right question is what it is optimizing for.

Many video interview scoring platforms evaluate tone of voice, pace, eye contact, facial expressions and fluency alongside, or in some cases instead of, the actual content of candidate responses. The underlying assumption is that these signals correlate with job performance or cultural fit. The evidence for that assumption is weak. The evidence that measuring these signals introduces systematic, legally significant bias is much stronger.

Several major players in this space removed facial analysis features after regulatory pressure and public scrutiny. That acknowledgment — that criteria advertised as objective were neither reliable nor fair — should raise a harder question. If those criteria were in production and no one caught it until outside pressure forced a change, what else is still being measured that shouldn’t be?

This is not a hypothetical risk. The EEOC has made it clear that employers are liable under Title VII for discriminatory outcomes from AI hiring tools, regardless of whether those tools were built in-house or purchased from a vendor. New York City’s Local Law 144 requires annual independent bias audits of automated employment decision tools and public disclosure of results. Illinois requires notice and consent before AI is used to evaluate video interviews. The EU AI Act, whose high-risk AI provisions take full effect this August, explicitly classifies employment AI as high-risk, with binding requirements for transparency, explainability and human oversight.

The common thread: Can you explain what your AI is measuring, and can you demonstrate that it’s measuring the right things?

The accountability problem at the executive level

For technology leaders, this is where the conversation becomes concrete.

Consider the scenario: A hiring decision gets challenged by a candidate, an internal audit or a regulator. The question is how the decision was made. “The AI scored them lower” is not a defensible answer in any of those contexts. It can’t be traced to specific job-relevant criteria. It can’t be explained to the candidate. It won’t satisfy an auditor. And if the system’s logic is proprietary and opaque, the organization has no way to produce a satisfying answer even if it wants to.

The organizations that adopt black-box scoring tools often do so with the right intentions: To reduce human bias and create a more consistent process. Those are legitimate goals. But a system whose internal logic can’t be questioned, explained or audited just obscures bias. It doesn’t reduce it. And when bias becomes difficult to see, it becomes more difficult to address.

This is a pattern you’ll recognize from other domains. When a system produces outcomes that look plausible but are wrong in ways that aren’t immediately visible, the failure compounds before it surfaces. The cost of discovering it late is almost always higher than the cost of building it right from the start.

What a defensible architecture looks like

There is a meaningful difference between AI that scores interviews and AI that scores interviews in a way that can be explained and defended. The distinction is structural.

Defensible scoring starts before any candidate records a response. It starts with the job. What competencies does this role require, and what does strong performance against each competency look like? From those answers, explicit rubrics are developed. Criteria that describe what high-quality, adequate and weak responses look like for each dimension being evaluated. Those rubrics are reviewed and approved by the hiring team before scoring begins.

When responses come in, the AI evaluates what candidates actually said against those pre-defined criteria. Not tone. Not pacing. Not facial expression. What they communicated, measured against a standard the hiring team set, and can explain. Criterion-level scores roll up to an overall assessment, and every part of that chain is visible and auditable.

This architecture has an important secondary property: The human remains meaningfully in the loop. The AI generates a starting point by identifying relevant competencies and drafting rubric criteria from the job description, but the standard is owned by the people responsible for the hire. If a hiring manager can’t look at a scoring rubric and explain what it’s evaluating and why, it should not be deployed. That is not a burden on the tool. It is the minimum condition for using it responsibly.
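A minimal sketch of what that auditable chain could look like in code follows; the structures and names are hypothetical, not any vendor's API:

```python
# Sketch (hypothetical structures) of rubric-based, auditable scoring:
# criteria are defined and approved before scoring, each criterion score
# carries a rationale, and the roll-up is a visible weighted average.
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    name: str        # e.g. "system design reasoning"
    weight: float    # relative importance, set by the hiring team

@dataclass(frozen=True)
class CriterionScore:
    criterion: Criterion
    score: int       # e.g. 1 (weak) .. 5 (strong) against the rubric text
    rationale: str   # the documented explanation an auditor would read

def overall(scores):
    # Weighted average of criterion-level scores; every term is inspectable.
    total_weight = sum(s.criterion.weight for s in scores)
    return sum(s.score * s.criterion.weight for s in scores) / total_weight

def audit_trail(scores):
    """Everything needed to answer 'how was this score produced?'"""
    return [(s.criterion.name, s.score, s.rationale) for s in scores]

rubric = [Criterion("system design reasoning", 2.0),
          Criterion("communicates constraints clearly", 1.0)]
scores = [CriterionScore(rubric[0], 4, "Weighed RAG vs. agentic search correctly"),
          CriterionScore(rubric[1], 3, "Clear but missed latency budget")]
print(round(overall(scores), 2))  # 3.67
```

The point of the structure is that no number exists without a named criterion and a written rationale behind it, so "the AI said so" is never the only available answer.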

Four questions for the governance conversation

For leaders evaluating or overseeing AI video interview tools, four questions surface most of what matters.

  1. What specifically is the system scoring? Request an explicit list of evaluation criteria. If the answer includes anything beyond the content of candidate responses, ask for the validation data that connects those criteria to job performance outcomes.
  2. Are the criteria derived from job requirements? Generic rubrics applied uniformly across roles create standardized evaluation, not structured evaluation, which is different. Legitimate scoring starts from the specific competencies required for the specific role.
  3. Can the criteria be reviewed, modified and approved before scoring begins? If the rubrics are fixed and opaque, the organization is not in control of its own evaluation standard. That is a governance gap.
  4. Can any score be explained to a candidate or a regulator? This is the accountability test. If the explanation requires “the AI said so” rather than pointing to specific, documented criteria and how a candidate performed against them, the process will not withstand scrutiny.

Well-designed systems answer these questions directly. The ones that can’t are telling you something important about the tradeoffs their creators made.

Why this moment matters

The EU AI Act deadline is in August, forcing organizations with global operations or EU-based candidates to evaluate their tech. But getting this right isn't just regulatory; it's practical.

When hiring teams can see exactly how a score was produced, they use it. When they can't explain it, they override it or work around it, and the efficiency gains disappear. The tools that will last in enterprise hiring stacks are the ones that make decisions transparently enough that the humans responsible for those decisions trust them.

That’s not a high bar. But it requires being precise about what any given AI system is really measuring. And honest about whether that’s what you actually want to know.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?


Why tech leaders must own the hiring freeze

April 17, 2026, 06:00

The global economic landscape of 2026 is marked by a paradox: Resilience amid ongoing uncertainty. While the IMF World Economic Outlook forecasts a steady global growth rate of 3.3%, organizations are dealing with “divergent forces” such as shifting trade policies, geopolitical instability and the “higher-for-longer” reality of rising operational costs. In this environment, the technology budget, once considered the “sacred cow” of corporate expenditure, has come under intense examination.

Traditionally, a hiring freeze is seen by IT leadership as a cage, a period of stagnation where progress depends on headcount. However, in the age of Agentic AI, the visionary CIO/CTO should not just accept a hiring freeze; they should lead it. Rather than a constraint, a hiring freeze is the ultimate forcing mechanism: a strategic reboot that enables a shift from a “growth by headcount” model to “growth by architectural efficiency.” By endorsing the freeze, technology leaders signal to the board that they are moving beyond “cost center management” to become architects of the AI-augmented enterprise.

Hiring is not stopping; it is recalibrating

The narrative that AI is a “job killer” is being replaced by a more nuanced reality of structural shifting. According to the World Economic Forum’s Future of Jobs Report 2025, while automation is expected to displace approximately 92 million roles by 2030, the emergence of the “Augmented Workforce” is projected to create 170 million new, high-value positions. The recent waves of tech layoffs were less about AI replacement and more about a “COVID-correction,” a re-balancing after the aggressive digital acceleration of the early 2020s. Today, we are not looking for fewer people; we are looking for different capabilities.

As research from MIT Sloan highlights, demand for roles involving structured, repetitive tasks has dropped significantly, while demand for “augmentation-prone” roles, those requiring analytical and creative skills enhanced by AI, has surged. For tech leaders, the mandate is clear: The workforce is not shrinking; it is evolving into a leaner, high-output engine.

From operational management to strategic redesign

When hiring slows, the CIO and CTO bear the responsibility for productivity. This is the time to go beyond “keeping the lights on” and start the strategic redesign. Instead of asking how teams can “cope” with a frozen headcount, leaders must ask: How much more can the existing team accomplish when AI agents are embedded in every workflow?

  • Software engineering: Shifting from manual syntax to AI-Native Development. Gartner predicts that by 2027, 70% of professional developers will use AI coding assistants to significantly accelerate their output.
  • Cybersecurity: Transitioning from reactive alert-triaging to AI security platforms that centralize visibility and neutralize “rogue agent” actions autonomously.
  • Infrastructure: Deploying multi-agent systems (MAS), consisting of AI agents that work together to achieve complex shared goals, like automated cloud cost optimization or self-healing systems.

McKinsey Global Institute data suggests that “AI High Performers”, those who fundamentally redesign workflows rather than just “patching in” tools, are nearly three times more likely to see a significant impact on their bottom line.

The rise of the AI-native workforce

As the hiring model evolves, a new profile is emerging: The AI-Native Professional. These individuals do not see AI as an external tool but as a “co-pilot” integrated into their daily cognitive process. However, the OECD Skills Outlook 2025 warns of a growing “skills gap.” The bottleneck for most organizations isn’t a lack of open requisitions; it’s a shortage of candidates who possess AI literacy, the ability to understand the capabilities, limitations and ethical guardrails of intelligent systems.

Skill category | Traditional focus         | AI-native focus
Development    | Manual syntax & logic     | Prompt engineering & agent orchestration
Data           | Reporting & visualization | Anomaly detection & predictive modeling
Operations     | Ticket resolution         | Workflow automation & self-healing systems
Leadership     | Task allocation           | Judgment, ethics and strategic direction

For the CIO, recruitment during a freeze must focus on identifying “cultural catalyst” hires: people who bring AI-augmented thinking to challenge legacy assumptions, even if the total headcount stays the same.

Natural compression: The lean enterprise

The era of the “bloated enterprise” is coming to an end, not through mass layoffs, but through natural workforce realignment. As employees retire or move on, CIOs should proactively decide not to backfill roles that can now be handled by domain-specific language models (DSLMs). These models, trained on specialized industry data (finance, legal, supply chain), offer higher accuracy and better compliance than general-purpose LLMs. By automating the routine, organizations are gradually becoming smaller in headcount but significantly larger in productive capacity. Stanford HAI’s 2025 AI Index confirms that AI is increasingly moving from the lab to daily life, with business usage accelerating to 78% in the past year.

Strengthening the core: The leadership mandate

In the near term, a CIO’s most important responsibility isn’t external recruitment; it’s developing the current workforce. Forward-thinking leaders are focusing on five key areas:

  1. AI literacy programs: Moving beyond “how to use a chatbot” to a deep architectural understanding of AI agents and Agentic AI.
  2. Automation-first operating models: Mandating that any new process must be “automated by default” before a human is assigned.
  3. Cross-functional innovation pods: Creating “tiger teams” that combine technical expertise with domain-specific knowledge to solve business problems.
  4. Agentic governance: Establishing the security and ethical frameworks necessary to allow AI agents to act autonomously within the enterprise.
  5. Digital twins of work: Using AI to simulate and optimize internal workflows before deployment.

As the MIT Sloan EPOCH framework suggests, human-intensive tasks involving empathy, presence, opinion, creativity and hope are less susceptible to automation but are the strongest candidates for augmentation.

The defining leadership question of the AI era is no longer ‘How many people do we need?’ It is ‘How powerful can the people we already have become?’

Hiring will continue, but differently

Hiring will not cease indefinitely, but its purpose is shifting. Future hiring will focus on capability enhancement by bringing in individuals who can accelerate AI adoption and overhaul outdated workflows. These hires serve as catalysts, introducing new ways of working that boost productivity throughout entire teams. One strategically placed AI-literate engineer can often generate more value than five traditional hires by automating the “drudge work” that keeps a department slow.

The CIO’s leadership moment

Economic uncertainty has always tested leadership. Today’s combination of geopolitical instability and technological disruption creates a demanding environment, but it also creates the perfect conditions for transformation. CIOs and CTOs are no longer simply responsible for technology infrastructure; they are the architects of the AI-augmented organization. A hiring freeze should be treated as a gift, a pause that allows for the fundamental redesign of how work happens. The organizations that answer the productivity question well will not simply survive the next economic cycle. They will define the productivity model of the AI-driven enterprise.



AI in the interview room

April 16, 2026, 08:00

A technical interview goes exceptionally well. The candidate answers every question with confidence, explains complex concepts fluently and demonstrates impressive knowledge of modern tools and architectures. The hiring team leaves the interview convinced they have found a strong addition to the engineering team.

Weeks later, after onboarding, a different picture begins to emerge. Routine tasks take longer than expected. Basic troubleshooting requires more assistance than anticipated. Design discussions reveal gaps that were not visible during the interview.

Situations like this are not new in the technology industry. But the growing use of artificial intelligence (AI) in job preparation is making them harder to detect.

In cybersecurity, we rarely blame attackers for exploiting weaknesses in a system. Instead, we examine the conditions that allowed the breach to occur and focus on strengthening controls, detection and response mechanisms so the organization becomes more resilient.

A similar mindset may now be needed in the hiring process. Artificial intelligence (AI) is rapidly changing how candidates prepare for technical roles. Many applicants now use AI tools to refine resumes, rehearse interview responses and organize complex ideas before interviews.

In many ways, this is a positive development. AI can help candidates communicate their experience more clearly and prepare more effectively. However, it also introduces a new challenge for hiring teams: distinguishing between candidates who are genuinely capable and those whose interview performance may be heavily assisted by external tools.

This is not about blaming candidates for using AI. Technology inevitably changes how people learn and present themselves. The more important question is whether our hiring processes still provide enough visibility into a candidate’s true capability in an AI-enabled world.

For CIOs and CISOs, this issue extends beyond talent acquisition. Hiring the wrong technical candidate, whether a developer, system administrator, engineer or security professional, can introduce operational weaknesses that eventually translate into reliability, resilience or even security risks. As organizations adopt AI-assisted workflows, technical hiring increasingly becomes a shared responsibility between technology leadership and HR teams, requiring new approaches to evaluation, validation and post-hire observation. This shift is already becoming visible across the technology hiring landscape.

The talent gap is real, and the pressure to hire is increasing

Roles such as software developers, system administrators, cloud engineers, AI specialists and cybersecurity professionals are increasingly difficult to fill. As digital transformation accelerates, companies compete aggressively for individuals who can design, build and secure modern systems.

Across the technology industry, organizations face a persistent shortage of experienced professionals. The challenge is particularly visible in cybersecurity, where demand continues to exceed supply. According to the ISC² Cybersecurity Workforce Study, the global industry faces a shortage of more than 3.4 million cybersecurity professionals. Similar findings appear in the ISACA State of Cybersecurity report, which consistently highlights hiring and skills shortages as major barriers for security teams.

This pressure can place significant strain on hiring teams.

Recruiters must evaluate large numbers of applications. Hiring managers must assess candidates across multiple technical domains. Decisions often must be made quickly to avoid losing strong candidates to competitors.

In this environment, the hiring process itself becomes a critical operational function. Hiring the right person can accelerate innovation and strengthen teams. Hiring the wrong person can delay projects, introduce operational risk and require months to correct. Against this backdrop of talent scarcity, organizations are also navigating a new variable: the growing influence of artificial intelligence on the hiring process itself.

AI can also strengthen hiring

While AI introduces new complexities, it also offers opportunities to improve recruitment.

Organizations can use AI to:

  • Analyze large volumes of candidate data
  • Identify skill patterns across roles
  • Support recruiters in preparing structured interviews
  • Highlight inconsistencies in candidate histories
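
As one concrete example, the last of these checks (highlighting inconsistencies in candidate histories) can be sketched as a simple date-arithmetic pass over employment entries. The entries and thresholds below are hypothetical; a real system would pull records from an applicant tracking system and route flags to a human reviewer rather than act on them automatically.

```python
from datetime import date

# Hypothetical resume entries: (role, start date, end date)
history = [
    ("Engineer, Acme", date(2018, 1, 1), date(2020, 6, 30)),
    ("Senior Engineer, Beta", date(2021, 9, 1), date(2023, 3, 31)),
    ("Lead, Gamma", date(2023, 1, 1), date(2024, 12, 31)),
]

def find_inconsistencies(entries, max_gap_days=180):
    """Flag large gaps and overlapping employment periods for human review."""
    flags = []
    ordered = sorted(entries, key=lambda e: e[1])  # sort by start date
    for (role_a, _, end_a), (role_b, start_b, _) in zip(ordered, ordered[1:]):
        delta = (start_b - end_a).days
        if delta > max_gap_days:
            flags.append(f"gap of {delta} days between '{role_a}' and '{role_b}'")
        elif delta < 0:
            flags.append(f"overlap between '{role_a}' and '{role_b}'")
    return flags

for flag in find_inconsistencies(history):
    print(flag)
```

A flag here is a prompt for a follow-up question in the interview, not evidence of wrongdoing; legitimate careers contain gaps and concurrent roles.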

Used responsibly, AI can help hiring teams spend more time evaluating substance rather than presentation. For technology leaders, this dual role of AI, both enabling candidates and assisting recruiters, reinforces the need to rethink how hiring decisions are made.

AI is changing the candidate experience

AI is now widely accessible to professionals across industries. Candidates are increasingly using AI to:

  • Improve the structure and clarity of their resumes
  • Prepare responses to common interview questions
  • Research technical concepts before interviews
  • Simulate interview scenarios using AI coaching tools

In many cases, these uses are entirely legitimate. Learning how to use AI effectively is becoming an important professional skill. The challenge emerges when AI tools begin to influence the hiring process in ways organizations did not anticipate.

Some recruiters report that AI-generated resumes now appear highly polished and perfectly aligned with job descriptions. Interview responses may be structured, technically accurate and delivered with impressive fluency. Yet when candidates move into practical assessments or real work environments, the depth of knowledge sometimes does not match the initial impression.

This phenomenon is not necessarily the result of intentional deception. Often, it reflects the growing ability of AI tools to enhance presentation beyond the underlying experience.

For hiring teams, this creates a new kind of risk.

The polished profile paradox: When strong presentation outpaces technical depth

As AI becomes a common tool in job preparation, many organizations are noticing an unexpected side effect: candidate profiles are becoming increasingly polished, and increasingly similar.

AI-powered tools help applicants refine resumes, structure achievements and align their profiles closely with job descriptions. As a result, many applications now feature highly consistent language, well-structured narratives and carefully optimized technical terminology.

Concepts such as cloud architecture, DevOps pipelines, automation frameworks, zero-trust security and AI integration appear repeatedly across resumes, often described in nearly identical ways.

In many cases, these experiences may indeed be valid. However, when AI tools standardize how candidates present their backgrounds, it becomes harder for hiring teams to differentiate between individuals who have deep, hands-on expertise and those who are primarily familiar with the terminology.

The challenge is not that candidates are presenting themselves well; clear communication is a valuable skill. The paradox emerges when the quality of presentation begins to outpace the depth of underlying capability, making it more difficult for recruiters and hiring managers to identify truly exceptional technical talent.

In this environment, simply receiving more applications does not necessarily improve hiring outcomes. Without evaluation methods that surface real experience and practical thinking, organizations risk selecting candidates based on polished profiles rather than demonstrated capability.

The challenge of remote interviews

Remote hiring has become the norm across the technology industry. It allows organizations to recruit globally and provides flexibility for both employers and candidates.

But virtual interviews also introduce blind spots. Candidates may have access to:

  • Multiple screens or monitors, allowing them to search for information or reference external materials during the interview.
  • Secondary devices, such as tablets or smartphones, which can be used to quickly look up answers without being visible to the interviewer.
  • Real-time AI tools, capable of generating structured responses to technical questions within seconds.
  • Third-party assistance, where another individual may be providing prompts or guidance to the candidate behind the scenes during the interview.

These possibilities do not automatically imply misconduct. However, they highlight a growing challenge for hiring teams: ensuring that interview responses accurately reflect the candidate’s own reasoning, experience and technical capability, rather than external assistance.

Interview answers may appear polished and technically precise. Long pauses before responses, structured explanations and highly consistent phrasing sometimes raise questions about whether answers are being generated independently.

However, attempting to detect AI use during interviews is unlikely to be a sustainable strategy. Technology evolves faster than detection methods, and overly intrusive monitoring risks undermining trust between candidates and employers. Instead, organizations may need to rethink the design of interviews themselves.

The goal should be evidence of capability

The most effective hiring processes focus on one core objective: gathering evidence that a candidate can actually perform the role. Rather than trying to determine whether AI was used during preparation or interviews, hiring teams should ask a more practical question: Do we have enough evidence to be confident this person can do the job?

When hiring processes generate clear evidence of capability, concerns about AI assistance become far less significant. This requires shifting from traditional question-and-answer interviews toward more evidence-based evaluation methods.

Practical examples include asking candidates to:

  • Explain real projects they worked on
  • Describe decision-making processes behind technical solutions
  • Walk through incident or troubleshooting scenarios
  • Discuss trade-offs made during system design

Experienced professionals can usually describe how problems unfolded, why certain decisions were made and what lessons were learned. These details are much harder to reproduce artificially.

Strengthening the hiring process

Based on observations from recent hiring and interviewing experiences, it has become increasingly clear that organizations may need to revisit how technical hiring processes are structured. As candidates gain access to more sophisticated tools to prepare for interviews, traditional evaluation methods may not always provide sufficient insight into real capability.

Several approaches can help strengthen confidence in hiring decisions.

  • Scenario-based discussions can be particularly useful. Instead of relying solely on theoretical questions, interviewers can present practical situations and ask candidates how they would approach the problem. This often reveals how individuals think, how they prioritize and how they reason through unfamiliar situations.
  • Real-time problem solving can also provide valuable insight. Observing how a candidate works through a technical issue step by step often reveals far more about their mindset and problem-solving approach than prepared responses alone.
  • Cross-functional interview panels: Another helpful approach is the use of cross-functional interview panels, where professionals from different technical backgrounds participate in the evaluation. Engineers, system administrators, architects or other practitioners can often explore different dimensions of a candidate’s experience and provide a more balanced assessment.
  • Finally, skills-based assessments, when designed thoughtfully, can shift the focus from resume claims to practical capability. Tasks that reflect real-world work scenarios often provide clearer signals about how a candidate might perform in the role.

Importantly, the objective of these methods is not to trap candidates or place them under unnecessary pressure. The goal is to create opportunities where genuine experience, thinking patterns and technical understanding can naturally emerge.

Observing capability beyond the interview

Even with improved interview methods, hiring decisions should not rely entirely on a single conversation or assessment. Much like technology systems and processes are monitored and refined after deployment, organizations can treat onboarding and probation periods as part of a broader validation process. These early months provide valuable opportunities to observe how individuals operate within real environments.

During onboarding and probation, teams can better understand:

  • How individuals approach unfamiliar problems
  • How they collaborate and communicate within teams
  • How they translate theoretical knowledge into operational decisions
  • How quickly they adapt to existing tools, processes and organizational practices

These observations often provide a more accurate picture of capability than interviews alone. Viewing hiring as a continuum rather than a single decision point can help organizations reduce risk while supporting new employees as they integrate into the team.

A human-centered hiring mindset

AI is undoubtedly changing how candidates learn, communicate and prepare for professional opportunities. This shift is unlikely to slow down, and organizations will need to adapt accordingly.

However, it is important to remember that hiring processes are ultimately designed to evaluate people, not just technical answers. Candidates bring more than knowledge to a role; they bring personality, professional values, cultural perspectives and individual ways of thinking.

Differences in communication style, body language or cultural background can sometimes influence how candidates present themselves during interviews. In an environment where AI assistance is becoming more common, organizations should remain mindful not to make incorrect assumptions or unfair accusations based on isolated signals.

The objective of hiring is not to identify who delivers the most polished interview responses. It is to identify individuals who can collaborate with others, solve problems and contribute meaningfully once they become part of the organization.

As AI becomes more embedded in the professional landscape, the most effective hiring processes will be those that remain balanced, combining structured evaluation with thoughtful human judgment.

For technology leaders, the implications extend beyond recruitment efficiency. Hiring decisions influence system reliability, operational resilience and in some cases the organization’s overall security posture. When the wrong expertise enters critical engineering, infrastructure or security roles, the downstream impact can reach far beyond the hiring process itself.

Addressing this challenge will require closer collaboration between CIOs, CISOs, hiring managers and HR teams to design hiring approaches that emphasize evidence of real capability rather than polished presentation alone.

Organizations that rethink their hiring processes today — through stronger technical assessments, thoughtful onboarding observation and better interviewer training — will be better positioned to identify authentic talent in an AI-assisted world.

Because in the end, hiring is not about selecting the candidate who interviews the best. It is about identifying the individuals who can actually build, operate and secure the systems organizations depend on.

In an AI-enabled hiring landscape, the organizations that succeed will not be those trying to detect every tool candidates use, but those designing hiring processes that reveal real expertise regardless of which tools were involved.



Work With Us: Technical Writer

March 19, 2025, 12:25

Update: This role is now closed and no longer accepting applications.

Pulsedive
Part-Time / Contract
Fully Remote, Global
HQ in USA

The Opportunity

Create clear, concise, and user-friendly documentation that empowers our community to effectively utilize Pulsedive's platform.

Pulsedive is a threat intelligence startup that delivers frictionless threat intelligence solutions for growing teams. We bring together intelligence in our platform and data products (Pro, API, Feed, Enterprise TIP), correlating indicators of compromise and organizing information to support threat collection, pivoting, research, and analysis. 

Pulsedive is looking for a skilled technical writer on a contracting basis to document use cases, technical specifications, and guides for our platforms, products, and integrations. If you’re energized by making complex technical information accessible and engaging for technical audiences, this is the role for you. You will work closely with product and engineering to research, write, and maintain high-quality documentation that helps our users and clients leverage Pulsedive's solutions to their fullest potential.

Working at Pulsedive

Regardless of your role or expertise, we seek candidates who embrace honesty, enjoy constant learning, and are empowered by ownership of their work. As a product-led company, our users are our primary stakeholders. We believe there are countless ways for talented individuals from all backgrounds to contribute their unique skills, interests, and perspectives as Pulsedive grows—and we can't wait to work with and learn from you.

You’ll Get To

  • Document technical features, integrations, architectures, and APIs 
  • Create clear and accessible guides, walkthroughs, and help articles for a range of technical audiences and use cases
  • Migrate and improve existing content, creating a streamlined and centralized system for all technical documentation
  • Collaborate with Pulsedive leadership and subject matter experts 
  • Get hands-on learning by using Pulsedive tools and sandboxed environments
  • Help maintain up-to-date information to reflect new features, integrations, and product changes
  • Create maintenance plans and style guides, laying the groundwork for future documenters
  • Communicate information with diagrams, charts, illustrations, animations, and more to effectively convey concepts and architectures  
  • Act on feedback to improve Pulsedive’s documentation and user support content
  • Manage your time and workflow independently in a fully remote environment

What You’ve Got (and We Want)

  • 3+ years of experience in technical writing, documentation, or related fields
  • 2+ years in IT, computer science, networking, and/or cybersecurity
  • Proficiency in English with the ability to communicate technical concepts in a clear, concise, and user-friendly manner
  • Proven experience creating documentation for cloud-based SaaS products
  • Ability to research and write documentation for new features and integrations, while closing gaps in existing content
  • Ability to interview subject matter experts to extract and clarify complex technical information with minimal review

Bonus Points For

  • Familiarity researching and deploying tools or platforms for technical documentation
  • Practical experience with customer success and enablement
  • Extensive experience with cybersecurity platforms, particularly in threat intelligence
  • Familiarity with:
    • Cybersecurity (e.g., IOCs, MITRE ATT&CK, OSINT, incident response)
    • Networking protocols (e.g., DNS, HTTP)
    • APIs
    • Threat intelligence feeds
    • Enterprise SaaS platforms

The Structure

This is a part-time, fully remote contract role with potential for a full-time role at Pulsedive. Our working schedule is flexible, with an average 10-hour weekly commitment. You will have high levels of autonomy, working asynchronously with the Pulsedive team. We’ll develop expectations, milestones, and timelines for deliverables together, but give you the space to work in the ways you find the most productive and fulfilling.

Caught Your Eye?

Send us a resume and relevant materials to: talent@pulsedive.com

Not for you, but you know someone who knows someone?
Help us get the word out by sharing this post!

What Happens Next?

After we receive your application, we'll update you on your status. If we think there's a fit, we'll send you a quick email to verify relevant experience and then set up a time to interview.
