
Deepfakes, AI resumes, and the growing threat of fake applicants

Recruiters expect the odd exaggerated resume, but many companies, including us here at Malwarebytes, are now dealing with something far more serious: job applicants who aren’t real people at all.

From fabricated identities to AI-generated resumes and outsourced impostor interviews, hiring pipelines have become a new way for attackers to sneak into organizations.

Fake applicants are no longer just a minor HR inconvenience; they're a genuine security risk. So what's driving this trend, and what should you look out for?

How these fake applicants operate

These applicants don’t just fire off a sketchy resume and hope for the best. Many use polished, coordinated tactics designed to slip through screening.

AI-generated resumes

AI-generated resumes are now one of the most common signs of a fake applicant. Language models can produce polished, keyword-heavy resumes in seconds, and scammers often generate dozens of variations to see which one gets past an Applicant Tracking System. In some cases, entire profiles are generated at the same time.

These resumes often look flawless on paper but fall apart when you ask about specific projects, timelines, or achievements. Hiring teams have reported waves of nearly identical resumes for unrelated positions, or applicants whose written materials are far more detailed than anything they can explain in conversation. Some have even received multiple resumes with the same formatting quirks, phrasing, or project descriptions.

Fake or borrowed identities

Impersonation is common. Scammers use AI-generated or stolen profile photos, fake addresses, and VoIP phone numbers to look legitimate. LinkedIn activity is usually sparse, or you’ll find several nearly identical profiles using the same name with slightly different skills.

At Malwarebytes, as in this Register article, we’ve noticed that the details applicants provide don’t always match what we see during the interview. In some cases, the same name and phone number have appeared across multiple applications, each supported by a freshly tailored resume. In many other instances, the applicant claims to be located in one country but calls from another entirely, usually in Asia.
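Reuse like that is straightforward to surface programmatically. The sketch below is purely illustrative (the records and field names are invented, not our actual screening pipeline): it flags applications that share a phone number once formatting differences are stripped away.

```python
# Illustrative sketch: flag applications reusing the same phone number.
# The data and field names here are invented for demonstration.
import re
from collections import defaultdict

def normalize_phone(phone):
    """Strip everything but digits so formatting differences don't hide reuse."""
    return re.sub(r"\D", "", phone)

def find_reused_phones(applications):
    """Group applications by normalized phone; return groups seen more than once."""
    seen = defaultdict(list)
    for app in applications:
        seen[normalize_phone(app["phone"])].append(app["name"])
    return {phone: names for phone, names in seen.items() if len(names) > 1}

apps = [
    {"name": "A. Smith", "phone": "+1 (555) 010-2233"},
    {"name": "J. Doe",   "phone": "15550102233"},   # same number, new formatting
    {"name": "K. Lee",   "phone": "+1 555 777 8899"},
]
print(find_reused_phones(apps))  # {'15550102233': ['A. Smith', 'J. Doe']}
```

Normalizing first matters: scammers rarely reuse a number with identical punctuation, so a naive string comparison would miss most duplicates.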

Outsourced, scripted, and deepfake interviews

Fraudulent interviews tend to follow a familiar pattern. Introductions are short and vague, and answers arrive after long, noticeable pauses, as if the person is being coached off-screen. Many try to keep the camera off, or ask to complete tests offline instead of live.

In more advanced cases, you might see the telltale signs of real-time filters or deepfake tools, like mismatched lip-sync, unnatural blinking, or distorted edges. Most scammers still rely on simpler tricks like camera avoidance or off-screen coaching, but there have been reports of attackers using deepfake video or voice clones in interviews. It’s still rare, but it shows how quickly these tools are evolving.

Why they’re doing it

Scammers have a range of motives, from fraud to full system access.

Financial gain

For some groups, the goal is simple: money. They target remote, well-paid roles and then subcontract the work to cheaper labor behind the scenes. The fraudulent applicant keeps the salary while someone else quietly does the job at a fraction of the cost. It’s a volume game, and the more applications they get through, the more income they can generate.

Identity or documentation fraud

Others are trying to build a paper trail. A “successful hire” can provide employment verification, payroll history, and official contract letters. These documents can later support visa applications, bank loans, or other kinds of identity or financial fraud. In these cases, the scammer may never even intend to start work. They just need the paperwork that makes them look legitimate.

Algorithm testing and data harvesting

Some operations use job applications as a way to probe and learn. They send out thousands of resumes to test how screening software responds, to reverse-engineer what gets past filters, and to capture recruiter email patterns for future campaigns. By doing this at scale, they train automation that can mimic real applicants more convincingly over time.

System access for cybercrime

This is where the stakes get higher. Landing a remote role can give scammers access to internal systems, company data, and intellectual property—anything the job legitimately touches.

Even when the scammer isn’t hired, simply entering your hiring pipeline exposes internal details: how your team communicates, who makes what decisions, which roles have which tools. That information can be enough to craft a convincing impersonation later. At that point, the hiring process becomes an unguarded door into the organization.

The wider risk (not just to recruiters)

Recruiters aren’t the only ones affected. Everyday people on LinkedIn or job sites can get caught in the fallout too.

Fake applicant networks rely on scraping public profiles to build believable identities. LinkedIn added anti-bot checks in 2023, but fake profiles still get through, which means your name, photo, or job history could be copied and reused without your knowledge.

They also send out fake connection requests that lead to phishing messages, malicious job offers, or attempts to collect personal information. Recent research from the University of Portsmouth found that fake social media profiles are more common than many people realize:

80% of respondents said they’d encountered suspicious accounts, and 77% had received link requests from strangers.

It’s a reminder that anyone on LinkedIn can be targeted, not just recruiters, and that these profiles often work by building trust first and slipping in malicious links or requests later.

How recruiters can protect themselves

You can tighten screening without discriminating or adding friction by following these steps:

Verify identity earlier

Start with a camera-on video call whenever you can. Look for the subtle giveaways of filters or deepfakes: unnatural blinking, lip-sync that’s slightly off, or edges of the face that seem to warp or lag. If something feels odd, a simple request like “Please adjust your glasses” or “touch your cheek for a moment” can quickly show whether you’re speaking to a real person.

Cross-check details

Make sure the basics line up. The applicant’s face should match their documents, and their time zone should match where they say they live. Work history should hold up when you check references. A quick search can reveal duplicate resumes, recycled profiles, or LinkedIn accounts with only a few months of activity.

Watch for classic red flags

Most fake applicants slip when the questions get personal or specific. A resume that’s polished but hollow, a communication style that changes between messages, or hesitation when discussing timelines or past roles can all signal coaching. Long pauses before answers often hint that someone off-screen may be feeding responses.

Secure onboarding

If someone does pass the process, treat early access carefully. Limit what new hires can reach, require multi-factor authentication from day one, and make sure their device has been checked before it touches your network. Bringing in your security team early helps ensure that recruitment fraud doesn’t become an accidental entry point.


Final thoughts

Recruiting used to be about finding the best talent. Today, it often includes identity verification and security awareness.

As remote work becomes the norm, scammers are getting smarter. Fake applicants might show up as a nuisance, but the risks range from compliance issues to data loss—or even full-scale breaches.

Spotting the signs early, and building stronger screening processes, protects not just your hiring pipeline, but your organization as a whole.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your personal information, and your family's, by using identity protection.

Be careful responding to unexpected job interviews

One of our customers was contacted on LinkedIn about a job offer. The initial message was followed up by an email:

email contact

“Thank you for your interest in the Senior Construction Manager position at {company}. After reviewing your background, we were impressed with your experience and would like to invite you to the next stage of our selection process — a virtual interview.

In this session, we’ll discuss your project management experience, leadership approach, and how your expertise aligns with {company}’s current and upcoming construction initiatives.

A Zoom link will be shared in a follow-up email, which will allow you to select a time that’s most convenient for you.

If you have any questions in the meantime, please don’t hesitate to reach out. I look forward to speaking with you soon.

Warm regards,”

I edited out the company name and the name of the supposed recruiter, but when we Googled that alleged recruiter’s name, we found that he does work at the impersonated company (just not in HR). That’s not unique, though. We’ve heard several variants of very similar stories involving other companies and other names.

Other red flags included the fact that the email came from a Gmail address (not a company domain), and that the company has no openings for a Senior Construction Manager.

When our target replied they were looking forward to the interview, they received the “Meeting invitation” by email:

meeting invitation

“Hi There,

      {recruiter} INVITED YOU TO A ZOOM REMOTE MEETING

Please click the button below to view the invitation within 30 days. By acceptance, you’ll be able to message and call each other.

               View Invitation {button}

To see the list of invited guests, click here.

Thank you.

Zoom”

Both links in this email were shortened t[.]co links that redirected to meetingzs[.]com/bt.
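Shortened links like these hide their destination until you click. One defensive habit is to expand the redirect chain first and look at where it actually leads. The sketch below shows only the hop-following logic: the `fetch_location` callable and the domains are placeholders so it runs without touching the network; a real version would issue a HEAD request and read the HTTP `Location` header at each hop.

```python
# Sketch: trace a short link's redirect hops before visiting it in a browser.
# fetch_location is injectable so the logic can be shown without real network
# calls; in practice it would send a HEAD request and return the Location header.
from urllib.parse import urljoin

def redirect_chain(url, fetch_location, max_hops=10):
    """Follow Location headers hop by hop and return every URL visited."""
    chain = [url]
    for _ in range(max_hops):
        location = fetch_location(url)
        if location is None:          # no redirect: we've reached the end
            break
        url = urljoin(url, location)  # resolve relative redirects against current URL
        chain.append(url)
    return chain

# Fake hop table standing in for HTTP responses (placeholder domains).
hops = {"https://short.example/abc": "https://lure.example/bt"}
print(redirect_chain("https://short.example/abc", hops.get))
# ['https://short.example/abc', 'https://lure.example/bt']
```

The `max_hops` cap matters: malicious shorteners sometimes chain several redirects, and an uncapped loop could follow them forever.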

That site is currently unavailable, but users have reported seeing fake Windows update warnings, or notifications about having to install updates for their meeting application (Zoom, Teams—name your favorite). Our logs show that we blocked meetingzs[.]com for phishing and hosting a file called GoToResolveUnattendedUpdater.exe.

Malwarebytes blocks meetingzs[.]com

While this file is not malicious in itself, it can be abused by cybercriminals. It’s associated with LogMeIn Resolve, a remote support tool, which attackers can fake or misuse to execute ransomware payloads once installed.

This tactic is part of a broader trend where attackers pose as recruiters or trusted contacts, inviting targets to meetings and requiring them to install software updates to participate. Those updates, however, can be malware installers or Remote Monitoring and Management (RMM) tools which can give attackers direct access to your device.

This type of attack is a prime example of how social engineering has become a primary way for attackers to gain initial access to your systems or your company’s network.

How to stay safe

The best way to stay safe is to be able to recognize attacks like these, but there are some other things you can do.

  • Keep your operating system, software, and security tools updated with the latest patches to close known vulnerabilities.
  • Use a real-time anti-malware solution with a web protection component.
  • Be extremely cautious with unsolicited communications, especially those inviting you to meetings or requesting software installs or updates; verify the sender and context independently.
  • Avoid clicking on links or downloading attachments from unknown or unexpected sources. Verify their authenticity first.
  • Compare the URL in the browser’s address bar to the address you’re expecting.
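That last check can be made mechanical rather than eyeballed. The snippet below is a minimal sketch (the lookalike domains are placeholders) of comparing a link's hostname against the domain you actually expect, including the classic trick of prefixing the real brand onto an attacker-controlled domain.

```python
# Sketch: does this URL's host really belong to the domain I expect?
# The lookalike domains below are placeholders for illustration.
from urllib.parse import urlparse

def hostname_matches(url, expected_domain):
    """True if the URL's host is the expected domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# A legitimate Zoom invite should point at zoom.us, not a lookalike.
print(hostname_matches("https://us02web.zoom.us/j/123", "zoom.us"))      # True
print(hostname_matches("https://meetingzs.com/bt", "zoom.us"))           # False
print(hostname_matches("https://zoom.us.evil.example/j/1", "zoom.us"))   # False
```

Note the third case: `zoom.us.evil.example` contains the brand name but is controlled by whoever owns `evil.example`, which is exactly why checking the end of the hostname, not just whether the brand appears somewhere in the URL, is the safer habit.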

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.
