Cybercriminals opt for deepfakes to apply for remote jobs

Deepfakes and stolen Personally Identifiable Information (PII) can be used for more than hoaxes, phishing, and identity theft. Now, these tools are increasingly being used to land a threat actor their dream job.

The FBI Internet Crime Complaint Center (IC3) issued a warning about cybercriminals opting for deepfakes and stolen PII to apply for remote work positions.

The investigated roles include information technology and computer programming, database, and software-related job functions. In some cases, they also involved access to sensitive corporate information, such as customer PII, financial data, corporate IT databases and/or proprietary information.

To land the perfect job, threat actors usually turn to deepfakes. Deepfakes are AI-generated videos, audio, or images that imitate a real person saying or doing something they have never actually said or done. As useful as this technology is for entertainment, and even for protecting people's privacy by masking their identity with AI, IT specialists are increasingly worried about its potential dangers. From election propaganda to fake video evidence and even fabricated orders for troops to retreat, a variety of concerns have been associated with deepfakes.

Hiring managers reported an increase in applicants whose on-screen actions and lip movements did not align with the audio during virtual interviews; for instance, a cough or sneeze could be heard but did not match what appeared on camera. In itself, this illustrates a shift toward using deepfakes to target regular users rather than political figures and celebrities.

“We start noticing cases which show that the attacks are moving towards being a threat to average internet users and private individuals,” said Giorgio Patrini, CEO of deepfake detection company Sensity.

To apply for these positions in the first place, threat actors exploited stolen PII and other people's identities. As the technology continues to evolve, concerns persist over how our publicly accessible data will be used in the future.

“Mishandled data can have such an impact on people’s lives, especially children who are 51 times more likely to be victims of identity fraud with mishandled and leaked data central to those activities,” Dr. Rachel O'Connell told Cybernews.