Scammers are using deepfakes and stolen personally identifiable information (PII) during online job interviews for remote roles, according to the FBI.
Deepfakes, or synthetic audio, image, and video content created with AI or machine-learning technologies, have been on the radar as a potential phishing threat for several years.
The FBI's Internet Crime Complaint Center (IC3) now says it has seen an increase in complaints reporting the use of deepfakes and stolen PII to apply for remote work roles, mostly in tech.
SEE: Phishing gang that stole millions by luring victims to fake bank websites is broken up by police
With some offices asking staff to return to the workplace, information technology is one job category where there has been a strong push for remote work to continue.
Reports to IC3 have mostly concerned remote vacancies in information technology, programming, database, and software-related job functions.
Highlighting the risk to an organization of hiring a fraudulent applicant, the FBI notes that "some of the reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information."
In the cases reported to IC3, the FBI says the complaints concern the use of voice deepfakes during online interviews of prospective applicants. It also notes that victims have noticed visual inconsistencies.
"In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually," the FBI said.
Complaints to IC3 have also described the use of stolen PII to apply for these remote positions.
"Victims have reported the use of their identities and pre-employment background checks discovered PII given by some of the applicants belonged to another individual," the FBI says.
In March 2021, the FBI warned that malicious actors would almost certainly use deepfakes for cyber and foreign influence operations within the following 12 to 18 months.
It predicted synthetic content could be used as an extension of spearphishing and social engineering. It was also concerned that business email compromise (BEC) -- the most costly form of fraud today -- would evolve into business identity compromise, where fraudsters create synthetic corporate personas or sophisticated emulations of existing employees.
The FBI also noted that visual indicators such as distortions and inconsistencies in images and video may give away synthetic content. Visual inconsistencies typical of synthetic video include irregularities in head and torso movement and syncing problems between face and lip movements and the associated audio.
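The lip-sync mismatch the FBI describes can, in principle, be quantified. The Python sketch below is purely illustrative and is not an FBI or vendor tool: it assumes you already have two per-frame series, an audio loudness envelope and a mouth-opening estimate from a face-landmark detector (both hypothetical inputs), and checks at what lag the two signals line up best. A weak peak correlation or a large lag would be a rough hint of the kind of audio-video mismatch described above.

# Minimal illustrative sketch: estimate how well an audio loudness envelope
# lines up with a per-frame mouth-opening measure. Both input series are
# hypothetical; in practice they would come from the call's audio track and
# a face-landmark detector respectively.
import numpy as np

def av_sync_score(audio_energy, mouth_opening, max_lag_frames=15):
    """Return (best_lag, peak_correlation) between the two per-frame signals.

    A large |best_lag| or a weak peak correlation is a rough hint that the
    audio and the visible lip movement may not belong together.
    """
    # Standardize both signals so the lagged products behave like correlations.
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    m = (mouth_opening - mouth_opening.mean()) / (mouth_opening.std() + 1e-9)

    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag_frames, max_lag_frames + 1):
        if lag >= 0:
            corr = np.mean(a[lag:] * m[:len(m) - lag])
        else:
            corr = np.mean(a[:lag] * m[-lag:])
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

# Toy usage with synthetic data: the "mouth" signal is the audio envelope
# delayed by 5 frames plus noise, so a clear lag should be detected.
rng = np.random.default_rng(0)
audio = np.abs(rng.normal(size=300)).cumsum() % 3.0
mouth = np.roll(audio, 5) + 0.1 * rng.normal(size=300)
lag, corr = av_sync_score(audio, mouth)
print(f"estimated lag: {lag} frames, peak correlation: {corr:.2f}")

Real deepfake-detection systems are far more involved, but the underlying idea of checking whether the visible mouth motion actually tracks the audio is the same one the FBI's indicators point to.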
Fraudulent attacks on recruitment processes are not a new threat, but the use of deepfakes for the task is. The US Department of State, the US Department of the Treasury, and the FBI in May warned US organizations not to inadvertently hire North Korean IT workers.
These contractors weren't typically engaged directly in hacking, but were using their access as sub-contracted developers within US and European firms to enable the nation's hacking activities, the agencies warned.