Cybercriminals are increasingly relying on generative artificial intelligence (AI) to create text, images, audio, and video that amplify their scams, the US Federal Bureau of Investigation (FBI) warns.
Fraudsters already use AI to generate large volumes of more believable content and to automate their scams at greater scale. The FBI has listed dozens of ways cybercriminals are misusing AI tools to advance their fraud schemes.
“Generative AI reduces the time and effort criminals must expend to deceive their targets,” the FBI said in a public service announcement.
“These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud.”
The use of synthetic content is not inherently illegal. However, it can be very difficult to tell when content is AI-generated, which makes it a potent tool for fraud, extortion, and other crimes.
The FBI believes that sharing examples of how AI is used in cybercrime could help the public recognize fraud schemes and increase awareness.
Criminals use AI-generated text
AI-generated text appears genuine to the reader, and criminals use it for social engineering, spear phishing, romance scams, investment scams, and other confidence schemes. Generated text can be misused in situations like:
- Creating large numbers of fictitious social media profiles that trick victims into sending money.
- Generating fast responses that allow scammers to reach a wider audience with believable messages and content.
- Translating messages to reduce the grammatical or spelling errors that betray foreign criminal actors targeting US victims.
- Writing content for fraudulent websites to advance cryptocurrency investment fraud and other investment schemes.
- Embedding AI-powered chatbots in fraudulent websites that prompt victims to click on malicious links.
Fake AI imagery puts a face behind the scams
Criminals were quick to adopt AI-generated images for believable social media profile photos, identification documents, and other imagery. The FBI warns that cybercriminals use AI-generated images for the following:
- Fictitious social media profiles in social engineering, spear phishing, romance schemes, confidence fraud, and investment fraud.
- Fraudulent identification documents, such as fake driver's licenses or credentials (law enforcement, government, or banking) for identity fraud and impersonation schemes.
- Photos shared with victims in private communications to convince them they are speaking to a real person.
- Images of celebrities or social media personas promoting counterfeit products or non-delivery schemes.
- Images of natural disasters or global conflict to elicit donations to fraudulent charities.
- Images used in market manipulation schemes.
- Pornographic photos of a victim to demand payment in sextortion schemes.
Criminals clone voices with AI
Cybercriminals already impersonate well-known public figures or personal relations to elicit payments. Some FBI-observed cases of AI-generated audio in cybercrime include:
- Short audio clips containing a loved one’s voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom.
- AI-generated audio clips that impersonate individuals in order to gain access to their bank accounts.
AI-generated videos used in fraudulent video calls
The FBI cautions that cybercrooks are also using AI to generate believable video depictions, such as these:
- For real-time video chats with alleged company executives, law enforcement, or other authority figures.
- For private communications to “prove” the online contact is a “real person.”
- For fictitious or misleading promotional materials for investment fraud schemes.
What is your secret word?
It is becoming increasingly difficult to identify fake AI-generated materials, and the FBI wants to share some tips on how to protect yourself.
“Create a secret word or phrase with your family to verify their identity,” the FBI recommends.
Generated images or videos often contain subtle imperfections, such as distorted hands or feet, unrealistic teeth or eyes, indistinct or irregular faces, unrealistic accessories like glasses or jewelry, inaccurate shadows, watermarks, noticeable lag, voices that do not match lip movements, and unnatural movements.
“Listen closely to the tone and word choice to distinguish between a legitimate phone call from a loved one and an AI-generated vocal cloning,” the FBI said.
The recommendations also include limiting online content containing your image and voice, making social media accounts private, and restricting followers to people you know, all of which reduce the material fraudsters can exploit.
“Never share sensitive information with people you have met only online or over the phone,” the FBI said. “Do not send money, gift cards, cryptocurrency, or other assets to people you do not know or have met only online or over the phone.”
A good rule of thumb is not to trust an unsolicited caller: instead, look up the bank's or organization's official contact details online and call that phone number directly.