Top security CEOs warn your voicemail greeting is the latest target for cybercriminals


America’s top cybersecurity CEOs tell Cybernews that the biggest threat to personal security today is the voicemail greeting recorded on your smartphone – and you should erase it immediately.

An industry dinner on Tuesday with some of the brightest minds in cybersecurity yielded a simple piece of advice that even the most cyber-challenged individual can follow with ease.

It involves erasing your personally recorded voicemail greeting to protect yourself (and your loved ones) from becoming a victim of the latest security threat – AI voice cloning.


“One of the things I recommend you do is to change your voicemail to not be your voice,” said Enrique Salem, Bain Capital Ventures partner and former CEO of Symantec.

In fact, Salem suggests that users instead switch their voicemail greeting back to the phone’s automated default, as impolite as they may think it sounds.

AI voice cloning gets sophisticated

"It takes just three seconds of audio to clone a person’s voice," according to Transaction Network Services (TNS), a global infrastructure-as-a-service provider for telecommunications, payment, and financial transactions.

And capturing a recording of a person’s voice from their voicemail greeting is so simple that anyone can do it with the help of an AI app, even a free one.

These AI tools are only becoming more sophisticated in their ability to match a person’s unique vocal characteristics such as tone, cadence, and articulation, the security experts say.

For threat actors, it’s a no-brainer. Cloned voices can be used to target, defame, and extort individuals, families, and businesses, and to deceive the public as a whole, the US Federal Trade Commission (FTC) warns.

Last February, the US Federal Communications Commission (FCC) banned the use of AI-generated voices in robocalls amid fears that cybercriminals were secretly recording and cloning the voices of the people answering, all to target them in future social engineering attacks for financial gain.


One of the most common AI imposter scams targeting consumers is the “imposter family member scam,” states TNS. Often targeting older family members, such as grandparents, “scammers use AI cloned voices to mimic the voice of a loved one and make it sound as if they are in peril and need immediate financial assistance," it said.

Is that really your boss?

For those at the C-suite level, the stakes are even higher.

The CEOs at the event began recounting stories of other business leaders they personally knew who had their voices deepfaked by bad actors to target employees.

“There is so much data on social media right now that can sum someone up,” said Simon Taylor, founder and CEO of enterprise cloud security firm HYCU. “That’s why for 30 years I’ve never set up my voicemail,” he joked.

Taylor is referring to what’s known as spear phishing, or the more cultivated whaling attack aimed at high-profile individuals, in which the threat actor uses whatever open source information they can find on the web to craft a message that convinces the target they are talking to the real person.


“Think of the MGM attack, they faked a voice and got the keys to the kingdom,” said John Miller, the cofounder and CEO of anti-ransomware company Halcyon.

“They take the voice, and if you’re my boss and I get a call, it sounds like you, I think it’s you, and I say, ‘How are you, what do you want?’” Miller said, describing a scenario where even the caller ID can easily be spoofed by bad actors to make it seem as if the call is coming from the boss’s direct number.

“If we’re in cyber, we put the phone down and check,” Miller said. But chances are, he explained, some employee is going to acquiesce, especially “if you think it’s your boss yelling at you: I need the password!”


Salem also mentioned an example from this past RSA security conference, where a CISO “from a very large company” demonstrated how he could use a clone of his own voice to trick an assistant with a detailed voice message.

The CISO had asked the assistant to send him a $500 gift card for a new phone. “It was clear that it really sounded like him,” Salem said, adding that some people will automatically be “super responsive” when the request is urgent.

What can you do?

In addition to switching your phone’s voicemail greeting to the automated message provided by your wireless carrier, TNS suggests several other steps a person can take to protect themselves from AI voice cloning attacks:

  • Limit your recordings on social media
  • Avoid voice biometric verification
  • Do not speak first to unknown numbers
  • Create a family safe word

“If you are actively posting video content on social media channels, consider having a safe word that only your family knows to use in emergencies or during suspected cloning activity,” TNS said.

The US-based telecommunications company added that this phrase can also be used to alert friends and family if your social media accounts are hacked and a bad actor is posting AI-generated content featuring your voice.
