New social engineering threat: AI voice cloning


A new social engineering attack has emerged on social media: artificial intelligence-based voice cloning. What does this mean? Put simply, imagine bad actors impersonating your voice through AI, painting a target on your back and making “you” say things that you’d never say. If this sounds far-fetched, think again.

Social engineering is the foremost skill in the toolkit of hackers of all hats. It involves the psychological manipulation of people into divulging sensitive information or performing certain actions, usually against their own interests.

In the grand scheme of things, whether we use manipulation to bend technology, networks, or people to our will, for good or ill, the point is clear: without manipulation, there is no hacker. However, not everyone who hacks does it out of curiosity. For many, the prospect of making fast money has become more attractive than exploring new technological landscapes to help others.

According to one statistic, 98% of all cyberattacks contain a social engineering component, relying on an element of deception designed to trick users. Phishing is the prime example: all those annoying SMS messages and emails you get depend on social engineering.

Like deepfakes, AI voice cloning attacks are designed to exploit trust and can be used in derogatory or inflammatory ways, as you will read here. I wonder whether technology like this can ever be regulated by the companies that produce it, given how easy it is to abuse.

As I always say, time will tell.

AI voice cloning agents provocateurs

Recently, my team has been observing an emerging threat targeting the hacker world itself. Our investigation began last year, at the onset of Anonymous declaring war against Russia, best known by the hashtag #OpRussia. We were able to identify unknown actors who were successfully influencing Anonymous subgroups by claiming to be Ukrainian soldiers and diplomats.

Because Anonymous is largely driven by a strong wind of mob mentality, it was seemingly effortless for these agents provocateurs to gain positions of trust through social engineering and direct the hacktivists to attack targets of their choosing. My team was able to replicate the social engineering aspect of this campaign, confirming what we suspected, without compromising any targets, of course.

In the present day, we’ve observed unknown actors using AI-generated voice cloning technology to try to sow conflict within Anonymous and among the people associated with it. In one instance, I came across a recording that sounded almost identical to the victim’s voice, making it appear that he was colluding with law enforcement to unmask the identities of a prolific hacktivist group.

The goal was clear. The unknown actors wanted the group to retaliate against the individual while simultaneously branding the person as an informant.

In every case of AI voice cloning we’ve encountered, the aim was the same: to social engineer an individual or group by leveraging a victim’s voice to provoke a retaliatory response from the victim’s enemies.

Given the current trajectory of AI-driven social engineering attacks, I believe the day is coming when, as the technology is perfected, AI voice cloning attacks will reach epidemic proportions.

I imagine a scenario where someone clones the voice of a victim’s spouse and pairs it with a caller ID spoofing attack to convince the victim that their partner was in an automobile crash, and that they should drop what they’re doing and come to the hospital. Such a ploy could be used to scare a victim, or to manipulate them into leaving a physical space unattended and vulnerable to a physical-layer intrusion.

However, nowadays, most high-profile cyberattacks are financially motivated. Weaponizing a person’s voice to exploit the trust within relationships in order to extract money and data is definitely in our cybersecurity forecast.

After all, back in 2013, we all learned that hackers can influence the stock markets. The Twitter account belonging to the Associated Press was hacked, and the threat actors tweeted that two explosions at the White House had injured then-President Barack Obama. The malicious tweet caused a 143-point fall in the Dow Jones Industrial Average.

Exploiting the wetware

Wetware is slang for the human brain and, by extension, a person. Every social engineering campaign targets people, aiming to trick or coerce them into giving up sensitive information or performing certain actions, usually in pursuit of some illicit monetary goal.

As a former threat actor myself from back in the day, I want to take a moment to offer some real-life context to these attacks for those of you on the outside of the looking glass. For example, at one time, my hacking group was at war with a rival group. Our objective was simple: take down our enemy’s website. Back then, it was considered the ultimate flex of power.

Since we weren’t aware of any vulnerabilities in the server software, I decided the most effective course of action was simply to social engineer their web host. Posing as an attorney, I presented the web host with a cease-and-desist order.

This included a threat that if they did not comply, my client would have me escalate the matter into a criminal complaint. The host was, after all, serving resources for a criminal hacking enterprise and didn’t want to get in trouble, so they promptly terminated the group’s subscription.

Without a host for their website, our rivals were back in the market, which placed them in a needy position. I was able to insert a sleeper cell and proposition them with the offer of subhosting their site at a cheaper rate.

In the end, we were subhosting our enemy’s website and other services right under their noses. This gave us access to credentials, email addresses, IP addresses, and financial information.

However, the tables eventually turned on me: after a lengthy prison sentence for hacking industrial control systems, I fell victim to a cunning scam artist who masterfully manipulated me out of $100.

Ironically, he impersonated an attorney from Arizona and offered me legal services, since I was searching for a civil rights attorney to investigate a matter involving grand jury fraud. The man was very adept at legal procedure, which was convincing enough, since I had studied law myself, albeit in an unofficial capacity.

He knew the filing fees associated with civil litigation and convinced me that he would invest $250 of his own money to cover the filing costs because he felt we had a strong case and was more than willing to shoulder part of the financial burden. He then asked if I would contribute $100 so that, together, we’d meet the financial requirements for the filing.

Three times, I asked him for his bar association number because I wanted to run a background check on him, which is standard practice. But I was too eager to take the case to court and kept overlooking the fact that he never answered the question.

Until it was too late. He took the money and ran.

For all his aptitude as a scammer, his one mistake was that he didn’t know a single thing about online anonymity. One of my friends managed to get him to return the stolen money, and in return, I chose not to pursue him further. But I did notify the attorney he had impersonated, since the man was using his identity in his social engineering campaign.

As I come to a close: we are living on the threshold of emerging high-tech social engineering threats that seize on new technologies and raise the stakes. From deepfake scammers to AI voice cloning attacks, these techniques will, without a doubt, grow popular among bad actors and become a trend.

Thus, social engineering will always remain the foremost weapon in the toolkit of hackers, scammers, and cybercriminals. That’s because exploiting trust is far easier than breaking into complex systems.
