How AI voice cloning threatens the security of banking systems


Voice-based banking is poised to transform the world of fintech as traditional friction points disappear. But could these exciting new payment experiences be overshadowed by attackers cloning customer voices?

As our last best experience continues to shape our expectations for every future service and touchpoint on the customer journey, the future of payments promises to be frictionless and invisible. Little wonder, then, that voice-based banking is a fast-growing industry expected to reach $3.7 billion by 2031.


Voice banking: an inclusive, convenient financial future

Voice-based banking is raising expectations with a new level of convenience, speed, accessibility, and personalization. Customers who find computers or mobile devices difficult to use can access banking services simply by speaking, quickly checking account balances, transferring money, and making payments. There is no need to navigate complex online interfaces, wait on hold for a customer service representative, queue at a bank branch, or fill out paperwork.

Voice-based banking is also more accessible for customers with disabilities or those with limited access to technology. A visually impaired customer, or one with mobility issues, can use voice banking to perform everyday banking tasks independently, supporting financial inclusion.

With a growing number of digital assistants at our disposal, Capital One lets customers ask Alexa to make payments, check their balances, or track their expenses, while Barclays enables customers to complete mobile payments through Apple's Siri. Similar offerings are on the rise at banks around the world. However, there is growing concern that these same voice interfaces could expose banking systems to serious security threats.
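
To see where that attack surface sits, consider a minimal sketch of how a voice-banking backend might route a parsed command to an account action. The VoiceIntent payload, the intent names, and the DemoBank class are hypothetical illustrations, not Capital One's, Barclays', or Amazon's actual APIs:

```python
from dataclasses import dataclass, field

@dataclass
class VoiceIntent:
    """Hypothetical shape of a command parsed by an assistant platform."""
    name: str                                   # e.g. "CheckBalance" or "TransferMoney"
    slots: dict = field(default_factory=dict)   # e.g. {"payee": "Alice", "amount": "50"}
    speaker_verified: bool = False              # did the platform's voice profile match?

class DemoBank:
    """Stand-in for a real banking backend (illustration only)."""
    def balance(self) -> str:
        return "$1,250.00"
    def transfer(self, payee: str, amount: float) -> None:
        print(f"Transferring ${amount:.2f} to {payee}")

def handle_intent(intent: VoiceIntent, bank: DemoBank) -> str:
    # Low-risk queries can rely on the authenticated session alone...
    if intent.name == "CheckBalance":
        return f"Your balance is {bank.balance()}."
    # ...but anything that moves money should demand a verified speaker,
    # and ideally a second factor beyond the voice itself.
    if intent.name == "TransferMoney":
        if not intent.speaker_verified:
            return "I can't do that until I verify your identity."
        bank.transfer(intent.slots["payee"], float(intent.slots["amount"]))
        return f"Sent ${intent.slots['amount']} to {intent.slots['payee']}."
    return "Sorry, I didn't catch that."

print(handle_intent(VoiceIntent("CheckBalance"), DemoBank()))
```

The uncomfortable part is the speaker_verified flag: if it is driven by voice biometrics alone, a convincing clone flips it to True, and the guard around the transfer evaporates.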

Voice cloning: an inclusive, convenient weapon for bank fraudsters

Recently, a new type of fraud using artificial intelligence (AI) voice technology has emerged. Voice cloning lets cybercriminals create fake audio clips or voice commands that sound like a specific person's real voice, fueling identity theft, fraudulent phone calls, and phishing attacks. Worryingly, it has already claimed its first victim.

According to a report in The Wall Street Journal, the unnamed CEO of a UK-based energy firm was scammed out of €220,000 by an AI-powered deepfake of his German boss's voice. Across three phone calls, the fraudster used AI voice technology to mimic the parent company chief's accent, convincing the victim to transfer funds to a Hungarian supplier's account.

The victim made the first payment, but grew suspicious when the fraudster requested a follow-up transfer. By then, the stolen money had been moved to a bank account in Mexico and dispersed to other locations. Officials believe this is the first reported case of AI voice technology being used in a scam, but it is unlikely to be the last. The incident is a warning to businesses to stay vigilant and ensure their employees are aware of such scams, and it highlights the importance of robust security measures and employee training to prevent fraud.


AI has finally captured the attention of mainstream audiences around the world. It's reinventing everything, from how we communicate and search for information to how we do business. However, the success of any emerging technology also comes with a fresh set of challenges and risks, particularly regarding security. Our online world is increasingly filled with deepfake videos and AI-generated articles, so it can be difficult to determine what is real.

Losing your voice to an algorithm has become a very real prospect. For example, Microsoft's AI text-to-speech tool VALL-E recently hit the headlines for its ability to mimic a speaker's tone and emotion from as little as a three-second sample of their voice. Elsewhere, ElevenLabs lets anyone upload a recording and generate an artificial version of their voice. Predictably, the technology was quickly misused: clips of Emma Watson reading Mein Kampf and of Biden announcing the invasion of Russia went viral for all the wrong reasons.

Why you need to prioritize security over the novelty factor

These recent cases should serve as a warning about the risks of machine learning when it falls into the wrong hands. It's already imperative that companies invest in robust security measures, employee training, and additional authentication to prevent voice-cloning attacks intended to gain access to sensitive financial data.

From the hidden dangers of voice-data collection to our voices being used as a biometric surveillance tool, the threat is very real. Instead of falling for the hype, customers should ask whether their bank pairs voice features with multi-factor authentication, robust voice-biometric systems, and conversational AI for transaction verification to proactively prevent voice-banking fraud.
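
What might that pairing look like in practice? Below is a minimal server-side sketch that approves a voice-initiated transfer only when a voiceprint match and a time-based one-time password both check out. The voice_match_score function and its 0.85 threshold are hypothetical placeholders; pyotp, used for the one-time password, is a real Python library:

```python
import pyotp

VOICE_THRESHOLD = 0.85  # illustrative cut-off, not an industry standard

def voice_match_score(audio_sample: bytes, profile_id: str) -> float:
    # Stub: a production system would call a voice-biometric engine here
    # and return a similarity score between 0.0 and 1.0.
    return 0.0  # fail closed in this sketch

def authorize_transfer(audio_sample: bytes, profile_id: str,
                       totp_secret: str, otp_code: str) -> bool:
    """Approve a voice-initiated transfer only if BOTH factors pass."""
    # Factor 1: the caller's voice must match the enrolled voiceprint.
    if voice_match_score(audio_sample, profile_id) < VOICE_THRESHOLD:
        return False
    # Factor 2: a valid time-based one-time password from the customer's
    # authenticator app, verified with pyotp.
    return pyotp.TOTP(totp_secret).verify(otp_code)

secret = pyotp.random_base32()
print(authorize_transfer(b"raw-audio", "cust-42", secret, "123456"))  # False
```

The point of the second factor is that it is something a cloned voice cannot reproduce: even if an attacker's audio clip sails past the biometric check, the transfer still stalls without the one-time code.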

Sometimes, taking a peek behind the curtain at the technology keeping you safe is way cooler than the latest marketing gimmick. While voice verification may seem like a cutting-edge feature, it's important to prioritize security over novelty. Unless you can bolster your voice with a second authentication factor, disabling the feature may be the smartest move to protect your financial data from malicious actors.