Considering the alarm bells around biometric surveillance tech, should we be concerned about the popularity of voice technology?
In 2002, Spielberg's science fiction movie "Minority Report" offered a vision of a dystopian future where personal biometric data is used to access everything. The film also famously featured a PreCrime police department that apprehended criminals using algorithmic predictive policing. Twenty years later, invasive technology and constant surveillance are no longer science fiction but real-world fears.
Over the last decade, many have unwittingly sacrificed their privacy for the shiny allure of convenience. But here in 2022, many are discovering that their smart home has betrayed them, and facial recognition cameras in schools stand accused of eroding privacy and normalizing surveillance. Elsewhere, Clearview AI was fined millions for scraping publicly available facial images to build a database it sold to organizations and law enforcement agencies.
Increasing concerns around predictive policing and pre-crime algorithms prompted more than 40 civil society organizations to call for a ban on AI-based predictive policing in the EU's upcoming Artificial Intelligence Act. Biometric authentication using facial recognition, fingerprints, and iris scans is already commonplace, but voice and speech recognition are also rising in popularity.
The rise in popularity of voice recognition
What we say and how we say it can help an organization determine our current mood. For example, Amelia is a digital assistant that uses conversational AI to recognize and adapt its responses to the user's mood by monitoring their voice, tone, and the context of the conversation. This is one of many positive examples of how voice technology is transforming customer service call centers.
However, as we increasingly use our voice to verify our identity when logging into a bank account or a corporate network, things can quickly become more complicated. So before we exchange our passwords for voice authentication, it's important to remember that the technology to clone someone's voice is advancing rapidly too.
By simply uploading a few minutes of your voice, Descript users can access an overdub feature that creates a text-to-speech model of their voice. Many other online services also let users leverage machine learning to experiment with voice cloning for fun. But on the flip side, bad actors can use the same technology to impersonate unsuspecting victims and hack their accounts.
The hidden dangers of voice data collection
Five years ago, leaked documents suggested that Facebook bragged to advertisers that it could identify when teenagers felt insecure or worthless and needed a confidence boost. Although social media posts and activities reveal more about a person than they realize, the human voice could be the missing piece of the jigsaw.
Many are questioning big tech companies' collection of voice data from voice-controlled devices, digital assistants, and smartphones. But have you ever considered the implications of the video and audio recordings held by conferencing software such as Zoom or Teams? Or what would happen if a cybercriminal gained access to those recordings? In the wrong hands, they could be used as a biometric identification factor or to create a deepfake video.
Unfortunately, when using their voice to access an account, consumers seldom question where their voice data is stored, how it is used, and what would happen in the event of a data breach. The high-profile legal cases already facing Google, Amazon, and McDonald's for allegedly capturing and storing customers' "voiceprints" should act as a wake-up call.
Traditionally, we have been identified online by our usernames, email addresses, and IP addresses, all of which can be changed. In contrast, our physical and vocal characteristics are biometric identifiers that cannot be replaced or discarded. For these reasons alone, we should tread carefully when navigating the security risks and privacy issues surrounding biometric authentication.
The rise of biometric surveillance
The reversal of Roe v. Wade by the US Supreme Court served as a warning that we are entering a new era of pervasive digital surveillance. If a woman's mobile phone, tablet, laptop, and internet history can be mined for digital evidence enabling authorities to identify, track, and incriminate her for seeking an abortion, it's time to rethink our relationship with big tech.
As we continue to make giant leaps forward with technology, AI and biometric authentication will increasingly become a big part of our lives. The problem is that the regulations and laws needed to protect society are struggling to keep pace. However, it's worth highlighting that last year, Walmart Inc received a $10 million fine for scanning employees' palm biometrics and storing the data without meeting the requirements of the Illinois Biometric Information Privacy Act (BIPA).
Rather than blindly trading privacy for convenience, we must collectively understand the scale of data collection across every aspect of our lives, from our internet history to our biometrics and DNA. There is no avoiding the fact that we are approaching a moment when a system will recognize who you are and be able to link all of your data together. Only then will we begin to comprehend how our digital footprints will shape all of our futures.