The immersive technology future being planned for all of us by big tech companies could be a cybersecurity nightmare.
Apple, among other big tech companies, is pushing the promise and potential of so-called “spatial computing.” In its vision of the future, we’ll all strap on virtual or augmented reality (VR or AR) headsets and step into a world supported by technology and digital representations of what we ordinarily would see in real life.
Companies are pushing hard for this vision as they develop new hardware and present the idea as living up to the potential that the metaverse had but never actually achieved. But there’s just one problem – it could be a cybersecurity nightmare.
Those are the findings of a new study by researchers from a range of universities and private companies. The study examined the potential for VR and AR (as well as mixed reality, or MR) devices to breach privacy.
“The cutting-edge gaze-controlled typing methods, now prevalent in high-end models of these devices, e.g., Apple Vision Pro, have not only improved user experience but also mitigated traditional keystroke inference attacks that relied on hand gestures, head movements, and acoustic side-channels,” the authors write.
Solve one problem, cause another
Yet that rosy picture tells only half the story. As the authors explain, removing the physical keyboard means attackers can no longer track finger movements or listen to hands passing over keys, but where people look inside those headsets creates security risks of its own.
The researchers proved what they dub GAZEploit by putting it to the test in the real world. As part of their study, they conducted Zoom calls with 30 participants, designed to mimic the ordinary behavior of people who might fall victim to this kind of attack. The issue at the heart of the attack is that the avatars used in such devices mirror a user’s facial and eye movements, which can be analyzed to reconstruct keystrokes during typing activities such as entering passwords, messages, or emails.
The exploit the researchers identified was 80% successful at inferring keystrokes across the 30 participants. Those behind the study claim more than 15 top-rated apps in the App Store are vulnerable to the so-called GAZEploit attack, leaving users of those apps exposed to this kind of hacking.
Passwords and message content at risk
The attack achieved high precision: 92.1% accuracy on messages, 77% on passwords, and 73% on passcodes. Success rates that high indicate significant privacy risks for users of VR/MR devices that rely on gaze-controlled typing. And when GAZEploit is allowed five guesses at the most likely input, it becomes even more powerful, with accuracy climbing into the 80- and 90-percent range.
GAZEploit leverages appearance-based eye gaze estimation, a method in which eye movements are inferred from images or videos of the user’s face. The researchers say the approach is particularly stealthy because it doesn’t require access to the headset’s raw sensor data, making it an easy avenue of attack. It relies on deep learning models, specifically convolutional neural networks (CNNs), to estimate gaze directions from images of virtual avatars.
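The final keystroke-inference step can be illustrated with a toy sketch. Everything here is hypothetical — the key coordinates and fixation points are invented for illustration, and the study’s actual pipeline uses CNN-based gaze estimation from avatar video to produce such fixations. The sketch simply maps each estimated gaze fixation to the nearest key on a virtual keyboard layout:

```python
import math

# Hypothetical virtual-keyboard layout: key -> (x, y) center, arbitrary units.
# In the real attack, a CNN estimates gaze direction from the avatar's eyes;
# here we assume that step has already produced 2D fixation points.
KEY_CENTERS = {
    "p": (9.0, 0.0), "w": (1.0, 0.0),
    "a": (0.5, 1.0), "s": (1.5, 1.0), "d": (2.5, 1.0),
}

def nearest_key(fixation):
    """Map one gaze fixation (x, y) to the closest key center."""
    return min(KEY_CENTERS, key=lambda k: math.dist(KEY_CENTERS[k], fixation))

def infer_keystrokes(fixations):
    """Turn a sequence of gaze fixations into a guessed character string."""
    return "".join(nearest_key(f) for f in fixations)

# A fixation trace that dwells near 'p', 'a', 's', 's':
trace = [(8.9, 0.1), (0.4, 1.1), (1.6, 0.9), (1.4, 1.0)]
print(infer_keystrokes(trace))  # -> "pass"
```

A real attacker faces noisy gaze estimates rather than clean points, which is why the study reports top-5 candidate lists as well as single best guesses — ranking the five nearest keys per fixation absorbs much of that noise.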
The big question for companies seeking to promote the use of AR, VR and MR – as well as the millions of users who could be onboarded to the tech in the years to come – is whether they can be kept safe now that this is known. While manufacturers like Apple have built strong privacy frameworks, GAZEploit shows that even abstracted or anonymized gaze data can be exploited in certain contexts. For that reason, the researchers recommend companies take action to prevent such issues from recurring.