Rise of deepfakes: who can you trust in the metaverse?


In this new virtual world of the metaverse, people may not always be who they seem.

In recent months, deepfakes of celebrities have been popping up in ads, with Elon Musk, Tom Cruise, Leonardo DiCaprio, and others endorsing businesses from real-estate investment to machine learning.

The ads are carefully crafted to make clear that they aren't genuine representations of the celebrities, but they highlight how technology now means we can't always believe what we see.


Deepfakes can already be hard to tell from the real thing. With the advent of the metaverse, are the dangers set to get worse?

According to Gartner, a quarter of people are expected to spend at least an hour a day in the metaverse by 2026, interacting with colleagues, business partners, educators, friends, and retailers.

Users will interact with one another through various sensors - eye tracking, face tracking, and haptics - with the aim of creating a representation of each person's behavior that's as realistic as possible.

Importantly, this means that metaverse platforms will hold a massive amount of biometric and other data on their users - data that, in the wrong hands, could easily be used to create an avatar completely indistinguishable from the real person.
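
To make that risk concrete, here's a minimal, purely illustrative sketch of the kind of behavioral profile such data could amount to. All field names are hypothetical, not any real platform's schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the kinds of behavioral markers a metaverse
# platform might accumulate per user. Field names are illustrative.
@dataclass
class BiometricProfile:
    user_id: str
    gaze_samples: list[float] = field(default_factory=list)        # eye-tracking traces
    expression_weights: dict[str, float] = field(default_factory=dict)  # face-tracking intensities
    movement_signature: list[float] = field(default_factory=list)  # posture/gait samples
    voice_embedding: list[float] = field(default_factory=list)     # speaker-model vector

# Whoever holds this profile holds everything needed to drive an avatar
# that looks, moves, and sounds like the original user - which is why a
# breach of such data is identity theft, not just a password leak.
```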

Concerns from Europol

In a recent report, Europol examined some of the crimes that could take place in the metaverse and the ways in which law enforcement should aim to deal with them.

And, it points out, the opportunities for manipulation will be vastly increased. In a world of CGI, there's no need to create a photographic likeness in order to create a perfect deepfake; all a criminal has to do is reproduce - or steal - the markers of a particular identity. Once those markers are stolen, the fake avatar would be completely indistinguishable from the original, from facial expressions to the way the person moves.

"This creates issues of trust in the identity of the ‘people’ in the metaverse; how can you be sure of who you are actually speaking to? Can AI be used to process what you are looking at, how you feel, or how you interact with people, and can this be used to influence people?" the Europol report asks.

"This is, of course, an issue on the internet in general already, but metaverse applications, because of the significant increase in the amount of valuable biometric information it can gather, will present vastly more problems in this regard."


Criminal possibilities

Already, deepfakes have been used to impersonate bosses and business contacts - last year, for example, criminals used the deepfaked voice of a company executive to fool a bank manager into transferring $35 million to them. Tactics like this could be all the more convincing in the metaverse.

Similarly, deepfakes could be used to create fake celebrity endorsements or to impersonate politicians, raising questions over misinformation. Brands could also be impersonated, perhaps by the creation of fake storefronts.

Hijacked identities could be used to create deepfake revenge porn or to implicate people in illegal or embarrassing behavior.

Most worrying of all, though, are the possibilities for child abuse: it could be even easier, for example, for abusers to present as children in order to lull victims into a false sense of security. Meanwhile, deepfaked child sexual abuse material (CSAM) could also be created, with the use of haptics to generate a physical experience.

As the metaverse becomes more realistic, with more accurate and detailed representations of individuals recorded, all these risks will become ever greater.

Platforms will work to protect data from identity thieves, create effective identity verification systems, and remove deepfakes where they can. However, not all will be entirely competent or willing to do so - after all, many platforms already fail at this on the current internet, which presents far less complexity.
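
On the identity-verification point, one conceivable building block - offered purely as a sketch, not any platform's actual scheme - is cryptographic attestation: the platform signs each avatar session against a verified account, so others can check that an avatar really is driven by the account it claims.

```python
import hmac
import hashlib

# Hypothetical sketch of avatar-session attestation. A production system
# would use asymmetric keys and certificates; an HMAC stands in here to
# keep the example short and self-contained.
PLATFORM_KEY = b"platform-secret"  # placeholder only; never hardcode real keys

def sign_session(user_id: str, session_id: str) -> str:
    """Platform side: issue a tag binding an avatar session to a verified account."""
    message = f"{user_id}:{session_id}".encode()
    return hmac.new(PLATFORM_KEY, message, hashlib.sha256).hexdigest()

def verify_session(user_id: str, session_id: str, tag: str) -> bool:
    """Verification side: does the avatar's claimed identity match its session tag?"""
    return hmac.compare_digest(sign_session(user_id, session_id), tag)

# Usage: before trusting "alice", a client asks the platform to confirm her tag.
tag = sign_session("alice", "session-42")
assert verify_session("alice", "session-42", tag)      # genuine session checks out
assert not verify_session("alice", "session-99", tag)  # a forged or replayed claim fails
```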