What to do when deep fakes break our trust


The intersection of deep fakes and other verification abuses is breaking internet trust, possibly leading to a cascade of unstoppable fraud. But can identity decentralization save us?

Trust is so deeply embedded into the human psyche that we’d struggle to live without it. When you go to catch a train, you trust that the train will turn up. If you buy your weekly food shop online, you trust the supermarket will deliver it on the specified day. Trust is so intrinsic in our lives that we hardly notice it… until it breaks down.

Ensuring that trust is upheld is something that human societies have evolved over millennia as hunter-gatherers and farmers. Trust is so essential that it is fair to describe it as underpinning societal cooperation, fairness, and honesty.

As we build a digital world, trust has become as central an issue there as it is in the real world. But this new world order comes with new problems that humans have little experience dealing with. Deep fakes are here, and they threaten the very soul of humanity by breaking down trust. At the intersection of these problems sit verification abuses built on deep fakes.

Some believe that decentralization will help resolve the issue of deep fakes and trust, but is this the case, or are we all in for a rocky ride as trust breaks down?

How important is trust in the digital world?

Since the internet became part of everyday life, trust has become fuzzier. Peter Steiner's cartoon, captioned "On the Internet, nobody knows you're a dog," was first published in The New Yorker on July 5th, 1993. Thirty years later, even the dog's identity is suspect and probably a deep fake.

When the internet was developed, technological attempts at trust were made. The underlying protocols that provide the infrastructure for internet communications, i.e., TCP/IP, DNS, and HTTP, evolved to encompass trusted communications with the advent of digital certificates and encryption.

Trust was clearly an integral part of the internet as it became ubiquitous in our lives, and protocol designers and developers still work to ensure that the communications between software and computer systems are built upon layers of technological trust. Encryption, digital signatures, and identity verification are proxies for trust in the digital world.

However, even in the digital realm, trust is about humanity too, and technological trust can only go so far. This is why phishing and social engineering are so successful. Instead of trying to break down encrypted doors, it's easier to steal the key or, better still, trick someone into handing it over willingly.

Deep fakes are the next round of social engineering, but this time they can also defeat the technological barriers themselves. Deep fakes are taking aim at the heart of trusted identity – verification.

Where are deep fakes breaking online trust?

Verification is relatively new to digital identity and is a complication specific to consumer/citizen identity. Where enterprise identity can rely on corporate directories to check an employee's role and status, consumer/citizen identity must verify the individual directly.

This verification can come from various sources, but typically it takes the form of identity document checks (e.g., a passport), credit reference agency (CRA) checks, biometrics, and, more recently, BankID-style checks using open banking or other bank APIs. Identity verification is increasingly moving online, so all these checks are done on the fly as part of a registration process.

So, where does deep fake identity come into this? Let's go back to the 1960s and the infamous scammer Frank Abagnale. Frank impersonated, among other professions, a Pan Am pilot and a doctor, and used these 'identities' to con companies out of large amounts of money. He succeeded by acting as a verified person, taking on the traits of each profession – in other words, he created a verified identity. Deep fakes are a modern way to generate the same kind of verified identities for committing fraud.

Verification is a proxy for online trust – deep fakes break this trust through pretense. The use of deep fakes in breaking trust is now a tsunami that should be causing alarm. In 2023, an estimated 500,000 video and voice deep fakes were shared on social media sites.

The AI boom has created "cheap fakes," prompting warnings of a coming proliferation of deep fakes. Fraudsters are using these cheap fakes and generative AI to build an identity profile and produce the documentation needed to pass a KYC (identity verification) process. Cheap deep fakes mean that fraudsters can more easily monetize their use in fraud schemes, driving a cheap fake industry that will swamp identity-based transactions with fraudulent activity based on a synthetic you.

Synthetic identity is not a new concept in financial fraud, but it’s been reaching new heights in recent years. Thomson Reuters found that 95% of synthetic identities presented during KYC checks are not detected. One way verification services counter synthetic identity scams is to incorporate biometrics into the process, namely facial recognition and liveness tests. However, deep fakes are being used to trick facial recognition in verification processes.

Increasingly, verification is extended to integrate historical data to build a profile that creates a richer identity rather than a snapshot identity. However, deep fake KYC processes will add a new layer of obfuscation and trickery. One of the latest tactics presented as anti-deep fake prevention is identity decentralization.

Exploit trust, always fake

Identity decentralization comes with the promise of trust at scale and data under user control. To promote decentralized identity, W3C has developed an architecture and data model. W3C states that the "Decentralized Identifiers (DIDs) defined in this specification are a new type of globally unique identifier. They are designed to enable individuals and organizations to generate their own identifiers using systems they trust."
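To make the W3C model concrete, a minimal DID document might look like the sketch below. The `did:example` method and the key values are illustrative placeholders in the style of the spec's examples, not a real identifier:

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:example:123456789abcdefghi",
  "verificationMethod": [{
    "id": "did:example:123456789abcdefghi#key-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did:example:123456789abcdefghi",
    "publicKeyMultibase": "zH3C2AVvLMv6gmMNam3uVAjZpfk..."
  }],
  "authentication": ["did:example:123456789abcdefghi#key-1"]
}
```

The `verificationMethod` entry publishes a public key under the subject's own control; anyone can resolve the DID and use that key to check signatures, without a central identity provider in the loop.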

Underpinning a decentralized identity are verifiable credentials, i.e., identity data that has been proofed and (typically) anchored to a blockchain, making it an immutable measure of someone's 'humanity.'
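The core mechanic of a verifiable credential – an issuer attests to claims, and a verifier can detect any tampering – can be sketched in a few lines of Python. This is a deliberate simplification: real verifiable credentials use public-key signatures (e.g., Ed25519) so verifiers don't share a secret with the issuer; here an HMAC over the canonicalized claims stands in for that signature, and the issuer key is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key. In a real system this would be the issuer's
# private signing key, with verification done against its public key.
ISSUER_SECRET = b"issuer-signing-key"

def issue_credential(claims: dict) -> dict:
    """Issuer attests to the claims by attaching a proof computed over them."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Verifier recomputes the proof; any altered claim fails the check."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

cred = issue_credential({"name": "Alice", "over_18": True})
assert verify_credential(cred)       # untouched credential verifies
cred["claims"]["over_18"] = False    # a fraudster alters a claim...
assert not verify_credential(cred)   # ...and verification now fails
```

Note what this does and does not protect: the proof makes the credential tamper-evident, but it says nothing about whether the claims were true when issued – which is exactly the gap a deep-faked KYC process exploits.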

Some also claim that AI can be used to detect the misuse of verifiable credentials and ensure they are not being hijacked for fraud. Advocates of decentralized identity highlight identity wallets that store identity-related items such as educational certificates and even NFTs to add further trust to a digital identity, i.e., to build up a picture of a person that is more than their name, address, and age.

My problem with this is: what stops fraudsters from creating a decentralized identity of their own? If so much trust is placed in the concept, then fraudsters using the platform to develop deep fake IDs would truly break that trust. Suppose fraudsters work out how to generate deep fake verifiable credentials, along with a way to circumvent or hijack the checks against them. In that case, they will have a trusted but fake identity to exploit. A new mantra of "exploit trust, always fake" will overcome any zero trust mantra of "never trust, always verify."

Verified, decentralized identity is a great concept, but we must not treat it as a panacea for trust. Identity changes over time. Decentralized identity and its verifiable credentials must be dynamic and should always be verified – but doesn't that break the rule of decentralization? Perhaps decentralization is not a good word for these systems; perhaps the whole idea of decentralization in an online world is impossible.

A recent letter signed by over 1,500 technologists, including Bruce Schneier, and sent to the US Congress expressed deep concerns about blockchain technologies and fraud:

"Financial technologies that serve the public must always have mechanisms for fraud mitigation and allow a human-in-the-loop to reverse transactions; blockchain permits neither."

Fraud mitigation must extend to deep fakes as they seep into the trusted world we have built to permit secure online transactions.

Cyber reliance is the new cyber resilience

Trust is about reliance – can you rely on this information being true? Cyber reliance is the new cyber resilience, and a must-have requirement for preserving trust where people's identities are concerned. This is a whole new challenge the world must overcome. As with its cybersecurity cousin, social engineering, there will be no single-point fix for deep fakes controlling and manipulating our reality.

A digital wallet containing verifiable credentials is still exploitable, whether through a deep fake creation or via identity trojans. As with the trust layers built by the internet pioneers, the fix will be multi-layered, bringing together capabilities from a variety of solutions to build an ongoing picture of a person.

Even then, we should expect fraudsters to find ways to circumvent our best efforts – though something I learned long ago is that security is about reducing risk, not stopping attacks altogether.