ChatGPT and the future of digital identity: bot, until proven otherwise


Generative AI models cannot yet create the perfect fake ID, but they will be able to mimic humans online to the point where “you have to assume someone is a bot until they prove otherwise,” digital identity expert Philipp Pointner says.

Once ChatGPT and other AI systems are fully hooked up to the internet and have live access to real-time data, they will be able to do whatever they want – whether we like it or not, Pointner told Cybernews in an interview.

“This whole new wave of chat AIs is really going to enable people to take misinformation through social media to the next level,” warned Pointner, chief of digital identity at ID verification company Jumio.

Social media companies will have to step up their game in ensuring user identity, or risk being swamped by AI-generated accounts that will be indistinguishable from real ones.

A group of academics, authors, and tech leaders, including Twitter’s Elon Musk and Apple co-founder Steve Wozniak, voiced concern earlier this week that AI systems could flood information channels with "propaganda and untruth" without proper safety protocols in place.

Europol, a pan-European policing body, has warned that ChatGPT can be abused to generate fake social media engagement and was “ideal” for propaganda and disinformation purposes.

The alarm has also been raised about its potential criminal use, from drafting convincing phishing emails to writing malicious code. According to Pointner, further advances in AI are set to make the life of cybercriminals even easier.

The following interview has been edited for length and clarity.

Philipp Pointner

OpenAI has recently released GPT-4, which is described as more powerful and advanced than its predecessor. There's also Google's Bard. How do you expect that to affect the cyber threat landscape?

The pace and quick succession in which we now see these tools being released is mind-blowing. I'm not sure they can keep up this pace, but at least at the moment it looks like larger and better models with new capabilities are coming very soon.

ChatGPT is already being connected to the internet in the beta or alpha stage, and then it's coming to the public: it will no longer be limited to data from before 2021, and it will be able to do real-time searches on the internet and interact with websites.

Once these things are hooked up to the internet, they can go on Reddit, create a Twitter account, and do whatever they want – whether we like it or not.

Do you think disinformation is one of the biggest threats posed by AI chatbots?

There are a ton of different risks. But I think that's one of the biggest. It's already tricky to know who you are talking to on the internet – are people who they say they are in the real world? This problem is going to explode into a new dimension where you have to assume someone is a bot until they prove otherwise.

ChatGPT-empowered phishing is another concern. Should we be worried?

In an elaborate email scam, where you get some old lady to wire you money, you have to act with empathy. You have to act with context. There's a reply back and forth, and [before] you couldn't automate this stuff – now you can. That's the scary part. A chatbot can now go into a back-and-forth conversation and be very convincing and argumentative.

What will keep people safe from falling victim to these new kinds of cyber threats?

Larger organizations that are actually responsible for keeping their platforms clean will need to look at identity solutions that help them establish uniqueness. Whether that's done with just a face liveness verification or by actually checking documents is up to them.
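To make the idea of "uniqueness" concrete, here is a minimal sketch – a hypothetical illustration, not Jumio's or any platform's actual system – of binding each verified identity to at most one account. The server secret, the document-number attribute, and the helper names are assumptions introduced only for this example.

```python
import hashlib
import hmac

# Hypothetical sketch: enforce "one verified identity -> one account".
# SERVER_SECRET and the identity attribute (e.g. a document number returned
# by an ID-verification step) are assumptions for illustration, not a real API.
SERVER_SECRET = b"rotate-me-and-keep-me-out-of-source-control"

_identity_to_account: dict[str, str] = {}  # identity fingerprint -> account id


def identity_fingerprint(document_number: str) -> str:
    """Keyed hash of a verified identity attribute, so the raw value is never stored."""
    return hmac.new(SERVER_SECRET, document_number.encode(), hashlib.sha256).hexdigest()


def register_account(account_id: str, document_number: str) -> bool:
    """Bind an account to a verified identity; refuse if that identity already has one."""
    fp = identity_fingerprint(document_number)
    if fp in _identity_to_account and _identity_to_account[fp] != account_id:
        return False  # this identity is already tied to a different account
    _identity_to_account[fp] = account_id
    return True


if __name__ == "__main__":
    print(register_account("alice-1", "P1234567"))     # True: first account for this identity
    print(register_account("alice-dupe", "P1234567"))  # False: second account, same identity
```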

But something has to happen on the identity side. At the same time, we as consumers and society as a whole need to start shifting our viewpoint towards being even more cautious when we consume what we believe is human-generated [content] because it might just not be anymore.

I've heard people say: “Just use CAPTCHA, these bots are helpless against it.” Well, that's not true. The world needs a new way of doing CAPTCHA.

What are some of the other measures organizations can take to prevent risks associated with ChatGPT and other AI bots?

They can deploy bot detection tools. You can look at device fingerprinting to detect patterns and see whether it's always the same device with the same IP [internet protocol] address and the same signals. This allows you to detect these duplication attacks.
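A rough sketch of the kind of duplication check Pointner describes – grouping accounts by a fingerprint built from device signals and IP address, and flagging fingerprints that sit behind suspiciously many accounts. The chosen signals and the threshold are illustrative assumptions, not any particular vendor's product.

```python
import hashlib
from collections import defaultdict

# Hypothetical duplication detection via device fingerprinting.
# The signals (user agent, screen size, timezone, IP) and the threshold
# are assumptions for illustration only.

def device_fingerprint(signals: dict[str, str]) -> str:
    """Hash a stable set of device signals into a single fingerprint string."""
    keys = ("user_agent", "screen", "timezone", "ip")
    raw = "|".join(signals.get(k, "") for k in keys)
    return hashlib.sha256(raw.encode()).hexdigest()


def flag_duplicates(events: list[tuple[str, dict[str, str]]],
                    threshold: int = 3) -> dict[str, set[str]]:
    """Return fingerprints seen across more than `threshold` distinct accounts."""
    accounts_per_fp: dict[str, set[str]] = defaultdict(set)
    for account_id, signals in events:
        accounts_per_fp[device_fingerprint(signals)].add(account_id)
    return {fp: accts for fp, accts in accounts_per_fp.items() if len(accts) > threshold}


if __name__ == "__main__":
    same_device = {"user_agent": "UA-1", "screen": "1920x1080",
                   "timezone": "UTC+1", "ip": "203.0.113.7"}
    other_device = {"user_agent": "UA-2", "screen": "1440x900",
                    "timezone": "UTC-5", "ip": "198.51.100.2"}
    events = [(f"bot_{i}", same_device) for i in range(5)] + [("human_1", other_device)]
    print(flag_duplicates(events))  # the shared fingerprint maps to the five bot accounts
```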

There are dedicated solutions out there for bot detection. But they are targeting conventional bots that we've been dealing with for the last ten years. I don't think these solutions are necessarily equipped for the next generation.

Can you tell me about your experience with ChatGPT and its potential for creating malicious content?

I can happily report that so far image-generation AIs are not able to create fake IDs. I'm very happy about that. That would be horrifying. In terms of attacks, we have not seen them yet. But I think it's very clear that it's going in that direction. Identity has been on the minds of social media companies forever.

But from conversations we had, it has never reached the level where it was something that they really had to do or found important. Because, at the end of the day, when it comes to cybersecurity, the question always is: what's the level of friction that I'm willing to introduce to solve this problem?

And I think that's where we're going to see the shift that's going to [create] more friction for everybody, to keep these bots out. But this whole new wave of chat AI is really going to enable people to take misinformation through social media to the next level.

When it comes to AI bots, how do you see the threat landscape evolving in five years?

The life of fraudsters will get automated, just like all the other jobs in the world. The work for the fraudster becomes easier.

When we get to a point where image generation can create fake IDs, where CAPTCHAs can be solved by machines, then we're also going to see a flood of additional useless content. And that's when we're going to see an attempt to have way more curated, understood content creation.

The companies will really have to step up their game and try to find out how to keep that noise out, because otherwise it's just going to be this endless flood of information and you don't know where it comes from and what the intention behind it is.

Everybody's going to have the ability to get their own little chatbots that take their view of the world and then start making arguments in that direction. That's going to require the social media companies to take identity way more seriously, or at least do things that ensure every human being has only a single account associated with them.

Do you think state regulation should play any part in this?

I think not. I think the responsibility is mostly with the companies that are operating these platforms for users. Whether they will is another question. If it reaches a breaking point and the identity problem gets so rampant that it becomes a threat to the economy and society, that's probably when we are going to see governments step in.

But I hope that we, as technologists, can collectively keep it at bay before there is a need for government intervention. The big question is: are we, this time, going to be smart enough to solve this problem before it even arises and stave it off, or will there first be a catastrophe, and only then measures taken?

Which one do you think is going to happen first?

There are enough people who perceive this latest wave of AI as a threat, who are very loud about it. And so I think there's a chance that smart actions are going to be taken before it gets out of control.