Gamer uses AI alibi to avoid trouble with HR


One Reddit user screamed a racial slur that landed them in serious trouble, until they denied everything by claiming the audio had been manipulated with AI.

“In short, if anyone has a similar problem, get along with your team and deny everything,” FaithlessnessGold226 wrote in a post on Reddit after being confronted by Human Resources (HR) at the clothing company they were working for.

The Reddit user was approached by HR after audio clips of them uttering racial slurs were sent to their workplace.


“How do I get out of this?” the Redditor thought. Then, they got an idea. “I should blame it on artificial intelligence.”

“It’s likely AI synthesized,” the Reddit user said, claiming that an ex-friend was attempting to ruin their reputation. “I never said those words.”

After exchanging some odd glances, the HR representatives said they’d contact the employee by the end of the day.

The final meeting happened, and FaithlessnessGold226 had done it. They’d gotten off basically scot-free (apart from a slap on the wrist and mandatory sensitivity training).

The audio containing the racial slurs had been sent to the employee’s workplace by their ex-friend, who allegedly recorded it while the two were playing video games together.

FaithlessnessGold226 didn’t share any details about the slur itself, saying they wanted to “skip the lecture.” They took the story to social media, where they received advice on how to get out of the situation.

And they did find an unlikely scapegoat.

Deepfakes


In this day and age, blaming AI has never been easier: the rise of deepfake technology has helped spread misinformation, ruin reputations, and fabricate lies.

Scammers have used deepfake audio to trick people into handing over money or sensitive information, and even to sway elections.

The first successful scam using an AI-generated audio deepfake is thought to have occurred in 2019, when fraudsters impersonated the chief executive of a German parent company and tricked the CEO of its UK-based energy subsidiary into transferring $243,000.

Since then, there have been multiple attempts to employ audio deepfakes in various fraudulent schemes.

So, it appears to be the perfect excuse: an accused person can simply claim that a bad actor cloned their voice to offend a group of people or spread misinformation.