AI-generated faces can now be indistinguishable from real human faces. But the algorithms that produce them are disproportionately trained on white faces – to such an extent that AI-generated white faces are even perceived as more real than photographs of actual people, a new study has found.
A new study by a team of researchers from Australia, the Netherlands, and the United Kingdom has found that people are more likely to judge AI-generated pictures of white faces as human than photographs of real individuals. The researchers have termed this phenomenon AI hyperrealism.
“Remarkably, white AI faces can convincingly pass as more real than human faces – and people do not realize they’re being fooled,” said the researchers, stressing that the findings had important real-world implications, including for identity theft, where people could be duped by digital impostors.
However, the team, writing in the journal Psychological Science, said the results did not hold for images of people of color, possibly because the algorithm used to generate AI faces was largely trained on images of white people.
According to Dr. Zak Witkower, a co-author of the research from the University of Amsterdam, this could have ramifications for areas ranging from online therapy to robots, because systems built around white faces may perform more accurately than those built around the faces of other races.
In the study, white adult participants were each shown half of a selection of 100 AI-generated white faces and 100 photographs of real white faces.
The 124 participants had to decide whether each face was AI-generated or real and rate their confidence on a 100-point scale. Overall, 66% of the AI-generated images were judged to be human, compared with 51% of the real images.
But this was not the case for faces of people of color, where about 51% of both AI-generated and real faces were judged to be human.
“We found evidence of white racial bias in algorithmic training that produces racial differentials in the presence of AI hyperrealism, with significant implications for the use of AI faces online and in science,” said the researchers.
“We recommend that studies using AI faces should verify that they’re perceived as equally natural across races.”
AI models can indeed reinforce existing societal biases and stereotypes present in the data used to train them.
Any imbalances or inaccuracies in the training data will be reflected and amplified in a model's output. This is of particular concern when such systems are used in the justice system, where they could lead to unfair or discriminatory outcomes.
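The "reflected and amplified" dynamic can be illustrated with a toy simulation. This is not the pipeline used in the study: the group labels, dataset size, and realism pass rates below are hypothetical, and the filter simply stands in for a generator that renders over-represented groups more convincingly. The sketch shows a generator trained on data that is 80% white faces reproducing that skew, and the skew growing once only the most convincing outputs are kept and reused for further training.

```python
import random

random.seed(0)

def train(data):
    """'Train' a toy generator: estimate the share of each group in the data."""
    return data.count("white") / len(data)

def generate(share_white, n):
    """Sample n synthetic faces from the learned group proportions."""
    return ["white" if random.random() < share_white else "other" for _ in range(n)]

def quality_filter(faces, pass_rate={"white": 0.9, "other": 0.7}):
    """Keep only outputs that pass a realism check. The pass rates are
    hypothetical, standing in for a generator that produces more convincing
    results for the group it has seen most often."""
    return [f for f in faces if random.random() < pass_rate[f]]

# Initial training set: imbalanced, 80% white faces (toy numbers).
data = ["white"] * 800 + ["other"] * 200

for generation in range(5):
    share = train(data)
    print(f"generation {generation}: white share in training data = {share:.2f}")
    # The imbalance is reflected in what the model generates...
    synthetic = generate(share, 1000)
    # ...and amplified once only the most convincing outputs are reused for training.
    data = quality_filter(synthetic)
```

In this toy setup, the white share climbs from 80% toward roughly 90% within a few generations, purely because the filter favors the group the generator handles best.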
According to the study, the main factors leading people to mistakenly believe AI-generated faces were human included greater proportionality in the face, greater familiarity, and less memorability.