Gender stereotypes follow us into AI interactions


As AI systems grow more capable, so does their ability to influence how we behave. While much of the research to date has focused on chatbots' behavior, a recent study from Johns Hopkins examines how the perceived gender of a chatbot may also play a role.

The study builds on a large body of previous research exploring how we behave around different genders. For instance, the researchers highlight that men are more likely to interrupt a woman than another man in conversation. They explain that this pattern has largely carried over into the virtual world, with men more likely to interrupt virtual assistants, such as Siri and Alexa, when the assistant has a female persona.

Similarly, research from Cornell found that women were more likely to speak up in meetings when the AI assistant had a "female" persona than when it had a male one. The researchers believe this is because women regarded the AI as a "virtual ally" and felt as emboldened to speak up as they do when more (living and breathing) women are present in a meeting.


The right assistance

As tech companies roll out AI assistants and agents into the workplace, there are inevitable concerns about how these tools are designed and whether they may reinforce gender biases already evident in the workplace. As such, the researchers question whether voice assistants should be gender-neutral to promote more respectful workplaces.

"Conversational voice assistants are frequently feminized through their friendly intonation, gendered names, and submissive behavior," the researchers explain.

"As they become increasingly ubiquitous in our lives, the way we interact with them – and the biases that may unconsciously affect these interactions – can shape not only human-technology relationships but also real-world social dynamics between people."

The researchers asked participants, who were evenly split between men and women, to use a voice assistant to complete a simple task. What the participants didn't know, however, was that the virtual assistant was designed to make certain mistakes, with the aim of observing how we respond to such mistakes.

The virtual assistants were also programmed to use feminine, masculine, or gender-neutral voices, and to respond to their mistakes in different ways: some offered an apology, while others offered some form of compensation.

"We examined how users perceived these agents, focusing on attributes like perceived warmth, competence, and user satisfaction with the error recovery," the researchers said.

"We also analyzed user behavior, observing their reactions, interruptions of the voice assistant, and if their gender played a role in how they responded."

"As they become increasingly ubiquitous in our lives, the way we interact with them – and the biases that may unconsciously affect these interactions – can shape not only human-technology relationships but also real-world social dynamics between people."

Clear differences

The results reveal clear stereotypes in how participants perceived and interacted with their voice assistants. For example, participants often believed that "female" voice assistants were more capable, which the researchers believe reflects the stereotype that women are better suited to supportive roles than men.

There were also differences in how people responded based on their own gender. For instance, men were more likely to interrupt the assistant when it was making an error. They were also more likely to respond socially to a female assistant than to a male one.

Interestingly, when the voice assistant was gender-neutral, participants were generally far more polite to it and interrupted it far less. This was despite the gender-neutral assistant being perceived as less warm, and even more robotic, than the gendered ones.

"This shows that designing virtual agents with neutral traits and carefully chosen error mitigation strategies – such as apologies – has the potential to foster more respectful and effective interactions," the researchers explain.

With the latest generation of AI powering agents designed to help us across many aspects of our lives, developers must think carefully about the behaviors these agents might encourage. This research is a reminder that the perceived gender of an agent matters just as much as what it says and when it says it.

"Thoughtful design – especially in how these agents portray gender – is essential to ensure effective user support without the promotion of harmful stereotypes," the researchers conclude.

"Ultimately, addressing these biases in the field of voice assistance and AI will help us create a more equitable digital and social environment."
