Using ChatGPT for fun or educational purposes is perfectly fine. Using artificial intelligence (AI) to compose an email about a tragic shooting at a university and send it out to students? Not so much.
That’s exactly what happened last week when students at Vanderbilt University in the US state of Tennessee received an email from school administrators about the mass shooting at Michigan State University, in which three people were killed.
Emails like this are sadly a common occurrence in the US – as are school shootings. However, the administrators did not edit the message before sending it and left in a line disclosing that it had been written using ChatGPT – in other words, the text was generated by AI.
“In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus,” says the email sent to students, which came from the Office of Equity, Diversity and Inclusion at Vanderbilt University’s Peabody College.
“By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all.”
The email then includes a final line in parentheses at the bottom: “Paraphrase from OpenAI’s ChatGPT language model, personal communication, February 15, 2023.”
As the university’s student newspaper, the Vanderbilt Hustler, first reported, students were horrified and called the decision to use ChatGPT to generate a message about the shooting in Michigan “disgusting”.
“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” said Bethanie Stauffer, a student at Peabody College, according to the Hustler.
Moreover, the email written by ChatGPT was not even factually accurate: it refers to “Michigan shootings” in the plural, even though there was only one incident. It also mentions “Peabody” just once and contains no other Vanderbilt-specific details.
School administrators quickly issued an apology, saying that the use of the AI tool was “poor judgment”, adding that “this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI”.
The incident is not the first time ChatGPT has caused controversy in academia. Large language models do not understand the content they generate, and they are certainly not conscious machines.
But the tool’s ability to produce text that reads as though it were written by a human has raised worries about students using it to generate essays and cheat. Some schools have banned ChatGPT outright.
OpenAI, the company behind ChatGPT, is aware of the risks and recently launched a free online tool to distinguish human-written text from AI-generated text. However, the firm admitted the new tool is not foolproof.