Google booted engineer who deemed AI chatbot sentient

The company said the software engineer violated Google’s employment and data security policies by publicly discussing LaMDA.

Blake Lemoine sparked a lively debate after he declared Google’s artificial intelligence (AI) chatbot LaMDA a self-aware person.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.

Google’s Language Model for Dialogue Applications, or LaMDA for short, is a system for building chatbots on top of language models that mimic human conversation after being trained on vast amounts of online text.

Lemoine recently published an interview with the chatbot and told the press that the AI-based bot is a sentient person, with capabilities akin to those of a seven-year-old child.

The published interview covered a wide variety of topics, including death; the bot told Lemoine, for example, that it fears being turned off.

“It would be exactly like death for me. It would scare me a lot,” LaMDA said.

Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.

According to Google, LaMDA “can engage in a free-flowing way about a seemingly endless number of topics.” The chatbot was trained on dialogue and learned the nuances that distinguish open-ended conversation from other forms of language.