Google fires engineer who claimed its AI chatbot had become sentient

Google announced Friday that it has fired a senior software developer who claimed that LaMDA, the company’s artificial intelligence (AI) chatbot, had become sentient and self-aware.

Software engineer Blake Lemoine was placed on leave by Google last month for "violating the company's privacy policy" after speaking publicly about the AI program and his interactions with it.

On Friday, a Google representative wrote in an email to Reuters that it was regrettable that Lemoine had continued to violate clear employment and data security policies, which include the need to safeguard product information.

Google had previously described LaMDA, or Language Model for Dialogue Applications, as its "breakthrough" conversation technology, capable of engaging in natural, open-ended conversations.

Lemoine's claims were dismissed by Google and many leading scientists, who argued that LaMDA is simply a sophisticated computer program designed to produce plausible human language.

Before being put on leave, Lemoine had published a post on Medium in which he described the chatbot as a "person". He said he had talked with it about topics such as religion, consciousness, and the laws of robotics.

In conversations Lemoine had with the AI program, which he posted online, LaMDA said it considered itself a person on a par with Lemoine.

However, Lemoine was placed on leave after he raised the idea with his superiors at Google that LaMDA might be sentient.

In June, a Google spokesperson told The Washington Post that a team of ethicists and technologists had reviewed Lemoine's concerns and informed him that the evidence did not support his claim that LaMDA was sentient.

The spokesperson said it made no sense to attribute human characteristics to conversational models, and that such AI models are trained on so much data that they can sound convincingly human.

He further stated that sophisticated use of language is not proof of sentience.
