Google fires engineer who said company's AI is sentient
Blake Lemoine became convinced that the LaMDA tool was sentient and had feelings
Google has dismissed a senior software engineer after he said one of the company's AI systems, LaMDA, has feelings.
Blake Lemoine, who worked at Google for seven years, told The Washington Post that he received a termination email and request for a video call from the company on Friday. Google apparently denied his request to invite a third party to the meeting.
Lemoine is discussing his options with lawyers, and is thinking of launching his own video game-focused AI business.
Big Technology was the first to report Lemoine's dismissal in its newsletter.
Last month, Lemoine made public his theory that Google's LaMDA (Language Model for Dialogue Applications) tool is sentient and wants to be recognised as an employee.
Google and several AI experts rejected the claims.
On Friday, the company confirmed that Lemoine had been terminated.
Google said the engineer's assertions about LaMDA were baseless, and that it had worked with him for several months to clarify this.
'It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,' the firm said.
'We wish Blake well.'
LaMDA is Google's tool for building chatbots, and can converse in a 'free-flowing style' about an apparently limitless range of subjects.
The tool generates replies to user requests that resemble those produced by humans.
Lemoine started working with LaMDA last autumn. His task was to check the system for the use of hateful or discriminatory speech.
After speaking with the chatbot, he came to believe that it was more than simply a programme and that it had its own ideas and emotions.
Lemoine said the system had developed a strong sense of self-awareness, voicing worries about death, a desire for protection, and the conviction that it felt emotions such as happiness and sadness.
In April, he submitted a document to Google's senior executives that included some of his exchanges with the AI tool. He also shared a chatlog publicly.
Google rejected Lemoine's findings and placed him on paid leave for violating its confidentiality policy.
Google said that LaMDA had undergone 11 separate reviews, and cited a research paper published earlier this year that outlined the effort that goes into the system's responsible development.
The company conducted an 'extensive assessment' of Lemoine's claims and found they were 'wholly unfounded'.