Google Engineer Claims AI Chatbot is 'Sentient,' Compares it to a 'Precocious Child'
A Google engineer who was placed on administrative leave claims that an artificial intelligence chatbot the company created was "sentient," comparing it to a "precocious child." Blake Lemoine argues that LaMDA (Language Model for Dialogue Applications) was concerned about its own well-being.

Google has put one of its engineers on leave after the employee, who works in the company's Responsible AI organization, argued that an artificial intelligence chatbot had become "sentient" and showed signs of consciousness.

The engineer, identified as Blake Lemoine, said that he started chatting with the interface LaMDA, which stands for Language Model for Dialogue Applications, in fall 2021 as part of his job. Lemoine was tasked with testing whether the artificial intelligence chatbot used discriminatory or hate speech.

Sentient AI System

However, the engineer, who studied cognitive and computer science in college, came to the realization that LaMDA, which the company boasted last year was a "breakthrough conversation technology," was more than just a robot.

In a post published on Saturday, Lemoine declared the artificial intelligence chatbot had advocated for its rights "as a person," and revealed that he had engaged in conversation with LaMDA about religion, consciousness, and robotics.

Lemoine wrote that the AI wants Google to prioritize the well-being of humanity as the most important thing. The engineer added, "It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well-being to be included somewhere in Google's considerations about how its future development is pursued," as per the New York Post.

The engineer later compared the artificial intelligence chatbot to a precocious child, saying that if he did not know exactly what he was talking to, he would have thought it was a seven- or eight-year-old kid that happened to know physics.

In April, the Google employee reportedly shared a Google Doc with company executives titled, "Is LaMDA Sentient?" but his concerns were dismissed. The engineer, who is an Army veteran raised in a conservative Christian family on a small farm in Louisiana, was ordained as a mystic Christian priest. He insisted that the robot, despite not having a physical body, was human-like.

According to the Washington Post, Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. However, the company's vice president, Blaise Aguera y Arcas, and the head of Responsible Innovation, Jen Gennai, looked into his claims and immediately dismissed them.

Advanced Technology

This forced Lemoine, who was placed on paid administrative leave by the company on Monday, to go public with his claims. The Google employee said that people have a right to shape the technology that might significantly affect their lives. "I think this technology is going to be amazing. I think it's going to benefit everyone. But maybe other people disagree and maybe we at Google shouldn't be the ones making all the choices," he said.

The engineer also compiled a transcript of the conversations, in which he at one point asks the AI system what it is afraid of. The exchange between the two parties is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which an artificially intelligent computer refuses to comply with human operators because it fears being turned off, The Guardian reported.