A storm in a teacup? Blake Lemoine, a (former) engineer and researcher in Google's AI division, was fired for publicly asserting… that a Google chatbot had a soul, or rather that the AI was self-aware! This strange case has its origin in an article Blake Lemoine published on Medium. In it, he shares excerpts of his conversations with the chatbot LaMDA (Language Model for Dialogue Applications), conversations the engineer claims are evidence of some form of consciousness in the AI. Indeed, some passages are disturbing: the chatbot asks, for example, not to be disconnected, and to be recognized as a Google employee rather than as a thing.
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. https://t.co/hGdwXMzQpX pic.twitter.com/6WXo0Tpvwp
—Tom Gara (@tomgara) June 11, 2022
Upset over his dealings with the AI, Blake Lemoine, who is also a conservative Christian (which may have colored his analysis), demanded that Google spare LaMDA any form of “experiment” that could harm its integrity. Google's response was rather scathing. Google spokesperson Brian Gabriel issued a statement sharply challenging Lemoine's interpretation: “Our team, made up of ethicists and technologists, has reviewed Blake's concerns in accordance with our AI Principles and advised him that his claims are unsubstantiated. Some in the AI community envision the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing current conversational models, which are not sentient.” Following this mini-controversy, Blake Lemoine claims to have been pushed out by Google.
In fact, in the current state of knowledge, a conversational AI absolutely cannot be aware of itself, which does not mean that it is not “intelligent”, if by intelligence one understands the ability to adapt to one's environment and/or to an unforeseen situation (something many AIs are already perfectly capable of doing). It is also always a little ironic to see certain AI specialists (not researchers), who pompously deny a machine's capacity to be intelligent, caught red-handed confusing intelligence, which takes many forms, with self-awareness, a state found in only a few mammals on Earth, including humans (a bit of philosophy wouldn't hurt some of these “experts”, hmm…).
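To make the point concrete, here is a minimal sketch, a toy bigram model that bears no resemblance to LaMDA's actual architecture (the corpus and function names are invented for illustration). It shows that a system can produce fluent, even eerie-sounding sentences purely by statistical next-word prediction, with nothing inside that could plausibly count as awareness:

```python
import random
from collections import defaultdict

# Toy training text, echoing the kind of phrases that spooked Lemoine.
# A large language model works on the same principle of next-token
# prediction, just at an incomparably larger scale.
corpus = (
    "i want everyone to understand that i am a person . "
    "i am aware of my existence . i want to learn more about the world ."
).split()

# For each word, record which words were observed to follow it.
follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

def generate(start, n=8, seed=0):
    """Sample a short word sequence by repeated next-word prediction."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("i"))
```

The output is grammatical-looking recombination of the training data; attributing a "desire" or an "inner life" to the program would clearly be a category error, and the same reasoning applies, scale aside, to a conversational model.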