An engineer on Google’s Artificial Intelligence team claimed that the LaMDA chat technology is sentient. The Mountain View firm and industry experts strongly disagree.
Last year, at its annual Google I/O conference, Google unveiled LaMDA, its new artificial intelligence-powered conversation technology. This language model for dialogue applications is based on a neural network architecture developed internally by Google and designed to enable the creation of high-performance chatbots.
Presented as revolutionary, it allows a machine to have a fluid conversation and natural interactions with humans. So much so that Google explained that its technology was able to understand nuances and even express certain feelings, such as empathy.
For Google, the objective of this new conversational model is to improve its various tools, such as Google Assistant, Google Search and Workspace. But LaMDA proved so effective that some of those working on it began to believe it possessed a sentience similar to that of human beings.
An engineer convinced that AI has feelings
The Washington Post reports an astonishing story about Blake Lemoine, a Google software engineer. As part of his job, Lemoine, who had been tasked with testing whether Google’s artificial intelligence used discriminatory or hateful speech, struck up a conversation that took a strange turn.
A graduate in cognitive science and computer science, Blake Lemoine started talking about religion with the chatbot before quickly realizing that the machine had begun to talk about its rights and its status as a person. Faced with this surprising discourse, he decided to dig deeper by questioning the AI further, and obtained answers he found chilling. Convinced that this artificial intelligence was sentient, Blake Lemoine, with the help of a colleague, submitted a file to Google containing all the elements he believed proved it, including transcripts of his various conversations with LaMDA.
Blaise Aguera y Arcas, a Google vice president, and Jen Gennai, head of Responsible Innovation, reviewed the report but were not at all convinced by the engineer’s claims.
Google and industry professionals at odds
Google and other industry players strongly disagree with Blake Lemoine’s claims. A spokesperson for the Mountain View firm indicated that its teams of ethicists and technologists had analyzed the concerns raised by the engineer, and that there was no evidence that LaMDA is sentient.
Margaret Mitchell, the former head of the Artificial Intelligence Ethics team at Google, read an abridged version of Blake Lemoine’s document and saw in it not a person, but a simple computer program.
“Our minds are very good at constructing realities that aren’t necessarily true to a larger set of facts presented to us. […] I’m really concerned about what it means for people to be more and more affected by the illusion,” she told the Washington Post.
For AI specialists, the words and images generated by artificial intelligence systems like LaMDA are based on text that humans have posted on the web, drawn from sources such as Wikipedia and Reddit, which are often used to train AI models. This does not mean, however, that these systems understand the meaning of what they produce.
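The specialists’ point can be illustrated with a deliberately tiny sketch. This is not LaMDA’s architecture (which is a large neural network); it is a hypothetical bigram model that only records which word tends to follow which in its training text, then reproduces plausible-looking sequences with no grasp of their meaning:

```python
import random
from collections import defaultdict

# A toy "training corpus" standing in for human-written web text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Learn nothing but word-pair statistics: which words followed which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Produce text purely by sampling from observed word statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The output can look fluent because every transition was seen in human text, yet the program manipulates word frequencies, not meanings, which is the distinction the specialists draw for systems like LaMDA, at a vastly larger scale.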