Google I/O: AI, Google invites you into its AI Test Kitchen

Last year, Google unveiled LaMDA, an experimental natural language processing platform designed to engage in natural and fluid conversations. At the Google I/O conference on Wednesday, the company showcased LaMDA 2. Google CEO Sundar Pichai called the platform “the most advanced conversational AI to date.”

The tech giant is working on LaMDA and other models to improve search, Google Assistant and other tools, but “we need people to experience the technology and give feedback,” Pichai said.

That’s why Google is opening the AI Test Kitchen app to let others explore the AI technologies and capabilities being developed. The app is “intended to give you an idea of what it might be like to have LaMDA in your hands,” Pichai said. In the coming months, Google will gradually open the AI Test Kitchen to small groups of people.

The hunt for inaccurate answers

The first iteration of LaMDA was open to thousands of Googlers, which helped reduce the number of inaccurate answers, Pichai said. The company has made efforts to limit the potentially offensive or harmful content that its AI models serve to users. However, AI models can still provide inaccurate, inappropriate or offensive answers.

To further this work with the help of more users, Google will initially offer three LaMDA-powered experiments in the AI Test Kitchen app. These “experiments” are not product previews, but simply demonstrations of the technology. Pichai showcased a few on the I/O stage on Wednesday, including the “Imagine It” experiment.

The “Imagine It” experiment tests whether the model can take a creative idea you give it and generate imaginative and relevant descriptions. Say you’re writing a story and need some ideas for inspiration, Pichai explained. He showed how to ask LaMDA what it might feel like to explore the depths of the ocean. In response, LaMDA generated a scene in the Mariana Trench and produced follow-up questions on the fly.

Testing LaMDA’s ability to break down a complex goal into subtasks

The model was not programmed for specific subjects, Pichai said. It synthesizes concepts from its training data, allowing users to ask questions on almost any topic.

There is also a demo to test LaMDA’s ability to stay on topic in conversations, as well as a demo to test LaMDA’s ability to break down a complex goal into subtasks.

“These experiments show us the potential of language models to help us in areas such as planning or learning about the world,” Pichai said.

Over time, Google intends to integrate other areas of AI into AI Test Kitchen.
