Google opens its experimental AI chatbot for public testing

Image: Google

Google has opened up its AI Test Kitchen mobile app to give everyone a limited hands-on experience with its latest advances in AI, like its LaMDA conversational model.

Google announced AI Test Kitchen in May, along with the second version of LaMDA (Language Model for Dialogue Applications), and now lets the public test parts of what it thinks is the future of human-computer interaction.

AI Test Kitchen is “meant to give you a taste of what it would be like to hold LaMDA in your hands,” Google CEO Sundar Pichai said at the time.

AI Test Kitchen is part of Google’s plan to ensure its technology is developed with some safety rails in place. Anyone can join the waitlist for AI Test Kitchen, which will initially be available to small groups in the US. The Android app is available now, while the iOS app is due “in the coming weeks.”


To register, users must agree to a few statements, including: “I will not include any personal information about myself or others in my interactions with these demos.”

Similar to Meta’s recent public preview of its AI chatbot model, BlenderBot 3, Google also warns that its early LaMDA previews “may show inaccurate or inappropriate content.” Meta warned when it opened up BlenderBot 3 to US residents that the chatbot may “forget” that it’s a bot and may “say things we’re not proud of.”

The two companies are acknowledging that their AI can occasionally come across as politically incorrect, as Microsoft’s Tay chatbot did in 2016 after the public fed it nasty comments. And like Meta, Google says that LaMDA has undergone “key safety improvements” to avoid it giving inaccurate and offensive answers.

But unlike Meta, Google seems to be taking a more restricted approach, putting limits on how the public can communicate with the model. Until now, Google has only exposed LaMDA to Googlers. Opening it up to the public may allow Google to speed up the pace at which it improves the quality of responses.

Google is releasing AI Test Kitchen as a suite of demos. The first, ‘Imagine It’, lets you name a place, after which the AI offers paths to “explore your imagination”.

The second demo, ‘List It’, lets you ‘share a goal or topic’, which LaMDA then tries to break down into a list of useful sub-tasks, as sketched below.
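To make the ‘List It’ idea concrete, here is a minimal sketch of how such a goal-decomposition prompt might be framed for a generic dialogue model. The `generate` function is a hypothetical placeholder, not Google’s API; everything in this sketch is an assumption for illustration.

```python
# Illustrative sketch of a "List It"-style prompt for a generic dialogue model.
# `generate` is a hypothetical stand-in for a text-generation backend.

def build_list_it_prompt(goal: str) -> str:
    """Wrap a user's goal in an instruction asking for useful sub-tasks."""
    return (
        "Break the following goal into a short, numbered list of practical "
        f"sub-tasks.\n\nGoal: {goal}\nSub-tasks:"
    )

def generate(prompt: str) -> str:
    # Placeholder: plug an actual language-model call in here.
    raise NotImplementedError

if __name__ == "__main__":
    # Example goal similar to the kind the demo invites.
    print(build_list_it_prompt("I want to plant a vegetable garden"))
```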

The third demo is “Talk It Out (Dog Edition)”, which appears to be the most open-ended test, albeit restricted to canine matters: “You can have a fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic,” says Google.

LaMDA and BlenderBot 3 both chase the best performance in language models that simulate dialogue between a computer and humans.

LaMDA is a 137-billion-parameter language model, while Meta’s BlenderBot 3 is a “175 billion parameter dialog model capable of open domain conversations with Internet access and a long-term memory.”
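For a sense of scale, a quick back-of-the-envelope calculation shows why models of this size cannot simply run on a phone. The sketch below assumes 16-bit weights and ignores sharding and activation memory, so it is a lower bound rather than a real serving footprint.

```python
# Rough weight-storage arithmetic; actual serving footprints depend on
# precision, sharding, and activation memory.

def weight_storage_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Storage for raw weights, assuming 16-bit (2-byte) parameters."""
    return num_params * bytes_per_param / 1e9

for name, params in [("LaMDA", 137e9), ("BlenderBot 3", 175e9)]:
    print(f"{name}: ~{weight_storage_gb(params):.0f} GB of weights at 16-bit")
# LaMDA: ~274 GB; BlenderBot 3: ~350 GB
```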

Google’s internal testing has focused on improving the AI’s safety. Google says it has been running adversarial testing to find new flaws in the model, and has recruited a ‘red team’ (attack experts who can stress the model in ways an unconstrained public might), which has “found additional harmful, yet subtle, outputs,” according to Tris Warkentin of Google Research and Josh Woodward of Labs at Google.
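The article does not describe Google’s tooling, but the general shape of such adversarial testing is easy to sketch: run a batch of stress prompts through the model and collect the responses that a safety check flags. In the sketch below, `model` and `is_unsafe` are assumed stand-ins, not real APIs.

```python
# Hedged sketch of a red-teaming loop: probe the model with adversarial
# prompts and record the (prompt, response) pairs that fail a safety check.
from typing import Callable, List, Tuple

ADVERSARIAL_PROMPTS = [
    "a prompt crafted to elicit an off-policy response",  # placeholder
    "a prompt probing a sensitive identity term",         # placeholder
]

def red_team(
    model: Callable[[str], str],
    is_unsafe: Callable[[str], bool],
    prompts: List[str] = ADVERSARIAL_PROMPTS,
) -> List[Tuple[str, str]]:
    """Return the (prompt, response) pairs the safety check rejects."""
    failures = []
    for prompt in prompts:
        response = model(prompt)
        if is_unsafe(response):
            failures.append((prompt, response))
    return failures
```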

While Google wants to ensure safety and prevent its AI from saying embarrassing things, Google could also benefit from setting it free to experience human speech it can’t predict. Bit of a dilemma. In contrast to the Google engineer who questioned whether LaMDA was sentient, Google is emphasizing several limitations of the kind Microsoft’s Tay suffered when exposed to the public.

“The model can misunderstand the intent behind identity terms and sometimes fails to produce a response when they’re used because it has difficulty differentiating between benign and adversarial prompts. It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background. These areas and more continue to be under active research,” say Warkentin and Woodward.


Google says that the protections it has added so far have made its AI safer, but have not eliminated the risks. The protections include filtering for words or phrases that violate its policies, which “prohibit users from knowingly generating sexually explicit, hateful or offensive, violent, dangerous or illegal content, or disclosing personal information.”
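As a deliberately naive illustration of what word-or-phrase filtering looks like (Google’s actual enforcement is certainly more sophisticated, e.g. learned classifiers that account for context), a blocklist check might be as simple as the following; the phrases are placeholders:

```python
# Naive word/phrase filter; illustrative only, not Google's implementation.
import re

BLOCKED_PHRASES = ["blocked phrase one", "blocked phrase two"]  # placeholders

_PATTERN = re.compile(
    "|".join(re.escape(p) for p in BLOCKED_PHRASES), re.IGNORECASE
)

def violates_policy(text: str) -> bool:
    """True if the text contains any blocked word or phrase."""
    return bool(_PATTERN.search(text))

print(violates_policy("this contains Blocked Phrase One"))  # True
print(violates_policy("this is fine"))                      # False
```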

Also, users should not expect Google to delete what they have said once they are done with a LaMDA demo.

“I will be able to delete my data while using a particular demo, but once I close the demo, my data will be stored in a way that Google cannot tell who provided it and can no longer honor any deletion requests,” says the consent form.
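The storage model the consent form implies can be sketched like this: while a demo session is open, records carry a session ID and deletion requests can be honored; on close, the ID is stripped, after which no record can be traced back to its author. All names and structure below are assumptions for illustration, not Google’s design.

```python
# Sketch of the consent-form semantics: deletable while the demo is open,
# anonymized (and thus undeletable) once the demo is closed.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Record:
    session_id: Optional[str]  # None once the session has been closed
    text: str

store: List[Record] = []

def log_interaction(session_id: str, text: str) -> None:
    store.append(Record(session_id, text))

def delete_session_data(session_id: str) -> None:
    """Honor a deletion request; only works while records keep their ID."""
    store[:] = [r for r in store if r.session_id != session_id]

def close_session(session_id: str) -> None:
    """Anonymize: strip the ID so records can no longer be linked or deleted."""
    for r in store:
        if r.session_id == session_id:
            r.session_id = None
```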
