Google is signing up users to talk to its supposedly sentient chatbot

Google’s supposedly sentient chatbot is one of the company’s most controversial projects and has raised many concerns about artificial intelligence. Despite all the controversy, Google is now signing up interested users to talk to the chatbot and give the company feedback.

At the I/O 2022 developer conference in May, Google CEO Sundar Pichai introduced the company’s experimental conversational AI model, LaMDA 2. He said the project would open to beta testers in the coming months, and users can now register to be among the first to try out the chatbot.

LaMDA (Language Model for Dialogue Applications) is a conversational natural language processing (NLP) model. NLP serves as a kind of interface between humans and computers. Voice assistants like Siri or Alexa are prominent examples of NLP in action, translating human speech into commands. NLP also powers real-time captioning and translation applications.
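To make the idea of a conversational NLP model concrete, here is a minimal Python sketch using the open-source Hugging Face transformers library. The publicly available gpt2 model stands in for LaMDA, which is not publicly downloadable, so the model choice and prompt here are purely illustrative.

```python
# A minimal sketch of a conversational NLP model in action, using the
# open-source Hugging Face `transformers` library. gpt2 is a stand-in:
# LaMDA itself is not publicly available.
from transformers import pipeline

# Load a small text-generation model (weights download on first run).
generator = pipeline("text-generation", model="gpt2")

# Give the model a conversational prompt, the way a beta user might.
prompt = "User: Tell me something interesting about dogs.\nAssistant:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

A production system like LaMDA layers safety filtering, dialogue grounding, and much larger models on top of this basic prompt-in, text-out loop.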

Google’s supposedly sentient chatbot got a senior software engineer fired

Back in July, Google reportedly fired one of its senior software engineers, Blake Lemoine, after he claimed that the LaMDA chatbot is sentient and acts like a self-aware person. To justify the termination, Google said the employee had violated its employment and data security policies. In addition, two members of Google’s ethical AI research group left the company in February, saying they could not come to terms with the firings.

Users who sign up for the LaMDA beta program can interact with the model in a controlled, monitored environment. Android users in the US can sign up first, with the program expanding to iOS users in the coming weeks. The experimental program offers beta users several demos that showcase LaMDA’s capabilities.

According to Google engineers, the first demo, ‘Imagine It’, lets users name a place and offers paths to explore their imagination. The second, ‘List It’, lets users share a goal or topic and have it broken down into a list of useful subtasks. The third, ‘Talk About It (Dogs Edition)’, enables an open-ended conversation about dogs between users and the chatbot.

Google’s engineering team says it has “performed dedicated rounds of adversarial testing to find additional flaws in the model.” It also does not claim that the system is infallible: “The model can misunderstand the intent behind identity terms and sometimes fails to produce a response when they’re used because it has difficulty differentiating between benign and adversarial prompts. It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background.”
