What is LaMDA: Google’s new tech to make Android phones talk like humans – Times of India
From using AI to predict what users want and assist them with tasks, Google is now taking steps to make AI friendly. And for a machine to be your friend, it needs the ability to carry a conversation in a friendly way. Until now, Google Assistant sounded like a robotic voice merely announcing search results. Enter LaMDA, or ‘Language Model for Dialogue Applications’ – Google’s new “breakthrough” technology to make Android phones, smart speakers, cars and other voice-supported devices converse the way humans do.
With LaMDA, Google Assistant can hold engaging conversations with you “in a free-flowing way about a seemingly endless number of topics.” This means that if you ask Google Assistant about the current weather, it would no longer stop at simply saying “35 degrees Celsius, sunny and 65% humidity”.
Google will instead reply to day-to-day queries like this in a friendlier manner, such as “It’s warm outside, you may want to carry a bottle of water”. Interestingly, it doesn’t stop there. Google will try to keep things engaging by continuing the conversation. For example, it may move on to “Hey, it may get slightly cooler during the evening, you may want to enjoy some beer at your favourite place.” And from the beer conversation, it may continue on to movies, food or other topics you are interested in – much like how humans talk to each other.
In an official blog post, Google said the entire idea behind LaMDA is to replicate human conversation. “A chat with a friend about a TV show could evolve into a discussion about the country where the show was filmed before settling on a debate about that country’s best regional cuisine,” it said.
What’s the tech behind LaMDA
LaMDA’s “conversational skills” are the product of years of research, and they will continue to evolve. “Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017,” said Google.
That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another, and then predict what words it thinks will come next.
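The “pay attention” step described above is the self-attention mechanism at the heart of the Transformer. As a rough illustration only – not Google’s implementation, and with toy vectors invented for this sketch – here is a minimal scaled dot-product attention in plain Python:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention over a sequence.

    query: vector for the position being processed
    keys/values: one vector per word in the sequence
    Returns a weighted mix of the values, where the weights
    reflect how strongly the query "attends" to each word.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A query that closely matches the first key pulls the output
# toward the first value vector.
out = attention([10.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

In a real Transformer these vectors are learned, there are many attention heads, and the attended mix feeds further layers that ultimately predict the next word; this sketch only shows the weighting step.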
But LaMDA is different: it is trained on dialogue. “During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language,” Google added. Whatever the AI is talking about needs to make sense, and whenever you talk to a LaMDA-assisted voice AI, its entire goal is to think before responding.
“Basically: Does the response to a given conversational context make sense?”
Google is also trying to make LaMDA respond to people with interesting answers. “We’re also exploring dimensions like “interestingness,” by assessing whether responses are insightful, unexpected or witty. Being Google, we also care a lot about factuality (that is, whether LaMDA sticks to facts, something language models often struggle with), and are investigating ways to ensure LaMDA’s responses aren’t just compelling but correct,” said Google.
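One way to picture weighing “sensibleness” against “interestingness” is as a reranking step over candidate replies. The scoring values and weights below are entirely invented for illustration – Google has not published LaMDA’s actual metrics or how they are combined:

```python
def rank_replies(candidates, weight_interesting=0.4):
    """Sort candidate replies by a weighted blend of two quality scores.

    candidates: list of (reply_text, sensibleness, interestingness),
    where both scores are hypothetical floats in [0, 1].
    """
    def combined(c):
        _, sensible, interesting = c
        return (1 - weight_interesting) * sensible + weight_interesting * interesting
    return sorted(candidates, key=combined, reverse=True)

# Made-up candidates for a weather query: a dry but sensible answer,
# a friendly and sensible one, and an interesting non sequitur.
candidates = [
    ("It is 35 degrees Celsius.",          0.9, 0.2),
    ("Warm out - maybe carry some water!", 0.8, 0.7),
    ("Bananas are berries, botanically.",  0.1, 0.9),
]
best = rank_replies(candidates)[0][0]
```

With these invented numbers, the friendly-but-sensible reply wins, while setting `weight_interesting=0.0` would favour the dry factual one – mirroring the trade-off the blog post describes.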