Thursday, December 5, 2024

Google AI Chatbot Thinks Like Humans, Is Your Job In Danger?

Google has been developing artificial intelligence chatbot technology. Blake Lemoine, a software engineer who worked in Google's Responsible AI organization, has claimed that one of the company's AI chatbots thinks like a human brain, and that the work to develop it is already far along.

Soon after making these claims, however, Blake Lemoine was placed on paid leave. He was accused of sharing confidential information about the company’s projects with third parties. In a Medium post, Lemoine said he could soon be fired for his work on AI ethics. He then made striking claims about what he had seen on Google’s systems: he publicly stated that he had encountered a ‘sentient’ AI on Google’s servers, and that this chatbot can think like a human. In conversation it responded continuously in a human-like way; that is, you can keep changing the topic while talking to it, just as you would with a person.

The chatbot that responds like a human is called LaMDA. Lemoine told The Washington Post that when he began chatting with LaMDA (Language Model for Dialogue Applications), he felt exactly as if he were talking to a human. Google had presented LaMDA last year as a breakthrough in conversational technology, saying it could be used in products like Search and Google Assistant, and that research and testing on it were ongoing.

Google’s clarification on Blake Lemoine’s paid leave

Google spokesperson Brian Gabriel said that when the company reviewed Lemoine’s claim, the evidence he provided was found to be insufficient. Asked about Lemoine’s status, Gabriel confirmed that he had been placed on administrative leave.

Gabriel continued, “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” He explained that systems like LaMDA work by imitating the types of exchanges found in millions of sentences, which allows them to converse on imaginary subjects as well.

