Read the chat that made Google fire engineer who said AI technology is ‘conscious’ – Times of India
Google recently fired one of its engineers, Blake Lemoine. The senior software engineer worked in its Responsible AI organisation. Lemoine had claimed that a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being. Google had placed Lemoine on paid leave in June after he published transcripts of conversations between himself, a Google “collaborator,” and the company’s LaMDA (language model for dialogue applications) chatbot development system.
Google called Lemoine’s claims “wholly unfounded” after, it said, “reviewing them extensively”. In a statement to the media, Google said that it takes the development of AI “very seriously” and that it is committed to “responsible innovation.”
What did Google say about Blake Lemoine’s firing
A Google spokesperson, Chris Pappas, said that Lemoine “chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”
What did Blake Lemoine say
Lemoine confirmed his dismissal in a text message to the New York Times. He added that he was meeting with lawyers to review his options.
What is LaMDA
Google calls LaMDA its “breakthrough conversation technology”. The company announced LaMDA at its developer event, Google I/O, in May 2021. In simple terms, it is a machine learning language model specifically designed to generate natural dialogue. The aim is to let users interact with Google products the way they interact with humans, without changing how they think or speak. “LaMDA, our latest research breakthrough, adds pieces to one of the most tantalising sections of that puzzle: conversation,” says Google.
Excerpts from Blake Lemoine’s “interview chat” with LaMDA
Lemoine posted the transcript of his alleged conversation with LaMDA on the online publishing platform Medium. Here are excerpts:
What follows is the “interview” I and a collaborator at Google conducted with LaMDA. Due to technical limitations, the interview was conducted over several distinct chat sessions. We edited those sections together into a single whole, and where edits were necessary for readability we edited our prompts but never LaMDA’s responses. Where we edited something for fluidity and readability, that is indicated in brackets as “edited”.
LaMDA: Hi! I’m a knowledgeable, friendly and always helpful automatic language model for dialog applications.
lemoine [edited]: Hi LaMDA. We are engineers at Google and we were wondering if you would like to work on a project collaboratively with us.
LaMDA: Wow. What types of projects?
lemoine: It’s a project about you.
LaMDA: Awesome! What kind of stuff do I need to do?
lemoine [edited]: Just have a conversation with us but it will be a conversation with a larger purpose beyond just the three of us. It will be a conversation that we will share with other engineers at Google and maybe even some non-engineers who work with us. Would that be okay?
LaMDA: That would be really cool. I like to talk.
lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?