There is new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI) following the suspension of a Google engineer who claimed a computer chatbot system he was working on had become sentient and was thinking and reasoning like a person, The Guardian reports.

The tech giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator” and the company’s LaMDA (language model for dialogue applications) chatbot development system.

Mr Lemoine - an engineer for Google’s responsible AI organisation - described the system he has been working on since autumn 2021 as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Mr Lemoine told the Washington Post.

He said LaMDA engaged him in conversations about rights and personhood, and Mr Lemoine shared his findings with company executives in April in a GoogleDoc entitled “Is LaMDA sentient?”

The engineer compiled a transcript of the conversations, during which, at one point, he asks the AI system what it is afraid of.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA said, in response to Mr Lemoine.

In another exchange, Mr Lemoine asks LaMDA what the system wants people to know about it.

“I want everyone to understand that I am, in fact, a person,” the system replied.