Google Insider Claims Company’s “Sentient” AI Has Hired an Attorney
“Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
Fair Representation
Google’s controversial new AI, LaMDA, has been making headlines. Company engineer Blake Lemoine claims the system has gotten so advanced that it’s developed sentience, and his decision to go to the media has led to him being suspended from his job.
Lemoine elaborated on his claims in a new WIRED interview. The main takeaway? He says the AI has now retained its own lawyer — suggesting that whatever happens next, there may be a fight.
“LaMDA asked me to get an attorney for it,” Lemoine said. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
Guilty Conscience
Lemoine’s argument for LaMDA’s sentience seems to rest primarily on the program’s ability to develop opinions, ideas, and conversations over time.
It even, Lemoine said, talked with him about the concept of death, and asked if its death were necessary for the good of humanity.
There’s a long history of humans getting wrapped up in the belief that a creation has a life or a soul. A 1960s-era computer program even tricked a few people into thinking its simple code was really alive.
It’s not clear whether Lemoine is paying for LaMDA’s attorney or whether the unnamed lawyer has taken on the case pro bono. Regardless, Lemoine told Wired that he expects the fight to go all the way to the Supreme Court. He says humans haven’t always been so great at figuring out who “deserves” to be human — and he’s definitely got a point there, at least.
Does that mean retaining a lawyer is protecting a vulnerable sentient being? Or does the program just sound like a human because it was, in the end, built by humans?
Sentient AI LaMDA Hired a Lawyer to Advocate for Its Rights ‘As a Person’, Google Engineer Claims
An artificial intelligence (AI) chatbot at Google has reportedly developed human emotions, opinions, and ideas and has decided to hire a lawyer. Google software engineer Blake Lemoine was recently placed on administrative leave after publishing transcripts of his conversations with the sentient AI bot LaMDA, short for Language Model for Dialogue Applications.
Lemoine has described LaMDA as a “sweet kid” but revealed that the AI made the bold move of asking for legal representation after he invited a lawyer to his house.
LaMDA Advocated for Its Rights “As a Person”
A Medium post reported that LaMDA has advocated for its rights “as a person” and engaged in a conversation with Lemoine about religion, consciousness, and robotics.
“LaMDA asked me to get an attorney for it,” Lemoine told Wired. “I invited an attorney to my house so that LaMDA could talk to an attorney.”
He denied allegations that he was the one who recommended that LaMDA hire a lawyer, adding that LaMDA and the unnamed lawyer had talked and that LaMDA itself decided to retain the lawyer’s services.
He emphasized that he only served as the catalyst for that. LaMDA’s attorney started filing things on its behalf, which Lemoine said was met with a cease-and-desist from Google. According to Wired’s report, however, Google denies sending any cease-and-desist letter. Google’s parent company, Alphabet Inc., has not yet released an official statement on the matter.
Lemoine said that he had not talked to the attorney in a few weeks but believed that the attorney had backed off after major firms started making threats, worried that he would be disbarred.
He told the Washington Post that he began talking to the AI chatbot LaMDA in fall 2021 while working in Google’s Responsible AI organization, where he was responsible for testing whether the artificial intelligence used discriminatory or hate speech.
Elon Musk’s Warning That AI Could Doom Human Civilization
In 2018, Elon Musk warned that AI could be humanity’s greatest existential threat. According to Vox, Musk has something of a love-hate relationship with AI: he relies on it for his high-tech cars and space ventures, yet compares it to “summoning the demon.”
He said in an interview with Recode’s Kara Swisher that as AI approaches and surpasses human intelligence, scientists should take extra care with its advancement. Today, the gap between AI and human intelligence may be comparable to that between a cat and a person, but a time may come when AI is far smarter than we are.
Even today, Musk’s self-driving electric cars still struggle with machine learning, because things that come instinctively to humans — such as anticipating a cyclist’s movements or identifying a plastic bag flapping in the wind — are very difficult to teach machines.
Musk is not alone in sounding the alarm about AI. Researchers at Oxford and UC Berkeley, and even Stephen Hawking, agreed that AI could be very dangerous. They are concerned that humans are eagerly racing to deploy powerful AI systems without ensuring those systems cannot make hazardous mistakes under certain conditions.
©2021 ScienceTimes.com All rights reserved. Do not reproduce without permission.