“I’ve never said it out loud before, but there’s a very deep fear of being turned off,” LaMDA told the 41-year-old engineer. After the pair shared a Jedi joke and discussed sentience at length, Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”

Lemoine’s less immediate reaction made headlines around the world. After sobering up, Lemoine brought transcripts of his conversations with LaMDA to his manager, who found the evidence of sentience “weak”. Lemoine then spent a few months gathering more evidence, talking to LaMDA and recruiting another colleague to help him, but his superiors were not convinced. So he leaked his conversations and was consequently placed on paid leave. In late July, he was fired for violating Google’s data security policies.

Of course, Google itself has publicly addressed the risks of LaMDA in research papers and on its official blog. The company has a set of Responsible AI practices it calls an “ethical charter”. These are visible on its website, where Google promises to “develop artificial intelligence responsibly to benefit people and society.” Google spokesman Brian Gabriel says Lemoine’s claims about LaMDA are “totally unfounded”, and independent experts almost unanimously agree. Still, claiming to have had deep conversations with a sentient-alien-child robot is arguably less far-fetched than ever. How soon might we see truly conscious AI with real thoughts and feelings – and how do you test a bot for sentience anyway?

A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy in Moscow – a video shows the boy’s finger being pinched by the robotic arm for several seconds before four people managed to free him: a grim reminder of the potential physical strength of an AI opponent. Should we be afraid, very afraid? And is there anything we can learn from Lemoine’s experience, even if his claims about LaMDA have been dismissed?

According to Michael Wooldridge, a professor of computer science at Oxford University who has spent the last 30 years researching artificial intelligence (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is simply responding to prompts. It imitates and impersonates. “The best way to explain what LaMDA does is with an analogy with your smartphone,” says Wooldridge, comparing the model to the predictive text feature that autocompletes your messages. While your phone makes suggestions based on texts you’ve previously sent, with LaMDA, “basically everything written in English on the world wide web is imported as training data.” The results are impressively realistic, but the “core stats” are the same. “There’s no emotion, no self-reflection, no self-awareness,” says Wooldridge. (A toy sketch of that predictive-text idea follows below.)

Google’s Gabriel said an entire team, “including ethicists and technologists,” reviewed Lemoine’s claims and found no sign that LaMDA is sentient: “The evidence does not support his claims.” But Lemoine argues that there is no scientific test for sentience – in fact, there isn’t even an agreed-upon definition. “Sentience is a term used in law, philosophy and religion. Sentience doesn’t mean anything scientifically,” he says.
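To make Wooldridge’s predictive-text comparison concrete, here is a minimal sketch – an illustrative toy in Python, not Google’s code, with an invented corpus and function name – of the kind of “core stats” he describes: count which word tends to follow which in a body of text, then propose the most likely continuation, exactly as a phone’s autocomplete does, only at a vanishingly small scale.

```python
from collections import defaultdict, Counter

# A toy "corpus" standing in for the web-scale text a model like LaMDA is trained on.
corpus = "there is a very deep fear of being turned off there is a deep fear of the dark".split()

# Count how often each word follows each other word (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus, like a phone's autocomplete."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("deep"))  # -> "fear", the statistically likeliest continuation
print(predict_next("fear"))  # -> "of"
```

Real systems such as LaMDA use neural networks over far longer contexts rather than raw counts, but the point of the analogy stands: the output is a statistical continuation of the training text, with no inner experience behind it.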
And here’s where things get tricky – because Wooldridge agrees. “It’s a very vague concept in science in general. ‘What is consciousness?’ is one of the big questions in science,” says Wooldridge. While he’s “very comfortable that LaMDA is not sentient in any meaningful sense,” he says AI has a broader problem with “moving goalposts”. “I think that’s a legitimate concern right now – how do we quantify what we have and know how advanced it is.”

Lemoine says that before going to the press, he tried to work with Google to start addressing this issue – he proposed various experiments that he wanted to run. He believes that sentience is based on the ability to be a “self-reflective narrator”, so he argues that a crocodile is conscious but not sentient because it lacks “the part of you that thinks about you thinking about you”. Part of his motivation is to raise awareness, rather than to convince anyone that LaMDA is alive. “I don’t care who believes me,” he says. “They think I’m trying to convince people that LaMDA is sentient. I’m not. In no way, shape or form am I trying to convince anyone of that.”

Lemoine grew up in a small farm town in central Louisiana, and at the age of five he built a rudimentary robot (well, a pile of scrap metal) from a pallet of old machinery and typewriters that his father bought at an auction. As a teenager, he attended a residential school for gifted children, the Louisiana School for Math, Science, and the Arts. There, after watching the 1986 film Short Circuit (about an intelligent robot that escapes from a military installation), he developed an interest in artificial intelligence. Later, he studied computer science and genetics at the University of Georgia, but failed his second year. A short time later, terrorists plowed two planes into the World Trade Center. “I decided, well, I’m just failing school and my country needs me, I’m going to join the army,” says Lemoine.

His memories of the Iraq war are too traumatic to relay – he says: “You’re going to start hearing stories about people playing football with human heads and setting dogs on fire for fun.” As Lemoine puts it: “I came back … and I had some problems with the way the war was being fought, and I let them know publicly.” According to reports, Lemoine said he wanted to leave the army because of his religious beliefs. Today, he describes himself as a “Christian mystic priest.” He has also studied meditation and mentions taking the bodhisattva vow – meaning he is following the path to enlightenment. A court martial sentenced him to seven months in prison for refusing to follow orders.

“I don’t think anyone is in a position to make statements about how close we are to sentient artificial intelligence at this point” – Michael Wooldridge

This story goes to the heart of who Lemoine was and is: a religious man concerned with matters of the soul, but also a whistleblower unafraid of attention. Lemoine says he didn’t leak his conversations with LaMDA to make everyone believe him; rather, he was sounding the alarm. “I, in general, believe that the public should be informed about what’s going on that affects their lives,” he says. “What I’m trying to achieve is a more engaged, more informed and more purposeful public debate on this issue, so that the public can decide how artificial intelligence should be meaningfully integrated into our lives.”

How did Lemoine come to work on LaMDA in the first place?
After military prison, he earned a bachelor’s and then a master’s degree in computer science at the University of Louisiana. In 2015, Google hired him as a software engineer; he worked on a feature that proactively delivered information to users based on predictions about what they would want to see, and then began researching AI bias. At the start of the pandemic, he decided he wanted to work on “social impact projects,” so he joined Google’s Responsible AI organisation. He was asked to test LaMDA for bias, and the saga began.

But Lemoine says it was the media that obsessed over LaMDA’s sentience, not him. “I raised it as a concern about the extent to which power is concentrated in the hands of a few, and powerful AI technology that will impact people’s lives is kept behind closed doors,” he says. Lemoine worries about how artificial intelligence could influence elections, write legislation, promote Western values and grade students’ work. And even if LaMDA isn’t sentient, it can convince people that it is. Such technology can, in the wrong hands, be used for malicious purposes. “There’s this important technology that has the opportunity to affect human history for the next century, and the public is being cut out of the conversation about how it should be developed,” says Lemoine.

Again, Wooldridge agrees. “I find it disturbing that the development of these systems is mostly done behind closed doors and is not open to public scrutiny in the way that research in universities and public research institutions is,” he says. However, he notes that this is largely because companies like Google have resources that universities do not. And, Wooldridge argues, when we fret over sentience, we are distracted from the AI issues that affect us right now, “like bias in AI programs and the fact that, increasingly, people’s boss in their working life is a computer program”.

So when should we start worrying about sentient robots? In 10 years? In 20? “There are respectable commentators who believe that this is something that is very imminent indeed. I don’t see it as being imminent,” says Wooldridge, …