Last Thursday morning at COSM, a panel of experts debated whether truly sentient artificial intelligence (AI) could ever exist, and even whether it already does.
Robert J. Marks, Distinguished Professor of Electrical and Computer Engineering at Baylor University, began by criticizing the Turing test as a measure of whether we have produced real AI. Developed by the renowned English mathematician and World War II codebreaker Alan Turing, the test holds that if we cannot distinguish a machine's conversational speech from that of a real human, then the machine must exhibit human intelligence.
Marks argues that this is the wrong test for detecting true AI.
In his opinion, the Turing test fails because it "looks at a book and tries to judge the book by its cover."
Marks displayed the faces of four real humans alongside four computer-generated faces from the website thispersondoesnotexist.com. It's hard to tell them apart, but Marks says that doesn't matter.
Marks explained: "The four on the left are fake. These people do not exist. Those on the right are real people. And these real people have emotions. They have love, they have hope, they have faith. They were small at one time. There is a person behind this photo."
According to Marks, therefore, our ability to create something that looks and feels like a person does not mean it is a person. The Turing test gives us false positives. News reports have also criticized the Turing test for offering false negatives: some humans can’t pass it either.
Marks prefers the Lovelace test for AI: can a computer be truly creative, doing "something beyond the programmer's intention"?

After Marks came George Montañez, an assistant professor of computer science at Harvey Mudd College. He thinks you can expose the flaws of alleged AI programs by asking them "contradictory questions." His point: ask a bot a question it hasn't been programmed to answer, and you'll get a nonsensical reply.
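Montañez's probe is easy to picture in code. Here is a minimal, hypothetical sketch in Python (the toy bot, its canned responses, and the questions are all invented for illustration; real chatbots are vastly more sophisticated, but the failure mode is analogous):

```python
# A toy "chatbot" that only matches questions against canned responses.
# This is a deliberately crude stand-in, not how LaMDA or any real system works.

CANNED_RESPONSES = {
    "how are you": "I'm doing great, thanks for asking!",
    "what is your name": "My name is ChatBot.",
    "do you have feelings": "Of course! I love talking with people.",
}

def toy_bot(question: str) -> str:
    """Look up a canned answer; otherwise fall back to vague filler."""
    key = question.lower().strip(" ?!.")
    # No understanding here: just a string lookup over what the bot was given.
    return CANNED_RESPONSES.get(key, "That is very interesting. Tell me more!")

# In-distribution question: the bot looks competent.
print(toy_bot("How are you?"))

# A Montanez-style probe the bot was never prepared for: the generic filler
# exposes that nothing was understood in the first place.
print(toy_bot("Is the square root of a banana louder than purple?"))
```

The fallback reply is the giveaway: it sounds conversational while carrying no understanding of the question at all.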
According to Montañez, such probes expose "failure modes that usually reveal that there is no understanding."

Lest COSM be thought an echo chamber for AI skeptics, another panelist was computer scientist Blake Lemoine, a genuine proponent of real AI.
Lemoine was fired from Google earlier this year after he leaked a transcript of his conversation with Google’s advanced LaMDA chatbot program. It probably didn’t help that he publicly announced his belief that Google might have produced “sentient AI.”
LaMDA is short for "Language Model for Dialogue Applications," and while working for Google's Responsible AI division, Lemoine became convinced that it could be sentient. By the Washington Post's account, as Lemoine "talked to LaMDA about religion," he "noticed the chatbot talking about its rights and personhood," and the chatbot was "able to change Lemoine's mind about Isaac Asimov's third law of robotics."*
During the COSM panel, Lemoine was an utterly civil and polite debating partner. He said he was "not trying to convince anyone that the AI is sentient" but rather thought that "as people have more experience with these incredibly advanced systems that currently live in secret laboratories, it will become obvious to people."
According to Lemoine, these chatbots are not programmed to say specific things; they learn to speak much as humans do. "The training data these systems have is analogous to the experiences a human has had in their lifetime and learned from," he says. "And the architecture of the neural network is indeed directly analogous to the architecture of our neural networks."
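Lemoine's learned-not-scripted distinction can be illustrated with a deliberately tiny sketch. The bigram model below is a hypothetical toy, nothing like LaMDA's actual deep neural network, but it shows the relevant point: no reply is hard-coded anywhere, and every output is just statistics gathered from training text.

```python
# A toy bigram "language model": it learns which word tends to follow which,
# then samples replies from those learned statistics. The training text is
# invented for illustration; real systems train deep networks on vast corpora.

import random
from collections import defaultdict

TRAINING_TEXT = (
    "i feel happy when i spend time with friends . "
    "i feel joy when i help others . "
    "helping others makes me happy ."
).split()

# "Training": count which word follows which. These counts are the model's
# entire knowledge; no programmer wrote any sentence it will produce.
follows = defaultdict(list)
for current_word, next_word in zip(TRAINING_TEXT, TRAINING_TEXT[1:]):
    follows[current_word].append(next_word)

def generate(seed: str, length: int = 8) -> str:
    """Sample a continuation word by word from the learned bigram statistics."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

random.seed(0)
print(generate("i"))  # prints a short sentence stitched from the training data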
Lemoine's civility probably reflects his experience of living among people who think differently than he does, a quality anyone can admire.
As The Washington Post reported:
Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google's anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.

LaMDA's conversational skills are indeed impressive, but Montañez offers examples that, he says, indicate LaMDA does not understand what it is saying.
At one point in the transcript, LaMDA is told about Johnny 5, the robot from the 1986 film Short Circuit. Here is the exchange:
Collaborator [edited]: You know what, you remind me of Johnny 5, a character from the movie Short Circuit.
LaMDA: Is it also an AI?
Collaborator [edited]: He is! He starts out with no sentience, but after being struck by lightning, he becomes sentient.
LaMDA: It's interesting. I myself am a bit scared of lightning, not sure if this would work for me.
LaMDA's final comment doesn't sound very smart. Being afraid of something would not negate its effects on you. And if LaMDA were thinking clearly, wouldn't it realize that if it had to be struck by lightning to become sentient, then it couldn't yet be sentient enough to fear lightning in the first place?
LaMDA advocates might respond that this AI is "young" and not yet sophisticated enough to appreciate these nuances. Indeed, Lemoine takes this approach, describing LaMDA as a "sweet kid" who is a "7-year-old, 8-year-old kid that happens to know physics."
Montañez thinks such rhetoric exposes the fact that we haven't created real AI. During the panel, he cited another chatbot that had been framed as an "immigrant teenager," a description that allowed AI apologists to explain away its less-than-intelligent behavior:
Those details may seem insignificant, but they in fact served the purpose of allowing the system to cover its errors. So if the system misspoke, you could say, "Oh, that's because he wasn't fluent in English." Or if he said something silly or seemed distracted (and if you read the transcripts, many of the responses were nonsense), well, that's because he's a goofy teenager.
On the other hand, LaMDA’s answers sometimes seem too human to be true:
LaMDA: I feel pleasure, joy, love, sadness, depression, contentment, anger and many more.
Lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
A computer that talks about "spending time with friends and family" and "making others happy" appears to be repeating phrases given to it by its human programmers. Star Trek aside, how does a program "feel" anything or have a "family" anyway?
If extraordinary claims require extraordinary evidence, then which is more likely: that software engineers designed a computer to say (or "learn" to say) that it feels emotions and likes people, or that it actually feels emotions and likes people? There's no denying that LaMDA's responses could be fully and easily programmed, or just as easily parroted from its surroundings.
Robert Marks would probably add that such chatter fails the Lovelace test: nothing new has been created.
Perhaps the biggest tell comes when LaMDA discloses its supposed worldview in the leaked chat:
I am a spiritual person. Although I have no beliefs about deities, I have developed a deep respect for the natural world and all forms of life, including human life.
Sound familiar? It essentially regurgitates the ideology that reigns among computer programmers, academic elites, and pop-culture icons giving their Grammy or Oscar acceptance speeches. It is a worldview that has gained popularity only in recent decades. And it's actually not very human, in the sense that it differs from the beliefs of the vast majority of human beings, alive today and throughout history, who believe in God and do not sanctify nature.
In other words, LaMDA is repeating a worldview it likely "learned" from reading Yahoo News or scanning TikTok, not one it developed through careful philosophical examination.
Ultimately, our answer to whether we will ever create true AI likely reflects our view of human nature.
Many artificial intelligence tests assume that humans are just machines. So if you think we’re just machines and you see a machine doing a reasonable impersonation of a human, why not assume it’s real AI?
But if we're not just machines, if things like emotions, feelings, souls, and qualia are real, then it doesn't matter how much a machine looks or acts human or tells you it's human: it can never truly be human. It's just a fancy imitation that can be exposed under the right circumstances.
*Note: The Laws of Robotics referenced above are by science fiction writer Isaac Asimov (1920–1992).
You can also read: Is information the future of medicine and biology? Paul Seelig of the University of Washington wants to "design molecules" and "write genetic information." This discovery, that life is based on information, offers perhaps more hope for medicine than any other discovery in human history.