
Relax, Google’s LaMDA chatbot is nowhere near sentient

Just because you designed a good NPC doesn't make it HAL 9000.


By now, you may have come across one or more recent articles centered on an impressive bit of AI software called LaMDA, and/or an impassioned Google employee named Blake Lemoine. Originally tasked with monitoring whether the company’s new Language Model for Dialogue Applications (LaMDA) veered into pesky problems like offensive conversations or hate speech, Lemoine soon came to believe that the chatbot is self-aware and deserving of the same rights afforded to humans.

He believed this to be the case so fervently that he went ahead and published lengthy conversations with LaMDA online. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” he tweeted on Saturday. Before Google shut down his company email and placed him on leave, Lemoine sent out a group message with the subject line “LaMDA is Sentient” that closed by describing the software as “a sweet kid who just wants to help the world be a better place for all of us.”

A controversial Big Tech juggernaut, a self-sacrificing employee-turned-true believer, a very real AI system known by an intimidating acronym — it all sounds like the plot of a sci-fi novel or film. But just because it’s captivating doesn’t make it fact, and calling LaMDA anything more than a sophisticated “spreadsheet for words” is a serious claim.

A visualization from Google showing what LaMDA does.

Easy to ignore the obvious — I’ll confess that I was initially intrigued by the sensationalist headlines forwarded to me about LaMDA. As a writer who covers the various levels of our daily dystopian AI interactions, it all sounded like an intriguing — perhaps even consequential — story. The slightest bit of delving, however, reveals that there is pretty much zero real indication LaMDA is a sentient equivalent to a “7-year-old, 8-year-old kid that happens to know physics,” as Lemoine described it in a widely-read profile from The Washington Post.

First off, Google’s very own description of LaMDA back in May 2021 makes it clear that the initial parameters aren’t exactly primed for a HAL 9000 existential showdown. “While conversations tend to revolve around specific topics, their open-ended nature means they can start in one place and end up somewhere completely different,” Eli Collins and Zoubin Ghahramani, Google’s VPs of product management and research, explained.

“... That meandering quality can quickly stump modern conversational agents (commonly known as chatbots), which tend to follow narrow, pre-defined paths. But LaMDA... can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”

By Google’s own description, LaMDA is being designed to help usher in a new era of virtual assistants and chat possibilities, none of which requires sentience, beyond maybe a semi-convincing, freshman-philosophy take on Descartes’s “I think, therefore I am.” Still, it’s very useful, very cool, and — most importantly — very uncanny. If you aren’t paying close attention to our species’s cognitive and logical fallacies, then it’s easy to miss the otherwise clear signs that LaMDA is just posing as a sentient being.

Signs in the source material — It also doesn’t take a very close read of Lemoine’s chat transcripts with LaMDA to see some glaring red flags regarding the AI’s supposed sentience. “What kinds of things make you feel pleasure or joy?” Lemoine asks at one point, to which LaMDA informs him that it enjoys, “Spending time with friends and family in happy and uplifting company.”

“Honestly if this system wasn’t just a stupid statistical pattern associator it would be like a sociopath, making up imaginary friends and uttering platitudes in order to sound cool,” AI developer and NYU Professor Emeritus Gary Marcus tweeted yesterday.

Marcus also laid out a detailed rebuttal to Lemoine’s sentience claims in a blog post, dispelling widespread misassumptions regarding the nature of “self-awareness” and our tendency to ascribe it to clever computer programs capable of mimicry. “To be sentient is to be aware of yourself in the world; LaMDA simply isn’t,” he writes. “It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some people into thinking it was human), and Eugene Goostman, a wise-cracking 13-year-old-boy impersonating chatbot that won a scaled-down version of the Turing Test.”

“Six decades (from Eliza to LaMDA) have taught us that ordinary humans just aren’t that good at seeing through the ruses of AI,” Marcus told me over Twitter DM. “Experts would (or should) want to know how an allegedly sentient system operates, what it knows about the world, what it represents internally, and how it processes the information that comes in.”

When asked if he had any personal markers for when an AI system would reach ‘sentience,’ he offered a counterpoint we should all adopt. “I would really rather do science, investigating the operation of the machine, rather than apply blind benchmarks.”

If LaMDA is sentient, then this is the metaverse. Shutterstock

There are enough problems already — Researchers and experts like Marcus aren’t saying that an AI will never hypothetically achieve true sentience, but we’re much further from that nebulous goalpost than believers like Lemoine would have us believe. “[LaMDA] doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context,” Marcus wrote in his blog. In this sense, at least, it seems Google’s new chatbot succeeds.
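To make the “spreadsheet for words” and autocomplete comparisons concrete, here is a minimal, hypothetical sketch in Python. It is a toy bigram counter, nowhere near LaMDA’s actual Transformer-based neural network, but it illustrates the same basic objective: pick the next word from statistics over prior text, not from any inner experience.

```python
from collections import Counter, defaultdict

# A deliberately tiny "spreadsheet for words": count which word tends to follow which.
corpus = (
    "spending time with friends and family in happy and uplifting company "
    "spending time with friends makes me happy"
).split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# The "model" has no feelings about friends or family; it only echoes word statistics.
print(predict_next("with"))   # -> 'friends'
print(predict_next("happy"))  # -> 'and'
```

Scale that basic trick up by billions of parameters and a huge training corpus and you get fluent, friendly-sounding sentences about “friends and family,” which is roughly the point Marcus is making: convincing output, no one home.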

Unfortunately, all the theatrics and shallow coverage do a disservice to the actual problematic consequences that can (and will) arise from LaMDA and similar AI software. If this kind of chatbot can fool even a handful of Google’s supposedly expert employees, then what kind of impact could that technology have on the general populace? AI impersonations of humans lend themselves to all sorts of scams, con jobs, and misinformation. Something like LaMDA won’t end up imprisoning us all in the Matrix, but it could conceivably convince you that it’s your mom, and that she needs your Social Security number to keep the family’s records up to date. That alone is enough to make us wary of the humanity (or lack thereof) at the other end of the chat line.

Then there are the very serious, well-documented issues regarding built-in human biases and prejudices that plague so many of Big Tech’s rapidly advancing AI systems. These are problems that the industry — and, by extension, the public — is grappling with at this very moment, and they must be properly addressed before we even begin to approach the realm of artificial sentience. The day may or may not come when AIs make solid cases for their personal rights beyond simply responding in the affirmative, but until then, it’s as important as it is ironic that we don’t let our emotions cloud our logic and judgment. Humans are fallible enough as it is; we don’t need clever computer programs making that any worse.