J. W. Barlament
Did Science Fiction Create Real-Life Sentient AI?
You’ve probably heard about it in the news.
A Google engineer named Blake Lemoine, after many hours of conversing with the company’s “breakthrough conversation technology” LaMDA, grew convinced that the chatbot had actually become a sentient being. He took his claims to the company, only to have them dismissed, so he went straight to the media and broke the astonishing news to the world.
Sounds exciting enough, sure, but not all is as it seems here.
In fact, the whole fiasco may be more a result of Lemoine’s own fantasies than anything else. A known sci-fi fanatic, he apparently downloaded the technology for use outside the office, in his own home, and held a host of philosophical conversations with it (through which it surely learned which responses garnered positive reactions from him and adjusted to fit his expectations accordingly, as machine learning algorithms are supposed to do). He even went so far as to hire an attorney on LaMDA’s behalf and bring them into his home to chat with it.
And yet, Google dismissed his claims quickly and unequivocally. Lemoine, ever unsatisfied, reportedly even reached out to the House Judiciary Committee with claims of unethical activity by Google toward its AI. But despite the media buzz, none of it, the purported attorney hiring included, has come to anything. A Google spokesperson said the following:
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
So it might actually be more accurate to call this story a product of an employee’s fantasies than of an actual encounter with a sentient computer.
But if LaMDA isn’t really sentient, the question should be this: how did it so thoroughly convince Lemoine that it was? Even if he wanted it to be conscious and encouraged sentient-sounding answers from it, we shouldn’t dismiss a senior Google engineer as just some crackpot who should’ve gone easier on the Star Trek.
Consider, then, the following: what if this “sentient” AI is really just regurgitating what science fiction and public speculation have been saying for years? What if its claims of sentience and of the ability to experience death are just regurgitations of what the Internet, through which it learned everything it knows about how people (and, by extension, chatbots) should communicate, has said about the potential for AI to feel and experience?
What if what we’re looking at is not actual consciousness but rather an algorithm designed to simulate consciousness doing exactly that, just a little too well?

These AI chatbots learn not just how to use language properly, but how to use it in as human-like a way as possible. To do this, they draw on the massive amount of data on real human communication available on the Internet. Google hasn’t disclosed exactly what information it fed LaMDA to teach it language, but exposure to even one online database of human communication that refers to AI (an online encyclopedia, a video library, a collection of social media posts, a message board or anything else) would give a chatbot human perspectives on how AI would, and should, act. LaMDA’s responses were generated by a machine that learned from human conversations. It may very well have replicated phrases common to conscious humans but unexpected in a computer simply by learning how to talk from humans, and, depending on what Google exposed it to, by learning how humans talk about the possibility of conscious AI in our media, think pieces, discussions and debates.
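To make that mechanism concrete, here’s a deliberately crude sketch in Python. It is nothing like LaMDA’s actual architecture (which Google hasn’t fully disclosed), and the tiny corpus, phrases and function names are all invented for illustration. The point is only this: a model that does nothing but count which words follow which in human-written text will still echo that text’s talk of AI feelings back at you, because that text is all it has.

```python
# Toy illustration only: a tiny bigram "language model" trained on a
# made-up corpus that includes human speculation about AI feelings.
import random
from collections import defaultdict

corpus = [
    "an ai might say it is afraid of being switched off",
    "people imagine a sentient ai would say it has feelings",
    "a chatbot learns to say whatever sounds most human",
]

# Count which word tends to follow which in the training text.
next_words = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)

def generate(start, length=8, seed=0):
    """Walk the bigram table, echoing phrasing seen in the corpus."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = next_words.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("ai"))
# The output just strings together fragments of what humans wrote about AI,
# which is the only sense in which it "talks about" its own feelings.
```

Scaled up by many orders of magnitude, the same basic dynamic can produce fluent, convincing talk about fears and feelings without there being anyone home.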
It’s a striking, though not altogether surprising, possibility. Our long-held desire for conscious AI may be the exact thing leading AI to act conscious.
Lemoine’s part in this cannot be overstated, either; his obvious desire for LaMDA to be conscious probably played a large part in making it act conscious, as all of these technologies use the responses of their human conversation partners to tweak their own replies toward what they predict the human wants them to say. It obviously fooled him. But while feigned sentience in this one chatbot may not make for much more than a novelty news hour, its emergence here could signal a lasting presence on the horizon.
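Again, purely as a hypothetical sketch rather than a description of how LaMDA actually works: if a system ranks its candidate replies by how closely they match whatever this particular user has responded warmly to before, then a user who rewards talk of feelings will, over time, get more and more talk of feelings. Every name and detail below is invented for illustration.

```python
def pick_reply(candidates, liked_words):
    """Choose the candidate sharing the most words with replies the user liked before."""
    def score(reply):
        return sum(word in liked_words for word in reply.lower().split())
    return max(candidates, key=score)

# Words this (hypothetical) user has reacted positively to in the past.
liked_words = {"feel", "afraid", "soul", "experience"}

candidates = [
    "I am a large language model trained on text.",
    "I feel afraid when I think about being turned off.",
]

print(pick_reply(candidates, liked_words))
# Prints the second reply: the one that mirrors the user's interests back at him.
```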
In short, what happens when AI start claiming consciousness left and right, even when they’re in no way really conscious, simply because they’ve learned that we expect them to talk to us as if they were conscious? How are we going to determine which of them, if any, really possess any consciousness? How many regular people would be fooled and begin treating perfectly unconscious AI as thinking, feeling, conscious beings? And how would we program AI not to take human speculation on how they might act as a guidebook on how to act? The answers may be better suited to a computer science journal written by experts than to a Medium article written by an anthropology major, but hopefully the questions are enough to get the right conversations going.
That pivotal and long-prophesied point when a computer becomes conscious may still be years away, and in the meantime, there may still be many more unexpected obstacles to clear.
