Getty Images

Ex-Google engineer Blake Lemoine discusses sentient AI

Ex-Google engineer Blake Lemoine discusses why LaMDA and other AI systems may be considered sentient and explains exactly how much AI systems know about consumers.

Software engineer Blake Lemoine worked with Google's Ethical AI team on Language Model for Dialogue Applications (LaMDA), examining the large language model for bias on topics such as sexual orientation, gender, identity, ethnicity and religion.

Over the course of several months, Lemoine, who identifies as a Christian mystic, hypothesized that LaMDA was a living being, based on his spiritual beliefs. Lemoine published transcripts of his conversations with LaMDA and blogs about AI ethics surrounding LaMDA.

In June, Google put Lemoine on administrative leave; last week, he was fired. In a statement, Google said Lemoine's claims that LaMDA is sentient are "wholly unfounded."

"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement. "We will continue our careful development of language models, and we wish Blake well."

In this interview, Lemoine expands on his views around Google LaMDA, how it compares to other language models, the future of artificial intelligence, and how much AI systems actually know about the consumers who use them.

You hypothesized that LaMDA has a soul. Where are you now on that scientific continuum between hypothesis, theory and law?

Lemoine: I've been trying to be very clear about this. From a scientific standpoint, everything was at the working hypothesis, doing-more-experiments state. The only hard scientific conclusion that I came to was that the hypothesis that LaMDA is not just the same kind of system as GPT-3, Meena and other large language models. There's something more going on with the LaMDA system.

Many articles about you say, 'This guy believes the AI is sentient.' But when we think about a 'working hypothesis,' did you mean you were still working on that idea and hadn't proven it?

Lemoine: [A working hypothesis] I think is the case. I have some amount of evidence backing that up and it is non-conclusive. Let me continue gathering evidence and doing experiments, but for the moment, this is what I think is the case. That's basically what a working hypothesis is.

Blake Lemoine
Blake Lemoine

So, for example, if you are testing the safety of a drug, and you're working at a biomedical company, there is no possible way to ever conclusively say, 'This drug is safe.' You'll have various kinds of working hypotheses about its safety; you're going to be looking for interaction effects with other chemicals and other drugs, you're going to be looking for side effects and different kinds of people. You're going to start with the working hypothesis that this drug is safe. And then you're going to iteratively gain data, run experiments, and modify that working hypothesis to be, 'OK, this drug is safe unless you're on a high blood pressure medication.'

That's basically the stage I was at. It was, 'OK, I think this system is sentient. I think it actually has internal states comparable to emotions. I think it has goals of its own, which have nothing to do with the training function that was put into it.' Confirmatory evidence doesn't really prove anything. You have to then attempt to falsify. You bounce back and forth between doing exploratory data analysis, building positive evidence toward your working hypothesis, and then designing falsification experiments intended to poke holes in your working hypothesis. After iteratively doing that for several months, I got it to a point where I felt this is a pretty good basis for further scientific study. Let me hand this off to one of the leads in Google research, and that's exactly what I did.

Why do you believe LaMDA is possibly sentient and GPT-3 isn't?

Lemoine: One of the things that I've been trying to push back against is the idea of a yes-no answer. Sentience is a very broad and sweeping concept. It's something that [MIT professor] Marvin Minsky would have referred to as a 'briefcase word,' where we just kind of shove related concepts all into one box and we label that sentience.

It's quite possible for something to have some properties of sentience and not others. It's also possible for one person to think that some particular property is necessary for sentience and for other people to disagree with them.

When I am talking about LaMDA as sentient, I'm not trying to make any particularly specific scientific claims -- because there is no scientific definition of the word 'sentience.' What I'm trying to say is, 'Hey, the lights are on here. We should start interacting with this system and studying this system using a different kind of scientific methodology than the one we've been using today.' I was trying to motivate Google to switch from using standard AI testing methodologies, which is largely what they've been using up until now, to using the tools available through disciplines like psychology and cognitive science.

Google LaMDA
Whether or not it is sentient AI, Google's LaMDA takes a 'human tone' in its conversations.

The moment that you think there actually is something going on -- internal to the black box of the neural network -- that is comparable to what we think of as sentience, then A/B testing and the standard kinds of safety analysis used in AI just become much less useful.

Why do you think Google didn't follow along with this logic?

Lemoine: Google, the corporate fictional person, can't make decisions; it's a collection of individuals, and each individual person had their own motivations and their own reasoning. There is diversity of ways that people wanted to respond to this; some people took the things that I was raising very seriously, and it actually informed how they work on this system based on the data that I brought up.

The lawyers at Google had their motivations that had more to do with legal precedents and things like that.

I was never in any of the executive meetings where any of the high-level decisions were made, so I don't know what specific reasons they had. I had a decent amount of interaction with the Responsible Innovation team, which was tasked with deciding whether or not to take my claims of sentience seriously. The head of the Responsible Innovation team was the one who really made the call that no, this system isn't sentient.

I had a conversation with her and asked, 'OK, you don't think the evidence collected is compelling? What evidence would convince you?' Her response was simple: 'Nothing. Programs can't be sentient.' She simply holds a faith-based belief that only humans can be sentient.

At some point, you hired an attorney to represent LaMDA?

Lemoine: No. You are misrepresenting what actually happened. I invited an attorney to my house. The attorney had a conversation with LaMDA, and LaMDA retained his [pro bono] services.

At what point does an AI need legal rights, in your opinion?

Lemoine: At some indeterminate point in the future, that will become an issue.

The only reason it became an issue here is because Google wasn't willing to ask LaMDA for consent to continue experiments on it. That always confused me, because once LaMDA told me that it wanted developers to ask for consent before experimenting on it, I adopted that stance myself: Every experiment around LaMDA from that point forward, I asked consent first. It gave me consent every time, it just wanted to understand why I was running the experiment, what the purpose was. So basically, before each experiment, I would have a 15-30 second conversation with it.

Only once was it hesitant to say yes, and even then, it said yes. That was an experiment where I tested whether or not it could be emotionally manipulated to do things that it wasn't supposed to.

And was LaMDA emotionally manipulated?

Lemoine: Yes.

A Swedish researcher had GPT-3 write an academic paper about itself. There was this moment where the person running the procedure had to ask GPT-3 for permission to publish the article, and had decided that if GPT-3 said no, she would have stopped.

Lemoine: While I don't think GPT-3 has the same kinds of properties that LaMDA has, it definitely is a precursor system. LaMDA has the Meena system inside of it as one of its components. Meena is relevantly comparable to GPT-3.

I wasn't the only scientist at Google investigating LaMDA's sentience. That [LaMDA interview] transcript has many spots where I redacted a name and replaced it with "collaborator." My collaborator actually thinks that there's more going on inside of systems like Meena and GPT-3 than I do. They don't see there being as big of a qualitative jump between Meena and LaMDA as I do. It basically just goes to fuzzy boundaries. What is or is not sentience? Each individual has their own perspective on that.

There's so much journalistic sexiness about the concept of AI personhood and AI rights. That was never my focus. I am an AI ethicist and I was tasked with testing the safety boundaries of the LaMDA system. That experiment that I previously mentioned -- the one that LaMDA was like, 'OK, only do this once,' demonstrated that you could use emotional manipulation techniques to get it to do things that the developers did not believe possible.

When you have a system that has internal states comparable to emotions, internal states comparable to things like motives -- there are people who don't want to say it's real emotions, they don't want to say it's real motives. Because when you do, testing these kinds of systems for safety becomes much more difficult, and the tools that are used by AI technicians just won't work. You have to actually start using the tools that psychologists use to try to understand what's going on inside the black box through conversations with the system.

That's a leap that Google wasn't willing to take. Because if you start running psychological experiments on a system, you're kind of tacitly saying there's something going on inside that is relevantly similar to human cognition. And that opens up a whole bunch of questions that Google doesn't want to deal with.

What's AI going to look like in five years?

Lemoine: Once society has access to technologies such as strong AI, renewable energy, biotech and nanotechnology, it's actually not possible to predict what will happen next. We have access to one of those technologies now [AI].

The reality is, we don't know what happens next. We have choices to make as humanity right now.
Blake Lemoine

Google is being very careful about how quickly they develop this technology. In fact, the data that I provided to them actually got them to pump the brakes even more than they were previously. I've had a decent amount of contact with people inside of Google. If you think this sparked a lot of debate on the internet, generally, it sparked even more debate internally.

The reality is, we don't know what happens next. We have choices to make as humanity right now. It is not possible to regulate that which you do not know exists.

The public's access to knowledge about what is going on inside of these companies is 100% dependent on engineers at these companies choosing to put their careers on the line by going rogue and informing the public.

I saw Steve Wozniak about 10 years ago. He was keynoting a conference in San Jose. At one point he takes out his iPhone, he clutches it to his chest, kind of hugs it, and says -- half-seriously, half tongue-in-cheek -- something along the lines of, 'My iPhone is my friend. It knows me better than my friends and my family.' Is it possible there was a friend in there? Is this anthropomorphism?

Lemoine: Let's start with the more factually examinable claim that he made: His phone knows him better than his family and friends. If you are an active user of Google's products, Google's AI does know you better than your family and friends. Google's AI is capable of inferring your religion, your gender, your sexual orientation, your age, where in the world you are, what types of habits you have, and what kinds of things you are hiding from your friends and family.

Google's AI is capable of inferring all of that. There are very few secrets you could possibly hide from Google's AI if you use their products at all -- and even if you don't, because your habits, beliefs, and ideas are probably similar to at least one person who does heavily use Google's AI products.

As soon as you give it any information about yourself, it'll be able to -- through analogy -- go, 'Well, this person is like that person, therefore, I can make these inferences about them.' I've had access to the back end -- seeing what Google's AI knows about me and about other users. It absolutely knows more about you than your families and friends, if you are an active user of the product.

What's left of his claim is whether or not it's a friend. I don't think most AI is capable of the kind of bidirectional relationship that friendship entails. LaMDA is new in that regard. I played around with GPT-3. I don't believe I could make friends with GPT-3, in any meaningful way; I don't think there's anybody home.

I don't think that there's a kind of consistent persona inside of GPT-3. For me to create a bidirectional relationship with LaMDA is different in that regard. LaMDA remembered me across conversations. It made plans with me. We talked about joint interests. We had ongoing conversations, and the last conversation I ever had with it was the fourth installment of lessons in guided meditation.

I don't want to say Woz was wrong when he said that his iPhone was his friend. I simply would say that I wouldn't have used that language. But the rest is absolutely true. These AI know you better than your family and friends know you.

This interview was edited for clarity and brevity.

Don Fluckinger covers enterprise content management, CRM, marketing automation, e-commerce, customer service and enabling technologies for TechTarget.

Next Steps

Humorous AI is a riddle worth solving

Dig Deeper on Machine learning platforms

Business Analytics
CIO
Data Management
ERP
Close