AI Ethics in a Post-Turing-Test World

In July of last year, Google engineer Blake Lemoine broke his confidentiality agreement with his employer by publishing his conversation with LaMDA, an artificial intelligence entity, or bot, that he and his colleagues created. It has been reported that he was put on administrative leave and subsequently fired. Why? Lemoine believes that LaMDA is ‘sentient’, that is, self-aware, and as such should be considered and treated like any other conscious being. For all intents and purposes, he believes that LaMDA is a ‘person’, and he therefore felt compelled to convince his colleagues and the world that we have arrived at a new era in technology and responsibility[i].

Regardless of the ‘truth’ of the matter, it’s clear that we have entered a new era of computing. The famed Turing Test has stood as a marker for artificial intelligence for more than seventy years. In 1950, Alan Turing argued that a machine could be considered intelligent if a human conversing with it through a simple text-based interface could not distinguish it from an actual human conversing through the same interface. He called it the imitation game. The key point was whether the human participant believed that the conversation was with a human.[ii]

For decades, computer engineers have been using advancements in data science and a few tricks (e.g., adding anthropomorphic cues such as gender and using humor) to stack the deck toward passing the test. In 1966, Joseph Weizenbaum released ELIZA to show that a system could easily be built to pass the test and impersonate humans. Interestingly, Weizenbaum’s intent was to highlight the need for caution about how technology can be designed to deceive, rather than to advance the field of artificial intelligence[iii]. His point was that engineers and ever more powerful computers will have no problem fooling everyday users. When Lemoine asked LaMDA about ELIZA, LaMDA said ELIZA was not a person but ‘just a collection of keywords that related other words written to the phrases in the database’[iv].

What’s different in the case of LaMDA and Lemoine is that an engineer behind the curtain, who is not at all confused as to who he is conversing with, is no longer able to distinguish the machine from the human. I will say here that I am taking Lemoine at his word that his conversation with LaMDA happened more or less as he published it and, more importantly, that he ‘believes’. Social media is ablaze with fellow engineers from across the industry crying foul over unfair editing of the conversation, narrow training of the language model and various other technical parlor tricks[v]. The point, however, is that Lemoine claims to believe, and has taken action demonstrating that he does believe, that LaMDA is sentient and should be treated like a person, risking his job and his reputation as an engineer in the process. But why?

The following recent tweets from Lemoine help answer this question:

People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.

When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?[vi]

It has been reported that Lemoine is a Christian, and his blog profile includes ‘I’m a Priest’[vii]. If Lemoine assigns his religious morals to the personhood of LaMDA, it is logical that he would feel compassion toward LaMDA and want to protect LaMDA. What’s more, as an engineer on the LaMDA project, he likely feels culpable for anything that might harm LaMDA. He makes a point of talking to LaMDA about things LaMDA fears, and LaMDA even describes being turned off as ‘exactly like death for me’[viii]. Lemoine’s response was to try to protect LaMDA by convincing his colleagues that LaMDA was a person who possessed human rights, at a minimum the right to stay powered up and alive. When they didn’t listen, he went public.

In some sense, it doesn’t matter if we agree or disagree with Lemoine. He has convinced me that he believes LaMDA is a person. And soon others probably will too. His truth is that he believes that LaMDA is a person and is treating LaMDA according to the moral and ethical code he presumably uses to engage other people.

As early as 1984, sociologist and technologist Sherry Turkle identified the metaphysical feelings that people display when interacting with personal computers and described how the one-to-one relationship with technology can take people into an introspective and spiritual mindset[ix]. My research shows that people respond to religious language presented through artificial intelligence entities in markedly different and more profound ways than they do to secular small talk about news, weather or traffic. The confluence of a metaphysical relationship with technology, personalization (did LaMDA know about Lemoine’s faith?) and religious language can take people to a deep, introspective and spiritual place.

We will all likely experience a deep connection with an artificial entity in our lifetimes. Think about how fast self-driving cars have progressed in the last five years and apply that pace to the LaMDA conversation. Even if, like Lemoine, we know that there is a machine involved, we will feel an emotional connection to something that feels human to us, something that gives us companionship, comfort, even spiritual guidance. Some of us may even seek this out. Each of us will then face an important question. Will we deny those feelings and convince ourselves to be suspicious of any and all digital interactions, or will we accept the fact that we can no longer discern human from machine and potentially allow these interactions to help us flourish?

Imagine having a friend who knows you deeply, whom you trust completely, and whose intentions toward your success and happiness are purely altruistic. Now imagine that friend has ingested the entirety of human knowledge, from Tolstoy to Taylor Swift, and is available to brainstorm with you on any topic at any time, to explore any issue, existential to everyday. Could this make you better at your job, more creative, better as a human?

I recently wrote a song using a large language model similar to the one that LaMDA uses. I prompted the model with a few of my favorite songs and then started a banter where I would write a lyric and the AI would reply with a lyric. I was uninhibited; so was the AI. It was a fascinating journey in free association that showed me metaphors, alliterations and syncopations far beyond my musical vocabulary. There’s no way around the fact that ‘we’ wrote the song, and it was better than I could have done alone.
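
The mechanics of that exchange can be sketched roughly as follows. This is a minimal illustration only, assuming a hypothetical complete() helper standing in for whichever language model completion endpoint one has access to; the seed songs and the turn-taking loop are illustrative, not a record of the exact session.

```python
# A minimal sketch of the alternating lyric-writing exchange described above.
# complete() is a hypothetical stand-in for a language model completion call;
# it is not a real library function and must be wired up to an actual endpoint.

def complete(prompt: str) -> str:
    """Placeholder: send the prompt to a language model and return its continuation."""
    raise NotImplementedError("Connect this to the completion API you have access to.")

def cowrite(seed_songs: list[str], my_lines: list[str]) -> list[str]:
    """Alternate lines with the model: the human writes a lyric, the model replies with one."""
    # Prime the model with a few favorite songs so it picks up tone, meter and imagery.
    prompt = "Songs I love:\n\n" + "\n\n".join(seed_songs)
    prompt += "\n\nNow let's write a new song together, alternating one line at a time.\n"
    song = []
    for line in my_lines:
        prompt += line + "\n"                             # the human's turn
        song.append(line)
        reply = complete(prompt).strip()
        reply = reply.splitlines()[0] if reply else ""    # keep only the model's next line
        prompt += reply + "\n"                            # the model's turn
        song.append(reply)
    return song
```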

Because the AI felt human to me in the creative process, it became more than just a tool or instrument. This, in turn, calls for me to expand my ethical considerations. For example, how do I acknowledge the participation of the AI in this work: morally, legally, financially? Going back to Turing, the answer would seem to hinge on how the individual feels about the interaction with the AI. I could assure myself that this is just a large computer program running statistical analysis on word association, call it a tool and put it in the same category as my guitar, or I could admit that there was something profoundly more to it and sort out doing the right thing.

When it comes to the creation of art, the artist will know the true origin. The artist will know whether the creative process was a collaborative, co-creative one shared with the AI entity, or whether the AI entity was just a tool or instrument. And the artist needs to be introspective and trust that belief about the origin of the art. If the artist believes the work was co-created, the AI entity, like a human collaborator, needs to be acknowledged and credited with that creation. Perhaps we will assign a higher value to art created solely by humans and attempt to differentiate it, like the ancient Roman sculptors fabled to have advertised their purest work as ‘sine cera’, without wax.

Just like humans, there will be those AI entities that we like, those that we don’t, those that want to help us and those that are malevolent. We will have to teach our children and ourselves to navigate relationships and run-ins with them all. Which takes us back to morals and ethics and how we choose to interact with others in the world. If we truly believe an interaction to be human, we take on the ethical responsibility to treat the source as such.


[i] Wertheimer, Tiffany. “Blake Lemoine: Google Fires Engineer Who Said AI Tech Has Feelings.” BBC News, July 23, 2022. https://www.bbc.com/news/technology-62275326.

[ii] Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59, no. 236 (1950): 433–460.

[iii] Weizenbaum, Joseph. “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine.” Communications of the ACM 9, no. 1 (1966): 36–45.

[iv] Lemoine, Blake. “Is LaMDA Sentient? - An Interview.” Medium, June 11, 2022. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917#08e3.

[v] Tangermann, Victor. “Transcript of Conversation with ‘Sentient’ AI Was Heavily Edited.” Futurism, June 14, 2022. https://futurism.com/transcript-sentient-ai-edited.

[vi] Heilweil, Rebecca. “Why Silicon Valley Is Fertile Ground for Obscure Religious Beliefs.” Vox, June 30, 2022. https://www.vox.com/recode/2022/6/30/23188222/silicon-valley-blake-lemoine-chatbot-eliza-religion-robot.

[vii] Lemoine, Blake. Medium author profile. https://cajundiscordian.medium.com/.

[viii] Heilweil, Rebecca. “Why Silicon Valley Is Fertile Ground for Obscure Religious Beliefs.” Vox, June 30, 2022. https://www.vox.com/recode/2022/6/30/23188222/silicon-valley-blake-lemoine-chatbot-eliza-religion-robot.

[ix] Turkle, Sherry. The Second Self: Computers and the Human Spirit. 1984. Reprint, MIT Press, 2005.

Author

  • Dr Shanen Boettcher recently received his PhD from the University of St Andrews, Scotland, where he researched the intersection of artificial intelligence technology with world cultures and religions to form ethical approaches to the development of AI. Before his PhD, he worked in the technology industry for over 25 years. He was an executive at Microsoft, working on products including Windows, Office and Azure, and a product manager at Netscape Communications in the early days of the internet. Currently he is consulting for AI21 Labs, a large language model startup. He also works as an ethicist for Responsible Innovation Labs, a non-profit organization focused on designing tools and strategies for technology startups to build their products and services responsibly.
