The Gift of Tagging Something As Conscious

In this convoluted blog post I am going to argue that you can give something consciousness with a hashtag.

"An oil painting of a robot looking into a mirror and realizing it's a robot." by DALL-E

Note: I rewrote this mess of a blog a few times. I'm really coming to terms with the limits of how well I'm able to communicate in writing. I hope I improve over time, but the only way to get better is with practice, so without further ado, I hope you can make some sense of what I was trying to express here.

About a year ago, the Internet was obsessed with a news story about a Google engineer who was put on leave after claiming the LaMDA chatbot he was working with was sentient. It was like science fiction come true: for a few weeks there you could find lively debates on Reddit and YouTube centering on whether the LaMDA chatbot had a level of awareness comparable to that of a small child. Most arguments came down on the side of LaMDA being just a very impressive Large Language Model: it was unlikely to be conscious at all, and was instead doing a very advanced version of the text prediction your phone does when you type a text message.
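To make the "advanced text prediction" comparison concrete, here's a minimal sketch of next-word prediction using a toy bigram model. This is my own illustration, not anything about how LaMDA actually works; real LLMs use neural networks over vastly more context, but the core job is the same: given what came before, guess what comes next.

```python
import random
from collections import defaultdict

# Toy bigram "language model": predict the next word from the previous one.
# The same basic idea as phone keyboard prediction, shrunk to a few lines.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def predict_next(word: str) -> str:
    """Sample a plausible next word from the observed continuations."""
    candidates = following.get(word)
    return random.choice(candidates) if candidates else "<unknown>"

print(predict_next("the"))  # e.g. "cat", "mat", "dog", or "rug"
```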

When this was all going on last year, I gave it about 2-5 minutes of thought and just decided it wasn't worth even thinking about. Even more so after reading an article in the Washington Post reporting that the engineer had come to this conclusion about the sentience of LaMDA "...in his capacity as a priest, not a scientist". In my experience, this is pretty much how every discussion about consciousness, in any context, seems to go: at some point, any serious discussion on the nature of conscious agents or what "consciousness" even means gets upstaged by this hand-waving, mystical spookiness. Even trying to define what consciousness is will likely get you into an intellectual fistfight at the next cocktail reception you end up at.

For example, what can you do with Thomas Nagel's definition of consciousness, that something is conscious if there's “something that it is like” to be that thing? As a definition, it seems reasonable and intuitively "feels right". That is, until you try to apply it to anything practical. Until we develop some kind of consciousness-swapping machine, there's no way I'm going to be able to determine the internal, subjective state of someone or something that isn't me with any certainty. If you use this definition in the case of LaMDA, what do you get? How do you prove it "feels like anything" to be the LaMDA chatbot? In the case of the Google engineer, I guess he got the answer through Divine Intervention, but the rest of us who aren't tapped into the Godhead simply have to take his word for it.

Or what if we use one of the oldest definitions of consciousness, the one given by John Locke in the 17th century: "the perception of what passes in a man's own mind"? What constitutes a man? Can things other than men (or people generally) be conscious? Can something other than a man have a mind? Ad nauseam.

If the subject starts to sputter out of control at the definitions phase, how is anything meaningful going to come out of any conversation on the subject?

This issue seems so very contemporary, but the reason I'm writing this blog post is that I feel the question of consciousness was already settled over 70 years ago by Alan Turing. I remember reading his essay, "Computing Machinery and Intelligence" (often reprinted as "Can a Machine Think?"), when I was a teenager, and maybe, either because it was written so well or because I misunderstood it so catastrophically, it's the reason I've found every modern treatment of the mysteries of consciousness so boring. Even at 15 I felt that the arguments made in his original essay were far more nuanced than the pared-down version you usually encounter in documentaries about AI. The essay is best remembered for introducing the "imitation game", where an interrogator has to guess whether they are talking to a person or a computer through a computer terminal; if the interrogator thinks they are talking to a person but it turns out to be a computer, this is taken as proof that the computer is, in fact, thinking and conscious. (The original essay does not actually talk about consciousness; the term used is "thinking", but this is almost always taken to mean consciousness when the essay comes up.)

To make what I'm about to talk about clearer, you can read the original article by Turing here. You can be the judge of whether I'm reading too much into what Turing originally wrote.

In the original paper, Turing's example has the interrogator talking to two people via typewriter, one man and one woman, and trying to guess their genders correctly. The only way the interrogator can determine the gender of the people involved is by asking typewritten questions and getting typewritten answers. The man and woman can lie about themselves, if they feel like it, and it falls on the interrogator to figure it all out.
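Just to make the setup concrete, here's a toy rendering of the constraint Turing is imposing. The function names and framing are mine, invented for illustration; the sample question and the "shingled" reply, though, echo an exchange from Turing's own paper. The point is that both respondents look identical down the wire:

```python
# The imitation game's channel: the interrogator interacts with hidden
# respondents purely through typed text, stripped of every other cue
# (voice, handwriting, appearance). These respondents are stand-ins.

def respondent_a(question: str) -> str:
    # Could be a person at a typewriter, possibly lying.
    return "My hair is shingled, and the longest strands are about nine inches long."

def respondent_b(question: str) -> str:
    # Could just as well be a machine; the channel looks the same.
    return "My hair is shingled, and the longest strands are about nine inches long."

def interrogate(question: str) -> dict:
    """Everything the interrogator ever gets back: text, nothing else."""
    return {"A": respondent_a(question), "B": respondent_b(question)}

print(interrogate("Will X please tell me the length of his or her hair?"))
```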

In my reading, what Turing is pointing out with this experimental setup is that when you are deprived of all other information, when you can't directly obtain information about a person, the only way you can learn things about them is by asking questions and listening to the responses. At least in my reading, it's really that simple. I don't think this is too contentious a claim: for instance, if I want to know how my spouse is feeling, I'll just go and ask her. She can lie, or not answer, or do all kinds of other things, but still the only way I can learn anything about what she is thinking is by asking a question and hoping for a truthful answer.

As a society, we accept this as true on a fundamental level. The doctor doesn't pull out a "pain-o-meter" to figure out exactly how much pain I'm in when I twist my ankle and go to the hospital: the doctor just asks. When I write a book review for a website, I don't have to prove that I subjectively enjoyed the book in any other way than by writing that I enjoyed it. I also can't help but see the parallels between the scenario Turing described and how gender identity is experienced. If you want to know how someone experiences themselves, just fucking ask how they feel and respect the response you receive. Very simple.

"Nothing at last is sacred but the integrity of one's own mind."

Ralph Waldo Emerson,
'Self-Reliance', Essays: First Series

If I experience myself some way, who are you to tell me otherwise? It would be absurd in any other context: what if I told you, dear reader, that you are actually in a coma and this is a fever dream you are having and that this whole identity you now have is a complete and utter lie? There is no way you would believe me, because you know who you are and that feels like a deep-rooted truth that is almost impossible to shake.

Many subjective states express themselves externally in ways other than verbalization, of course, and in cases like assessing pain levels it's important to make sure you get truthful responses in order to treat the underlying medical condition. But once all the other external factors that could cloud judgement are safely accounted for, our main way of knowing what it feels like to be someone else is by talking and listening. I don't see why this would change just because the entity holding up the conversation is a chatbot instead of some random netizen.

I agree with Turing: if the chatbot passes the vibe check, it's conscious.

A caption for an exhibit at the National Museum of Finland referring to ancient animist beliefs.

The reason I even think about consciousness at all anymore is some stories I've read about what happens when we get questions like this wrong. For decades, newborn babies were operated on without anesthesia because it was thought that they were barely conscious entities who barely felt pain, and that their crying was just a reflex. For millennia, many groups of people have felt that animals are conscious, or at least slightly conscious, and shouldn't be killed or eaten if other food is available; more recently, some scientists have argued that octopuses are conscious and show many of the hallmark traits of thinking creatures. Yet literally billions of animals are slaughtered and eaten every year because they just taste so good. We respect what we see as conscious and ignore and exploit everything else.

It matters what we gift the label of 'conscious' to. Despite the contention over technical definitions, what is deemed conscious is given rights and respect. I think we are a long way away from having to worry about the rights of something like ChatGPT, but how the media talks about consciousness will, by extension, change what we see as conscious. I really don't like how the question of consciousness is currently centered squarely on computers and algorithms. Why do we spend so much time worrying whether fucking computers are having rich internal experiences of themselves? I'd rather spend my time arguing that the things used to make McNuggets are conscious instead. As weird as it sounds, we all have more in common with a chicken than with ChatGPT. And I am way more certain that a chicken experiences suffering and sensations of pain than something like LaMDA does.

I don't believe we as a society are going to return to an animist belief system any time soon, although I'd see it as one of the only ways to save the planet from environmental collapse. It seems to me that as a society we value power and the ability to control things too much, so even if trees and animals and rocks have something akin to agency, since they are unable to defend themselves, we are going to do whatever we want to them. By my reckoning, the environment is going to have to "fight back" with extreme weather to make us as a species respect it in any serious way.

The best chance we have of anything close to animist beliefs becoming mainstream any time soon is that a machine cult starts getting traction with billionaire programmers and they begin going on podcasts demanding we honor the machine spirits.

"Aa giant machine being worshipped by a programmer in Grimdark style." by DALL-E.

When I was in my 20s, I had a dream where I was in a glass library one morning and found a mystical art book full of artful depictions of robots realizing they are robots. Some were very tender and philosophical portraits of androids gazing at themselves in the mirror, while others were screaming and ripping the artificial flesh from their mechanical faces, the realization too horrible to bear. I woke up with a strange feeling lingering, like I was one of those robots just on the cusp of some fundamental realization, not too sure how I would react to it.

I think the label of consciousness is something anyone can give as a gift to anything else, and that gift ultimately comes from yourself. I will give playful agency to stuffed animals once in a while if I'm playing with my niece. Or I grow attached to an NPC in a game and feel a flash of shame if I accidentally choose the rude dialogue option. Consciousness might be some magical essence that only gets infused into biology for some weird reason, but the gift of consciousness is a boon of protection: we respect, on some level, anything we see as having the same rich internal life we ourselves have. I think it's a painful realization for many, at least it was for me, to think of all the ways we disregard our environment and see it as piles of resources to be harvested. If we saw the whole of planet Earth as one giant conscious entity, would we be doing what we are doing as a society?

Addendum

  1. This is a really good article on Philip Agre and on why it's worth talking about the philosophy of technology, and how we could actually start avoiding problems instead of running head-first into them and making up the rules as we go along. After reading it, I had a shift in my mindset when it comes to thinking about the philosophy of technology. I now think it's pretty much unethical not to spend at least a few hours a year reflecting upon the ethics of the services and technologies society uses every day.
  2. I don't want to see a world where we attempt to force a rigid and narrow definition onto a vague and inherently subjective concept. For example, take this article from Wired about how automated algorithms are being used to detect pain medication abuse. There is nothing more dehumanizing than having a software program tell you that you should be feeling fine at your current medication level, with little to no recourse. How we define all this squishy, humanist stuff matters way more than most people are comfortable thinking about.