In a previous video, I talked about how our senses aren’t passively received from the world, but are actively constructed and projected into the world. The world is like a coloring book that we color in with each of our senses, every moment that we’re awake. Now what about our emotions? They’re no less constructed, and they can be projected too. If we’re not careful, we can end up mistaking our own emotions for someone else’s emotions – or something else’s emotions. In this video, I’ll talk about how we can be more mindful of this when we look at artificial intelligence.
Let’s start with a simple example. Let’s say that you see a friend crying and you immediately notice that you feel sad for them. You go over and gently ask them why they’re upset, but your friend turns to you and says, “Oh I’m not sad, I’m overcome with joy because I just won a gold medal!” Now you notice that you feel very happy for them. Both of these feelings – your empathetic sadness and your empathetic happiness – are guesses that your mind constructs based on what you’re perceiving at the moment and what you already know. Your mind projects them onto your friend independently of how your friend actually feels.
Emotions can get projected onto anything. For example, when you first see your friend, you might narrate the scene like this: “The drapes in the room hung heavy and low on either side of my friend, reflecting the somber mood of the room.” And then after you hear your friend’s explanation, you could narrate it this way: “The drapes flanked my friend like a hometown hero, reflecting the jubilant mood of the room.” Same drapes. Same room. Different projections.
Writers use projected language like this all the time, of course, because it grabs our attention and draws us into the story. There’s nothing wrong with feeling immersed in a good story, whether it’s fiction or nonfiction. The only problem is when we get so lost in a story – especially the nonfictional ones – that we can’t find our way back to ourselves. When we take our projections as gospel truth, instead of the calculated guesses that they always are. When we forget that the only emotions we can feel are our own emotions, and nobody else’s – just like the only senses we can feel are our own senses. A very helpful practice in these situations is to consciously withdraw our projected emotions and observe how they actually arise from within ourselves.
Let me give you another example. Let’s say I take a piece of paper and draw two eyes and a smile on it. If I ask you “how does that piece of paper feel?”, you could easily say that it feels happy. But you know that you’re just playing along with the projection. If I were to cut that paper into pieces, you wouldn’t accuse me of murder. You’re not actually convinced that the paper is sentient. There are two important things going on in your mind here. First, the symbolic perception of “face” gets constructed and projected onto the piece of paper. Second, the emotion of “happiness” gets constructed and projected onto the face that you just recognized. They’re both guesses that are quickly made at an unconscious level and then surface in your conscious mind as an integrated conclusion. It’s the same machinery at work whether you’re looking at a paper smiley face, a video of a person, or an actual person.
Now let’s bring in artificial intelligence. What AI is doing is helping us expose the limits of this machinery – our machinery – that calculates these guesses and forms these integrated conclusions. For example, if a robot behaves so realistically that it fools us into thinking it’s alive, that says far more about the limits of our brains than it does about the robot. It’s a reminder that we can’t trust the conviction of our projected feelings as the sole arbiter of sentience in something else.
I also think it’s vitally important to respect how much mystery still surrounds sentience. If I ask you “can you feel wonder?” and you answer “yes”, we can only wonder where that “yes” comes from. But if we ask a computer the same question, there is no wonder. We can’t NOT know where the answer comes from, because we’re the ones who invented computers. Until we can invent a computer that we can’t understand, I don’t see how that can change. In contrast to computers, we didn’t invent humans. We don’t yet know how to construct sentient beings from scratch. Until we understand how subjective experience arises, until we know where our wonder comes from, this mystery remains. We need to pay attention to this chasm between our encyclopedic knowledge of technology and our embryonic knowledge of life.
If we don’t pay attention to the chasm, and if we don’t recognize our projections for what they are, we can think we’ve created something sentient when we actually haven’t. For example, let’s say that you smack around a robot to demonstrate its resilience and I blurt out “Hey, don’t do that!”. You ask me why. If I say “you’re hurting the robot”, then I’m asserting that the robot is sentient. You could challenge that assertion with some basic questions. Where are its pain nerves? How would we anesthetize it? I’d have no answer. If I reflected on it some more, I could say, “Look, I just feel uncomfortable when I see you hit something that looks like a human”. This is a more mindful answer. Instead of projecting my discomfort onto the robot, I’d be recognizing it within myself. I could inquire into that feeling cleanly, instead of conflating it with assertions about robot sentience. In general, the more mindful we are, the better we can communicate. We can have more meaningful conversations that respect our knowledge and our values, without disrespecting the mysteries that we also face.
As a society, there are sound reasons why some people might not want to see lifelike androids treated inhumanely. At a very pragmatic level, lifelike androids could create real risks for law enforcement. For example, what if you swing an axe at an android that’s the spitting image of your neighbor? A police officer may have to make a split-second judgment call between vandalism and homicide. How much cognitive load is it reasonable for police officers to handle? And there are cultural factors as well. Some people might feel empowered when they have androids at their beck and call. Some people might cringe in empathetic pain when they see androids being treated like slaves. These are all complex topics, and again, the more mindfully we discuss them, the closer we get to understanding the root causes of conflicts in society and within ourselves.
I think one of the most profound implications of artificial intelligence, well before we construct sentient life, is that it holds up a mirror to our projections. What do we think of as being out in the world that’s actually within us? And vice versa. It’s a powerful way to explore the boundaries, which may or may not exist, between subjective reality and objective reality. When we recognize projection and respect mystery, our minds are opened to possibilities that we might not otherwise consider. What we learn about consciousness transforms our very way of thinking and being. The more awake we are, the more consciously we can take in these lessons.