Reversing the Chinese Room: brains are computers, and all computers can feel and think

Far from supporting Searle’s conclusion, the Chinese Room is actually a very good argument for emergent qualities, such as intelligence or consciousness, in physical systems.

The philosopher John Searle created a thought experiment to show that no computer can ever be said to have a mind or be conscious. By extension, this experiment would refute any suggestion that the human mind is nothing but the product of a physical, mechanical brain, for the brain is also “only” a computer, a physical system that obeys known rules.

The thought experiment goes like this: suppose a person who does not speak Chinese is locked in a room with a big book of rules. They receive questions written in Chinese on pieces of paper pushed through a slot in the door. Using pen and paper, they follow the rules in the big book to produce Chinese characters based on what they received, and these turn out to be coherent Chinese responses to the questions, which they post back through the slot. To someone outside the room, it seems a conversation in Chinese is taking place, yet the person inside does not speak a word of Chinese.
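To make the division of labor concrete, here is a minimal sketch of the room’s procedure in Python. The rule book is reduced to a hypothetical lookup table (a real book would need vastly more rules to sustain a conversation); the point is only that the operator matches and copies symbols without understanding any of them.

```python
# A toy "rule book": incoming Chinese characters mapped to canned replies.
# The entries are illustrative stand-ins, not a real conversational ruleset.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，当然会。",  # "Do you speak Chinese?" -> "Yes, of course."
}

def room_operator(slip: str) -> str:
    """Follow the book: match the incoming symbols, copy out the listed reply.
    The operator never translates anything; they only compare shapes."""
    return RULE_BOOK.get(slip, "请再说一遍。")  # "Please say that again."

print(room_operator("你好吗？"))  # a fluent reply, with no understanding involved
```

Neither the table nor the function “knows” Chinese; only the running combination produces the conversation.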

Since the book clearly does not understand Chinese (it is about Chinese, but has no understanding), and the person does not understand Chinese (they have understanding, but not of Chinese), Searle concludes that together they do not understand Chinese either. And since the two of them are just like a computer running a program, no computer will ever understand Chinese, even if it appears to.

But far from supporting this conclusion, the Chinese room is actually a very good argument for emergent qualities. Neither the person nor the book understands Chinese, yet their combination does: together they function as a Chinese-speaking person with consciousness. Similarly, no single neuron in the human brain can speak a language or be conscious, yet these things are true of the brain as a whole.

Searle’s argument is said to apply only to computers running programs, but that is just another way of saying that as soon as a system can be separated into a processor and a program, or into neurons, it somehow loses all claim to consciousness, because the component parts clearly are not conscious.

However, we have always ascribed consciousness, intelligence or emotions to things in the world without considering their components. In fact, we are forced to do so. This is because when we perceive something, we always do so on the physical level. (I have started to call this simple tenet material perception.) We see light, we hear sound, we smell small molecules floating in the air, and so on. This is not a surprising statement, but it has wide-ranging consequences. One is that even though we say things like “I see you are happy,” we can never actually see the happiness, only that someone looks and sounds like someone who is happy. Therefore, if someone (or something) can convincingly mimic being happy, we are forced to assume that it is indeed happy, as we will find nothing to contradict this assumption. It also means that ultimately, machines can be intelligent and can have emotions, as there is no inherent reason why they cannot appear to be intelligent and to have emotions.

This is why the computer scientist and mathematician Alan Turing, in his famous Turing test, was forced to treat intelligence as something merely apparent. The test sidesteps defining intelligence directly: a machine counts as intelligent if it is indistinguishable from a human.

The exact mechanism of the test is that a human questioner poses written questions to two hidden contestants, a computer and another human, who respond in writing. After a while, the questioner must guess which one is the computer, and if the guesses are no better than chance, the computer has fooled the questioner and appeared just as intelligent as a human. The computer can then be said to be intelligent.
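To illustrate the chance-level criterion, here is a minimal Python sketch with hypothetical stand-in contestants. The computer imitates the human perfectly, so the questioner has nothing to go on but a coin flip, and their accuracy settles around fifty percent.

```python
import random

def human_answer(question: str) -> str:
    return "I would rather not say."

def computer_answer(question: str) -> str:
    return "I would rather not say."  # a perfect imitation of the human's reply

def one_trial() -> bool:
    """One round: the questioner gets two hidden transcripts and guesses
    which came from the computer. Returns True on a correct guess."""
    answers = [("human", human_answer("Are you a machine?")),
               ("computer", computer_answer("Are you a machine?"))]
    random.shuffle(answers)          # hide which contestant is which
    guess = random.randrange(2)      # identical transcripts: model the guess as a coin flip
    return answers[guess][0] == "computer"

trials = 10_000
hits = sum(one_trial() for _ in range(trials))
print(f"correct guesses: {hits / trials:.2%}")  # hovers around 50%: chance level
```

A success rate near 50% is exactly the “no better than chance” outcome the test looks for: by its definition, the computer has passed.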

(By using written communication only, the test avoids requiring computers to look and sound like humans, although with modern advances in synthesized speech and faces this may no longer be a problem.)

In other words, the test defines an intrinsic property, intelligence, based on whether the entity appears to be intelligent, very much in line with the tenet of material perception.

Both material perception and the Turing test highlight that when we judge something to be conscious or not, to have intelligence or not, or to have an understanding of Chinese or not, we can only use information about that thing that reaches us — in the case of the Chinese room, the pieces of paper coming through the slot. We cannot see “inside” the thing to make our judgment. Something is conscious if it appears to be conscious, just like we deem another person sad or happy if they appear sad or happy — we don’t dissect their brain first to see if they really are.
