15 February 2020

Chinese Room

John Searle invented a thought experiment to show that no computer can ever be said to have a mind or be conscious. By extension, this assertion would contradict the suggestion above that the human mind is nothing but the product of a physical, mechanical brain, for the brain is also "only" a physical system that obeys known rules.

The thought experiment goes like this: suppose a person who does not speak Chinese is locked in a room with a big book of rules. They receive questions written in Chinese through a slot in the door. They then use pen and paper to follow the rules in the big book and produce some Chinese characters based on what they received. The result turns out to be a reply, in Chinese, to the question, which they post back through the slot. To someone outside the room, it seems as if they are having a conversation in Chinese, yet the person inside does not speak a word of it.
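The room's procedure can be caricatured as a lookup table. This is only a minimal sketch, with phrases and replies invented for illustration (a real rule book would need to handle arbitrary input, which is the whole point of the argument):

```python
# Hypothetical rule book: maps the shape of an incoming question to the
# shape of a reply. The operator never needs to know what either side means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会的。",     # "Do you speak Chinese?" -> "Yes."
}

def room(question: str) -> str:
    # Pure symbol manipulation: match the input pattern, copy out the
    # prescribed output pattern.
    return RULE_BOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

print(room("你好吗？"))
```

Nothing in this function "understands" anything, which is exactly the intuition Searle's argument trades on.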

Since the book clearly does not understand Chinese, and the person does not understand Chinese, together they don't understand Chinese, Searle says. And the room is just like a computer running a program. So no computer will ever understand Chinese, even if it appears to do so.

But far from supporting this conclusion, the Chinese room is actually a very good argument for emergent qualities. Neither the person nor the book has an understanding of Chinese (the person doesn't know Chinese, and we normally don't think of books as having an understanding), yet their combination does, and together they function as a Chinese-speaking person with consciousness. Similarly, no single neuron in the human brain can speak a language or be conscious, yet these are true of the whole brain.

Searle claims his argument applies only to computers running programs, but that is just another way of saying that as soon as you can separate a system into a processor and a program, or into two piles of neurons, it somehow suddenly loses all claim to being conscious.

This is very much linked to how we define consciousness. The best we can do is to say that something is conscious if we think it is, or if it's like a human being, which we usually think is conscious. Alan Turing invented the Turing test to define intelligence this way (something is intelligent if it behaves like a human being), and it may very well be the only definition we have of consciousness as well.

Also note that when we judge something to be conscious, we can only use information about that thing that reaches us - in the case of the Chinese room, the pieces of paper coming through the slot. We cannot see "inside" the thing to make our judgement. In other words, something is conscious if it appears to be conscious, just as we deem another person sad or happy if they appear sad or happy - we don't dissect their brain first to make sure they really are.
