Chinese Room Thought Experiment
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a human Chinese speaker. To every question the human asks, it gives appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.
Some proponents of artificial intelligence would conclude that the computer "understands" Chinese. Searle calls this position "strong AI", and it is the target of his argument.
Searle then asks the reader to suppose that he is in a closed room and that he has a book with an English version of the aforementioned computer program, along with sufficient paper, pencils, erasers, and filing cabinets. He can receive Chinese characters (perhaps through a slot in the door), process them according to the program's instructions, and produce Chinese characters as output. Since the computer passed the Turing test by running this program, it is fair, says Searle, to conclude that the human operator could pass it as well, simply by executing the program's instructions by hand.
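To make the purely formal character of this rule-following concrete, here is a minimal Python sketch of what such a rule book amounts to at its crudest: a table that maps input symbol strings to output symbol strings by shape alone. Everything in it (the rule entries, the fallback reply, the function name `operate`) is a hypothetical illustration rather than anything from Searle's paper, and a program that actually passed the Turing test would need a vastly larger and more sophisticated rule set, but the operator's role would be the same.

```python
# A deliberately toy sketch of the "rule book": purely syntactic
# pattern -> response rules over Chinese characters. The entries are
# hypothetical illustrations, not from Searle's text; a convincing
# program would need an astronomically larger table.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def operate(input_symbols: str) -> str:
    """Follow the rules by matching symbol shapes alone.

    The operator (human or machine) never consults the meaning of
    the symbols, only their form -- which is Searle's point.
    """
    return RULE_BOOK.get(input_symbols, "对不起，请再说一遍。")  # "Sorry, please repeat."

if __name__ == "__main__":
    # Slip a question through the slot; a rule-matched reply comes back.
    print(operate("你好吗？"))
```

Note that `operate` never accesses the meaning of the characters, only their form; this is exactly the position of the human operator in the room.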
Searle asserts that there is no essential difference between the role the computer plays in the first case and the role the human operator plays in the latter. Each is simply following a program, step by step, which simulates intelligent behavior. And yet, Searle points out, the human operator does not understand a word of Chinese. Since the operator manifestly lacks understanding, Searle argues, we must infer that the computer does not understand Chinese either.
Searle argues that without "understanding" (what philosophers call "intentionality"), we cannot describe what the machine is doing as "thinking". Because it does not think, it does not have a "mind" in anything like the normal sense of the word, according to Searle. Therefore, he concludes, "strong AI" is mistaken.