LECTURE SEVEN: CRITICISM OF MACHINE-STATE FUNCTIONALISM: THE CHINESE ROOM ARGUMENT 对于机器状态功能主义的批评: 汉字屋论证
MACHINE-STATE FUNCTIONALISM AND ARTIFICIAL INTELLIGENCE
If machine-state functionalism is right, then the human mind is nothing but a properly programmed Turing machine. Since the machine table is multiply realizable by different physical substrates, human mental states can also be implemented by a properly programmed digital computer. That means machines can think just as we do; in other words, artificial intelligence is at least theoretically possible.
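To make the idea of a machine table and its multiple realizability a bit more concrete, here is a minimal sketch in Python (the states, symbols, and transitions below are invented for illustration and are not part of the lecture): what fixes the machine's states is the abstract table itself, not the physical stuff that happens to implement it.

    # A minimal, illustrative machine table (invented states and symbols, not drawn from
    # any particular theory). Machine-state functionalism identifies a state by its role
    # in such a table, so any system that implements the same table, whatever it is
    # physically made of, realizes the same machine states.

    MACHINE_TABLE = {
        # (current state, input symbol): (next state, output symbol)
        ("S0", "ping"): ("S1", "pong"),
        ("S1", "ping"): ("S0", "pong"),
        ("S0", "stop"): ("HALT", None),
        ("S1", "stop"): ("HALT", None),
    }

    def run(table, inputs, start="S0"):
        """Step through the table; this loop is just one physical realization of it."""
        state, outputs = start, []
        for symbol in inputs:
            state, output = table[(state, symbol)]
            if output is not None:
                outputs.append(output)
            if state == "HALT":
                break
        return state, outputs

    print(run(MACHINE_TABLE, ["ping", "ping", "stop"]))  # ('HALT', ['pong', 'pong'])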
The Chinese Room Argument
The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese. The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle's argument is a direct challenge to proponents of Artificial Intelligence, and the argument also has broad implications for functionalist and computational theories of meaning and of mind. As a result, there have been many critical replies to the argument.
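As a toy illustration of "syntax without semantics" (the rule book and strings below are invented; this is not Searle's own example, only a sketch of the kind of rule-following he describes), a program can pair input strings with output strings purely by their shape, with nothing in the system grasping what they mean:

    # A toy rule book that pairs Chinese input strings with Chinese output strings purely
    # by lookup (the entries are invented for illustration). Nothing here represents the
    # meaning of any sentence; the matching is entirely syntactic.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
        "你会说中文吗？": "当然会。",    # "Can you speak Chinese?" -> "Of course."
    }

    def chinese_room(squiggle):
        """Return the string paired with the input string; no understanding involved."""
        return RULE_BOOK.get(squiggle, "请再说一遍。")  # default: "Please say that again."

    print(chinese_room("你好吗？"))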
Historical Background: Leibniz’ Mill
Searle’s argument has three important antecedents. The first of these is an argument set out by the philosopher and mathematician Gottfried Leibniz (1646–1716). This argument, often known as “Leibniz’ Mill”, appears as section 17 of Leibniz’ Monadology (《单子论》). Like Searle’s argument, Leibniz’ argument takes the form of a thought experiment. Leibniz asks us to imagine a physical system, a machine, that behaves in such a way that it supposedly thinks and has experiences (“perception”).
17. Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for.
Historical Background: Turing's Paper Machine
A second antecedent to the Chinese Room argument is the idea of a paper machine, a computer implemented by a human. This idea is found in the work of Alan Turing, for example in “Intelligent Machinery” (1948). Turing writes there that he wrote a program for a “paper machine” to play chess. A paper machine is a kind of program, a series of simple steps like a computer program, but written in natural language (e.g., English) and followed by a human. The human operator of the paper chess-playing machine need not (otherwise) know how to play chess. All the operator does is follow the instructions for generating moves on the chess board. In fact, the operator need not even know that he or she is involved in playing chess; the input and output strings, such as “QKP2–QKP3”, need mean nothing to the operator of the paper machine. Turing was optimistic that computers themselves would soon be able to exhibit apparently intelligent behavior, answering questions posed in English and carrying on conversations. Turing (1950) proposed what is now known as the Turing Test: if a computer could pass for human in on-line chat, it should be counted as intelligent.
Historical Background: The Chinese Nation
Ned Block (born 1942) is an American philosopher working in the field of the philosophy of mind who has made important contributions to matters of consciousness and cognitive science. In 1971, he obtained his Ph.D. from Harvard University under Hilary Putnam. He went to the Massachusetts Institute of Technology (MIT) as an assistant professor of philosophy (1971-1977), worked there as associate professor of philosophy (1977-1983) and professor of philosophy (1983-1996), and served as chair of the philosophy section (1989-1995). He has, since 1996, been a professor in the departments of philosophy and psychology and at the Center for Neural Science at New York University (NYU).
The Chinese Nation Argument
In “Troubles with Functionalism”, also published in 1978, Ned Block envisions the entire population of China implementing the functions of neurons in the brain. This scenario has subsequently been called “The Chinese Nation” or “The Chinese Gym”. We can suppose that every Chinese citizen would be given a call-list of phone numbers, and at a preset time on implementation day, designated “input” citizens would initiate the process by calling those on their call-list. When any citizen's phone rang, he or she would then phone those on his or her list, who would in turn contact yet others. No phone message need be exchanged; all that is required is the pattern of calling. The call-lists would be constructed in such a way that the patterns of calls implemented the same patterns of activation that occur in someone's brain when that person is in a mental state (pain, for example). The phone calls play the same functional role as neurons causing one another to fire. Block was primarily interested in qualia, and in particular, whether it is plausible to hold that the population of China might collectively be in pain, while no individual member of the population experienced any pain.
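A toy sketch of the calling pattern may help (the citizens, call-lists, and starting point below are invented for illustration): each person only follows a call-list, yet the overall pattern of calls can mirror a pattern of neural activation that no individual caller knows anything about.

    # Toy model of Block's scenario (all call-lists invented for illustration): each
    # "citizen" phones the people on a call-list when their own phone rings. No message
    # content is exchanged; only the pattern of calls matters, and that pattern is meant
    # to play the functional role of neurons causing one another to fire.

    CALL_LISTS = {
        "citizen_1": ["citizen_2", "citizen_3"],
        "citizen_2": ["citizen_4"],
        "citizen_3": ["citizen_4"],
        "citizen_4": [],
    }

    def implementation_day(input_citizens):
        """Propagate calls through the network and return everyone whose phone rang."""
        rang = set(input_citizens)
        queue = list(input_citizens)
        while queue:
            caller = queue.pop(0)
            for receiver in CALL_LISTS[caller]:
                if receiver not in rang:   # in this toy version each phone rings once
                    rang.add(receiver)
                    queue.append(receiver)
        return rang

    print(sorted(implementation_day(["citizen_1"])))  # the resulting "activation pattern"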
Searle’s argument is 32 years old now
In 1980, John Searle published “Minds, Brains and Programs” in the journal The Behavioral and Brain Sciences. In this article, Searle sets out the argument and then replies to the half-dozen main objections that had been raised during his earlier presentations at various university campuses (see next section). In addition, Searle's article in BBS was published along with comments and criticisms by 27 cognitive science researchers. These 27 comments were followed by Searle's replies to his critics. Over the last two decades of the twentieth century, the Chinese Room argument was the subject of a great many discussions. By 1984, Searle had presented the Chinese Room argument in a book, Minds, Brains and Science. In January 1990, the popular periodical Scientific American took the debate to a general scientific audience. Searle included the Chinese Room Argument in his contribution, “Is the Brain's Mind a Computer Program?”, and Searle's piece was followed by a responding article, “Could a Machine Think?”, written by Paul and Patricia Churchland. Soon thereafter Searle had a published exchange about the Chinese Room with another leading philosopher, Jerry Fodor (in Rosenthal (ed.) 1991).
Searle’s target is “Strong AI”
Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to those of the humans whose abilities they mimic. According to Strong AI, a computer may play chess intelligently, make a clever move, or understand language. By contrast, “weak AI” is the view that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities. But weak AI makes no claim that computers actually understand or are intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think; Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.
The reductio ad absurdum (归谬法) against Strong AI
Reductio ad absurdum (Latin: "reduction to the absurd") is a form of argument in which a proposition is disproven by following its implications logically to an absurd consequence. Searle's argument against Strong AI can be put in this form:
(1) If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
(2) I could run a program for Chinese without thereby coming to understand Chinese.
(3) Therefore, Strong AI is false.
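For readers who like to see the logical skeleton, here is a rough formalization of the reductio as a modus tollens (my own paraphrase, not Searle's notation; the predicate symbols are invented, and the modal "could" in premise (2) is ignored for simplicity):

    % Sketch of the argument's logical form (a paraphrase, not Searle's own formulation).
    % S    : Strong AI is true
    % R(x) : x runs the program for Chinese
    % U(x) : x understands Chinese
    % m    : the person in the room
    \begin{align*}
    \text{(1)}\quad & S \rightarrow \forall x\,\bigl(R(x) \rightarrow U(x)\bigr) \\
    \text{(2)}\quad & R(m) \wedge \neg U(m) \\
    \text{(3)}\quad & \therefore\ \neg S \qquad \text{(from (1) and (2), by modus tollens)}
    \end{align*}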