Chinese Room Argument: A Robot Cannot Feel Pain


Introduction

Building a human-like machine has always been the aim of Artificial Intelligence, an aim in which it has partially succeeded and in which it promises further progress. Whether we can build a machine that is human-like in behavior or performance is not in question. The problem arises when things like intentionality, feelings, emotions, understanding, and meaning come into the picture. From behavior alone, one cannot tell whether a machine actually feels emotions or understands the meaning of the words, statements, or symbols it is computing over, independent of the fact that behaviorally it appears to do so.

Even if we want to talk about machines having feelings, emotions, understanding, pain, and so on, there exist no formal definitions of these phenomena. This ultimately makes it difficult to talk about them in relation to machines and computational models.

In this essay I will discuss the “intentional” and “feeling-related” aspects of machines. I will not pretend to be neutral: I will defend the view that a computational model based on computation over any kind of representation can never have or realize intentional phenomena, qualia, feelings, or pain. Thus it is impossible to build such machines not just in practice but in principle.

I will then go further and survey various theories proposed to explain intentional phenomena, subjective experiences, qualia, and feeling-related aspects in human beings. Here I will refer to the “hard” and “easy” problems of consciousness, and discuss how various efforts in strong and weak AI are working on the “easy problems” of consciousness while the “hard problem” remains untouched.

Computation and Pain

John Searle’s Chinese Room Argument

With the help of the Chinese Room Argument it can be shown that computation over any kind of representation is insufficient to realize intentionality, feelings, emotions, or pain. Computation over representation is considered a promising theory of mind and is sometimes referred to as the “Computational Theory of Mind”. In 1980, John Searle published “Minds, Brains, and Programs” in the journal Behavioral and Brain Sciences. In this article, Searle sets out the Chinese Room Argument.

The heart of the argument is an imagined human simulation of a computer, similar to Turing’s Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese symbols, just as a computer “follows” a program written in a programming language. By following the symbol-manipulation instructions, the human produces the appearance of understanding Chinese, but does not thereby come to understand Chinese. Since a computer just does what the human does, namely manipulate symbols on the basis of their syntax alone, no computer, merely by following a program, comes to genuinely understand Chinese. If the argument is hard to follow for the phenomenon of “understanding”, consider “pain” instead: there is no way for the above setup, a human being with a rule book, to realize pain. And if this setup cannot realize a subjective experience like pain, then no computational model that manipulates representations can realize any subjective experience. Thus, Strong AI is false.
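The purely syntactic character of the setup can be sketched as a toy program. This is a hedged illustration only: the rule-book entries and the function name `chinese_room` are hypothetical, and a real conversational rule book would be vastly larger.

```python
# Toy sketch of the Chinese Room: the "rule book" is a lookup table mapping
# input symbol strings to output symbol strings. Every step operates on the
# shape of the symbols alone; no step anywhere consults their meaning.
RULE_BOOK = {
    "你好吗?": "我很好。",   # looks like a fluent reply, but is pure lookup
    "你是谁?": "我是人。",
}

def chinese_room(symbols: str) -> str:
    """Apply the rule book mechanically, as the person in the room does."""
    # Unknown input falls back to a canned "please say that again".
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(chinese_room("你好吗?"))  # emits a sensible-looking Chinese reply
```

The output is behaviorally adequate for these inputs, yet nothing in the program understands Chinese; that gap between behavior and understanding is exactly what the argument points at.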

The Chinese Room Argument can be understood pictorially in the following chart.

[Figure: the Chinese Room setup]

We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.

  • If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
  • I could run a program for Chinese without thereby coming to understand Chinese.
  • Therefore Strong AI is false.

The second premise is supported by the Chinese Room thought experiment. The conclusion of this narrow argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation).
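Written out, the narrow argument is a simple modus tollens. The following formalization is my own sketch, not Searle's notation: $P_{zh}$ stands for a program for Chinese, $R(s,P)$ for “system $s$ runs program $P$”, and $U(s)$ for “$s$ understands Chinese”.

```latex
\begin{align*}
&\text{(1) Strong AI} \;\rightarrow\; \forall s\,\big(R(s, P_{zh}) \rightarrow U(s)\big)\\
&\text{(2) } R(\mathrm{me}, P_{zh}) \wedge \neg\,U(\mathrm{me})\\
&\text{(3) } \therefore\; \neg\,\text{Strong AI}
\end{align*}
```

Premise (2) is what the thought experiment supplies: I can instantiate the program myself and still fail to understand.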

The Chinese Room Argument was mainly given to show that computation over any kind of representation will lack understanding. The same argument can also be used to show that while the human in the Chinese Room is manipulating symbols, there is no possibility of his realizing any kind of “understanding” or experiencing “pain” in the task of manipulating symbols; “there is nobody to feel pain” in the system, so there is no pain.

Simple Explanation of “Chinese Room Argument”

The Chinese Room Argument primarily says that any computational model based on representation is “in principle” incapable of producing any human intentional phenomena or subjective first-person experiences.

Searle asks us to understand the nature of “computation”. He says that a computation is nothing more than a combination of a “rule book” and an “agent” that manipulates the input on the basis of the rule book. Pictorially, this can be represented as in the following diagram.

[Figure: computation as a rule book plus an agent]

After establishing this analogy of computation, Searle asks the reader: where in the above setup is there any possibility of realizing a human intentional phenomenon or a subjective experience such as pain, qualia, emotion, or any kind of sensation?

Since there is no such possibility, Searle argues that computation over representation cannot “in principle” realize any human intentional phenomena or subjective experiences.

Video Explanation of “Chinese Room Argument”

First Video

Second Video

Third Video

Further reading on the same

At this point one may also like to read one of my other posts on the same issue, for greater understanding.
Can a robot feel pain? — https://devanshmittal.wordpress.com/2010/02/09/can-a-robot-feel-pain/

One may also like to read John Searle’s original paper on the Chinese Room Argument: Minds, Brains, and Programs by John Searle.

David Chalmers and the Hard Problem of Consciousness

When you look at this page, there is a whir of processing: photons strike your retina, electrical signals are passed up your optic nerve and between different areas of your brain, and eventually you might respond with a smile, a perplexed frown or a remark. But there is also a subjective aspect. When you look at the page, you are conscious of it, directly experiencing the images and words as part of your private, mental life. You have vivid impressions of colored flowers and vibrant sky. At the same time, you may be feeling some emotions and forming some thoughts. Together such experiences make up consciousness: the subjective, inner life of the mind.

The Hard Problem

Researchers use the word “consciousness” in many different ways. To clarify the issues, we first have to separate the problems that are often clustered together under the name. For this purpose, I find it useful to distinguish between the “easy problems” and the “hard problem” of consciousness. The easy problems are by no means trivial – they are actually as challenging as most in psychology and biology – but it is with the hard problem that the central mystery lies.

The easy problems of consciousness include the following: How can a human subject discriminate sensory stimuli and react to them appropriately? How does the brain integrate information from many different sources and use this information to control behavior? How is it that subjects can verbalize their internal states? Although all these questions are associated with consciousness, they all concern the objective mechanisms of the cognitive system. Consequently, we have every reason to expect that continued work in cognitive psychology and neuroscience will answer them.

The hard problem, in contrast, is the question of how physical processes in the brain give rise to subjective experience. This puzzle involves the inner aspect of thought and perception: the way things feel for the subject. When we see, for example, we experience visual sensations, such as that of vivid blue. Or think of the ineffable sound of a distant oboe, the agony of an intense pain, the sparkle of happiness or the meditative quality of a moment lost in thought. All are part of what I am calling consciousness. It is these phenomena that pose the real mystery of the mind.

Knowledge Argument

To illustrate the distinction, consider a thought experiment called “The Knowledge Argument” devised by the Australian philosopher Frank Jackson.

According to the knowledge argument, there are facts about consciousness that are not deducible from physical facts. Someone could know all the physical facts, be a perfect reasoner, and still be unable to know all the facts about consciousness on that basis.

Frank Jackson’s canonical version of the argument provides a vivid illustration. On this version, Mary is a neuroscientist who knows everything there is to know about the physical processes relevant to color vision. But Mary has been brought up in a black-and-white room (on an alternative version, she is colorblind) and has never experienced red. Despite all her knowledge, it seems that there is something very important about color vision that Mary does not know: she does not know what it is like to see red. Even complete physical knowledge and unrestricted powers of deduction do not enable her to know this. Later, if she comes to experience red for the first time, she will learn a new fact of which she was previously ignorant: she will learn what it is like to see red.

Let me try to explain the argument again in different words.

Suppose that Mary, a neuroscientist in the 23rd century, is the world’s leading expert on the brain processes responsible for color vision. But Mary has lived her whole life in a black-and-white room and has never seen any other colors. She knows everything there is to know about physical processes in the brain – its biology, structure and function. This understanding enables her to grasp everything there is to know about the easy problems: how the brain discriminates stimuli, integrates information and produces verbal reports. From her knowledge of color vision, she knows the way color names correspond with wavelengths on the light spectrum. But there is still something crucial about color vision that Mary does not know: what it is like to experience a color such as red. It follows that there are facts about conscious experience that cannot be deduced from physical facts about the functioning of the brain.

Jackson’s version of the argument can be put as follows (here the premises concern Mary’s knowledge when she has not yet experienced red):

 

(1) Mary knows all the physical facts.
(2) Mary does not know all the facts.
———————————————-
(3) The physical facts do not exhaust all the facts.
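The syllogism is logically valid, and its validity can be checked mechanically. Here is a minimal sketch in Lean; the predicate names `physical` and `knows` are my own, and “deducible from” is flattened to plain implication, which is a simplification of Jackson's premises.

```lean
-- Premises about Mary's knowledge, over an abstract type of facts:
--   physical f : f is a physical fact
--   knows f    : Mary knows f
example {Fact : Type} (physical knows : Fact → Prop)
    (h1 : ∀ f, physical f → knows f)   -- (1) Mary knows all the physical facts
    (h2 : ∃ f, ¬ knows f) :            -- (2) Mary does not know all the facts
    ∃ f, ¬ physical f :=               -- (3) the physical facts do not exhaust all the facts
  let ⟨f, hf⟩ := h2
  ⟨f, fun hp => hf (h1 f hp)⟩          -- the fact Mary misses cannot be physical
```

The philosophical weight, of course, rests entirely on the premises, not on the inference.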

 

The “Knowledge Argument” has the following very important implications:

  1. Human subjective experiences are not illusory phenomena. They are as real as anything else.
  2. Human subjective experiences “in principle” cannot be captured in structural, functional, procedural, or material information, even if the information is in the highest possible detail.
  3. Human subjective experiences “in principle” cannot be reduced to structural, functional, procedural, or material information, even in the highest possible detail. This also implies that all reductionist explanations of consciousness are false.
 
One can put the knowledge argument more generally:

(1) There are truths about consciousness that are not deducible from physical truths.
(2) If there are truths about consciousness that are not deducible from physical truths, then materialism is false.
—————————————————
(3) Materialism is false.

 

Indeed, nobody knows why these physical processes are accompanied by conscious experience at all. Why is it that when our brains process light of a certain wavelength, we have an experience of deep purple? Why do we have any experience at all? Could not an unconscious automaton have performed the same tasks just as well? These are questions that we would like a theory of consciousness to answer.

 

One should definitely watch the following TED Talk by David Chalmers in order to understand the Hard Problem of Consciousness.

And in order to research further on the topic, the following resource by David J. Chalmers is a must-read. It surveys various issues in the mind-body problem, concludes that the “Hard Problem of Consciousness” is still unsolved, and points toward the possibility that “consciousness” may be an ontologically distinct entity.

Consciousness and Its Place in Nature — David J Chalmers

Conclusion

So we see there are certain problems with the computational theory of mind:

  1. Problem of meaning/semantics: syntax cannot have semantics. The Chinese Room Argument shows this.
  2. Problem of intentionality: how can syntax be “about” something? The Chinese Room Argument applies here as well.
  3. Problem of consciousness: as Chalmers says, what the Computational Theory of Mind can solve are the easy problems; the hard problem still persists.
  4. Human subjective experiences are not illusory phenomena. They are as real as anything else.
  5. Human subjective experiences “in principle” cannot be captured in structural, functional, procedural, or material information, even if the information is in the highest possible detail.
  6. Human subjective experiences “in principle” cannot be reduced to structural, functional, procedural, or material information, even in the highest possible detail. This also implies that all reductionist explanations of consciousness are false.

At least in the case of a computational model based on computation over a representation, one can see that intentional and feeling-related aspects are not possible. The Chinese Room and other similar arguments show that intentionality, qualia, and feeling-related aspects are not realizable in a computational model.

After showing the limitations of the computational model, I discussed the various lines of research on how intentional and feeling-related aspects are explained in human beings. I discussed the “easy” and “hard” problems of consciousness. Most efforts in AI (both weak and strong) are trying to solve the “easy problems” of consciousness, while the “hard problem”, as I showed, is still untouched and unexplained.

In conclusion, there has not yet been any research, argument, or proof strong enough to show the existence of intentional phenomena or “feeling-related” aspects like “pain” in machines. Arguments from “computation over representation” have already lost the game; arguments from “structure” (like the principle of organizational invariance) are far from being accepted.





In a recent talk, the philosopher John Searle says consciousness is NOT a massive computer simulation, as is widely believed. He says he proved this 30 years ago with the help of the Chinese Room Argument; his stand can be seen in his TED talk, and he still believes in what he stood for 30 years ago. Searle suggested at that time that the roots of consciousness are NOT in computation but in biology.

In this way, Searle does not contradict what neuroscientists and other researchers may be saying. He refutes the argument from computation, not the claim that consciousness has a biological basis; these are two different things. Denying that a computational model can realize human subjective experiences does not deny the possibility of realizing human subjective experiences in a biological system like the brain. The brain certainly has a computational model in it, but it is much “more” than just a computational model, and that “more” is the area for further research. The brain “having” a computational model (in addition to many other aspects) does NOT imply that every computational model, or a computational model in isolation, will realize human subjective experiences. One should see the difference: Searle is arguing against the phenomenon of computation, not against the brain. In short, Searle argues that a computational model is nothing more than a set of symbols under manipulation, and a symbol-manipulation system cannot have semantics. Syntax cannot have semantics, intentionality, qualia, or human subjective experiences. Human subjective experiences (like aesthetics, art, beauty, motivation, inspiration, pain, understanding, etc.) cannot be measured in scientific terms or written down on a piece of paper, and thus cannot be written down on a computer hard disk or in a computer program (which is nothing more than a symbol-manipulation system). Symbols do not capture experience. To show this, Searle constructs the Chinese Room thought experiment, in which the thesis comes out as an outcome rather than being assumed beforehand, though it requires “observation” to see it. That symbols and information about functions and structures are “in principle” NOT sufficient to capture “experience” is also shown by the “Knowledge Argument”, which is relatively much easier to understand; kindly see that argument too.

The Chinese Room Argument is tough to understand, but once understood, one can appreciate it a lot. It requires validation by observation rather than by argument; once the observation is done, the argument becomes clear. One should take help from those who understand it well and spend some time with it, rather than depend on opinions on social networks. Once the argument is understood well, one can also dwell on the counter-arguments and then form an opinion. One cannot understand any subject by setting out to counter it from the beginning; agreement or disagreement has no meaning if we do not understand the concept.

David Chalmers later showed uncertainty about the existing notions of consciousness and proposed his own ideas in the talk and research papers I have included in this post. He summarizes all the ideas proposed so far on consciousness and then shows their limitations. His works are worth studying if we wish to understand the nature of consciousness seriously. His research paper at the following link is worth reading: Consciousness and Its Place in Nature by David Chalmers.
http://consc.net/papers/nature.pdf

The “spirit of understanding” is better than the “spirit of debate”: the first helps, while the second leads nowhere. The first builds trust and cooperation; the second breeds hatred and competition, and should thus be avoided.

====
Apart from the above: Prof. Sangal is one of the pioneers of Artificial Intelligence across the world, and he can also be reached over email. One shouldn’t accept things on the basis of respect for another person, but respect sometimes also helps us in self-doubt and in thinking more about the subject, which can bring deeper insights. The Indian tradition has thus always put the Guru-Shishya relation on the highest stage; the loss of sanctity of the Guru-Shishya relation has been the cause of many problems we see in modern society.
