Is The Hard Problem Of Consciousness A Problem For Artificial Intelligence?

Within the last 50 years, there have been enormous developments in the field of artificial intelligence (AI), so much so that we have genuinely started to worry about whether the conspiracy theories will come true. Are intelligent robots the next step in evolution after Homo sapiens? Will Homo sapiens go extinct, or merely lose its superiority in nature? Most thinkers believe that AI will eventually succeed in simulating, or perhaps even duplicating, the human mind; for them it is just a matter of time. Looking at the rapid progress of AI since Alan Turing's groundbreaking paper was published in 1950 (Turing, 1950), there seem to be no conceptual boundaries in front of it. One day, AI will realize the human mind on a machine, and Homo sapiens will become the new gods and goddesses that create life; or, just as Homo sapiens wanted to steal the role of god from its creator, AI will want to do the same and proclaim that "God is Dead" (Nietzsche, 2010, p. 181). However, it is certain that these machines will not be able to enjoy what they are doing. They will not have a subjective experience of anything. They will not become exactly like humans; we will not be able to fully duplicate Homo sapiens, or even bats (Nagel, 1974). The hard problem of consciousness prevents us from being completely multiply realizable as an artificial intelligence.

The hard problem of consciousness is the question of how 'qualia' (singular: quale), i.e. the subjective feeling of being in a certain state, can be studied from an objective point of view: how we can know what it is like to be a bat (Nagel, 1974), or what it feels like to see the colour red (Jackson, 1982). All the knowledge a subject has depends either on her own introspection or on the inferences she draws from what she observes in the world. She can know what it is like to be in a certain state only if she has been in that state before. If she has not, she cannot know the feeling of it. She cannot open up other people's brains and see the feeling there, nor can she get into that state by reading about it or by hearing the testimony of people who have had the experience. She has to experience the feeling of being in that state herself. Qualia, by definition, necessitate a subject. Frank Jackson gives the following example of the problem:

“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. [...] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?” (Jackson, 1982, p. 130)

Yes, she will definitely learn something new, namely the feeling of seeing a certain colour, because all the information she has could not tell her what it subjectively feels like to see a colour until she actually sees one. This shows that there is some aspect of consciousness that cannot be objectified, that is, that cannot be set free from subjectivity so that it can be known from a third-person point of view. Thus, we are in a way cognitively closed (McGinn, 1989) to understanding what it is like to be in a certain cognitive state.

Since McCulloch and Pitts showed the similarity between the all-or-none property of the nervous system and propositional logic (McCulloch and Pitts, 1943), artificial intelligence has mainly been based on the idea of functionalism, which states that what matters is not the material itself but the relations among its parts that play a certain functional role. In this view, cognitive states can be detached from their bodies, formalized in propositional logic, and realized on other bodies that have different "causal powers" (Searle, 1980). This is called "multiple realizability" (Putnam, 1967). For example, pain is not the activation of certain C-fibers in the body but the functional state of registering that something is wrong with a living thing's body; its function is to inform the perceiver of a danger that would cause her harm. Because it is a functional property, it can be realized on a machine in terms of its functional role without needing a biological embodiment: it can be formalized as zeroes and ones and realized on a digital machine. The activation of certain networks, call them C-networks, can cause 'pain' for a robot so that it avoids actions that could harm it. In this way, pain and other cognitive states can be multiply realized on other bodies. However, the feeling of being in the cognitive state of pain cannot be multiply realized.
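To make the functionalist picture concrete, here is a minimal sketch in Python. All names are hypothetical; it treats the 'C-network' as a McCulloch-Pitts-style all-or-none threshold unit and captures only the functional role of pain, i.e. mapping damage signals to avoidance behavior, with nothing said about any accompanying feeling.

```python
# A toy 'C-network': a McCulloch-Pitts-style all-or-none threshold unit.
# It realizes only the functional role of pain (detect harm, then avoid it);
# whether anything is *felt* is exactly what this sketch leaves out.

def c_network(damage_signals, threshold=2):
    """Fire (return 1) iff the summed all-or-none inputs reach the threshold."""
    return 1 if sum(damage_signals) >= threshold else 0

def agent_step(damage_signals):
    """Map the input side of the pain role to its behavioral output."""
    if c_network(damage_signals):
        return "withdraw from harmful stimulus"   # the avoidance behavior
    return "continue current action"

if __name__ == "__main__":
    print(agent_step([1, 1, 0]))  # enough activation: the 'pain' role fires
    print(agent_step([0, 1, 0]))  # below threshold: no avoidance behavior
```

On the functionalist reading, any substrate that implements this input-output relation realizes 'pain'; the argument of this essay is that the relation is all such a realization can ever carry.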

Qualia cannot be multiply realized in such a way that an artificial intelligence could have the subjective experience of pain, because qualia are, by definition, bound to subjectivity. We cannot objectify them so that we can represent them in propositional logic. For instance, we can know conceptually which neurons are activated while a pain is being experienced, or we can understand pain's functional role by looking at the input and the output, that is, what happens before the pain and what kind of behavior is performed after it. In this way, we can understand the phenomenon of pain and represent it in propositions that are true or false, and so realize it on digital machines. However, we cannot represent the feeling of pain propositionally, because it is subjective. You cannot tell someone who has a toothache that she is not in pain, nor can you grasp the feeling of the pain she is having. You cannot fully understand "What Is It Like to Be a Bat?" (Nagel, 1974), even if you were a brilliant scientist like Mary (Jackson, 1982). In our case, what it is like to be a bat, or the feeling of pain, are the colours outside Mary's black-and-white room, and that room, free from subjectivity, is where no answer to the hard problem of consciousness can be found.

You might argue that although we cannot directly build machines that have qualia, subjective feeling could emerge within them as they process information. However, this cannot be the case, because what we do when we multiply realize a cognitive state on a machine is nothing but symbol manipulation. We are, in a way, extracting the meaning from the syntax of our biological embodiment and giving meaning to the syntax of the machine's embodiment. Consequently, nothing can emerge that we could not supply to the machine ourselves, because the embodiment of a machine does not have the causal powers that a biological embodiment has. It cannot cause qualia to emerge from the interactions within its body unless they are encoded by humans. Searle's Chinese room thought experiment, even though he presents it to reject strong AI, can be given as an example of how machines lack the subjective experience of what they are doing:

“Suppose that I'm locked in a room and given a large batch of Chinese writing...[but] to me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols...from the point of view of somebody outside the room in which I am locked -- my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese.” (Searle, 1980, p. 417)

The thought experiment shows that a machine does not have the feeling of being in a certain state. In the example, although Searle gives proper responses to the questions and thereby appears to the observer to know Chinese, in fact he has no idea what the symbols mean; he just manipulates them. Even if he takes his time and processes the symbols for as long as he likes, he cannot have the experience of knowing Chinese. Thus, in a similar manner, qualia cannot emerge within a machine by itself as it processes symbols.
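The point can be made vivid with a deliberately crude sketch (not Searle's own formulation): here the rule book is compressed into a hypothetical lookup table that correlates symbol shapes with symbol shapes. The program produces fluent-looking answers while understanding nothing, and no amount of extra processing adds understanding.

```python
# The 'rule book' reduced to a lookup table: questions and answers are matched
# purely by shape. Nothing in the program grasps what the symbols mean.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
    "你会说中文吗？": "当然会。",       # "Can you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Correlate one batch of symbols with another, exactly as the rules dictate."""
    return RULE_BOOK.get(question, "请再说一遍。")  # default squiggles for unknown input

if __name__ == "__main__":
    # To an outside observer the answer looks competent; inside, only shapes move.
    print(chinese_room("你会说中文吗？"))
```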

As Kripke argues (Kripke, 1972), the identity between pain and C-fiber stimulation is not a necessary one: it might be the case that there is C-fiber activation but no feeling of pain, or C-fiber stimulation might cause some other subjective feeling. It is conceivable that there is a 'zombie' world where C-fiber stimulation occurs but the creatures do not feel pain. In this respect, we can state that the activation of C-networks does not cause the subjective feeling of pain. Artificial intelligences are just such 'zombies' in this world, without any qualia at all, because the subjective feeling of phenomenal experience can neither be grasped by humans so that it could be represented, nor emerge within a machine by itself as it processes symbols. For this reason, qualia cannot be multiply realized, and machines do not and will not have any subjective experience of what they are doing.

In conclusion, whether the hard problem of consciousness is a problem for artificial intelligence depends on what we are aiming at while building these machines. If we want to simulate the human mind in every respect, the hard problem is a problem for AI that we will never overcome. On the other hand, if we just want to simulate cognitive processes such as vision, movement, and speech, which can be studied from an objective point of view, without their intrinsic property, i.e. qualia, then we are fine. In that respect, the hard problem of consciousness will not be a problem for AI, although this conclusion does not rule out the possibility of the conspiracy theories I mentioned at the beginning. However, if artificial intelligence does bring about the end of mankind, it is certain that the machines will not sincerely enjoy doing it, and that will not be an AI apocalypse but a zombie apocalypse.

References

Jackson, F. (1982). Epiphenomenal qualia. Philosophical quarterly, 32(127), 127-136.
Kripke, S. A. (1972). Naming and necessity. In Semantics of natural language (pp. 253-355). Springer Netherlands.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4), 115-133.
McGinn, C. (1989). Can we solve the mind-body problem? Mind, 98(391), 349-366.
Nagel, T. (1974). What is it like to be a bat? The philosophical review, 83(4), 435-450.
Nietzsche, F. (2010). The gay science: With a prelude in rhymes and an appendix of songs. Vintage.
Putnam, H. (1967). Psychological predicates. Art, mind, and religion, 1, 37-48.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and brain sciences, 3(3), 417-424.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.