Some functionalists believe China would have qualia but that, due to its size, it is impossible to imagine China being conscious.[15] Indeed, it may be the case that we are constrained by our theory of mind[16] and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will exist across all types of hardware, or they will not exist at all and are merely illusory.[17]
The Chinese room
The Chinese room argument by John Searle[18] is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding, through the use of a purely functional system. In short, Searle describes a person who speaks only English, alone in a room containing only baskets of Chinese symbols and an English rule book for moving the symbols around. People outside the room, who are Chinese speakers, pass symbols in and instruct the person to follow the rule book, which specifies which symbols to send back out given the symbols received; in this way, they are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply on the basis of these syntactic processes. The thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms; "how") cannot realize any semantics (meaning; "what") or intentionality (aboutness; "why"). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules; that is, he argues that functionalism is an insufficient theory of the mind.
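The purely syntactic manipulation Searle describes can be illustrated with a toy sketch: a program that maps input symbol strings to output symbol strings via a lookup table, with nothing anywhere in it representing meaning. The "rule book" entries below are invented stand-ins, not Searle's actual example.

```python
# A toy illustration of Searle's rule-follower: replies are produced by
# purely syntactic lookup; no part of the program represents meaning.

# Hypothetical "rule book": maps an incoming symbol string to an outgoing
# one. The operator needs no idea what either string means.
RULE_BOOK = {
    "你好吗": "我很好",
    "你会说中文吗": "会",
}

def chinese_room(input_symbols: str) -> str:
    """Follow the rule book; fall back to a fixed symbol if no rule applies."""
    return RULE_BOOK.get(input_symbols, "不懂")

# From outside, the room appears to converse in Chinese; inside, there is
# only pattern matching on uninterpreted shapes.
print(chinese_room("你好吗"))  # → 我很好
```

The point of the sketch is that the input-output behavior can be arbitrarily extended (a bigger table, or rules with state) without ever introducing anything that could count as understanding.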
As noted above in connection with Block's Chinese nation, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "system reply"); that is, the system does know Chinese. Searle responds that there is nothing more than syntax going on at the higher level as well, so this reply is subject to the same initial problems. Furthermore, Searle suggests the man in the room could simply memorize the rules and symbol relations. Again, though he would convincingly mimic communication, he would be aware only of the symbols and rules, not of the meaning behind them.
Inverted spectrum
Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, most notably proposed as an objection to functionalism by Ned Block.[14][19] This thought experiment involves supposing that there is a person, call her Jane, who is born with a condition that makes her see the spectrum of light inverted relative to normal perceivers. Unlike "normal" people, Jane sees the color violet as yellow, orange as blue, and so forth. Suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signals just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical, yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.[20]
David Chalmers tries to show[21] that even though mental content cannot be fully accounted for in functional terms, there is nevertheless a nomological correlation between mental states and functional states in this world. A silicon-based robot, for example, whose functional profile matched our own would have to be fully conscious. His argument for this claim takes the form of a reductio ad absurdum. The general idea is that since it would be very unlikely for a conscious human being to experience a change in its qualia which it utterly fails to notice, mental content and functional profile appear to be inextricably bound together, at least in the human case. If the subject's qualia were to change, we would expect the subject to notice, and therefore his functional profile to follow suit. A similar argument is applied to the notion of absent qualia. In this case, Chalmers argues that it would be very unlikely for a subject to experience a fading of his qualia which he fails to notice and respond to. This, coupled with the independent assertion that a conscious being's functional profile could be maintained irrespective of its experiential state, leads to the conclusion that the subject of these experiments would remain fully conscious. The problem with this argument, however, as Brian G. Crabb (2005) has observed, is that it begs the central question: how could Chalmers know that the functional profile can be preserved, for example while the conscious subject's brain is being supplanted with a silicon substitute, unless he already assumes that the subject's possibly changing qualia would not be a determining factor? And while changing or fading qualia in a conscious subject might force changes in its functional profile, this tells us nothing about the case of a permanently inverted or unconscious robot. A subject with inverted qualia from birth would have nothing to notice or adjust to.
Similarly, an unconscious functional simulacrum of ourselves (a zombie) would have no experiential changes to notice or adjust to. Consequently, Crabb argues, Chalmers' "fading qualia" and "dancing qualia" arguments fail to establish that cases of permanently inverted or absent qualia are nomologically impossible.
A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question against functional accounts of mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so. (Functionalism says that mental states are produced by the functional relations in the brain.) The same type of problem—that there is no argument, just an antithetical assumption at their base—can also be raised against both the Chinese room and the Chinese nation arguments. Notice, however, that Crabb's response to Chalmers does not commit this fallacy: his point is the more restricted observation that even if inverted or absent qualia turn out to be nomologically impossible (and it is perfectly possible that we might subsequently discover this fact by other means), Chalmers' argument fails to demonstrate that they are impossible.
Twin Earth
The Twin Earth thought experiment, introduced by Hilary Putnam,[22] is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H₂O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H₂O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H₂O, while your duplicate's is of XYZ. Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.
Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view entails modifying functionalism to include within its scope a very broad interpretation of inputs and outputs, one that includes the objects in the external world that are the causes of mental representations.
The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, and thereby the Twin Earth argument does not constitute a genuine argument against it: the assumption entails a flat denial of functionalism itself (which would say that the two waters would not produce different mental states, because the functional relationships would remain unchanged).
Meaning holism
Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor[19] referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counter-intuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.
Another possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic-synthetic distinction, as many seek to avoid).
Triviality arguments
Hilary Putnam,[23] John Searle,[24] and others[25][26] have offered arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that either functionalism reduces to behaviorism, or to complete triviality and therefore a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the states of a physical system are always at least slightly distinct from one another, such a mapping will always exist, so any system is a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input–output behavior is still required.
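The mapping construction behind these triviality arguments can be sketched directly: given any physical system that passes through a sequence of pairwise-distinct states, one can always define a map under which that sequence "realizes" an arbitrary run of abstract mental states. The state labels below are invented for illustration.

```python
# Sketch of the triviality construction: any sequence of pairwise-distinct
# physical states can be mapped onto any desired run of abstract states.

def build_realization(physical_run, mental_run):
    """Return a mapping from physical states to mental states under which
    the physical run 'realizes' the mental run, step by step."""
    assert len(physical_run) == len(mental_run)
    assert len(set(physical_run)) == len(physical_run), "states must be distinct"
    return dict(zip(physical_run, mental_run))

# A rock's (hypothetical) distinct microphysical states over four instants:
rock_states = ["p0", "p1", "p2", "p3"]
# An arbitrary run of a finite-state 'mind' we wish to pin on the rock:
mind_states = ["hungry", "deliberating", "deciding", "eating"]

mapping = build_realization(rock_states, mind_states)
# Under this mapping the rock trivially 'implements' the mental run, which
# is why critics say extra (e.g. causal) constraints on realization are needed.
print([mapping[p] for p in rock_states])  # → the mental run, read off the rock
```

The only premise the construction needs is that the physical states are distinct, which is exactly the "progression of unique states" assumption noted above.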
Peter Godfrey-Smith has argued further[27] that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in distinct states at each moment and for each possible input, such a mapping will always exist, so there will be a transducer layer that produces whatever physical behavior is desired.
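The transducer-layer construction can also be sketched in a few lines: because a trivially simple object occupies a distinct state at every step, a lookup table from its states to outputs always exists, and that table can be chosen to yield any desired behavior. The object, states, and outputs below are invented for illustration.

```python
# Sketch of the transducer-layer argument: a lookup table wrapped around a
# trivially simple object makes it exhibit any desired output behavior.

# The object: a system whose state merely increments, so it occupies a
# distinct state at every step.
def advance(state):
    return state + 1

# The behavior we want the object to exhibit (arbitrary):
desired_outputs = ["hello", "yes", "goodbye"]

# Because the trajectory of states (1, 2, 3, ...) is known and the states
# are distinct, a state-to-output lookup table always exists:
output_map = {t + 1: out for t, out in enumerate(desired_outputs)}

state = 0
produced = []
for _ in desired_outputs:
    state = advance(state)              # the transducer acts on the object
    produced.append(output_map[state])  # and reads an output off its state

print(produced)  # → ["hello", "yes", "goodbye"]
```

This is why Godfrey-Smith suggests that blocking the argument requires constraining the mappings, for example with causal conditions, rather than allowing arbitrary lookup tables.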
Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.[27]
References
- ^ Block, Ned (1996). "What Is Functionalism?", a revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement. Macmillan.
- ^ Marr, D. (1982). Vision: A Computational Approach. San Francisco: Freeman & Co.
- ^ Lewis, David (1980). "Mad Pain and Martian Pain". In Block (1980a), Vol. 1, pp. 216–222.
- ^ Armstrong, D.M. (1968). A Materialist Theory of the Mind. London: RKP.
- ^ a b Putnam, Hilary (1960). "Minds and Machines". Reprinted in Putnam (1975a).
- ^ a b Putnam, Hilary (1967). "Psychological Predicates". In W.H. Capitan and D.D. Merrill (eds.), Art, Mind, and Religion, pp. 37–48. (Later published as "The Nature of Mental States" in Putnam (1975a).)
- ^ a b Piccinini, G. (2010). "The mind as neural software? Understanding functionalism, computationalism, and computational functionalism". Philosophy and Phenomenological Research, 81(2), 269–311. doi:10.1111/j.1933-1592.2010.00356.x
- ^ Gillett, C. (2007). "A Mechanist Manifesto for the Philosophy of Mind: The Third Way for Functionalists". Journal of Philosophical Research, invited symposium on "Mechanisms in the Philosophy of Mind", vol. 32, pp. 21–42.
- ^ Gillett, C. (2013). "Understanding the Sciences through the Fog of 'Functionalism(s)'". In Hunneman (ed.), Functions: Selection and Mechanisms. Dordrecht: Kluwer, pp. 159–181.
- ^ Machamer, P., Darden, L. & Craver, C.F. (2000). "Thinking about mechanisms". Philosophy of Science, 67(1), 1–25.
- ^ Craver, C.F. (2001). "Role functions, mechanisms, and hierarchy". Philosophy of Science, 68(1), 53–74.
- ^ Maley, C.J. & Piccinini, G. (2013). "Get the Latest Upgrade: Functionalism 6.3.1". Philosophia Scientiæ, 17(2), 135–149.
- ^ Piccinini, G. & Craver, C.F. (2011). "Integrating psychology and neuroscience: Functional analyses as mechanism sketches". Synthese, 183(3), 283–311. doi:10.1007/s11229-011-9898-4
- ^ a b Block, Ned (1980b). "Troubles with Functionalism". In Block (1980a).
- ^ Lycan, W. (1987). Consciousness. Cambridge, MA: MIT Press.
- ^ Baron-Cohen, S., Leslie, A. & Frith, U. (1985). "Does the Autistic Child Have a 'Theory of Mind'?" Cognition, 21, 37–46.
- ^ Dennett, D. (1990). "Quining Qualia". In W. Lycan (ed.), Mind and Cognition. Oxford: Blackwell.
- ^ Searle, John (1980). "Minds, Brains and Programs". Behavioral and Brain Sciences, vol. 3.
- ^ a b Block, Ned & Fodor, J. (1972). "What Psychological States Are Not". Philosophical Review, 81.
- ^ Block, Ned (1994). "Qualia". In S. Guttenplan (ed.), A Companion to the Philosophy of Mind. Oxford: Blackwell.
- ^ Chalmers, David (1996). The Conscious Mind. Oxford: Oxford University Press.
- ^ Putnam, Hilary (1975b). "The Meaning of 'Meaning'". Reprinted in Putnam (1975a).
- ^ Putnam, H. (1988). Representation and Reality, Appendix. Cambridge, MA: MIT Press.
- ^ Searle, J. (1990). "Is the brain a digital computer?" Proceedings and Addresses of the American Philosophical Association, 64, 21–37.
- ^ Chalmers, D. (1996). "Does a rock implement every finite-state automaton?" Synthese, 108, 309–333.
- ^ Copeland, J. (1996). "What is computation?" Synthese, 108, 335–359.
- ^ a b Godfrey-Smith, Peter (2009). "Triviality Arguments against Functionalism". Philosophical Studies, 145(2).
Further reading
- Armstrong, D.M. (1968). A Materialist Theory of the Mind. London: RKP.
- Baron-Cohen, S., Leslie, A. & Frith, U. (1985). "Does the Autistic Child Have a 'Theory of Mind'?" Cognition, 21, 37–46.
- Block, Ned (1980a). "Introduction: What Is Functionalism?" In Readings in Philosophy of Psychology. Cambridge, MA: Harvard University Press.
- Block, Ned (1980b). "Troubles with Functionalism". In Block (1980a).
- Block, Ned (1994). "Qualia". In S. Guttenplan (ed.), A Companion to the Philosophy of Mind. Oxford: Blackwell.
- Block, Ned (1996). "What Is Functionalism?", a revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement. Macmillan.
- Block, Ned & Fodor, J. (1972). "What Psychological States Are Not". Philosophical Review, 81.
- Chalmers, David (1996). The Conscious Mind. Oxford: Oxford University Press.
- Crabb, B.G. (2005). "Fading and Dancing Qualia – Moving and Shaking Arguments". Deunant Books.
- DeLancey, C. (2002). Passionate Engines: What Emotions Reveal about the Mind and Artificial Intelligence. Oxford: Oxford University Press.
- Dennett, D. (1990). "Quining Qualia". In W. Lycan (ed.), Mind and Cognition. Oxford: Blackwell.
- Levin, Janet (2004). "Functionalism". The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), E. Zalta (ed.).
- Lewis, David (1966). "An Argument for the Identity Theory". Journal of Philosophy, 63.
- Lewis, David (1980). "Mad Pain and Martian Pain". In Block (1980a), Vol. 1, pp. 216–222.
- Lycan, W. (1987). Consciousness. Cambridge, MA: MIT Press.
- Mandik, Pete (1998). "Fine-grained Supervenience, Cognitive Neuroscience, and the Future of Functionalism."
- Marr, D. (1982). Vision: A Computational Approach. San Francisco: Freeman & Co.
- Polger, T.D. (2008). "Functionalism". The Internet Encyclopedia of Philosophy.
- Putnam, Hilary (1960). "Minds and Machines". Reprinted in Putnam (1975a).
- Putnam, Hilary (1967). "Psychological Predicates". In W.H. Capitan and D.D. Merrill (eds.), Art, Mind, and Religion, pp. 37–48. (Later published as "The Nature of Mental States" in Putnam (1975a).)
- Putnam, Hilary (1975a). Mind, Language, and Reality. Cambridge: CUP.
- Putnam, Hilary (1975b). "The Meaning of 'Meaning'". Reprinted in Putnam (1975a).
- Searle, John (1980). "Minds, Brains and Programs". Behavioral and Brain Sciences, vol. 3.
- Smart, J.J.C. (1959). "Sensations and Brain Processes". Philosophical Review, LXVIII.
External links
- Stanford Encyclopedia of Philosophy
- Dictionary of the Philosophy of Mind