Philosophy of Mind - Cognitive Science

General notice: This script is for internal use only; do not quote! I deliberately make use of other texts and textbooks without always indicating the source.
Philosophy of Mind
Lecture 9. Physicalisms (Functionalisms, cont.)
1. Functionalisms recap
2. Functionalism, physicalism, and realization
3. Problems of functionalism
In the last lecture, I distinguished between two types of functionalism: computation-representation functionalism, which is a special case of functional explanation and sees psychological states as systematically representing the world via a language of thought, with psychological processes seen as computations involving these representations; and metaphysical functionalism, which is a theory of the nature of the mind rather than a theory of psychological explanation. It is concerned with the question of what all mental states of a certain type have in common. As we have seen, metaphysical functionalists characterize mental states in terms of their causal roles, particularly in terms of their causal relations to sensory stimulations, behavioral outputs, and other mental states.
In the following, I distinguished between
•	machine functionalism (or computer functionalism), which is specified by two functions, one from inputs and states to outputs and one from inputs and states to states, namely {input}×{state} → {output} and {input}×{state} → {state}
•	causal-role functionalism (or causal-theoretical functionalism), which is a more general characterization that characterizes mental states by their having a certain causal role. This generality is bought, however, at the price of vagueness.
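The pair of transition functions that specifies a machine-functionalist system can be made concrete with a toy machine table. The sketch below uses a two-state ten-cent drink machine in the spirit of Block's well-known example; the particular states, inputs, and outputs are illustrative placeholders, not part of the lecture.

```python
# A machine-functionalist system is fully specified by two functions:
#   {input} x {state} -> {output}   and   {input} x {state} -> {state}.
# Toy example: a drink machine that charges ten cents (states are placeholders).

OUTPUT = {  # (input, state) -> output
    ("nickel", "S1"): "none",
    ("nickel", "S2"): "drink",
    ("dime",   "S1"): "drink",
    ("dime",   "S2"): "drink and nickel",
}

NEXT_STATE = {  # (input, state) -> next state
    ("nickel", "S1"): "S2",
    ("nickel", "S2"): "S1",
    ("dime",   "S1"): "S1",
    ("dime",   "S2"): "S1",
}

def step(state, inp):
    """Apply both transition functions to a single input."""
    return NEXT_STATE[(inp, state)], OUTPUT[(inp, state)]

state = "S1"
state, out = step(state, "nickel")   # -> state "S2", output "none"
state, out = step(state, "nickel")   # -> state "S1", output "drink"
```

The point of the example is that S1 and S2 are characterized exhaustively by their place in the two tables; nothing about their intrinsic nature (silicon, brass, or neurons) enters the specification.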
Eventually, I presented a more precise formulation of causal-role functionalism, following David Lewis's introduction of so-called Ramsey sentences of a theory. The important point is that you can use Ramsey sentences – which still have a free variable x and thus are open sentences – to define both the concept of the system under consideration and its functional states.
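Lewis's construction can be sketched schematically (a minimal sketch; the theory T and the number of mental-state terms are placeholders). Take a psychological theory T that mentions the mental-state terms M_1, ..., M_n alongside inputs and outputs, replace every mental-state term by a variable, and prefix existential quantifiers:

```latex
% Psychological theory mentioning the mental-state terms M_1, ..., M_n:
%   T(M_1, \dots, M_n)
% Ramsey sentence: replace the M_i by variables and quantify over them:
\exists y_1 \dots \exists y_n \; T(y_1, \dots, y_n)
% Open sentence (free variable x) defining, e.g., the first functional state:
x \text{ is in } M_1 \iff
  \exists y_1 \dots \exists y_n \, \big[ T(y_1, \dots, y_n) \wedge x \text{ has } y_1 \big]
```

The defining sentence still contains the free variable x, which is exactly what allows it to serve as a definition of both the system concept and its functional states.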
Metaphysical functionalists – for all their inner diversity – agree on the following general principles:
•	mental states are functional states, i.e., they are specified by three kinds of clauses
o	input clauses, which say which conditions typically give rise to which mental states
o	output clauses, which say which mental states typically give rise to which behavioral responses
o	interaction clauses, which say how mental states typically interact
•	the roles specified by the three clauses could be filled or occupied by quite different kinds of things in different cases (multiple realizability)
•	mental states are inner states that occupy or fill the roles specified by the three clauses, or mental states are the states of having the causal roles filled or occupied
Achim Stephan: Philosophy of Mind. Lecture 9. Physicalisms (functionalisms, cont.)
Comment. The last principle points to the difference I mentioned in the last lecture, which is discussed by Braddon-Mitchell & Jackson:
“from a functionalist perspective, there are two options concerning the metaphysics of
psychological properties. Functionalism says that x is in pain iff x is in a state playing
the pain role, but this thesis about truth conditions for being in pain is compatible with
holding, qua metaphysical thesis, either that pain is the realizer state or that it is the
role state” (Braddon-Mitchell & Jackson 1996, 101).
Comment. Many philosophers took functionalism to be a physicalist theory of mind that avoids inherent problems of behaviorism and identity theory and is thus better than these positions: it accepts mental states as inner states (against behaviorism), and it allows for multiple realization (against identity theory). However, strictly speaking, functionalism is ontologically neutral. It does not specify, qua functionalism, what ontological status the states that occupy the causal roles should have.
For that, notice how Aristotle defined anger in his Rhetoric: anger is a striving, bound up with pain, for what appears to us as revenge for something in which we perceive an offense against ourselves ..., and that by someone who is not entitled to give such offense. Furthermore, anger is in every case accompanied by a certain feeling of pleasure, which rests on the hope of being able to take revenge.
Thus, what do we have here?
1)	Behavior of somebody S that we take as an offense against us causes Z
2)	Z is accompanied by pain
3)	Z aims at a behavior of our own that is revenge for S's behavior
4)	Z is accompanied by pleasure that results from the hope of being able to take revenge
Now, although Aristotle himself thought that boiling blood around the heart fills the causal role specified here, that role could, of course, be and definitely is realized differently. However, if somebody behaves in the way Aristotle described, would it really matter how the behavior is realized in order to call it anger? Today, we think that this state is realized by processes in the brain. In principle, however, it could also be realized by something completely different, even by something non-physical.
Therefore, to take a genuinely physicalist position, functionalists have to add a further principle, namely
•	all mental states are realized by physical states.
Only adding this thesis makes functionalism a physicalist position.
Now, within metaphysical functionalism it is common to make a further distinction between so-called common-sense functionalism (or analytical functionalism) and psycho-functionalism. The two versions differ in how the causal roles are characterized in detail, that is to say, in which theory specifies the details of the causal roles – folk psychology or (scientific) empirical psychology, respectively. For example, Aristotle's characterization of anger is clearly a version of common-sense functionalism.
For this, cf. the last criticism below.
Problems for functionalism
•	intentionality and the problem of understanding (Searle 1980)
•	strange realizations (Block 1978)
•	qualia
o	inverted spectrum
o	absent qualia
•	how to specify inputs and outputs
The problem of understanding. Remember that we pointed to a linkage between metaphysical and epistemological claims. According to functionalists, mental states are internal states we identify and name by the effect the world has on them, the effect they have on one another, and the effect they have on the world in causing our behavior. Thus, if there are two systems whose states cannot be distinguished according to these individuation principles, we should say that they have the same inner states. Kim refers to this as Turing's thesis; it says:
If two systems are input-output equivalent, they have the same psychological status; in particular, one is mental just in case the other is (Kim 1996, 98).
Now, against this idea John Searle has constructed a much debated thought experiment to
show that mentality cannot be equated with a computing machine running a program, no matter how complex [here, and in the following, I quote from Kim 1996, 99-101]:
Imagine someone (say, John Searle himself) who understands no Chinese who is confined in a room (the “Chinese room”) with a set of rules for systematically transforming strings of symbols to yield further symbol strings. These symbol strings are in fact
Chinese expressions, and the transformation rules are purely formal in the sense that
their application depends solely on the shapes of the symbols involved, not their meanings. Searle becomes very adept at manipulating Chinese expressions in accordance
with the rules given to him (we may suppose that Searle has memorized the whole rule
book) so that every time a string of Chinese characters is sent in, Searle quickly goes
to work and promptly sends out an appropriate string of Chinese characters. From the
perspective of someone outside the room the input-output relationships are exactly the
same as they would be if someone with a genuine understanding of Chinese, instead of
Searle, were locked inside the room. And yet Searle does not understand any Chinese,
and there is no understanding of Chinese going on anywhere inside the Chinese room.
What goes on in the room is only manipulation of symbols on the basis of their shapes,
or “syntax”, but real understanding involves “semantics,” knowing what these symbols
represent, or mean. Although Searle’s behavior is input-output equivalent to that of a
speaker of Chinese, Searle understands no Chinese.
Searle’s further argument is that what goes on inside a computer (cf. machine-functionalism)
is exactly like what goes on in the Chinese room: rule-governed manipulation of symbols
based on their shapes. There is no more understanding of Chinese, or German, or English in
the computer than there is in the Chinese room. The conclusion that Searle draws is that mentality is more than rule-governed syntactic manipulation of symbols, and the Turing test, therefore, is invalid as a test of mentality. – Now, what intuitions do you have if this system is embedded in a robot that behaves differently in changing circumstances, i.e., that modifies its answer depending on whether or not you have asked the question before, and so on? Would you still think that it does not understand Chinese? So, what does Searle's example really show?
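The purely formal character of the room's rule book can be sketched in a few lines. The rules and symbol strings below are invented placeholders; the point is only that the lookup operates on the shapes of the strings and at no stage represents their meanings.

```python
# A toy "rule book": maps an input symbol string to an output string
# purely by its shape (here: exact string match). Nothing in the program
# encodes what the strings mean - pure syntax, no semantics.
RULE_BOOK = {
    "你好吗": "我很好",
    "你是谁": "我是一个房间",
}

def chinese_room(input_string):
    """Return an output string by blind lookup in the rule book."""
    return RULE_BOOK.get(input_string, "对不起")  # fallback reply if no rule fits
```

From outside, the input-output behavior may look competent; inside, there is only pattern matching, which is exactly the contrast Searle exploits.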
Strange realizations. The China brain is a putative counter-example to functionalism, due to Ned Block. Here is an updated version I take from Braddon-Mitchell & Jackson (1996, 105-106):
Imagine that AI has advanced to the point where a program can be written which will
allow an android with a “brain” consisting of a computer running the program to behave much as normal humans do by mimicking the operation of a human brain at a
neuron by neuron level. The next step is to note that it won't matter from a functionalist perspective if the computer running this program is outside the android's body,
connected by a two-way radio link to it. The final step gives us the China brain. Suppose that instead of the program being run on an external computer made of silicon
chips, the entire population of China is enlisted to run the simulation. … The android
will behave in the various situations that confront it very much as we do … This is
certainly not a realistic fantasy … All the same, it does seem clearly intelligible, and if
it is intelligible, it is fair to ask for an answer to the question of whether the system
consisting of the robot plus the population of China has mental states like ours.
Comment. Functionally, the system is completely like us. The difference lies in the dramatic
difference in how the functional roles are realized, and that difference counts for nothing as
far as mental nature is concerned, according to functionalism.
Qualia. Let us suppose that some person, say Martine, has inverted qualia. That is, when she looks at a tomato, at a fire engine, or at poppies, she has the experiences we have when looking at cucumbers, zucchinis, frogs, etc. Nevertheless, since she has had this anomaly since birth, she has adapted so as to respond in the right way to questions. If asked what color tomatoes are, she answers: red. Equally, if asked what color cucumbers are, she says: green. ... That is, her color sensations play exactly the same causal role they play in us. She makes exactly the same distinctions, names the things she sees correctly, etc. (If you think of Austen Clark, he would hold, according to his theory, that she sees tomatoes as red and cucumbers as green, wouldn't he?)
Stranger still are so-called (philosophical) zombies. These are fabulous creatures that do not merely have inverted qualia but no qualia at all, although they, too, behave exactly like us. From a functionalist perspective, there is no difference between them and us (see Beckermann 1999, 170).
Now, if creatures with "absent" and "inverted" qualia are possible, how, then, can we assume that functionalism captures the mental? Isn't it essential for at least some of our mental states to be accompanied by certain feelings and qualitative states?
Ansgar Beckermann (1999, 171-174) says on this:
First, it is by no means self-evident that two mental states can have exactly the same causal role although they are connected with different qualia. It is intuitively very implausible to assume that a person behaves exactly like someone who is in severe pain, although her state does not feel painful at all but rather like mild nausea. The qualitative character of sensations is, as a rule, relevant to behavior. Someone who is in a state connected with the specific qualitative character of hunger behaves differently from someone who has the impression of thirst. [Begging the question! Question: How can a qualitative impression be causally effective if it cannot be captured via the causal role?] ... Second, the assumption that there can be mental states that have exactly the same causal role although they are connected with different qualia leads to serious epistemic difficulties. For how are we supposed to establish that someone has a sensation of green and not of red if he behaves in all respects exactly like someone who actually has a sensation of red? Even for the person herself it seems impossible to establish this, for she cannot compare her sensations with those of others. ... Even more implausible than the assumption of inverted qualia, however, is the thesis that there could be philosophical zombies ... If two beings not only behave exactly as is typical of beings that feel pain, but are moreover both convinced in the same way that they feel pain, then not only their behavior but also their introspection speaks for their actually feeling pain. And what, then, could still speak for this not being so? [Again too quick! We have no access to the "introspections" of the other; these, too, may be quasi-introspections, etc.] ... So if there were philosophical zombies, not only would we be unable to recognize that these beings are zombies; they themselves could not find this out. [So we, too, could not be sure that we are not zombies, could we?]
Input-output specification. We saw that functionalists specify mental states by three kinds of clauses, namely input clauses, which say which conditions typically give rise to which mental states; output clauses, which say which mental states typically give rise to which behavioral responses; and interaction clauses, which say how mental states typically interact. However, functionalists were for a long time rather silent about the essential question of what exactly the relevant inputs and outputs are. Prima facie we have to distinguish three possibilities:
•	Inputs are the electro-chemical signals the brain gets from the sense organs, and outputs are the electro-chemical signals the brain sends to the muscles.
•	Inputs are the physical stimuli that are processed by our sense organs, and outputs are the movements of our extremities.
•	Inputs are diverse events and situations in the environment we are in, and outputs are the changes in our environment we cause by our behavior.
Whichever possibility the functionalist chooses, he will face severe problems. The first possibility makes the brain the logical subject of mental states. This conflicts with our common-sense understanding of mental states: we ascribe mental states to persons, not to brains! Furthermore, this position leads to speciesism (or chauvinism) – only creatures that have brains could be in such states. Possibility 2 leads to similar problems for creatures that do not have our kinds of sense organs and extremities. However, even possibility 3 leads to problems. For suppose we ask under what circumstances a specific environmental situation causes a certain mental state, say a belief that x is the case. Obviously, one and the same environmental situation does not cause such a belief under just any condition. E.g., I only believe that 30 students are sitting in front of me if I can perceive you, if it is not dark so that I can count, etc. Here, too, we get problems with creatures that perceive their environments differently from the way we do (think of Uexküll's concept of Umgebung). Their "beliefs" will have different causal roles than ours; thus they are not even beliefs, according to functionalism, or at least they are different beliefs.
References
Aristoteles. Rhetorik. Translated by Franz G. Sieveke. München: Fink Verlag, 1980.
Beckermann, Ansgar (1999) Analytische Einführung in die Philosophie des Geistes. Berlin:
de Gruyter.
Block, Ned (1978) Troubles with Functionalism. Reprinted in: N. Block (ed.) Readings in Philosophy of Psychology. Vol. I. Cambridge, MA: Harvard University Press, 1980, 268-305.
Braddon-Mitchell, David & Frank Jackson (1996) Philosophy of Mind and Cognition. Oxford: Blackwell.
Kim, Jaegwon (1996) Philosophy of Mind. Boulder: Westview Press.
Searle, John (1980) Minds, Brains, and Programs. Behavioral and Brain Sciences 3, 417-424.