next up previous contents
Next: Epistemological problems with the Up: Philosophy Lecture 2 Previous: Ontology (of an algorithm   Contents


Describing mental states

Two ways of describing mental states, or ``internal states of an information processing system'', are in terms of Propositional Attitudes (PAs) and Qualia. PAs describe the intentional nature of the state, while qualia refer to its qualitative content. Intentionality and qualia are what distinguish mental states from states of other functionally defined machines/organisms (Fodor, Sc. Am., Jan. 1981).

I. Propositional Attitudes (PAs)

Psychological states like beliefs, desires, hopes and fears are called ``propositional attitudes'' (PAs).

Are PAs simply commonsense entities, or will they ultimately have a place in a mature cognitive science? (``Gold'' is a commonsense category which also has a place in chemistry. The category of ``precious metals'' is used only in common sense and in economics.)

A problem with PAs is illustrated by the following example: grouping PAs by their content leads to difficulties.

See the Kilroy, Betty and Sam example on p. 334 of Stillings et al.

Kilroy enters the bar

Betty's belief: The world's greatest cognitive scientist has entered the bar.

Sam's belief: The world's greatest goatherd has entered the bar.

Content of belief (as the bartender would testify): Kilroy has entered the bar.

(Is the content of Betty's and Sam's beliefs identical?)

However, the functional role of their PAs is not identical. The same proposition is differently connected to their other beliefs. On any functionalist account the two beliefs are different.

The root of the difficulty is that beliefs are about the real world!

How can we take account of the intentional nature of representations?

Representation of beliefs usually takes one of the following three approaches (1a, 1b and 2):

  1. Individualism (belief = information-processing state)

    1. Methodological solipsism (Putnam, Fodor, Stich)

      • Ignore the external world
      • Autonomy principle
      • Formal operations on formal tokens
      • Assign meaning at the end

    2. Naturalistic individualism (Pylyshyn, Bach, McGinn)

      • Uninterpreted formal description of a physical system is impossible: 1.(a) would not work as a research strategy

      • Identifying the functional state needs data from the world

  2. Non-individualism

    Belief is not a state but a relation of the individual to the world.

1.(a) Solipsism is the metaphysical thesis that nothing exists outside the mind. Methodological solipsism (the AI view) says that we can study mental states and processes without paying any attention to the outside world that they represent. (This recalls Descartes's position that we could have these identical representations even in the absence of the reality - though of course in that case they would be mistaken notions, but how could we tell?) One justification (Stich, 1983) is the autonomy principle, which states that the proper subject matter for cognitive explanation includes only those states and processes that are realised entirely within the physical bounds of the organism.

This is achieved by restricting cognitive science to discussing formal operations on formal tokens. Only after the exercise is completed do we assign meaning to the symbols, and ultimately to the whole system, under some translation scheme.

Thus Betty is processing a formal set of symbols which, when interpreted in the light of Betty's total belief system, translates to: The world's greatest cognitive scientist has entered the bar.

A similar process happens with Sam. Neither of their internal states is connected with Kilroy in any essential way. Two quite different people, having bizarre beliefs about Ronald Reagan, would, on seeing him enter the bar, have brain states identical to Betty's and Sam's.
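The methodologically solipsistic picture can be illustrated with a minimal sketch (all names and the toy translation scheme below are illustrative assumptions, not anything from the source): cognition manipulates uninterpreted tokens, and content is assigned only afterwards, relative to the agent's total belief system.

```python
# Toy illustration of methodological solipsism: internal states are
# uninterpreted token sequences; meaning is assigned only afterwards
# by a translation scheme relative to each agent's belief system.

# Betty's and Sam's internal states are formally identical.
betty_state = ("ENTERS", "K")
sam_state = ("ENTERS", "K")

# Hypothetical translation schemes: each maps the same tokens to
# different content, reflecting the agents' differing beliefs.
betty_scheme = {"K": "the world's greatest cognitive scientist",
                "ENTERS": "has entered the bar"}
sam_scheme = {"K": "the world's greatest goatherd",
              "ENTERS": "has entered the bar"}

def interpret(state, scheme):
    """Assign meaning to a formal state only after processing."""
    predicate, subject = state
    return f"{scheme[subject]} {scheme[predicate]}"

# The formal (internal) states are identical ...
assert betty_state == sam_state
# ... yet under their respective schemes they express different contents.
print(interpret(betty_state, betty_scheme))
print(interpret(sam_state, sam_scheme))
```

The point of the sketch is that nothing inside `betty_state` connects it to Kilroy; the connection lives entirely in the external translation scheme.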

1.(b) Naturalistic individualism (e.g. Pylyshyn) says that some kinds of explanation might be difficult, even impossible, for a methodologically solipsistic cognitive science. Meaningless symbols cannot explain the connection of PAs to stimulus or response: 1.(a) would not work as a research strategy. In strong terms, there is no such thing as an uninterpreted formal description of a physical system.

The second criticism of 1.(a) is that it is not consistent with functionalism. Since the same functional state should be shareable by different physical states, it's not possible to infer the functional state from the physical state. Observation of the organism's interaction with the environment is necessary to get the data that will justify any particular functional interpretation.

2. Non-individualistic conceptions. These take naturalism one step further: they say that belief may be a relation between the organism and the world rather than just a state of the organism. (Situated Cognition)

But the world is big and messy, and it may make sense to focus on the already difficult, albeit circumscribed, domain of individual information processing systems.

In summary, the dilemma with PAs is:

  1. To figure in internal information processing, i.e. to be computationally manipulable (and psychologically plausible), they must be individually construed.

  2. To have content and hence some explanatory power, they must be related to the outside world.

But no practicable alternative to PAs has been found.

Can PAs deal with emotions? An old question asked by philosophers is: what is the relationship between reason and feeling? Typically, in the AI view, emotions are thought to be complexes of simpler mental states like beliefs and desires ... though it is not clear whether that will work.

II. Qualia

A quale (pl. qualia) is the ``felt'' or ``experienced'' character of a mental state. (Is this limited to introspectively accessible phenomena?)

Feelings and experiences vary widely - e.g. I

run my fingers over sandpaper
smell a garbage pile
feel a sharp pain in my finger
seem to see bright purple
become extremely angry

Each of these mental states has a distinctive subjective character.

Which classes of mental states possess Qualia?

  1. Perceptual experiences, e.g. seeing green, hearing loud film music, tasting chocolate, smelling the sea air, touching silk

  2. Bodily sensations, e.g. a twinge of pain, an itch, hunger, stomach ache, feeling hot or dizzy

  3. Passions or emotions, e.g. delight, lust, fear, love, grief, jealousy, regret

  4. Moods, e.g. elation, depression, calm, boredom, tension, misery

Usually PAs are assumed to have no particular qualitative character, or qualia. But consider the experience of understanding a sentence, of suddenly thinking of something, or of suddenly remembering something. Do these have associated sensory experiences or images? Some say no; others, yes.

The main (generally accepted) difference between qualia and PAs is that the latter involve a relation between the subject and the object of the attitude. In contrast, the qualia or qualitative character of perceptual states seem to be a monadic property of the state. Some philosophers hold that functionalism can deal with relational properties but not with monadic ones.

In the functionalist view, qualia should be physically realizable in multiple ways; the underlying hardware is irrelevant. But there are two famous objections to functionalist theories of qualia:

1. The Inverted Spectrum: Suppose you see red wherever I see green and vice versa; likewise for the other colors, so that our color experiences are phenomenally inverted. This may still be acceptable to the functionalist (as we would both classify red roses, red ribbons etc. together, feel angry on seeing my green ... ), but at a finer-grained level there might be differences in our internal functional organization, making our experiences phenomenally different.

A computational analogy: for any two numerical inputs, M and N, a given computer always produces as output the product M × N. A second computer does exactly the same thing but by running a completely different program. At one gross level the machines are functionally identical, but at lower levels they might be functionally different. (This would be a problem for all explanations at a mentalistic level.)
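The analogy can be made concrete with a small sketch (the particular algorithms are illustrative choices, not from the source): two procedures computing the same input-output function M × N by entirely different internal means.

```python
# Two "machines" computing the same product M * N by different programs.

def multiply_direct(m, n):
    """Compute the product in a single primitive multiplication."""
    return m * n

def multiply_by_addition(m, n):
    """Compute the product by repeated addition - a different
    internal program with the same input-output behavior."""
    total = 0
    for _ in range(abs(n)):
        total += m
    return total if n >= 0 else -total

# At the gross (input-output) level the two are functionally identical ...
for m, n in [(3, 4), (7, -2), (0, 9)]:
    assert multiply_direct(m, n) == multiply_by_addition(m, n)
# ... but at a finer grain their internal organization differs:
# one performs a single operation, the other performs |n| additions.
```

The inverted-spectrum worry is the analogous claim about minds: sameness of coarse functional role does not guarantee sameness at every finer functional grain.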

2. The Absent Qualia Hypothesis: Functional duplicates of sentient creatures are possible which entirely lack qualia. Block's example: one billion Chinese people (the same as the number of neurons in our brain) are each given a two-way radio with which to communicate with one another and with an artificial (brainless) body controlled by the signals; the signals themselves are made in accordance with instructions received from a vast display in the sky which is visible to all of them. Does this system undergo qualia? Some reply yes.

But even if so, it would not show that individual qualia are functional in nature - i.e. inverted spectra may still be possible.

