Theory of Mind
A Model of Mental-state Attribution
(under construction November 15, 1997) 

Introduction

How do we understand each other? Although we are rarely explicitly aware of it, we utilize notions of invisible, intangible, and yet pragmatically very useful entities such as intentions, desires, beliefs, and knowledge to make human and animal behavior comprehensible and predictable. So automatic are these processes of inference and attribution that it is not until something goes wrong that their extraordinary characteristics become salient and present themselves to our awareness. Psychologists Alan Leslie and Simon Baron-Cohen began to note that their autistic patients appeared to lack the ability to understand other minds. This appeared to be a domain-specific deficit, in that there were instances--the most celebrated being that of Temple Grandin--of highly intelligent individuals who found human behavior incomprehensible and mysterious. The deficits appear to be correlated with organic damage to specific regions of the brain.

Working with normal children, Leslie and Baron-Cohen have been able to specify how infants develop the ability to model other minds in a series of stages. This staged account makes it possible to understand in greater detail what autistic individuals are specifically unable to do, and in some cases to help them compensate for their disabilities. The emerging conception of a series of Theory of Mind Mechanisms, or ToMM, however, is also of great interest for the understanding of normal human psychology. Tongue in cheek, we might say that these conceptions render the work of the humanities scientifically respectable. But let us first turn to the details.
 

Self-propulsion and goal-directedness

Leslie operates with the notion of an agent as a basic conception. In the mind, this may function as a conceptual primitive--that is, as something that is not decomposed into simpler elements. What are the properties that are utilized to identify something as an agent? Leslie suggests that agents are distinguished from non-agents on the basis of the criterion of self-propulsion. A variety of perceptual cues are utilized to infer self-propulsion; most obviously, self-propelled objects initiate and sustain motion independently of external impacts (cf. Spelke on the perception of objects). In addition, self-propelled objects have trajectories that differ from those of other objects; for instance, they change direction suddenly, and before they encounter another object. This leads to a second property: agents move in a goal-directed fashion. They actively avoid bumping into certain objects, and actively seek out others.

Drawing on Rosch's work on categorization, we may suggest that a rock is a prototypical non-agent object, while water is a peripheral member of that category, and wind is a borderline case. We find it much easier to attribute agency to wind than to a rock, since wind supplies one of the properties utilized to identify agents, that of self-propulsion. However, wind may be prevented from being classified as an agent by the use of the criterion of goal-directedness. A prototypical agent is an animal, which is both self-propelled and goal-directed. The trajectory made by a fly is full of cues from which we infer that the fly is alive, self-propelled, and goal-directed.
 

Intentionality

Baron-Cohen starts at a slightly more complex level, with the notion of intentionality. There appears to be a cumulative increase in complexity in these three types of inferences:

  1. self-propulsion (with its own set of perceptual cues)
  2. goal-directedness (which presupposes self-propulsion, but adds to it)
  3. intentionality (which models a goal-directedness in terms of a hidden, invisible state)
Intentionality, then, may be primarily inferred on the basis of motion, but it introduces a mental-state term. It may be objected that the notion of a goal, or even that of self-propulsion, already presupposes some degree of interiority. However, it seems possible to consider objects along these dimensions purely on the basis of external criteria. Thus, we might say of a robot that it is self-propelled and goal-directed, without feeling the need to attribute intentionality to it.
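
The cumulative structure of these three inferences can be made concrete in a small sketch (Python; the cue names and the reduction of each level to a simple test are my own illustrative assumptions, not Leslie's or Baron-Cohen's notation):

    from dataclasses import dataclass

    @dataclass
    class MotionObservation:
        moves_without_external_impact: bool     # initiates and sustains its own motion
        changes_direction_before_contact: bool  # alters course before meeting an object
        seeks_or_avoids_objects: bool           # trajectory biased toward or away from things

    def is_self_propelled(obs):
        # Level 1: purely external criteria (cf. Spelke on object perception).
        return obs.moves_without_external_impact

    def is_goal_directed(obs):
        # Level 2: presupposes self-propulsion and adds trajectory cues.
        return is_self_propelled(obs) and (
            obs.changes_direction_before_contact or obs.seeks_or_avoids_objects)

    def attribute_intentionality(obs):
        # Level 3: no new perceptual test is added here; what is added is a
        # conceptual move--redescribing the goal-directedness in terms of a
        # hidden, invisible state. A robot can pass levels 1 and 2 while we
        # still withhold this attribution.
        return is_goal_directed(obs)

The sketch marks the crucial asymmetry: the third level adds a conceptual move rather than a new perceptual test.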

An intention can be categorized along an intensity dimension--from "carrying out routine but generally goal-directed activities" (say, a fox walking leisurely along a path after a meal) to "desperately wanting to achieve a certain goal" (say, a fox trying to pull itself loose from a trap).
 

Eye-direction detection

One cue that can be utilized to infer what another is likely to do is eye direction; Baron-Cohen posits a dedicated eye-direction detector (EDD). He found that infants are very sensitive to eye direction, particularly for distinguishing between eyes that are looking at them and eyes that are looking away. This appears to be the simplest stage of the mechanism: am I being observed? Let us dub this the eye-contact detection mechanism.

It may be possible for eye-direction detection to operate without making the conceptual move of attributing intentional states. Let us imagine a robot with video-camera eyes, such as Rodney Brooks' Cog at MIT, but mobile. We might then use observations about the movements of the cameras--what the robot is 'looking at'--to predict what it is going to do next. This requires making trigonometric calculations about the distance between you and the robot, and the angle between your line of vision and its line of vision. You can then scan for objects of potential interest to the robot along an imaginary line superimposed onto your own field of vision. If the computer's algorithms for responding to visual cues are relatively transparent, you could then use these eye-direction calculations to predict its behavior without attributing intentional states to it.
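
A rough sketch of this non-mentalistic calculation, flattened to two dimensions, might look as follows; the function name, the scene representation, and the tolerance value are all illustrative assumptions:

    import math

    def objects_in_gaze(robot_pos, gaze_angle, objects, tolerance=0.5):
        """Return the objects lying near the robot's line of sight,
        nearest first. Positions are (x, y) pairs; gaze_angle is in radians."""
        gx, gy = math.cos(gaze_angle), math.sin(gaze_angle)  # unit gaze vector
        hits = []
        for name, (ox, oy) in objects.items():
            dx, dy = ox - robot_pos[0], oy - robot_pos[1]    # vector robot -> object
            along = dx * gx + dy * gy                        # distance along the ray
            if along <= 0:
                continue                                     # object is behind the robot
            off_ray = abs(dx * gy - dy * gx)                 # distance off the ray
            if off_ray <= tolerance:
                hits.append((along, name))
        return [name for _, name in sorted(hits)]

    # e.g. objects_in_gaze((0, 0), 0.0, {"ball": (3, 0.2), "cup": (2, 4)}) -> ["ball"]

Nothing in this calculation requires attributing an inner state to the robot: it is geometry over observable positions and camera headings.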

In practice, however, people find themselves responding to the robot's 'eyes' in a much more complex manner, focusing first of all on whether they are themselves being observed. The computationally rather simple operation of eye-contact detection appears to trigger a variety of responses in a very powerful way. Human infants down to the age of six months show a galvanic skin reaction when they notice someone is looking at them (Baron-Cohen 1995). Brooks reported in a talk at the Society for Literature and Science in Pittsburgh in November 1997 that a woman who knew quite a bit about Cog's construction found herself involuntarily flattered that the robot followed her with its (his?) camera eyes as she crossed the room. In another situation, she might have felt threatened. Her wholly involuntary reaction suggests a relatively informationally encapsulated response system.

In an ecological setting, in the case of both prey and predator animals, it would clearly be a useful adaptation to distinguish between when you have been spotted and when you have not. The simplest way of accomplishing this would not need to rely on the relatively complex trigonometric calculations that the EDD relies on: it would be enough, say, to detect the circular iris of the eye. But would this be useful without the further attribution of an intentional state? Well, lions don't always hunt; sometimes--in fact, most of the time--they just lie around. If a wildebeest, let's say, makes eye-contact with a lion in the heat of the day, it might not mean much. A mechanism that triggered a flight response at every eye-contact with a predator would lead to unnecessary expenditure of energy and lost grazing opportunities, and thus be selected against.

Now, it might be possible to make the eye-contact detection sensitive to a large number of relevant variables; for instance, there might be a parameter sensitive to the time of day, or to the movement of the predator. Such a parameter-based system might behave very much as if the wildebeest were attributing intentional states to the lion. In fact, one way to think about an intentionality-attribution model is that it serves as a representational housekeeping system for keeping track of such open variables. The empirical question here is how to tease apart these two functional designs: is the wildebeest, say, responding directly to a set of perceptual cues, or is it formulating a representation of the inner states of the lion to keep track of these cues, and acting on the basis of that representation? It would be only in the latter case that we would say it has a theory of mind.
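
The contrast can be put schematically; the cues, weights, and thresholds below are invented for illustration, and the point is the architectural difference, not the particular numbers:

    def flee_parameter_based(eye_contact, predator_moving, midday):
        # Design 1: respond directly to perceptual cues through tuned parameters.
        score = ((0.5 if eye_contact else 0.0)
                 + (0.4 if predator_moving else 0.0)
                 - (0.3 if midday else 0.0))  # lions mostly lie around in the heat
        return score > 0.6

    def flee_representational(eye_contact, predator_moving, midday):
        # Design 2: first formulate a representation of the lion's inner state,
        # then act on that representation--a theory of mind, however minimal.
        lion = {"has_noticed_me": eye_contact,
                "is_hunting": predator_moving and not midday}
        return lion["has_noticed_me"] and lion["is_hunting"]

From the outside, the two designs can produce nearly identical behavior, which is precisely what makes the question empirically difficult.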

Even such a theory of mind would be very elementary, modeling only a simple version of intentionality. However, once there is a theory of mind, this creates an opportunity for more complex mappings. Take, for instance, the example of motivation. It can be very informative to have a cognitive category that represents what is in the other's interest. Thus, children find it illuminating to note not merely that "the fox is chasing the chicken," but "the fox is trying to catch the chicken because it wants to eat." We may be tempted to assume that the chicken is also making this inference--that it is being chased because the fox is hungry and wants to eat it, and that it need not run if the fox is not hungry. However, the chicken's problem may be solvable without the use of such complex representational solutions. It may be enough to have a system of parameters that responds to cues such as eye-contact detection and the motion of the predator.

There is, however, a middle possibility between a parameter-based response system and a theory of mind: that of emotional communication.

 
Emotional communication
 
"Le coeur a ses raison que la raison ne connait pas," Blaise Pascal wrote, and emotions do indeed have logic of their own. The communication of emotional states appears to be largely involuntary; however, its functional complexity suggests adaptive design. How could the automatic betrayal of one's inner states be favored by natural selection? Or is emotional displays a form of cheating?

My suggestion is that emotions arise in situations where there is a cooperative potential. This raises the interesting question of what kinds of cues are utilized to judge whether the situation is potentially cooperative in some way, and whether the expressions of emotions are calibrated to the degree of perceived cooperative potential. This hypothesis also carries the inverse prediction: that in situations where there is no perceived possibility of cooperation, there will be no emotion.

Emotion, then, is proposed to perform the function of leaking information about the intentional states of the organism. There must be a large amount of honesty in this leakage; otherwise it would not pay to develop mechanisms for interpreting the emotions. The honesty need not, of course, be perfect; a situation of generally honest emotional communication creates an opportunity for very lucrative cheating. On average, however, in evolutionary history, emotional expressions must have been honest signals of inner states, and we might even look for mechanisms that respond negatively to cheaters--an emotional cheater-detection system.
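
A toy calculation can illustrate why: with invented payoff values, a receiver that trusts the anger display only does better than one that ignores it when most displays are honest:

    def expected_payoff(strategy, honesty_rate):
        """Expected payoff per encounter for a receiver who either 'trusts'
        an anger display (backs down when it is shown) or 'ignores' it.
        Toy payoffs: fighting a strong opponent costs 1, beating a weak one gains 2."""
        payoff = 0.0
        for strong, p_strong in [(True, 0.5), (False, 0.5)]:
            for honest, p_honest in [(True, honesty_rate), (False, 1 - honesty_rate)]:
                signals_anger = strong if honest else True   # cheaters always bluff
                fights = not (strategy == "trust" and signals_anger)
                outcome = (-1 if strong else 2) if fights else 0
                payoff += p_strong * p_honest * outcome
        return payoff

    # expected_payoff("ignore", h) == 0.5 for any h;
    # expected_payoff("trust", h) == h, so trusting pays only when h > 0.5.

On this toy accounting, mechanisms for interpreting emotional displays are only worth evolving when honest signaling predominates, which is the sense of the claim above.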

Now, what are some situations where animals might benefit from a cooperative solution? The primary one is perhaps dominance hierarchies, a form of conspecific competition for resources. Struggles for dominance are generally handled without inflicting serious damage on the participants; contests are methods of measuring relative strength rather than actual combat. Once dominance has been settled, it may be to everyone's advantage to settle minor challenges through a system of signaling rather than through continual fighting. This is a promising arena for the evolution of emotional states. In addition to dominance, territoriality is a promising candidate for potential cooperation. Again, issues of territoriality are frequently settled without the loss of life.

The key suggestion here is that there is a range of problems that animals face that lend themselves to non-fatal solutions. In these situations, a more powerful animal may communicate that power through its anger, and the less powerful may similarly communicate its inferiority through the faithful communication of fear. The assumption in both acts of communication is that the other animal will respond in an appropriate manner: to anger by backing down, to fear by not killing the opponent, who is now no longer perceived to be a serious threat. Emotion permits conviviality.

So how does emotional communication work? What kinds of representational processes are required? It is not clear that emotions require the representation of mental states; rather, it appears more plausible that the emotional system really is a relatively independent system, one that advertises the organism's evaluation of its own strengths. It is able to respond to others' similar communication not by representing it, but by directly picking up on the emotion. Thus, emotion may be thought of as a way of communicating through analogy. When you are angry at me, I feel your anger directly; I don't need to form a representation of you as being angry. That anger enters into my assessment of my own relative strength, and helps me make a decision about how to act in relation to you. All of this, I suggest, can take place without conscious representational activity.

This account of emotional communication suggests that the basic phenomenon is one of emotional contagion. The issue is inadequately explored in the literature, but it seems likely that emotional contagion is a way of solving the problem of understanding others without needing to have recourse to sophisticated mental simulations. In emotional contagion, the inner phenomenology of one animal--and it seems to me highly implausible to deny that animals have emotions--is communicated to another, not through the mechanism of a representational model, as in ToMM, but by calling up the same phenomenology in the other animal.
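
The contrast between contagion and a representational reading can be sketched as follows; the classes and the numeric values are illustrative assumptions only:

    class ContagionAnimal:
        """The other's anger is not represented; it is directly absorbed
        into the animal's own emotional state."""
        def __init__(self):
            self.fear = 0.0
        def perceive_anger(self, intensity):
            self.fear += 0.8 * intensity    # direct, analog pick-up
        def backs_down(self):
            return self.fear > 0.5

    class ToMMAnimal:
        """The other's anger is held apart as a decoupled representation,
        distinct from the animal's own emotional state."""
        def __init__(self):
            self.fear = 0.0
            self.model_of_other = {}
        def perceive_anger(self, intensity):
            self.model_of_other["anger"] = intensity   # decoupled: not my emotion
        def backs_down(self):
            return self.model_of_other.get("anger", 0.0) > 0.5

The behavioral output can be identical; the difference lies in whether the other's emotion is absorbed into one's own state or held apart as a decoupled representation--which is exactly the difficulty raised below.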

There is some evidence that this mode of communication is very old; for instance, humans and dogs are able to understand each other's emotions to a fairly high degree, and human infants show evidence of a capacity for emotional contagion from the age of about six months (cf. the Developmental Timeline). In the case of infants, the contagion is initially so complete that the individual is unable to distinguish between emotions that originate from its own situation and emotions that originate in another person. This effect does not appear to be fully accounted for by the preceding discussion, which dealt with emotion as a way of limiting the severity of conflict. In the case of infants, emotions may have evolved in an environment with a much higher degree of cooperation.

Emotional communication may form part of a powerful analog system of communication, including gestures and body language, which evolved alongside representational thinking. However, the claim that emotional contagion can operate without a representational component raises some interesting questions. For instance, does an individual not need to be able to keep two emotional responses going at the same time: the other's and its own? Do we not need to decouple the emotions of others from our own primary emotions?

This view of emotions may be missing the essential features of the system. While it may be possible, given a representational capacity, to trace an emotion back to another, the emotional system itself does not appear to be making such a distinction--an observation which, if substantiated, says something remarkable about emotions. For the moment, I will stick to the suggestion that the exchange of emotions should be analyzed as an economy independent of decoupled thinking.

For an extensive discussion of the evolution of decoupled thinking, see The Time of Unrememberable Being. The key suggestion is that an ability to read minds requires the evolution of a reality-monitoring system that enables the mind to entertain hypothetical scenarios.
 

Theory of Mind

A full-fledged theory of mind, then, requires a representational system. This permits the representational mapping of others' emotional states in a manner that is different from picking up their emotions directly. For instance, an intention can be mapped onto a representational emotional topology, going from "the fox is chasing the chicken" (goal-directed) through "the fox is trying to catch the chicken" (intentionality) through "the fox wants to eat the chicken" (motivational) to "the fox is chasing the chicken and trying to catch it because it is hungry and wants to eat it" (emotional). Similarly for the chicken: it is running (goal-directed) away from the fox (intentionality) because it is afraid (emotional) of being eaten (motivational).
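
The layering can be displayed at a glance as a simple data structure (the level labels follow the text; the arrangement is my own):

    # The layered attribution applied to the fox-and-chicken example.
    # Each level presupposes the ones above it.

    fox = {
        "goal_directed": "the fox is chasing the chicken",
        "intentional":   "the fox is trying to catch the chicken",
        "motivational":  "the fox wants to eat the chicken",
        "emotional":     "the fox is hungry",
    }

    chicken = {
        "goal_directed": "the chicken is running away",
        "intentional":   "it is running away from the fox",
        "emotional":     "it is afraid",
        "motivational":  "of being eaten",
    }

    # A full-fledged theory of mind adds an epistemic level on top:
    # "the chicken knows the fox is trying to catch it."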

Such motivational and emotional attributions may precede the attribution of epistemic states to others, which is the hallmark of a Theory of Mind. A brief description of this can be found in the relevant section of Conceptual Adaptations. (I will return to the issue here when I have a chance.)

This discussion leaves several interesting questions: Doesn't it seem likely that an attribution of motivational and emotional states precedes the child's notion that the chicken knows the fox is trying to catch it? At what point do children in fact begin to make attributions of motivational and emotional states to others?
 
 

Further reading

Alan Leslie's work on domain-specific learning and core cognitive architecture keeps a foot in the philosophical tradition, in cognitive science, and in experimental psychology; to top it off, he also explicitly places his findings in an evolutionary context. His ToMM, ToBy, and Agency: Core architecture and domain specificity (1994) is the most recent overview of his work.

Simon Baron-Cohen's Mindblindness (1995) provides a fascinating introduction to the issue, and is warmly recommended.

 

© 1997 Francis F. Steen, Communication Studies, University of California, Los Angeles