The Evolutionary Basis of Religion and Consciousness
Daniel Dennett has proposed what he calls the intentional stance: when we interact with other people or animals (and sometimes things), we act as if there’s a mind there that intends to behave in a certain way. If confronted with an angry dog, we behave as if the dog is an agent that intends to do us harm or to chase us off its property, rather than, say, as if it were a machine for barking.
We humans are good at this. In fact, one of the things our minds are very good at is modeling other minds. It’s easy to see why this would have arisen: it’s very useful to be able to predict how elements of one’s environment are going to behave, whether those elements are bricks, trees, tigers, or other people. Animals, whether predators, prey, domestic animals, or companions, often behave as if they have a mind that wants things, pursues goals, and avoids harm. This is even more true of people. So being able to predict how a herd of antelope will react to a sudden noise, or how a woman will react to a gift, provides an evolutionary advantage, and would have been selected for.
But in order to make these predictions, we need a mental model of the agent whose behavior we want to predict. And since each of us has a mind capable of making decisions, that mind can be coopted to run simulations of what other minds might think and do.
This implies some kind of self-reference: a mind that uses itself as a model of how other minds work can observe itself at work. It seems plausible that such self-reference is an important component of consciousness. If I may quote from Greg Egan’s novel Diaspora (the characters are sort-of humans simulated in a computer; the viewpoint character, the orphan, was generated more or less at random by the computer):
The orphan hesitated. “I don’t know what Inoshiro thinks.”
The symbols for the four citizens shifted into a configuration they’d tried a thousand times before: the fourth citizen, Yatima, set apart from the rest, singled out as unique—this time, as the only one whose thoughts the orphan could know with certainty. And as the symbol network hunted for better ways to express this knowledge, circuitous connections began to tighten, redundant links began to dissolve.
There was no difference between the model of Yatima’s beliefs about the other citizens, buried inside the symbol for Yatima… and the models of the other citizens themselves, inside their respective symbols. The network finally recognized this, and began to discard the unnecessary intermediate stages. The model for Yatima’s beliefs became the whole, wider network of the orphan’s symbolic knowledge.
And the model of Yatima’s beliefs about Yatima’s mind became the whole model of Yatima’s mind: not a tiny duplicate, or a crude summary, just a tight bundle of connections looping back out to the thing itself.
The orphan’s stream of consciousness surged through the new connections, momentarily unstable with feedback: I think that Yatima thinks that I think that Yatima thinks…
Then the symbol network identified the last redundancies, cut a few internal links, and the infinite regress collapsed into a simple, stable resonance:
I am thinking—
I am thinking that I know what I’m thinking.
Yatima said, “I know what I’m thinking.”
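The collapse Egan describes, where an infinite tower of models gets replaced by a link back to the thing itself, can be sketched in a few lines of code. Here is a toy illustration in Python; everything in it (the Mind class, the toy policies, the idea that behavior is just a function from situations to actions) is my own hypothetical cartoon, not anything from Egan or from cognitive science:

```python
# Toy sketch: a mind that models other minds, including itself.
# All names and policies here are hypothetical illustrations.

class Mind:
    def __init__(self, policy):
        self.policy = policy   # how this mind maps situations to actions
        self.models = {}       # this mind's models of other minds

    def act(self, situation):
        return self.policy(situation)

    def predict(self, whom, situation):
        # Reuse our own decision machinery to simulate another mind:
        # instead of watching the world, run our model of them.
        return self.models[whom].act(situation)


def timid(situation):
    return "flee" if situation == "loud noise" else "graze"


def social(situation):
    return "wave" if situation == "stranger" else "chat"


me = Mind(social)

# A crude copy serves as the model of another agent's mind.
me.models["antelope"] = Mind(timid)

# The naive self-model would be another copy, holding its own copy of
# "me", holding another copy... (the "I think that Yatima thinks that
# I think..." regress). Pointing the self-model back at the mind
# itself replaces the infinite tower with a single loop.
me.models["me"] = me

print(me.predict("antelope", "loud noise"))  # flee
print(me.predict("me", "stranger"))          # wave: self-reference, no regress
```

The line `me.models["me"] = me` is the whole trick: instead of a nested duplicate that would need its own nested duplicate, the self-model is a reference that loops back to the mind itself, so the regress terminates in a single cycle, much like the “tight bundle of connections looping back out to the thing itself” in the passage above.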
There are two categories of mind that we need to model: those of humans, and all others. (I’ll ignore the question of whether anything other than humans actually has a mind; I’m working under the assumption that it’s useful to behave as if some things do.) I suspect that the evolutionary pressure to develop an efficient and accurate model of human minds was greater than the pressure to come up with a good model of nonhuman minds. After all, our ancestors were probably mostly concerned with whether animals were predators, prey, or irrelevant. But other humans can be our mates, rivals for mates, allies in the hunt, suckers to be exploited, and much else. Not only that, but evolutionary competition is always fiercer between members of the same species than across species.
So a mind that could model other human minds by observing itself would have had a distinct advantage over minds that used heuristics learned by observing the external behavior of agents with minds. Perhaps consciousness arose this way.
But note what this can easily lead to: when we look in a mirror, we see another person in mirror-land, similar to the other people around us. Likewise, when we observe ourselves through the mental loopback mechanism, we see a mind similar to the ones we perceive in other beings. That is, we perceive an observer inside our heads. This is the Cartesian theater, a sort of room in our heads where a being receives input from our senses and directs the body in response.
As far as I know, this is the illusion most of us have about our own thinking: it doesn’t feel as if “mind” is the product of the normal operation of our brain, in the same way that “business” is the product of the normal operation of an office. Rather, it feels as though our consciousness is an actual thing inside us, directing our body like a tiny man inside a giant robot.
From there, it is but a small step to astral travel, body-swapping (à la Freaky Friday), and souls that can survive death. Richard Dawkins talks about some of this in Chapter 5 of The God Delusion.
So it seems possible that religion is, in fact, an accidental byproduct of the mind-modeling machinery we evolved to make sure hubby wasn’t sleeping around, and that people who had slacked off during the hunt didn’t get a big portion of the food.
The next big question is, how can these ideas be tested? I don’t have many answers, but can suggest some avenues to explore. For starters, we can test whether other animals model other beings’ minds, both of their own species and of other species. We know, for instance, that chimpanzees take into account what other chimpanzees are likely to do in a given situation. I don’t know what research has been done to see whether chimps have different mental models for chimps and for non-chimp animals. We also know that they can recognize themselves in a mirror (and other self-observation devices), which suggests that perhaps they have yet another mental model for “me”.
I also don’t know whether similar research has been done on other animals. It seems that the ability to model prey would be very useful in predators, and the ability to model other members of one’s species would be very useful in any social species. Of course, it would also seem that the ability to model a predator would be useful in prey animals, but as a rule herbivores don’t seem to be as intelligent as carnivores. Perhaps fancy defensive tactics aren’t nearly as useful as just running away very quickly (or at least faster than the guy next to you), or just breeding fast enough that some offspring are bound to survive to adulthood.
Another possible approach is to study people with mental abnormalities. People with Asperger syndrome (and probably other forms of autism), for instance, have trouble recognizing body language and have difficulty with social conventions. This suggests that something’s wrong with the way they model other people’s minds. But do they also have trouble observing their own minds? Do they have unusual difficulty with the word “I”?
Temple Grandin, perhaps the world’s best-known autistic person, says she has difficulty understanding other people, but not cattle. This suggests that the mechanism for modeling other humans is distinct from the one used for modeling other agents. Or perhaps the mechanism for modeling humans used elements coopted from an existing mechanism for modeling other minds. Research into animal psychology might show which of these evolved first, and was coopted by the other.
It might also be possible to combine these approaches. I don’t know whether there are autistic chimpanzees, but if there are, and the cause of the autism can be identified, it might be enlightening to see how chimp autism compares to human autism. (Of course, it’s likely that only a small percentage of chimps are autistic, so it would be good to increase their numbers: not only would we then have more test subjects, there would be more chimps, period, and that would be a Good Thing.)
Of course, it’s likely that a lot of these questions have already been answered, and I’m asking them only because I’m unfamiliar with the research literature. But I think this is an interesting conjecture.