This Article Is About Self-Reference and Complexity (but This Title Isn’t)

I’m reading Douglas Hofstadter’s I Am a Strange Loop, and there’s something that doesn’t sit well with me.

In Chapter 4, he discusses his fascination with self-reference and feedback loops of all kinds. He talks about the operation of a toilet, in which water enters the tank, which raises the floater, which in turn cuts off the intake valve. The toilet can be said to “want” to be full. He then asks,

Why does this move to a goal-oriented — that is, teleological — shorthand seem appealing to us for a system endowed with feedback, but not so appealing for a less structured system?

(Italics in the original.)

He seems to be saying that feedback ⇔ teleology (or intentionality, which is what I think Dennett and other philosophers might call it). In this, I think he’s wrong, though in an interesting way.

There are counterexamples on both sides. Hofstadter himself talks about audio feedback later in the chapter. That system clearly has feedback, but I don’t see how it invites intentionality: I can’t imagine anyone describing the PA system as “wanting” to do anything.
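Here’s a toy way of seeing the difference (the loop gain, clipping level, and function name below are all invented for illustration): the PA squeal is positive feedback, so each pass through the microphone-to-amplifier-to-speaker loop just multiplies the signal until the amp clips. There’s no set point it’s homing in on.

```python
# Toy model of the PA squeal: each pass through the loop multiplies the
# signal, so it grows until the amplifier clips. There is no level the
# system is "trying" to reach. All numbers are arbitrary illustration values.

def audio_feedback(initial_level=0.01, loop_gain=1.3, clip=1.0, passes=20):
    level = initial_level
    history = [level]
    for _ in range(passes):
        level = min(level * loop_gain, clip)  # amplifier saturates at `clip`
        history.append(level)
    return history

print(audio_feedback())  # runaway squeal, then flat-out clipping
```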

On the other hand, one of the most famous examples of intentional language is the saying that “water seeks its own level”, meaning in effect that water “wants” to be flat, and at the lowest point possible, even though there’s no feedback loop that I can see.

Similarly, I remember a science class (elementary school, I think) in which the teacher explained how electrons fill orbital shells by comparing them to spectators at a circus: the electrons want to be as close to the ring (the nucleus) as possible, but don’t want to sit next to anyone else; when forced to choose, though, they’ll sit next to someone if it means they get to be closer. This is a simple system with no feedback that I can see, yet it is well explained by appealing to teleology or intentionality.
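That seating rule is simple enough to write down as a toy procedure. This is just a sketch of the circus analogy, not real chemistry: it ignores subshells and the true energy ordering, and the function name and shell count are made up (only the 2n² shell capacities are the standard ones).

```python
# Circus-seating toy model of electron shells: take the innermost shell
# first, and within a shell take an empty seat (orbital) before doubling up.
# Shell n holds n**2 orbitals of 2 electrons each (the 2n**2 rule);
# subshells and real energy ordering are deliberately ignored.

def seat_electrons(n_electrons, n_shells=4):
    shells = [[0] * (n + 1) ** 2 for n in range(n_shells)]
    for _ in range(n_electrons):
        placed = False
        for shell in shells:              # prefer being close to the ring (nucleus)
            for occupancy in (0, 1):      # prefer an empty seat, then pair up
                for i, seats in enumerate(shell):
                    if seats == occupancy:
                        shell[i] += 1
                        placed = True
                        break
                if placed:
                    break
            if placed:
                break
    return shells

print(seat_electrons(11))  # 11 electrons seat themselves 2-8-1, like sodium
```

There’s no feedback anywhere in that loop, yet the “wants” phrasing describes what it does perfectly well.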

Still, Hofstadter is probably right, at least for the most part: if a system uses feedback to reach a state (or “goal”), then a teleological explanation is appealing; if a system reaches a state without feedback, then we are less likely to use such explanations.

But the way he puts it suggests that our brains have feedback detectors, which seems wrong. So Hofstadter is right, but for the wrong reason.

Dawkins and Dennett have both hypothesized that humans have “agency detectors”, i.e., we’re good at recognizing things caused by beings who strive toward a certain goal. If we pile up firewood for the winter, and the next morning some of it is missing, we’ll suspect a thief, even if we haven’t seen any suspicious-looking prowlers.

In fact, according to Dennett and Dawkins, we have hyperactive agency detectors (HADDs), which are prone to detecting agency even when there isn’t any. The penalty for seeing an intentional agent when there isn’t one is embarrassment or superstition, while the penalty for not noticing an intentional agent can be to become lunch for that agent. Thus, there is selective pressure for an agency detector that errs on the side of caution, and therefore sees minds where there aren’t any.

This also explains why we like teleological explanations: we’re already accustomed to thinking in terms of agents striving for goals, so saying that a toilet tank “wants” to be full, or that electrons “want” to be close to the nucleus and “want” to sit apart from each other, allows us to easily construct a working mental model of how a system behaves.

But why should teleological explanations be more appealing for systems with feedback than for systems without? In part, it may be because negative feedback can produce a basin of attraction (there’s a toy sketch of this below): a toilet will tend to fill itself up whether it has a large or small tank, whether it’s half-empty or completely empty, whether it’s hooked up to municipal water or to a beer barrel. But mainly, I think, it’s because feedback is a good mechanism for generating a lot of complexity cheaply.
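Here’s that sketch (the tank sizes, flow rate, and function name are invented): the same shut-off-when-full rule drives any tank, starting from any level, to its own full mark, which is exactly the sort of behavior that tempts us to say the tank “wants” to be full.

```python
# Toy float-valve loop: the intake stays open while the float is below the
# shutoff level, so every tank ends up full regardless of its size or of
# how much water it starts with. All numbers are arbitrary illustration values.

def fill_tank(level, capacity, flow_per_step=1.0):
    steps = 0
    while level < capacity:                  # float below shutoff => valve open
        level = min(level + flow_per_step, capacity)
        steps += 1
    return level, steps                      # valve closed: the tank got "what it wanted"

for start, capacity in [(0.0, 6.0), (3.0, 6.0), (0.0, 12.0)]:
    print(fill_tank(start, capacity))        # each tank converges on its full mark
```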

In the environment in which we evolved, the real intentional agents that matter — predators who want to kill us, friends who can help us hunt, rivals for sex or status or whatever — exhibit complex behavior, the kind of behavior not exhibited by rocks or rivers or poisonous plants. Unfortunately, I can’t give a rigorous definition of what I mean by “complex” here, though I hope the reader shares my intuitive notions. This is behavior that is not adequately captured by simple linear models such as “rocks fall down in a straight line”. A hunter who assumed that antelope come to the watering hole at dawn and just stay there for several minutes no matter what would be a very poor hunter indeed, and would learn the error of his ways the first time he revealed himself and spooked his prey.

I suggest, then, that what we detect is not feedback per se, but complex behavior. The reason systems with feedback usually invite teleological explanations is that they tend to exhibit complex behavior; intentional agents also exhibit complex behavior; therefore, it is natural to explain feedback systems in terms of intentional agents.

Of course, there’s another connection: as I mentioned before, feedback can quickly generate a lot of complexity, so it’s natural that evolution should have used it a lot. A predator that follows its prey will be more successful than one that ambles about until it randomly stumbles upon something edible. Prey will live longer if it actively avoids its predator. But then a predator that anticipates how prey will try to avoid it, and moves to counter that, will be more successful still, and so forth. A woman who knows how men behave will be less likely to get pregnant by some creep, while a man who can figure out how women behave (including understanding what men want) will be more likely to get laid.

Thus, the intentional agents that were important during human evolution are feedback systems, selected by evolution (which in turn is another complex feedback system, but that’s beside the point). So in effect, we did evolve to recognize certain types of feedback systems, but that’s not the same as saying that we recognize feedback itself.

One thought on “This Article Is About Self-Reference and Complexity (but This Title Isn’t)”

  1. You make a very good point about feedback systems overall and hyperactive feedback systems in particular. It makes me wonder, if blue whales evolved enough intelligence so that they formed societies, do you think they’d have religion? They don’t really have any natural predators, so I’m thinking they would have less of a selection pressure for hyperactive agent detectors than we humans do.

  2. Cyde Weys:
    They may not have predators, but if they live in societies, then they have rivals. I don’t know enough about blue whales to make an informed comment, but I suspect that if “Hey! That guy is trying to steal my girlfriend!” or “I’d better get to that piece of food before the other guy does” are useful concepts, then they’ll want agency detectors.

    And, of course, there are the giant squid, which might qualify as predators.

    Oh, and intelligence isn’t required to form societies, as I’m sure you’re aware. I’m not sure what you meant to say. Story-telling societies, perhaps?

    Dunno about religion, though. There are all sorts of complicating factors, including the fact that whales don’t have hands or fire, which severely limits their use of technology, which puts them at a disadvantage when building a society with enough leisure time for the upper classes to become priests.
