Chapter 6: The lump under the rug
In this section, Feser argues that the existence of the mind is incompatible with materialism. Not only that, but materialist explanations of mind often rely, if only implicitly or subconsciously, on Aristotelian concepts.
But first, he has to dispel a misconception:
to say that something has a final cause or is directed toward a certain end or goal is not necessarily to say that it consciously seeks to realize that goal. […] Thus it is no good to object that mountains or asteroids seem to serve no natural function or purpose, because Aristotelians do not claim that every object in the natural world necessarily serves some function. [pp. 237–238]
As I understand it, this is like saying that a pair of glasses is for improving sight, but of course the glasses themselves can’t possibly be conscious of this.
This is indeed an important point to keep in mind, and it’s a pity that the next sentence is
What they do claim is that everything in the world that serves as an efficient cause also exhibits final causality insofar as it is “directed toward” the production of some determinate range of effects.
Yes, but pretty much everything is the efficient (or proximate) cause of something. The mountains and asteroids that Feser just mentioned are the efficient cause of certain photons being reflected from the sun into my eye. Their gravity also attracts me, though only in hard-to-measure ways. A mountain can affect the weather and climate around it, and depending on its orbit, the asteroid might be on its way to kill all life on Earth. Does this “production of some determinate range of effects” automatically mean that they have final causes? Are these final causes related to what they do as efficient causes? That is, if a star looks beautiful in a telescope, does that mean that it’s for looking beautiful? Or, to come back to an earlier example, would an Aristotelian say that the moon orbits, therefore it’s for orbiting?
If so, then this reflects a childish understanding of the world, one where bees are there to pollinate plants, rain is there to water them, and antelopes are there to feed lions. If not, and a thing’s final cause can be very different from its efficient cause (e.g., the moon orbits the Earth and reflects light, but perhaps its final cause is something else entirely, like producing eclipses), then why bring it up?
The Mind as Software
Next, Feser considers the currently fashionable metaphor of the brain as a computer that processes symbols. Since I criticized him earlier for not understanding software, or even considering “Form” as a type of software, I was interested to see what he had to say.
First of all, nothing counts as a “symbol” apart from some mind or group of minds which interprets and uses it as a symbol. […] By themselves they cannot fail to be nothing more than meaningless neural firing patterns (or whatever) until some mind interprets them as symbols standing for such-and-such objects or events. But obviously, until very recently it never so much as occurred to anyone to interpret brain events as symbols, even though (of course) we have been able to think for as long as human beings have existed. [p. 239]
Here, Feser confuses the map with the territory. We can explain the brain at a high level by comparing it to a computer processing symbols; Feser counters that symbols are only symbols if some mind interprets them as such, so neural firing patterns aren’t true according-to-Hoyle symbols, therefore checkmate, atheists!
This is like saying that the circadian rhythm is not a clock, because clocks have hands and gears.
Likewise, a little later, he writes:
No physical system can possibly count as running an “algorithm” or “program” apart from some user who assigns a certain meaning to the inputs, outputs, and other states of the system. [p. 240]
Again, Feser is paying too much attention to the niceties and details at the expense of the gist.
Imagine a hypothetical anthill. In the morning, the ants head out from the anthill, roughly at random, dropping pheromones on the ground as they do so. If one of the ants stumbles upon a piece of food, it picks it up and follows its trail back to the anthill. If its left antenna senses pheromone but the right one doesn’t, it turns a bit to the left; if its right antenna senses pheromone but its left one doesn’t, it turns a bit to the right. If both sense pheromone, it continues in a straight line. If we trace the biochemical pathways involved, we might find that the pheromone binds to a receptor protein that then changes shape and affects the strength with which legs on one or the other side of the body push against the ground, which makes the ant turn left or right.
We can imagine similar mechanisms by which other ants, sensing that one trail smells twice as strongly of pheromone (because the first ant traversed it twice), prefer to follow that trail rather than wander at random.
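Nothing in this story requires a mind; a few lines of code capture the whole rule. Here’s a minimal sketch in Python, with made-up parameters (the grid, the antenna angles, the turn angle) standing in for the real biochemistry:

```python
import math

# A toy model of the trail-following rule described above, not real
# myrmecology: the ant is an (x, y, heading) triple, the pheromone
# field is a 2D grid of floats, and the antenna and turn angles are
# arbitrary illustrative values.

ANTENNA = math.radians(30)  # each antenna sits 30 degrees off the heading
TURN = math.radians(15)     # how sharply the ant corrects its course

def smells_pheromone(field, x, y):
    """True if the nearest grid cell holds any pheromone (off-grid: no)."""
    i, j = int(round(x)), int(round(y))
    return 0 <= i < len(field) and 0 <= j < len(field[0]) and field[i][j] > 0

def step(ant, field):
    """Advance the ant one step, applying the left/right antenna rule."""
    x, y, heading = ant
    left = smells_pheromone(field, x + math.cos(heading + ANTENNA),
                                   y + math.sin(heading + ANTENNA))
    right = smells_pheromone(field, x + math.cos(heading - ANTENNA),
                                    y + math.sin(heading - ANTENNA))
    if left and not right:
        heading += TURN      # pheromone on the left: veer left
    elif right and not left:
        heading -= TURN      # pheromone on the right: veer right
    # both or neither: keep going straight
    return (x + math.cos(heading), y + math.sin(heading), heading)
```

There’s no interpreter anywhere in this loop; iterate it over enough ants for long enough and the colony converges on the strongest trails anyway.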
These ants, of course, have no real brain to speak of. There’s no question of an ant being able to understand what a symbol is, let alone interpret it, let alone consciously follow an algorithm. All of the above is just fancy chemistry. And so Feser would, no doubt, say that the first ant is not following a “retrace my tracks” algorithm. Nor are the other ants following an algorithm to look for food where some food has already been discovered. Whatever it is that these ants are doing, it’s not an algorithm, because no one is assigning meaning to any part of the system.
But that doesn’t change the fact that the ants are finding food and bringing it back to the anthill. In which case, who cares if it’s a proper algorithm, or just something that looks like one to us humans?
Only what can be at least in principle conscious of following such rules can be said literally to follow an algorithm; everything else can behave only as if it were following one. [p. 241]
Feser then imagines a person who assigns arbitrary meanings to the buttons and display on a calculator (I like to think of a calculator whose buttons have been scrambled, or are labeled in an alien alphabet):
For example, if we took “2” to mean the number three, “+” to mean minus, and “4” to mean twenty-three, we would still get “4” on the screen after punching in “2,” “+,” “2,” and “=,” even though what the symbols “2 + 2 = 4” now mean is that three minus three equals twenty-three. [p. 242]
And likewise, if the pattern of pixels “All men are mortal” were interpreted to mean that it is raining in Cleveland, that would lead to absurd results.
What Feser ignores is that no one would use that calculator, because it doesn’t work. Or, at least, anyone who put three apples in a basket, then ate three of them, and expected to be able to sell 23 apples at market would soon realize that Mother Nature doesn’t care for sophistry.
If we had a calculator whose keycaps had all been switched around, or were labeled in alienese, we could eventually work out which button did what, by using the fact that any nonzero number divided by itself is 1, that any number multiplied by zero is zero, and so on. The specific symbols used for these operations, the numerical base the calculator uses, and other details don’t matter so long as the calculator can be used to do arithmetic, any more than a car’s speed changes depending on whether you refer to it in miles per hour, kilometers per hour, knots, or furlongs per fortnight.
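To make this concrete, here’s a toy sketch of how the decoding might go. Everything in it is invented for illustration: the alien glyphs are just the letters “a” through “j”, and I assume the operator keys (“+”, “*”, “=”) are already known, purely to keep the example short:

```python
import random

# A toy "alien calculator": the ten digit keys are relabeled with the
# made-up glyphs 'a'..'j' by a hidden random permutation. For brevity,
# the operator keys ('+', '*') and '=' are assumed to be already known.

GLYPHS = "abcdefghij"

class AlienCalculator:
    def __init__(self):
        digits = list(range(10))
        random.shuffle(digits)
        self._glyph_to_digit = dict(zip(GLYPHS, digits))  # hidden wiring
        self._digit_to_glyph = {d: g for g, d in self._glyph_to_digit.items()}

    def compute(self, a_glyphs, op, b_glyphs):
        """Key in a, an operator, b, and '='; return the glyphs displayed."""
        a = int("".join(str(self._glyph_to_digit[g]) for g in a_glyphs))
        b = int("".join(str(self._glyph_to_digit[g]) for g in b_glyphs))
        result = a + b if op == "+" else a * b
        return "".join(self._digit_to_glyph[int(c)] for c in str(result))

def decode(calc):
    """Recover each glyph's digit from arithmetic identities alone."""
    zero = one = None
    for g in GLYPHS:
        # x * x == x only for 0 and 1; of those, x + x == x only for 0.
        if calc.compute(g, "*", g) == g:
            if calc.compute(g, "+", g) == g:
                zero = g
            else:
                one = g
    # Count upward: 1 + 1 displays the glyph for 2, 2 + 1 for 3, and so on.
    mapping = {zero: 0, one: 1}
    current = one
    for value in range(2, 10):
        current = calc.compute(current, "+", one)
        mapping[current] = value
    return mapping

print(decode(AlienCalculator()))  # e.g. {'f': 0, 'c': 1, 'j': 2, ...}
```

The labels on the keys carry no information that the arithmetic itself doesn’t pin down, which is exactly why Feser’s reinterpreted calculator wouldn’t survive contact with apples and markets.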
Feser also applies his reasoning to Dawkins’s theory of memes:
If the competition between memes for survival is what, unbeknown to us, “really” determines all our thoughts, then we can have no confidence whatsoever that anything we believe, or any argument we ever give in defense of some claim we believe, is true or rationally compelling. For if the meme theory is correct, then our beliefs seem true to us, and our favored arguments seem correct, simply because they were the ones that happened for whatever reason to prevail in the struggle for “memetic” survival, not because they reflect objective reality. [p. 245]
This is reminiscent of Alvin Plantinga’s evolutionary argument against naturalism: since natural selection shaped our senses for survival rather than for accuracy, the argument goes, they can’t be trusted. That is, if I see a river in front of me, it’s merely because perceiving the current situation (whatever it might be) as a river helped my ancestors survive, and not necessarily because the current situation includes a river. Feser’s argument is similar, but applied to thoughts instead of senses.
https://www.youtube-nocookie.com/embed/hou0lU8WMgo?rel=0
This argument is technically correct, but less interesting than one might think. For one thing, we don’t need to speculate about whether our senses or thought processes are fallible: we know that they are. Every optical illusion tricks us into seeing things that aren’t there, and the psychological literature amply catalogs the ways in which our thoughts fail us (for instance, humans are notoriously bad at estimating probabilities). And for another, the best way to respond correctly to objects in the environment is, to a first approximation, to perceive them accurately.
If I may reuse my earlier illustration, imagine a person who thinks that the word “chair” refers to a yellow tropical fruit, the one that you and I call “banana”, and vice-versa. How long would it take this person to realize that they have a problem? If I invited them into my office and said, “take a chair”, they might look around for a bowl of fruit, but after two or three such instances, they’d probably realize that “chair” doesn’t mean what they think it does. On the other hand, it took me years before I realized that “gregarious” means “friendly” rather than “talkative”.
A clever writer could probably devise a dialog in which “chair” can mean either “chair” or “banana”, but it would be difficult, and the result would probably sound stilted. By comparison, it would be much easier to write a piece that makes sense whether you think that “gregarious” means “friendly” or “talkative”. And likewise, we can imagine an animal whose senses are mis-wired in such a way that it perceives a dangerous predator as a river, and whose muscles and nerves are mis-wired such that when it thinks it’s walking toward the river, it’s actually running away from the predator. But this is a contrived example, and unlikely in the extreme to be useful in the long run. A far more effective strategy (and one far more likely to evolve) is to have simple rules that give the right answer 80% or 90% of the time, that is, to perceive the world accurately enough to survive in most plausible situations.
Feser and Plantinga are committing what’s been called the “any uncertainty implies great gobs of uncertainty” fallacy.