All posts by Andrew Arensburger

Triggered by the Right Side of History

There’s a bit of controversy going on at Yorktown High School in Virginia, where teachers have put up signs:

Screen capture of controversial sign at Yorktown High School

Patriots Know:

Facts are not political
Diversity stengthens [sic] us
Science is real
Women’s rights are human rights
Justice is for all
We’re all immigrants
Kindness is everything
We are Yorktown

I gather that “Patriots” is the name of the school’s sports team, and by extension refers to the student body, not simply people who love their country.

What’s odd here is that conservatives have complained about these signs being overly political. TV personality Tucker Carlson, for one, weighed in:

Carlson called the signs “the sneakiest type of propaganda… propaganda passing itself off as obvious observations.” He asked [senior student John] Piper if anyone at the school thinks that science “is not real.”

Oh, I’m sure you could find some Juggalos to tell you that

Water, fire, air and dirt
Fucking magnets, how do they work?
And I don’t wanna talk to a scientist
Y’all motherfuckers lying, and getting me pissed

but I’ll concede Carlson’s point: pretty much everyone thinks that science is real, or at least supports that notion. But of course not everyone knows that science isn’t a body of knowledge, but a method for figuring out what’s true. Not only that, but a lot of people are very selective about which scientific findings they accept. And, well, not to put too fine a point on it, one major US party (hint: it rhymes with “Reschmuglican”) has turned into an anti-science party.

And therein lies the problem: as long-time reader Fez pointed out, if the statements on the signs are seen as political — and specifically leftist — it’s only because the political right has rejected much that should be uncontroversial. Like the reality of climate change: it doesn’t matter how many jobs you save if New York is underwater and Nebraska is too arid for anything to grow.

Likewise, even though it’s obvious that women’s rights are human rights, since the eighties the Republican party has been running on the idea, rarely made explicit, that women take a back seat to men, and that their rights rank below those of a pre-sentient (not merely pre-sapient) bundle of cells.

Likewise diversity, justice for all, and tolerance of immigrants. The American political right is on the wrong side of all of these issues, and I think they know it and feel defensive about it.

This all reminds me of something Patton Oswalt wrote a little while ago:

But when I Tweet something POSITIVE, or HOPEFUL, in support of a group that’s been made to fear or doubt because of Trump and his ghoul brigade’s actions? A helpful link for peaceful action? Praising someone who speaks up eloquently against the smirking racism of Trump’s parking lot carnival of an administration?

THAT’S when the responses get violent, and threatening, and ominous. As if the language itself — the grammar of thoughtfulness — lands in their guts like glass shards. Empathy and understanding literally feel like an attack to them.

I don’t think he’s quite right about this, but I have to say that the Yorktown HS kerfuffle is data in his favor.

So if positive ideas bother you, or the implications of those ideas (e.g., if we’re all immigrants, then maybe someone who looks and speaks differently from you, whose cooking smells weird, and whose accent you can’t decipher, might move in next door to you), perhaps it would be a good idea to think about what it is that bothers you, and whether your fears are justified. Or, for that matter, whether you’re on the right side of history.

 

Now What?

So we’ve survived the first week of Trump’s presidency. Have some cake. If you were one of the many people who took part in activism, pat yourself on the back. If you weren’t, it’s not too late to start.

It’s great that everyone’s riled up. And while we’re pumped up and paying attention to government, it might be worth figuring out what our long-term plans should be. Here’s my list, in no particular order:

Gerrymandering reform

In case you forgot, gerrymandering is the practice of drawing legislative districts to favor one party (see, for instance, this map of Maryland). Gerrymandering is one of the factors deepening the divide between left and right: Representatives in safely gerrymandered districts can be attacked for being insufficiently ideologically pure, which pushes them away from the center, and they have no real incentive to compromise.

For me, as a Marylander, it means that the Republicans have written me off, and the Democrats take me for granted. I’d like both parties to court my vote, and for the biennial congressional elections to be a meaningful referendum on Representatives’ job performance.

Electoral College reform

I think everyone agrees that while the Electoral College may have been useful at one time, it’s not the 18th century anymore. Time to get with the times and implement a national popular vote.

Since the Electoral College is enshrined in the Constitution, there’s no way to eliminate it without an amendment, which is difficult. But there’s a hack: each state can pass its own laws about how its Electors vote. And in most states, they have to vote the way a majority of that state’s voters voted, which makes perfect sense. But what if each state had a law saying that its Electors will vote whichever way the entire US voting population voted?

Obviously, people in Massachusetts will be upset if a Republican gets all of their Electoral votes just because he won a majority of the US vote, just as Oklahomans won’t like their Electoral votes going to a Democrat. But this already happens, in effect, in that people get a president they don’t want.

Of course, you don’t want your state to be the only one that apportions its Electors this way. This only makes sense if enough states do it that together they can decide the outcome of the election — that is, if there’s a group of participating states that adds up to 270 Electoral votes or more.

Thankfully, there’s a project to do exactly that. Contact your state legislators and encourage them to join in.
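For the programming-inclined, here’s a minimal sketch of the compact’s trigger logic, in Python. The member states and electoral-vote counts are made up purely for illustration; the only point is the 270-vote threshold.

```python
# Toy sketch of the interstate-compact trigger logic described above.
# The member list and electoral-vote counts are illustrative, not real data.

ELECTORAL_VOTES = {"Maryland": 10, "Massachusetts": 11, "California": 55}

def compact_in_effect(members, threshold=270):
    """The pledge only takes effect once members control a deciding majority."""
    return sum(ELECTORAL_VOTES[s] for s in members) >= threshold

def allocate_electors(state, members, state_winner, national_winner):
    """Which candidate gets this state's Electors?"""
    if state in members and compact_in_effect(members):
        return national_winner   # pledge to the national popular-vote winner
    return state_winner          # otherwise, business as usual

members = ["Maryland", "Massachusetts", "California"]
print(allocate_electors("Maryland", members, state_winner="D", national_winner="R"))
# Prints "D": with only 76 electoral votes signed up, the compact isn't in effect yet.
```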

Ranked voting

This one’s more of a long shot than the others, but I’ll throw it out there anyway. In our current system, you can only vote for one candidate, and whoever gets the most votes wins. This leads to a problem with third-party supporters. In 2000, if you were liberal, maybe you liked Green Party candidate Ralph Nader, could tolerate Democrat Al Gore, and disliked Republican George Bush. So do you vote for Nader, knowing that he can’t win, and that you’re taking away a vote for Gore (and against Bush)? Or do you hold your nose, vote for Gore against Bush, and help confirm the idea that third parties don’t stand a chance?

Under ranked voting, also known as instant-runoff voting, you vote for multiple candidates, ranking them in order of preference. Our hypothetical voter, above, might vote

  1. Ralph Nader
  2. Al Gore
  3. George Bush

meaning “I like Nader, but I’ll settle for Gore.”

Yes, there are problems with ranked voting, and there are situations where it fails. But its problems are rarer and less severe, I believe, than those with our present system.
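For the curious, here’s what the counting looks like in code: a minimal sketch of instant-runoff tallying, assuming each ballot is just an ordered list of names. The ballots themselves are invented for illustration.

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the last-place candidate and transfer those
    ballots to their next surviving choice, until someone has a majority."""
    candidates = {name for ballot in ballots for name in ballot}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter()
        for ballot in ballots:
            for name in ballot:
                if name in candidates:
                    tally[name] += 1
                    break
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):       # outright majority
            return leader
        # Otherwise, drop the candidate with the fewest first-choice votes.
        candidates.remove(min(tally, key=tally.get))

# A hypothetical 2000 electorate: three Nader > Gore > Bush voters,
# four Gore > Bush voters, five Bush > Gore voters.
ballots = [["Nader", "Gore", "Bush"]] * 3 + [["Gore", "Bush"]] * 4 + [["Bush", "Gore"]] * 5
print(instant_runoff(ballots))   # "Gore": Nader is eliminated and his votes transfer
```

In other words, the Nader voters get to express their real preference without handing the election to Bush.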

Campaign finance reform

This is related to the previous item, in that the current system helps perpetuate an environment where only the major players have a chance. If candidates were treated equally, say all given $100 million to make their case, they would be more likely to be judged on their experience and policies rather than on their ability to raise money.

On the other hand, there’s a danger that extremist whackjobs might appear reasonable by virtue of being treated as equals with sane-party candidates. But then again, given who’s living in the (Oh So Very) White House right now, we may be past that point already.

While I don’t have a firm opinion on this topic and am open to being educated, I do think the Citizens United SCOTUS decision needs to be overturned. In case you forgot, that’s the one that said that donating money to a campaign is political speech, and since you can’t abridge free speech, you can have unlimited amounts of money pouring into politics.

Education, education, education

This one is fundamental. We need better education, and more of it.

People complain about American jobs being shipped overseas. But most of those are unskilled jobs. It’s never going to be cheaper to hire an American than a Bangladeshi, or a robot. So let’s prepare our population for better jobs.

For starters, we can fund elementary and high schools properly. I’m ashamed for my country every time I hear of a teacher having to buy supplies out of her own pocket. Federal funds can help with this: when I pick up the phone to talk to tech support, I might get someone who went to school in Arkansas or Oregon, so it’s to my benefit to help education in other states.

College is crazy expensive. The University of Maryland, a state university, estimates that it’ll cost $25,000 per year to send your child there. $47,000 if you’re not a Maryland resident. That’s mortgage-level expensive.

Why can’t we bring the costs down? One simple approach would be an education tax. Raise taxes on everyone by a bit, and bring tuition costs down a lot for those going to college. This would have all sorts of knock-on effects: more people getting educated; more people inventing new things, or writing books, or starting businesses; more people making a better living; more people hiring other people.

And I think that’ll do it for now.

Women’s March, Rights, and Politics

I attended the Women’s March on Washington yesterday. It turned out, I’m told, to be the largest inauguration protest in the history of the United States, and possibly the largest political protest ever, if you count the sister marches in other cities around the globe (including Antarctica).

At one point, we ran into what I believe was the American Socialist Party. They are, as I understand it, the Communist-Lite bunch that Sean Hannity warned you about. I don’t remember seeing them out on the Mall before, so I suspect that they may have stepped up their activities in recent years.

If they have, they’re not alone. Witness the popularity of Bernie Sanders, who may not have won so much as the Democratic nomination, but got pretty damn far for an American who describes himself as a democratic socialist.

But of course both he and Clinton lost the presidency to Trump, who may not have won a majority of the votes, but did get 46%, a not inconsiderable share.

So what this all seems to suggest is a repeat of history: we’re living in a new gilded age, with income inequality at record-high levels, and populist factions appear to be gaining popularity in response: on one hand, on the left, people like Sanders and Warren, who promise to, basically, make the rich bastards pay their fair share so that the little guy can get a fair shake. I’m pretty sure that the communists of the early 20th century had the same message, and that that’s what made them so appealing to so many.

And on the right, there’s Donald Trump, who would be very happy to be the object of a cult of personality, and clearly feels most at home at the top of an autocratic dictatorship like — yeah, I’m gonna say it — Nazi Germany or Fascist Italy.

I can’t prove that history is repeating itself, but it does look that way. And so, we are faced with the problem of how to avoid both a fascist dictatorship and a communist dictatorship (thankfully, the two look so much alike (the key word is “dictatorship”) that we really only need one plan for both contingencies).

Of the two, I’m far more worried about the fascist dictatorship: these days, in America, the right is the side more likely to make threats of violence (“Second Amendment remedies,” anyone?). The lefties I’ve met are far more likely to ask for everyone’s input than to execute kulaks.

On the right, on the other hand, I see more right-wing authoritarians (RWAs) who enjoy having a strongman in charge, and have a history of passing laws to prevent people from voting.

Does Christianity Offer the Best Basis for Science?

There’s an argument I’ve run across several times, that theism, and specifically Christianity, forms a much better basis for science than does atheism. Indeed, some people go so far as to claim that only Christianity provides a foundation for science. Matt Slick at CARM lays it out well (though Don Johnson Ministries makes a similar argument). After listing a number of influential scientists who were Christians, Slick writes:

To many Christians, the idea that God existed and brought the universe into existence meant that the universe could be understood because God was a God of order and his character would be reflected in creation (Rom. 1:20).  Instead of a Pantheon of gods who ran the universe in an unpredictable fashion, Christianity provided the monotheistic bedrock (Isaiah 43:10; 44:6,8; 45:5) upon which the scientific study of nature could be justified.  Many Christians expected to find the secrets that God had hidden in the universe and were confident in being able to discover them.  This is a critical philosophical foundation that is necessary if an emerging culture is to break the shackles of ignorance and superstition in order to discover what secrets exist in the world around them.

This emphasis on order seems odd, since one of the main features of Christianity is miracles, that is, violations of natural law. Without at least the resurrection of Jesus, there is no Christianity. Add to that the miracles that Jehovah, Jesus, and various and sundry saints are said to have performed, the common notion that God sometimes responds to prayer by performing additional miracles, and weekly transubstantiation in church, and you get a picture of reality in which any regularities, any laws of nature, exist only so long as a malleable deity permits them to exist.

If scientists like Kepler and Newton saw the Christian God as fundamentally one of order rather than caprice, and drew inspiration for their scientific pursuits from that, fine. But that’s hardly the only type of Christianity out there. I doubt that theirs was even a majority view. But in a time and place where pretty much everyone was Christian (and where not being Christian often carried either social stigma or legal penalty), of course Christians are going to be the ones doing science.

It seems to me that Taoism is a much better match for Matt Slick’s description than Christianity. You could, I think, make a strong case for the notion that the Tao is natural law. There’s certainly the notion that you can either go with the Tao, or you can wear yourself out trying to go against it.

(Yes, this still leaves the question of why so many scientific discoveries came from Europe rather than China. But that’s an interesting question for another day. I suspect that the fact that Europeans wrote American history textbooks has something to do with it.)

I suppose it wouldn’t do to mention alchemy and algebra, whose prefix “al” betrays their Arabic origin. Or the fact that a large proportion of visible stars have Arabic names.

I also don’t see why it takes a whole religion or worldview to want to figure out what makes the world tick. Anyone can see that day follows night, summer follows spring, rocks always roll downhill, never up, and that oaks only come from acorns. Clearly there are some regularities, and these can be investigated. We’re curious creatures; figuring stuff out is fun.

There’s a related claim to the one that Christians founded all the sciences: that Christians founded all the major universities. I haven’t checked this, but I see no reason to doubt this claim.

This brings me to my final point: let’s grant, at least for the sake of argument, that Christians, motivated by their understanding of God as a lawmaker, got all of the sciences started; that most or all of the major universities were founded as institutions to learn how God set up the universe; that Christianity is the only religion — the only worldview — that could have kickstarted science this way, and that out of those beginnings grew science as we know it today… so what? Why keep religion around today?

Scaffolding is essential when putting up a new building. But after a certain point, it needs to go. I was on an all-milk diet for the first, crucial part of my life, and that helped make me into the person I am today. But that doesn’t mean that I should continue to drink milk as an adult; I especially shouldn’t be on an all-milk diet.

Whatever benefits religion may once have provided to science, these days it just gets in the way, from creationism to anti-gay “conversion therapy” to faith-based climate change denialism. It’s time to jettison it.

A Modest Proposal for Anti-Abortion Catholics (and Some Others)

When I recently ran across yet another of BillDo’s rants against abortion, I was struck by an idea: during transubstantiation, a priest turns a piece of bread into living flesh. But surely this is a reversible operation, no? People turn living wheat into nonliving bread all the time.

In addition, if there’s any kind of conservation law, then after centuries of Catholic rites there are bound to be mountains of bread accumulating somewhere that could be put to good use.

So I propose the following: if a woman wants an abortion, a priest can cast a reverse-transubstantiation spell, and turn the fetus into a piece of bread. And then the abortion can proceed normally.

If Catholic priests can’t or won’t do this, then I’ll do it. I’m ordained, and I have as much evidence to back up my supernatural claims as they do.

Cover of "The Last Superstition"
The Last Superstition: Conclusion

So now that we’ve come to the end of the book, what have we learned? There are two comments that stick in my mind. One is by Steve Watson:

I think Aristotle systematized a lot of what we now call folk physics and folk biology, which was a good enough way to start, back then

While neither Steve nor I offer any data to support this, I think it’s a pretty good explanation: a lot of what Aristotle thought, or at least what Feser reports as Aristotle’s belief, reads like someone trying to systematize what he saw around him — what would later become physics — but handicapped by not having observational tools like microscopes, or even mental tools like the scientific method.

The other comment is at Chris Hallquist’s blog, by reader Patrick:

[Feser] interacts with people in two ways. 1) Patiently explaining what Augustinian metaphysics is, in the apparently belief that if he just explains in sufficient detail, people will have an “Aha!” moment and come around. 2) Railing at them in rage because, in spite of all his explanations, they refuse to admit that Augustinian metaphysics are self evidently true. Obviously, since Augustinian metaphysics are self evidently true per hypothesis, the only explanation for someone’s refusal to admit that they’re true is either astounding stupidity and ignorance, or else a willful and culpable refusal to publicly admit what they know to be true.

This comes through well in The Last Superstition. For all his explanations of Plato’s and Aristotle’s ideas, he fails to answer a lot of elementary questions, like how we can find out which Forms an object instantiates. This makes sense if he thinks what he’s explaining is obvious, and can’t see it from an outsider’s perspective.

As for his attitude toward people who don’t agree with him, well, I’ve mentioned Feser’s tone a few times. Suffice it to say that I agree with Patrick.

But beyond the book itself, what else can we learn?

I was originally surprised at the number of positive reviews the book got on Amazon, given the number of elementary fallacies Feser commits. But in retrospect, it makes sense: like all successful apologists, Feser is good at reassuring people who already hold religious beliefs that it’s okay to hold those beliefs. This is partly due to what Steve Watson pointed out above: that Aristotle tried to formalize folk science. That is, Feser is able not only to present a framework of ideas rather than a grab-bag of intuitions, but also to recruit one of the biggest names in philosophy to lend weight to his argument.

Finally, I think this book is useful for its insight into someone else’s mind. In particular, it seems to me that Platonic Forms (and Aristotle’s essences) are a type of essentialism. There are (at least) two ways of looking at the world: one is that things are whatever they are, and it’s up to us to draw boundaries; some things are worth calling “triangle”, and some aren’t. A triangle drawn with straightedge and pencil may not be perfect, but it can still be called a triangle; but at some point, a chalk doodle is so different from a perfect triangle that it no longer deserves the name. A person who hasn’t eaten meat for a day probably shouldn’t be called a vegetarian, but someone who hasn’t eaten meat for ten years probably should. That is, sometimes there are clear ways to define the categories that interest us, while at other times we need to draw arbitrary lines. But ultimately, it’s up to us.

The other view is that the categories are already there, and have been since the beginning of time. We merely need to figure out which category an item belongs in. In my experience, a lot of creationists think this way: they can’t grasp the idea of new species appearing, because to them, if the descendants of a fish are so different as to not be worth calling fish anymore, then they must be in one of the other categories: dogs, or birds, or dinosaurs, or something.

Likewise, as Feser argues, if you see a three-week-old fetus and a thirty-year-old woman as having the same human essence, as their single most important characteristic, then it’s easy to see abortion as murder.

So while I don’t think the book adds to my understanding of either science or philosophy, it is useful in understanding how a lot of people on the right think.

Series: The Last Superstition

Followup on Faithless Electors

Four months ago, I wondered whether there would be faithless electors in this election. And as it turned out, there were. Nine of them, in fact, of whom six were successful. That seems like a lot: according to Wikipedia, these days there are usually zero or one faithless elector. There were 8 in 1912, and 27 in 1896.

When I wrote that article, I expected to be surprised, and I was. But I stand by my comment about the dumpster fire consuming the GOP.

Cover of "The Last Superstition"
The Last Superstition: The Final Insult

Chapter 6: Irreducible teleology, cont.

Having excoriated biologists over the fact that popular science writers use terms like “purpose” and “blueprint”, Feser moves on to nonliving systems, in which he also sees purpose and intentionality. For instance, the water and rock cycles (I’d never heard of a “rock cycle” before, but okay):

The role of condensation in the water cycle, for example, is to bring about precipitation; the role of pressure in the rock cycle is, in conjunction with heat, to contribute to generating magma, and in the absence of heat to contribute to generating sedimentary rock; and so forth. Each stage has the production of some particular outcome or range of outcomes as an “end” or “goal” toward which it points. [p. 258]

Here, Feser implies that the water cycle is supposed to exist, and condensation exists to further that goal. Likewise, of course you have to have pressure, otherwise how can you have magma? It seems as though he is projecting his opinions onto the world so hard that he can’t imagine that maybe water just does what water does, and that it’s only because the temperature on the surface of this planet stays within a certain range that water behaves in such an interesting fashion.

Basic laws of nature

Moving on to fundamental science, Feser graces us with a rather interesting idea of how minds work:

Mental images are vague and indistinct when their objects are complex or detailed, but the related concepts or ideas are clear and distinct regardless of their complexity; for example, the concept of a chiliagon, or 1000-sided figure, is clearly different from the concept of a 999-sided figure, even though a mental image of a chiliagon is no different from a mental image of a 999-sided figure. [p. 260]

I’m not quite sure what he’s trying to say, though the best spin I can put on it is that we have trouble imagining complex things clearly. I agree, and this means that we need to be careful when thinking about complex things, because we’re likely to overlook something.

But since Feser brings this up in the context of thinking about abstract things, I have to wonder. When he talks about the possibility of purely material minds, he sounds like someone who thinks that a DVD has to have little pictures on it; that if you put a CD close enough to your ear, you’ll hear the music on it. Maybe I’m wrong; but that’s the impression I get, especially after the bit in Chapter 4 where he seemed to think that thinking about triangles would have to involve part of your brain becoming triangular.

He goes on for a bit, arguing against David Hume and complaining about the “anti-Aristotelian ideological program” (p. 261) of modern science. Basically, he tells us, science cannot proceed without Aristotle, but scientists are fiercely opposed to him on ideological grounds. Probably because they just want to sin, or something. In fact,

Despite the undeniable advances in empirical knowledge made during the last 300 plus years, then, the work of the scientists who made those advances simply does not support the philosophical interpretation of those advances put forward by the proponents of the “Mechanical Philosophy” and the contemporary materialists or naturalists who are their intellectual heirs [p. 264]

See, scientists are smart people who have been very successful at figuring out how the universe operates, so successful that we now take things like nuclear weapons and GPS receivers for granted. But they’re not smart enough to figure out the implications of their work.

If you look around the Internet, you can find any number of religious figures or just plain cranks who are convinced that their holy book, prophet, or whoever predicted various facts long before scientists did. They usually do this by taking some vague or poetic passage in scripture, combining it with some scientific discovery, and interpreting the former to describe the latter. For example, this page on Islam and embryology explains that

“The three veils of darkness” [in the Quran] may refer to: (l) the anterior abdominal wall; (2) the uterine wall; and (3) the amniochorionic membrane

And this page explains that “[he that] stretcheth out the heavens as a curtain” in the Bible refers to cosmic expansion.

Likewise, in this chapter, Feser talks about scientists rediscovering the genius of Aristotle. But it’s also painfully obvious that the scientific revolution did not begin in earnest with Aquinas, but rather several centuries later. That, combined with the fact that science has been so wonderfully successful even though the average scientist probably couldn’t give a summary of Aristotle’s or Aquinas’s ideas, strongly suggests that they’re simply irrelevant to science.

It’s the moon, stupid

By this point, Feser thinks that he’s established that the millennia-old ideas of Aristotle, refined by Aquinas’s medieval insights, are correct. He bemoans the fact that they’ve fallen into obscurity:

But if Aristotle has, by virtue of developments in modern philosophy and science, had his revenge on those who sought to overthrow him at the dawn of the modern period, why is this fact not more widely recognized? One reason is the prevailing general ignorance about what the Aristotelian and Scholastic traditions really believed, what the actual intellectual and historical circumstances were that led to their replacement by modern philosophy in its various guises, and what the true relationship is between the latter and modern science. [p. 266]

The blame for the “general ignorance” part seems to land squarely on Feser’s shoulders. It’s up to him and his colleagues to educate the rest of us. But honestly, maybe ignorance is his ally: Feser’s exposition of Aristotle’s and Aquinas’s ideas makes it clear that they’re largely based on ignorance and superstition, and can be safely relegated to History of Ideas class, and ignored in everyday life.

He closes by quoting the proverb, “When the finger points at the moon, the idiot looks at the finger” (p. 267) as an analogy to the way objects “point to” things beyond themselves, but “the secularist” doesn’t realize this. Fittingly, he closes on an insult: “It’s the moon, stupid.” (p. 267)

Series: The Last Superstition

Cover of "The Last Superstition"
The Last Superstition: Ubiquitous Teleology

Chapter 6: Irreducible teleology

We’re in the home stretch. In this penultimate section, Feser tries to make the case that teleology, or goal-directedness, permeates the world.

To start with, he tells us that human minds deal with final causes all the time: we conceive plans and execute them, and we build things for specific purposes. So yes, final causes in this sense do exist. But Feser has something much more extensive in mind; not just the existence of final causes, but their ubiquity.

Biological phenomena

[Biologists] speak, for example, of the function of the heart, of what kidneys are for, of how gazelles jump up and down in order to signal predators, and in general of the purpose, goal, or end of such-and-such an organ or piece of behavior. […] Darwin himself once said that it is “difficult for any one who tries to make out the use of a structure to avoid the word purpose.” [pp. 248–249]

Yes, the appearance of design in biology is compelling, so much so that Richard Dawkins wrote in The Blind Watchmaker that “Biology is the study of complicated things that give the appearance of having been designed for a purpose”. But of course that was Darwin’s great insight, that while we normally think of minds selecting one option or another, with living things, nature itself can, without thought, “choose” which beings reproduce and which ones don’t. That “natural selection” is not an oxymoron.

And yes, it’s difficult to look at nature without seeing design. It’s also difficult to look at clouds without seeing the shapes of people and animals.

Feser gives us a capsule version of evolution:

To say that the kidneys existing in such-and-such an organism have the “function” of purifying its blood amounts to something like this: Those ancestors of this organism who first developed kidneys (as a result of a random genetic mutation) tended to survive in greater numbers than those without kidneys, because their blood got purified; and this caused the gene for kidneys to get passed on to the organism in question and others like it. [p. 250]

But:

One rather absurd implication of this theory is that you can’t really know what the function of an organ is until you know something about its evolutionary history. [p. 251]

Well, no. We can talk of the function of an organ without knowing anything about its evolutionary history, by seeing what the organ does, and what it seems to be good at. For instance, before we start investigating how it is that such-and-such lizard came to be so good at digesting mulberries, it’s important to make sure that it is good at digesting mulberries. Fortunately, we can test this without knowing anything about its evolutionary history.

This is perhaps more obvious in genetics, where we can ask what a gene does, rather than what an organ does. To find out, geneticists typically try to knock the gene out, that is, to raise a generation of fruit flies or mice or zebrafish or what have you that don’t have the gene in question, then see what goes wrong. For instance, when the eyeless gene in fruit flies is damaged or missing, the resulting flies develop without eyes (hence the name).

It gets more complicated than this, of course. Scientists can try to activate the gene in different parts of the body or at different times, and see what happens. Or they can compare different alleles of the gene, or artificially mutated versions, to see what happens (perhaps it doesn’t control eyes specifically, but all round body parts? Or perhaps it directs each segment to become whatever it’s “supposed” to become?). This sort of experimentation and observation allows scientists to figure out what a gene (or an organ) does.

Now, this is a bit different from asking what a gene or organ is for. The latter phrasing implies that the gene or organ only does one thing, or has one primary function, and perhaps one or two secondary ones. And while this works in a lot of cases, there are a lot of cases where it doesn’t. For instance, I think it works to say that “the heart is for pumping blood”, because that’s something it does; it also does a good job of pumping blood; it’s the only organ I have to pump my blood, so I rely on my heart to do this; and I can’t do anything else with it. (One might, however, look at it from the point of view of a man-eating tiger, who doesn’t care what I plan to do with my heart. From its point of view, the purpose of my heart is to provide it with nourishment, same as my liver and lungs.)

But what about a bird’s wing? Is it for flight? (Not in ostriches, it isn’t.) Or perhaps it’s for displaying colorful plumage, the better to attract a mate. Or is it for protecting its eggs? Birds do all of these things with wings. And so, I suggest that it’s better to ask “what can you do with it?” rather than “what is it for?” (Besides, think how boring movies like Cast Away or The Martian would be if their protagonists only used things for their intended purpose.)

Now, it may be that when Feser says that a thing is “directed toward” something, he means much the same thing as I do when I ask what that thing is good for. If so, then I think the difference is that I try to allow for the possibility of a thing having multiple uses, while Feser prefers that things have one and only one use. For instance, we saw that he considers sex to have one main purpose — reproduction — and every other use (fun, bonding) is secondary to that.

Series: The Last Superstition

Cover of "The Last Superstition"
The Last Superstition: Great Gobs of Uncertainty

Chapter 6: The lump under the rug

In this section, Feser argues that the existence of the mind is incompatible with materialism. Not only that, but materialist explanations of mind often refer, if only implicitly or subconsciously, to aristotelian concepts.

But first, he has to dispel a misconception:

to say that something has a final cause or is directed toward a certain end or goal is not necessarily to say that it consciously seeks to realize that goal. […] Thus it is no good to object that mountains or asteroids seem to serve no natural function or purpose, because Aristotelians do not claim that every object in the natural world necessarily serves some function. [pp. 237–238]

As I understand it, this is like saying that a pair of glasses is for improving sight, but of course the glasses themselves can’t possibly be conscious of this.

This is indeed an important point to keep in mind, and it’s a pity that the next sentence is

What they do claim is that everything in the world that serves as an efficient cause also exhibits final causality insofar as it is “directed toward” the production of some determinate range of effects.

Yes, but pretty much everything is the efficient (or proximate) cause of something. The mountains and asteroids that Feser just mentioned are the efficient cause of certain photons being reflected from the sun into my eye. Their gravity also attracts me, though only in hard-to-measure ways. A mountain can affect the weather and climate around it, and depending on its orbit, the asteroid might be on its way to kill all life on Earth. Does this “production of some determinate range of effects” automatically mean that they have final causes? Are these final causes related to what they do as efficient causes? That is, if a star looks beautiful in a telescope, does that mean that it’s for looking beautiful? Or, to come back to an earlier example, would an aristotelian say that the moon orbits, therefore it’s for orbiting?

If so, then this reflects a childish understanding of the world, one where bees are there to pollinate plants, rain is there to water them, and antelopes are there to feed lions. If not, and if a thing’s final cause can be very different from its efficient cause (e.g., the moon orbits the Earth, and reflects light, but maybe its final cause is something else, like eclipses), then why bring it up?

The Mind as Software

Next, Feser considers the currently fashionable metaphor of the brain as a computer that processes symbols. Since I criticized him earlier for not understanding software, or even considering “Form” as a type of software, I was interested to see what he had to say.

First of all, nothing counts as a “symbol” apart from some mind or group of minds which interprets and uses it as a symbol. […] By themselves they cannot fail to be nothing more than meaningless neural firing patterns (or whatever) until some mind interprets them as symbols standing for such-and-such objects or events. But obviously, until very recently it never so much as occurred to anyone to interpret brain events as symbols, even though (of course) we have been able to think for as long as human beings have existed. [p. 239]

Here, Feser confuses the map with the territory: we can explain the brain at a high level by comparing it to a computer processing symbols. But symbols are only symbols if they’re interpreted as such by a mind. So neural firing patterns aren’t true according-to-Hoyle symbols, therefore checkmate, atheists!

This is like saying that the circadian rhythm is not a clock, because clocks have hands and gears.

Likewise, a little later, he writes:

No physical system can possibly count as running an “algorithm” or “program” apart from some user who assigns a certain meaning to the inputs, outputs, and other states of the system. [p. 240]

Again, Feser is paying too much attention to the niceties and details at the expense of the gist.

Imagine a hypothetical anthill. In the morning, the ants head out from the anthill, roughly at random, dropping pheromones on the ground as they do so. If one of the ants stumbles upon a piece of food, it picks it up and follows its trail back to the anthill. If its left antenna senses pheromone but the right one doesn’t, it turns a bit to the left; if its right antenna senses pheromone but its left one doesn’t, it turns a bit to the right. If both sense pheromone, it continues in a straight line. If we trace the biochemical pathways involved, we might find that the pheromone binds to a receptor protein that then changes shape and affects the strength with which legs on one or the other side of the body push against the ground, which makes the ant turn left or right.

We can imagine similar mechanisms by which other ants, sensing that one trail smells twice as strongly of pheromone (because the first ant traversed it twice), will prefer to follow that trail rather than wander at random.

These ants, of course, have no real brain to speak of. There’s no question of an ant being able to understand what a symbol is, let alone interpret it, let alone consciously follow an algorithm. All of the above is just fancy chemistry. And so Feser would, no doubt, say that the first ant is not following a “retrace my tracks” algorithm. Nor are the other ants following an algorithm to look for food where some food has already been discovered. Whatever it is that these ants are doing, it’s not an algorithm, because no one is assigning meaning to any part of the system.

But that doesn’t change the fact that the ants are finding food and bringing it back to the anthill. In which case, who cares if it’s a proper algorithm, or just something that looks like one to us humans?
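To see just how little machinery this takes, here’s a deliberately dumbed-down, one-dimensional caricature of the trail-following rule (my own toy example, not anything from the book): the ant’s entire “algorithm” is a comparison of two chemical concentrations, with no symbols or interpretation anywhere in sight.

```python
def pheromone(cell):
    """Toy pheromone concentration: strongest at cell 0 (the trail), fading with distance."""
    return 1.0 / (1 + abs(cell))

def step(cell):
    """The ant's entire behavior: move toward whichever neighbor smells stronger."""
    left, right = pheromone(cell - 1), pheromone(cell + 1)
    if left > right:
        return cell - 1
    if right > left:
        return cell + 1
    return cell              # equal on both sides: stay put

ant = 10                     # start ten cells away from the trail
for _ in range(20):
    ant = step(ant)
print(ant)                   # 0: the ant has marched straight to the trail and stays there
```

Whether or not that counts as “literally” following an algorithm, the food still gets found.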

Only what can be at least in principle conscious of following such rules can be said literally to follow an algorithm; everything else can behave only as if it were following one. [p. 241]

Feser then imagines a person who assigns arbitrary meanings to the buttons and display on a calculator (I like to think of a calculator whose buttons have been scrambled, or are labeled in an alien alphabet):

For example, if we took “2” to mean the number three, “+” to mean minus, and “4” to mean twenty-three, we would still get “4” on the screen after punching in “2,” “+,” “2,” and “=,” even though what the symbols “2 + 2 = 4” now mean is that three minus three equals twenty-three. [p. 242]

And likewise, if the pattern of pixels “All men are mortal” were interpreted to mean that it is raining in Cleveland, that would lead to absurd results.

What Feser ignores is that no one would use that calculator, because it doesn’t work. Or, at least, anyone who put three apples in a basket, then ate three of them, and expected to be able to sell 23 apples at market would soon realize that Mother Nature doesn’t care for sophistry.

If we had a calculator whose keycaps had all been switched around, or were labeled in alienese, we could eventually work out which button did what, by using the fact that any number divided by itself is 1, that any number multiplied by zero is zero, and so on. The specific symbols used for these operations, the numerical base the calculator uses, and other details don’t matter so long as the calculator can be used to do arithmetic, any more than a car’s speed changes depending on whether you refer to it in miles per hour, kilometers per hour, knots, or furlongs per fortnight.
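Here’s a toy demonstration of that last point (my own illustration, nothing from the book): a “calculator” whose operation buttons have been relabeled with arbitrary glyphs can be decoded purely by probing it with those identities, because the underlying arithmetic hasn’t changed.

```python
import operator, random

# An "alien calculator": its four operation buttons carry meaningless glyphs,
# but underneath, each one still performs ordinary arithmetic.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
glyphs = ["#", "@", "%", "&"]
random.shuffle(glyphs)
alien = dict(zip(glyphs, OPS.values()))      # the secret wiring, unknown to us

def press(a, glyph, b):
    """Use the calculator without knowing what its labels mean."""
    return alien[glyph](a, b)

# Decode the glyphs by probing with arithmetic identities:
# x / x = 1, x * 0 = 0, x + 0 = x (and x + x = 2x); whatever is left is minus.
decoded = {}
for g in glyphs:
    if press(7, g, 7) == 1:
        decoded[g] = "/"
    elif press(7, g, 0) == 0:
        decoded[g] = "*"
    elif press(7, g, 0) == 7 and press(7, g, 7) == 14:
        decoded[g] = "+"
    else:
        decoded[g] = "-"

print(decoded)   # recovers the hidden labeling, however the glyphs were shuffled
```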

Feser also applies his reasoning to Dawkins’s theory of memes:

If the competition between memes for survival is what, unbeknown to us, “really” determines all our thoughts, then we can have no confidence whatsoever that anything we believe, or any argument we ever give in defense of some claim we believe, is true or rationally compelling. For if the meme theory is correct, then our beliefs seem true to us, and our favored arguments seem correct, simply because they were the ones that happened for whatever reason to prevail in the struggle for “memetic” survival, not because they reflect objective reality. [p. 245]

This is reminiscent of Alvin Plantinga’s idea that since natural selection selected our senses for survival rather than for accuracy, then they can’t be trusted. That is, if I see a river in front of me, it’s merely because perceiving the current situation (whatever it might be) as a river helped my ancestors survive, and not necessarily because the current situation includes a river. Feser’s argument is similar, but applied to thoughts instead of senses.

https://www.youtube-nocookie.com/embed/hou0lU8WMgo?rel=0

This argument is technically correct, but less interesting than one might think: for one thing, we don’t need to speculate about whether our senses or thought processes are fallible: we know that they are. Every optical illusion tricks us into seeing things that aren’t there, and the psychological literature amply catalogs the ways in which our thoughts fail us (for instance, humans are notoriously bad at estimating probabilities). And for another, the best way to respond correctly to objects in the environment is, to a first approximation, to perceive them accurately.

If I may reuse my earlier illustration, imagine a person who thinks that the word “chair” refers to a yellow tropical fruit, the one that you and I call “banana”, and vice-versa. How long would it take this person to realize that they have a problem? If I invited them into my office and said, “take a chair”, they might look around for a bowl of fruit, but after two or three such instances, they’d probably realize that “chair” doesn’t mean what they think it does. On the other hand, it took me years before I realized that “gregarious” means “friendly” rather than “talkative”.

A clever writer can probably devise a dialog where “chair” can mean either “chair” or “banana”, but it would be difficult to do so, and would probably sound stilted. By comparison, it would be much easier to write a piece that makes sense whether you think that “gregarious” means “friendly” or “talkative”. And likewise, we can imagine an animal whose senses are mis-wired in such a way that it perceives a dangerous predator as a river, and has muscles and nerves mis-wired such that when it thinks it’s walking toward the river, it’s actually running away from the predator. But this is a contrived example, and unlikely in the extreme to be useful in the long run. A far more effective strategy (and one far more likely to evolve) is having some simple rules give the right answer 80% or 90% of the time. That is, to perceive the world accurately enough to survive in most plausible situations.

Feser and Plantinga are committing what’s been called the “any uncertainty implies great gobs of uncertainty” fallacy.

Series: The Last Superstition