
Do You Even Science, Frater?

The other day, I went to a Thomistic Society talk about Aquinas’s views on the Problem of Evil and other topics. At one point, the presenter casually mentioned that humans engage in self-destructive behavior, like alcoholism, self-mutilation, drug addiction, etc., while non-human animals don’t.

That made my [citation needed] sense tingle, so I looked around. Among other things, I found Animal models of self-destructive behavior and suicide:

Research on nonhuman primates has demonstrated that self-mutilation is a common reaction to extreme disruptions of parental caretaking in other mammalian species as well. For example, isolated young rhesus monkeys engage in self-biting and head slapping and banging (21). Analgesia is also common in self-destructive animals.

Or this non-scholarly page about the effects of drugs, including addiction, in animals such as horses, goats, and even bees.

So apparently this speaker simply wasn’t aware of self-destructive behavior in non-human animals. I don’t remember what her point was, so it might have been a minor thing, but still, it wasn’t true.


But this brought to mind the previous Thomistic Institute talk I went to: there, the presenter casually mentioned that humans engage in abstract reasoning, while animals don’t.

Again, this didn’t seem quite right. This study from 2007 involved teaching dogs to push a button when shown a set of pictures of dogs, and another button when shown a set of pictures of landscapes.

Interestingly, presentation of pictures providing contradictive information (novel dog pictures mounted on familiar landscape pictures) did not disrupt performance, which suggests that the dogs made use of a category-based response rule with classification being coupled to category-relevant features (of the dog) rather than to item-specific features (of the background).

Or this paper, entitled simply Concept Learning in Animals, whose abstract says:

We suggest that several of the major varieties of conceptual classes claimed to be uniquely human are also exhibited by nonhuman animals. We present evidence for the formation of several sorts of conceptual stimulus classes by nonhuman animals: perceptual classes involving classification according to the shared attributes of objects, associative classes or functional equivalences in which stimuli form a class based on common associations, relational classes, in which the conceptual relationship between or among stimuli defines the class, and relations between relations, in which the conceptual (analogical) relationship is defined by the relation between classes of stimuli. We conclude that not only are nonhuman animals capable of acquiring a wide variety of concepts, but that the underlying processes that determine concept learning are also likely to be quite similar.

No one will deny that humans can perform mental feats that non-human animals can’t, as far as we can tell. Other animals can’t play chess, prove mathematical theorems, or form complex sentences, as far as I know. But at the same time, the issue isn’t a black-and-white “humans can reason abstractly and animals can’t.”


Lastly, I’ve written at length about Thomist Edward Feser, and his ignorance of science from Newton on up.

Individually, each of these mistakes is just that: a mistake. Or ignorance: philosophers can’t be expected to be masters of nuclear physics or animal cognition. Or a simplification that glosses over a complex idea in order to make a broader point.

But collectively, I do see a pattern of Thomists being wrong on matters of science in a thousand small ways. That suggests either that they don’t bother to check whether their beliefs are true, and to correct their errors where possible, or else that they hold other beliefs that lead them to erroneous conclusions. And either way, if I can’t trust them on the small stuff, why should I believe them on the big stuff?


Cover of "The Last Superstition"
The Last Superstition: The Final Insult

Chapter 6: Irreducible teleology, cont.

Having excoriated biologists over the fact that popular science writers use terms like “purpose” and “blueprint”, Feser moves on to nonliving systems, in which he also sees purpose and intentionality. For instance, the water and rock cycles (I’d never heard of a “rock cycle” before, but okay):

The role of condensation in the water cycle, for example, is to bring about precipitation; the role of pressure in the rock cycle is, in conjunction with heat, to contribute to generating magma, and in the absence of heat to contribute to generating sedimentary rock; and so forth. Each stage has the production of some particular outcome or range of outcomes as an “end” or “goal” toward which it points. [p. 258]

Here, Feser implies that the water cycle is supposed to exist, and that condensation exists to further that goal. Likewise, of course you have to have pressure, otherwise how can you have magma? It seems as though he is projecting his opinions onto the world so hard that he can’t imagine that maybe water just does what water does, and that it’s only because the temperature on the surface of this planet oscillates within a certain range that water behaves in such an interesting fashion.

Basic laws of nature

Moving on to fundamental science, Feser graces us with a rather interesting idea of how minds work:

Mental images are vague and indistinct when their objects are complex or detailed, but the related concepts or ideas are clear and distinct regardless of their complexity; for example, the concept of a chiliagon, or 1000-sided figure, is clearly different from the concept of a 999-sided figure, even though a mental image of a chiliagon is no different from a mental image of a 999-sided figure. [p. 260]

I’m not quite sure what he’s trying to say, though the best spin I can put on it is that we have trouble imagining complex things clearly. I agree, and this means that we need to be careful when thinking about complex things, because we’re likely to overlook something.

But since Feser brings this up in the context of thinking about abstract things, I have to wonder. When he talks about the possibility of purely material minds, he sounds like someone who thinks that a DVD has to have little pictures on it; that if you put a CD close enough to your ear, you’ll hear the music on it. Maybe I’m wrong; but that’s the impression I get, especially after the bit in Chapter 4 where he seemed to think that thinking about triangles would have to involve part of your brain becoming triangular.

He goes on for a bit, arguing against David Hume and complaining about the “anti-Aristotelian ideological program” (p. 261) of modern science. Basically, he tells us, science cannot proceed without Aristotle, but scientists are fiercely opposed to him on ideological grounds. Probably because they just want to sin, or something. In fact,

Despite the undeniable advances in empirical knowledge made during the last 300 plus years, then, the work of the scientists who made those advances simply does not support the philosophical interpretation of those advances put forward by the proponents of the “Mechanical Philosophy” and the contemporary materialists or naturalists who are their intellectual heirs [p. 264]

See, scientists are smart people who have been very successful at figuring out how the universe operates, so successful that we now take things like nuclear weapons and GPS receivers for granted. But they’re not smart enough to figure out the implications of their work.

If you look around the Internet, you can find any number of religious figures or just plain cranks who are convinced that their holy book, prophet, or whoever predicted various facts long before scientists did. They usually do this by taking some vague or poetic passage in scripture, combining it with some scientific discovery, and interpreting the former to describe the latter. For example, this page on Islam and embryology explains that

“The three veils of darkness” [in the Quran] may refer to: (l) the anterior abdominal wall; (2) the uterine wall; and (3) the amniochorionic membrane

And this page explains that “[he that] stretcheth out the heavens as a curtain” in the Bible refers to cosmic expansion.

Likewise, in this chapter, Feser talks about scientists rediscovering the genius of Aristotle. But it’s also painfully obvious that the scientific revolution did not begin in earnest with Aquinas, but rather several centuries later. That, combined with the fact that science has been so wonderfully successful even though the average scientist probably couldn’t summarize Aristotle’s or Aquinas’s ideas, strongly suggests that those ideas are simply irrelevant to science.

It’s the moon, stupid

By this point, Feser thinks that he’s established that the millennia-old ideas of Aristotle, refined by Aquinas’s medieval insights, are correct. He bemoans the fact that they’ve fallen into obscurity:

But if Aristotle has, by virtue of developments in modern philosophy and science, had his revenge on those who sought to overthrow him at the dawn of the modern period, why is this fact not more widely recognized? One reason is the prevailing general ignorance about what the Aristotelian and Scholastic traditions really believed, what the actual intellectual and historical circumstances were that led to their replacement by modern philosophy in its various guises, and what the true relationship is between the latter and modern science. [p. 266]

The blame for the “general ignorance” part seems to land squarely on Feser’s shoulders. It’s up to him and his colleagues to educate the rest of us. But honestly, maybe ignorance is his ally: Feser’s exposition of Aristotle’s and Aquinas’s ideas makes it clear that they’re largely based on ignorance and superstition, and can be safely relegated to History of Ideas class, and ignored in everyday life.

He closes by quoting the proverb, “When the finger points at the moon, the idiot looks at the finger” (p. 267) as an analogy to the way objects “point to” things beyond themselves, but “the secularist” doesn’t realize this. Fittingly, he closes on an insult: “It’s the moon, stupid.” (p. 267)

Series: The Last Superstition

The Last Superstition: Great Gobs of Uncertainty

Chapter 6: The lump under the rug

In this section, Feser argues that the existence of the mind is incompatible with materialism. Not only that, but materialist explanations of mind often refer, if only implicitly or subconsciously, to aristotelian concepts.

But first, he has to dispel a misconception:

to say that something has a final cause or is directed toward a certain end or goal is not necessarily to say that it consciously seeks to realize that goal. […] Thus it is no good to object that mountains or asteroids seem to serve no natural function or purpose, because Aristotelians do not claim that every object in the natural world necessarily serves some function. [pp. 237–238]

As I understand it, this is like saying that a pair of glasses is for improving sight, but of course the glasses themselves can’t possibly be conscious of this.

This is indeed an important point to keep in mind, and it’s a pity that the next sentence is

What they do claim is that everything in the world that serves as an efficient cause also exhibits final causality insofar as it is “directed toward” the production of some determinate range of effects.

Yes, but pretty much everything is the efficient (or proximate) cause of something. The mountains and asteroids that Feser just mentioned are the efficient cause of certain photons being reflected from the sun into my eye. Their gravity also attracts me, though only in hard-to-measure ways. A mountain can affect the weather and climate around it, and depending on its orbit, the asteroid might be on its way to kill all life on Earth. Does this “production of some determinate range of effects” automatically mean that they have final causes? Are these final causes related to what they do as efficient causes? That is, if a star looks beautiful in a telescope, does that mean that it’s for looking beautiful? Or, to come back to an earlier example, would an aristotelian say that the moon orbits, therefore it’s for orbiting?

If so, then this reflects a childish understanding of the world, one where bees are there to pollinate plants, rain is there to water them, and antelopes are there to feed lions. If not, and if a thing’s final cause can be very different from its efficient cause (e.g., the moon orbits the Earth, and reflects light, but maybe its final cause is something else, like eclipses), then why bring it up?

The Mind as Software

Next, Feser considers the currently fashionable metaphor of the brain as a computer that processes symbols. Since I criticized him earlier for not understanding software, and for not even considering “Form” as a type of software, I was interested to see what he had to say.

First of all, nothing counts as a “symbol” apart from some mind or group of minds which interprets and uses it as a symbol. […] By themselves they cannot fail to be nothing more than meaningless neural firing patterns (or whatever) until some mind interprets them as symbols standing for such-and-such objects or events. But obviously, until very recently it never so much as occurred to anyone to interpret brain events as symbols, even though (of course) we have been able to think for as long as human beings have existed. [p. 239]

Here, Feser confuses the map with the territory: we can explain the brain at a high level by comparing it to a computer processing symbols. But symbols are only symbols if they’re interpreted as such by a mind. So neural firing patterns aren’t true according-to-Hoyle symbols, therefore checkmate, atheists!

This is like saying that the circadian rhythm is not a clock, because clocks have hands and gears.

Likewise, a little later, he writes:

No physical system can possibly count as running an “algorithm” or “program” apart from some user who assigns a certain meaning to the inputs, outputs, and other states of the system. [p. 240]

Again, Feser is paying too much attention to the niceties and details at the expense of the gist.

Imagine a hypothetical anthill. In the morning, the ants head out from the anthill, roughly at random, dropping pheromones on the ground as they do so. If one of the ants stumbles upon a piece of food, it picks it up and follows its trail back to the anthill. If its left antenna senses pheromone but the right one doesn’t, it turns a bit to the left; if its right antenna senses pheromone but its left one doesn’t, it turns a bit to the right. If both sense pheromone, it continues in a straight line. If we trace the biochemical pathways involved, we might find that the pheromone binds to a receptor protein that then changes shape and affects the strength with which legs on one or the other side of the body push against the ground, which makes the ant turn left or right.

We can imagine similar mechanisms by which other ants, sensing that one trail smells twice as strongly of pheromone (because the first ant traversed it twice), will prefer to follow that trail rather than wander at random.
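The antenna rule described above can be written down as a toy program. To be clear, everything here — the function name, the turn angle, the numbers — is my invention for illustration, not anything real ant neurochemistry computes:

```python
def steer(left_pheromone, right_pheromone, turn_degrees=15):
    """Toy version of the trail-following rule: turn toward whichever
    antenna senses more pheromone; go straight if they sense the same.
    Returns a heading change in degrees (negative = turn left)."""
    if left_pheromone > right_pheromone:
        return -turn_degrees
    if right_pheromone > left_pheromone:
        return turn_degrees
    return 0

# An ant drifting off to the right of the trail senses more pheromone
# on its left antenna, so it corrects back toward the trail.
correction = steer(left_pheromone=0.8, right_pheromone=0.2)
```

The point of writing it this way is exactly the one at issue: the same behavior can be described as “fancy chemistry” (receptor binding changing how hard each side’s legs push) or as a three-line algorithm, and the ants find their way home either way.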

These ants, of course, have no real brain to speak of. There’s no question of an ant being able to understand what a symbol is, let alone interpret it, let alone consciously follow an algorithm. All of the above is just fancy chemistry. And so Feser would, no doubt, say that the first ant is not following a “retrace my tracks” algorithm. Nor are the other ants following an algorithm to look for food where some food has already been discovered. Whatever it is that these ants are doing, it’s not an algorithm, because no one is assigning meaning to any part of the system.

But that doesn’t change the fact that the ants are finding food and bringing it back to the anthill. In which case, who cares if it’s a proper algorithm, or just something that looks like one to us humans?

Only what can be at least in principle conscious of following such rules can be said literally to follow an algorithm; everything else can behave only as if it were following one. [p. 241]

Feser then imagines a person who assigns arbitrary meanings to the buttons and display on a calculator (I like to think of a calculator whose buttons have been scrambled, or are labeled in an alien alphabet):

For example, if we took “2” to mean the number three, “+” to mean minus, and “4” to mean twenty-three, we would still get “4” on the screen after punching in “2,” “+,” “2,” and “=,” even though what the symbols “2 + 2 = 4” now mean is that three minus three equals twenty-three. [p. 242]

And likewise, if the pattern of pixels “All men are mortal” were interpreted to mean that it is raining in Cleveland, that would lead to absurd results.

What Feser ignores is that no one would use that calculator, because it doesn’t work. Or, at least, anyone who put three apples in a basket, then ate three of them, and expected to be able to sell 23 apples at market would soon realize that Mother Nature doesn’t care for sophistry.

If we had a calculator whose keycaps had all been switched around, or were labeled in alienese, we could eventually work out which button did what, by using the fact that any number divided by itself is 1, that any number multiplied by zero is zero, and so on. The specific symbols used for these operations, the numerical base the calculator uses, and other details don’t matter so long as the calculator can be used to do arithmetic, any more than a car’s speed changes depending on whether you refer to it in miles per hour, kilometers per hour, knots, or furlongs per fortnight.
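Here’s a sketch of that decoding process. Everything in it is hypothetical — the alien glyphs, and the simplifying assumption that we’ve already identified the operator keys and only need to decode the digits:

```python
import random

# A hypothetical scrambled calculator: the ten digit keys show alien
# glyphs A-J, and we don't know which glyph is which digit.
random.seed(1)
glyphs = list("ABCDEFGHIJ")
random.shuffle(glyphs)
glyph_to_digit = dict(zip(glyphs, "0123456789"))   # the hidden wiring
digit_to_glyph = {d: g for g, d in glyph_to_digit.items()}

def press(a_glyph, op, b_glyph):
    """Punch in <a> <op> <b> '=' and read the alien display."""
    a = int(glyph_to_digit[a_glyph])
    b = int(glyph_to_digit[b_glyph])
    result = a - b if op == "minus" else a // b
    return digit_to_glyph[str(result)]

# Arithmetic identities decode the display without a manual:
# x - x is always 0, and x / x is always 1 (for nonzero x).
x = digit_to_glyph["7"]   # peeking at the wiring only to pick a nonzero key
zero_glyph = press(x, "minus", x)
one_glyph = press(x, "divide", x)
```

With the glyphs for 0 and 1 in hand, repeated addition would let us label every other key. The labels never mattered; the arithmetic structure did, which is the point against Feser’s relabeled-calculator argument.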

Feser also applies his reasoning to Dawkins’s theory of memes:

If the competition between memes for survival is what, unbeknown to us, “really” determines all our thoughts, then we can have no confidence whatsoever that anything we believe, or any argument we ever give in defense of some claim we believe, is true or rationally compelling. For if the meme theory is correct, then our beliefs seem true to us, and our favored arguments seem correct, simply because they were the ones that happened for whatever reason to prevail in the struggle for “memetic” survival, not because they reflect objective reality. [p. 245]

This is reminiscent of Alvin Plantinga’s idea that since natural selection selected our senses for survival rather than for accuracy, then they can’t be trusted. That is, if I see a river in front of me, it’s merely because perceiving the current situation (whatever it might be) as a river helped my ancestors survive, and not necessarily because the current situation includes a river. Feser’s argument is similar, but applied to thoughts instead of senses.


This argument is technically correct, but less interesting than one might think: for one thing, we don’t need to speculate about whether our senses or thought processes are fallible: we know that they are. Every optical illusion tricks us into seeing things that aren’t there, and the psychological literature amply catalogs the ways in which our thoughts fail us (for instance, humans are notoriously bad at estimating probabilities). And for another, the best way to respond correctly to objects in the environment is, to a first approximation, to perceive them accurately.

If I may reuse my earlier illustration, imagine a person who thinks that the word “chair” refers to a yellow tropical fruit, the one that you and I call “banana”, and vice-versa. How long would it take this person to realize that they have a problem? If I invited them into my office and said, “take a chair”, they might look around for a bowl of fruit, but after two or three such instances, they’d probably realize that “chair” doesn’t mean what they think it does. On the other hand, it took me years before I realized that “gregarious” means “friendly” rather than “talkative”.

A clever writer can probably devise a dialog where “chair” can mean either “chair” or “banana”, but it would be difficult to do so, and would probably sound stilted. By comparison, it would be much easier to write a piece that makes sense whether you think that “gregarious” means “friendly” or “talkative”. And likewise, we can imagine an animal whose senses are mis-wired in such a way that it perceives a dangerous predator as a river, and has muscles and nerves mis-wired such that when it thinks it’s walking toward the river, it’s actually running away from the predator. But this is a contrived example, and unlikely in the extreme to be useful in the long run. A far more effective strategy (and one far more likely to evolve) is having some simple rules give the right answer 80% or 90% of the time. That is, to perceive the world accurately enough to survive in most plausible situations.

Feser and Plantinga are committing what’s been called the “any uncertainty implies great gobs of uncertainty” fallacy.


The Last Superstition: Back to the Cave

Chapter 5: Back to Plato’s cave

This last section of Chapter 5 is basically a long jeremiad against everything and everyone Feser doesn’t like, with paranoid rants about the motivations of those who prefer post-Thomistic philosophies:

More precisely, their desire to re-orient human life toward this world and reduce the influence of religion led the early modern thinkers to abandon traditional philosophical categories and to redefine scientific method so that reason could no longer provide religion with the support it had always been understood to give it, at least not in any robust way. [p. 221]

The sexual revolution:

Traditionally, sodomy has been classified together with murder, oppression of the poor, and defrauding a laborer of his wages as one of the four sins that “cry out to heaven for vengeance.” [p. 223]

I can’t help wondering why sodomy — an ill-defined category that traditionally includes at a minimum anal sex, but also often includes oral sex — “cr[ies] out to heaven for vengeance”. Who, exactly, is being wronged? Who needs to be avenged? (Obviously I’m not talking about anal rape, where the operative word is “rape”.)

The word “traditionally” is an appeal to antiquity, the idea that an idea is good because it is old. In 1860, in the US, one could have defended slavery on the grounds that it had always been practiced.

Feser ends the chapter with an appeal to common sense (boldface added):

When we get clear on the general metaphysical structure of reality – the distinction between actuality and potentiality, form and matter, final causality, and so forth (all of which are mere articulations or refinements of common sense, and thus on all fours with the ordinary man’s belief in what his senses tell him) – we see that the existence of God, the immateriality and immortality of the soul, and the natural law conception of morality all follow. [p. 228]

Again, if there’s one thing we should have learned from the past few centuries of scientific endeavor, it’s that what common sense and our senses tell us is often wrong: the earth orbits the sun; the tiny speck Betelgeuse is many times larger than our entire world; over 90% of all the matter in the universe is invisible and barely deigns to interact with us; heavy objects do not fall faster than light ones; objects in motion don’t just stop on their own; light beams sometimes behave like waves, and sometimes like ball bearings; two events aren’t simultaneous or non-simultaneous in an absolute sense.

If your metaphysics contradict physics, rather than explaining it, I’m pretty sure you’ve got a problem.


The Last Superstition: A Grab-Bag of Objections

Chapter 5: Universal acid

Here Feser continues his earlier theme, listing more alleged problems caused by modernism. This is a grab-bag of philosophical problems, and while a lot of them are interesting in and of themselves, for the most part they have little or nothing to do with atheism — New or otherwise — and seem to be included here primarily so that Feser can throw up his hands, declare these problems insoluble, and run back to aristotelianism. So I’ll be skipping a lot.

The problem of skepticism

In Aristotelianism, when the mind thinks about a thing, that thing’s essence exists in the mind. That is, when you think about a triangle, there’s triangularity in your mind. But if there’s no such matching of like to like (the universal triangularity impressing triangularity on your mind), how, Feser would like to know, can there be knowledge? Without universals, presumably there can be only representations.

I’m not sure I see a problem. This seems to be like asking how NOAA’s National Hurricane Center’s computer models can “be about” hurricanes without wind and rain in the data center.

Personal identity

In Feser’s view, a human being is a composite of soul and body, and a blastocyst is as much of a human being as Desmond Tutu or Terry Schiavo. But if we don’t start with these premises, then we have to figure out what constitutes a person. For instance, does the Star Trek transporter kill a person each time? (That is, it destroys one body and creates an identical one some distance away.) Various non-aristotelian approaches create paradoxes, or gray areas, or conclusions that Feser doesn’t like (e.g., that our lives have as much value as we give them), so they must be wrong.

Free will

In aristotelianism, there’s a significant difference between considered, voluntary action and involuntary action; between, say, proposing to your girlfriend after thinking about it for a year, and a hiccup (boldface added):

The formal and final causes of the action – that which gives intelligible structure to the movements – is just the soul considered as a kind of form, and in particular the activities of thinking and willing that are distinctive of the soul’s intellective and volitional powers. The action is free precisely because it has this as its form, rather than having the form, say, of an involuntary muscular spasm. [p. 208]

whereas under materialism,

human behavior differs in degree but not in kind from the behavior of billiard balls and soap suds. [p. 209]

This seems to be a case of mistaking the simplicity of the model for its purpose. That is, a person might say that the mind is ultimately deterministic, and bring up a model of a deterministic system that’s simple enough to be easily understood (billiard balls) by way of illustration. The other person thinks, “minds aren’t simple like billiard balls! This model must be wrong.”

But beyond this, I don’t see that aristotelianism really solves anything: Feser’s summary, above, seems to say that an action (like the decision to marry) is free if it has the Form of a free action. That sounds, well, arbitrary. How can we find out which actions have the Form of free actions? That is, how do we define “free action”? I’m sure there’s an interesting discussion to be had, but it’ll have to do with where to draw the boundary between “free” and “not free”, and I don’t see how casting this in terms of Forms or essences helps.

Natural rights

In Feser’s model, humans are rational animals by virtue of having human DNA, and we’ve all been given the same purposes: to know God, to reproduce, and so on. Thus, we have a right not to have those purposes interfered with.

But if you don’t start with Feser’s model, morality becomes messy and complicated. Not only that, but people come to different conclusions about what is and isn’t moral than they did in centuries past (Feser doesn’t say which, but I’m guessing he means gay rights). So he will have none of it.

Morality in general

This section boils down to, “How can we figure out what’s right and what’s wrong without being able to check our answers in the back of the book?” He throws in the usual conservative arguments about how if morality isn’t objective, then everything is just a matter of personal preference and whim:

Nor does [Hume] really have anything to say to a group of sociopaths – Nazis, communists, jihadists, pro-choice activists, or whomever – who seek to remake society in their image, by social or genetic engineering, say. [p. 213]

I like to point out that while the statement “life is better than death” is subjective — and you can find people who would disagree with it — the statement “the vast majority of people would rather live than die” is objective. And if we’re trying to come up with a set of rules that allow us to get along as best we can, then “don’t kill people” is a good one, since it’ll line up with their desires 99.999+% of the time, and they in turn won’t try to kill you back, which almost certainly lines up with your own desires.

Yes, a lot of the details, and even many of the broad strokes, are messy and uncertain. But I think we can all see a difference between, say, life under the British Parliament and life under the Taliban.

Well, maybe not all of us:

This attitude [acceptance of the “social contract” — arensb] has largely prevailed, though by no means completely, which is why modern Western civilization is only largely a stinking cesspool, and not yet entirely one. Give the Humeans and contractarians time though. [p. 215]

Thank you for that ray of sunshine.


The Last Superstition: The Essence of Opium

Chapter 5: Feser v. Molière

In Molière’s play “Le Malade imaginaire” (The Imaginary Invalid, or The Hypochondriac), there’s a scene between an oh-so-pretentious doctor and an equally pretentious medical student. The doctor asks the student, in dog Latin, why it is that opium causes sleep. The student replies that opium has “virtus dormitiva” (Latin for “sleeping power”), which has the power to cause sleep. In other words, it causes sleep because it causes sleep. But if you say it in Latin, it sounds like an explanation.

Feser explains why this is an unfair characterization of Scholastic thought:

whatever the specific empirical details about opium turn out to be, the fundamental metaphysical reality is that these details are just the mechanism by which opium manifests the inherent powers it has qua opium, […] The empirical chemical facts as now known are nothing other than a specification of the material cause underlying the formal and final causes that define the essence of opium. [p. 181]

In other words, opium causes sleep because it has such-and-such chemical characteristics, and these characteristics in turn are just the implementation of opium’s power to cause sleep, a power that is part of opium’s essence. That’s part of what makes it opium; opium without somniferous powers wouldn’t be opium.

According to Wikipedia, opium is latex derived from opium poppies. One of its most important components is morphine, originally named after Morpheus, the Greek god of dreams, for its sleep-inducing properties. As far as I know, it works by binding to opioid receptors in the brain and triggering a cascade of biochemical reactions in the body, one effect of which is sleep.

The important part here seems to be that it binds to specific receptors in the brain. That is, some part of the morphine molecule has the correct shape to align itself with its matching molecules in the brain. Even if this explanation isn’t quite right, I hope it’s close enough for jazz.

So let’s imagine that we’ve managed to extract the morphine from a bottle of opium, and we’ve put some into a brownie or other foodstuff, so that if someone eats the laced brownie, they’ll fall asleep.

I think Feser would say that the bottle contains corrupted or denatured opium: it still has “sleep-inducing” as part of its essence, but due to tampering by humans, this feature can no longer be expressed (in the same way that a brain-damaged person retains the essence of a rational animal). The morphine is really just the implementation of opium’s sleep-causing essential property, and we’ve broken this implementation.

And on the other hand, we have a corrupted brownie, or at least an altered one: even if there’s nothing in the brownie essence about causing or preventing sleep, we now have a brownie that causes sleep. The sleep-neutral essence remains the same, but the implementation now induces sleep.

So by moving a chemical, morphine, from A to B, we’ve “moved” the sleep-causing property from A to B, regardless of what their respective essences are. So “essence” doesn’t seem to be a useful concept here. If we want to know whether some entity X will cause sleep (and that’s an important part of the essence of opium, remember), we’re better off asking whether X contains morphine than whether X has a sleep-causing essence.
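The argument can even be put in computational terms: if our prediction of sleep-induction tracks composition rather than a labeled essence, then moving the chemical moves the prediction, and the essence label does no work. Here’s a toy sketch of that point; the entity names and the simplistic “contains morphine” rule are mine, purely for illustration:

```python
# Toy model of the brownie argument: prediction tracks composition,
# not the labeled "essence".

def causes_sleep(ingredients):
    """Predict sleep-induction from composition alone."""
    return "morphine" in ingredients

opium = {"essence": "opium", "ingredients": {"morphine", "codeine", "latex"}}
brownie = {"essence": "brownie", "ingredients": {"flour", "sugar", "cocoa"}}

# Move the morphine from the opium to the brownie.
opium["ingredients"].discard("morphine")
brownie["ingredients"].add("morphine")

# The "essence" labels are untouched, but the prediction follows the chemical:
print(causes_sleep(opium["ingredients"]))    # False
print(causes_sleep(brownie["ingredients"]))  # True
```

Note that `causes_sleep` never consults the `"essence"` field at all; nothing about the prediction would change if we relabeled the brownie “opium” or vice versa.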

How, exactly, does “essence” help us figure out how the world works? How do we determine something’s essence?

What makes a human being a rational animal, on the Aristotelian view, is not that he or she actually does or can exercise rationality at some point or other, but rather that an inherent potential for the exercise of rationality is actually in every human organism in a sense in which it is not in a turnip, or a dog, or a skin cell. […] And yet an immature or damaged human being is still a human being, which entails that it has the form of a human being and thus the potentials inherent in that form, whether or not they are ever actualized. [p. 182]

I think we can all agree that the term “human being” covers a wide variety of entities, including men, women, infants, centenarians, and much variation besides. And we can also, I think, agree that a bundle of HeLa cells is not a human being, even though each such cell has human DNA, and traces its ancestry back to one specific person who was unquestionably human. That is, some distinctions matter, and others don’t: there are many differences between Anne Frank and Nelson Mandela, but they’re small enough that both of them count as full-fledged human beings. The differences between Henrietta Lacks and a HeLa cell, on the other hand, seem big enough that it’s worth having different terms for the two. Or, as Feser would probably say, Henrietta Lacks and HeLa cells have different essences. Multicellularity versus unicellularity, the presence or absence of individual organs: these seem to make this a good joint at which to carve nature, to use Plato’s phrase.

Feser seems to think that nature is all joints; that everything falls into one category or another, and that these categories are natural and objective. That’s why he’s adamant that a newly-fertilized egg is as much of a human being as a thirty-year-old woman. He doesn’t seem to accept that we humans ultimately decide where we want to draw boundaries between categories, or even whether we want to draw boundaries at all. But if natural, objective boundaries were there, presumably there wouldn’t have been any argument over whether Pluto is a planet. Instead, astronomers agreed on the physical characteristics of Pluto and the other planets — their mass, size, position, velocity, sphericity, chemical composition (approximate), and so forth — and disagreed over which criteria ought to be used to classify something as a planet.

So yes, there are joints at which to carve nature, but they often depend on what we’re trying to do: if you were a pet store clerk and had a blind kitten – an entity that’s just like an ordinary kitten, aside from being blind – this one difference seems small enough that you could still find someone to adopt it. But if you had an entity that’s just like an ordinary parrot, aside from being dead, that one difference seems much more of a deal-breaker.

https://www.youtube-nocookie.com/embed/4vuW6tQ0218?rel=0

Series: The Last Superstition