Who Needs Morals, Anyway?
The most-often-asked question when debating morality with theists is, “but where do you get your morals?” Of course, if the theist says “I get my morality from the Vedas/Quran/Bible/Dianetics”, that doesn’t help, since it just raises the question that Matt Dillahunty posed at his debate at UMBC: let’s say some being comes along and says, “I am a god. Here’s a book with my moral system”, then so what? How do we decide whether the system in the book is any good?
I thought I’d step back for a moment and ask, what if there were no morals?
Maybe there are no rules, or no one to give them. Maybe there are rules, but nobody knows them. Maybe the rules are known, but they’re ignored, and there is no mechanism for enforcing them, not even a twinge of guilt. What then?
I don’t think anyone has any trouble imagining this sort of world: theft and lying are rampant, people will kill each other over a can of beans and not feel remorse. In fact, there wouldn’t be any cans of beans, because the industry required to produce them couldn’t exist without some kind of stable society and the ability to form long-term associations. A world where you’re constantly looking over your shoulder, lest your own child stab you in the back.
Okay, so this vision may not be accurate. Maybe some combination of game theory and psychology can show that there might be amoral societies where life doesn’t suck as much as what I described.
But I think it’s safe to say that the vision of a world without morals that I described above, or the one that you imagined, represents our fear of what would happen without some sense of morality.
If you’re with me so far, then presumably you’ll agree that morality is a way of avoiding certain Bad Things: living in fear, being killed or seeing your loved ones killed, and so on; and also of getting certain Good Things: establishing trust, assuring some level of stability from day to day, and so forth.
We may not agree on everything. You might want security cameras on every street corner, to make the risk of being robbed as small as possible, and I might feel that not being watched all the time is worth the occasional mugging. But if we can agree in broad outline that certain outcomes (like being killed) are bad, and others (like knowing where our next meal is coming from) are good, then morality reduces to an engineering problem.
That is, it’s simply(!) a matter of figuring out what kind of world we want to live in, what rules will allow us to get along, and how to get there.
Obviously, this is a thorny problem. But nobody said this was going to be easy. Well, nobody who wasn’t trying to sell you something. As is the case with every engineering project ever, not only are there conflicting requirements, but they change over time. Everyone wants to put their two cents in, and everyone thinks their personal pet cause is the most important one of all. Finding a solution requires political and diplomatic negotiation, and convincing people to give up something in order to strike a deal. It’s enough to make your head spin.
But this strikes me as a huge problem, not an intractable one. We can tract this sucker. We have enough history behind us, and enough data collection methods, that we can see what works and what doesn’t, which sorts of societies are worth living in and which aren’t, and try to figure out how to get where we want.
Saying “I get my morals from an old book” is a lazy cop-out. It’s the response of someone who doesn’t want to look at the problem, let alone try to solve some part of it. And if you’re not going to help, the least you can do is stay out of the way of those who are trying to fix things.
You’ve touched tangentially on one of the religious apologists’ scare tactics: that without god-ordained morals, society would disintegrate into squalid anarchy, ergo we need religion. But surely this argument is self-refuting, as it invokes our instinctive understanding that we need some sort of cooperative system to make life bearable. Since we each know what sort of things make us happy, and there’s a fair bit of commonality around that across humanity, there’s no obvious reason we can’t cut out all these gods, priests and other middlemen, and just hash out some sort of basic let’s-get-along agreement on rights, obligations and behaviours.
Which is more-or-less what secular democracy is, or aspires to be.
This in turn goes back to the Euthyphro dilemma: is something good because the gods say so, or do the gods say that X is good because X is good on its own? If it’s the former, then murder or rape would be good if God said so; if the latter, then God’s just relaying a lesson, but if we can figure out what’s good on our own, then we don’t need God.
Any time there’s a discussion of secular morality, someone is bound to bring up the example of the psychopath who takes delight in tormenting people. The implication is that everyone’s desires are different and mutually-incompatible, and that no opinion is better than any other. But this ignores the fact that the vast majority of people do, in fact, want certain things, like health, safety, and the ability to do what they want. And people don’t put the same value on everything: while I might, in principle, want to be able to go out and kill someone, I’ll gladly sacrifice that freedom in exchange for an assurance that I won’t be killed myself. It’s things like this that allow us to get a hold on the problem.
One problem discussed of late (e.g., in Sam Harris’s The Moral Landscape) is trying to come up with “objective” reasons why one situation is “better” than another, in a way that doesn’t run afoul of Hume’s is-ought problem. On a pure relativist view, all states are considered indistinguishable. However, I think this is wrong. Clearly, if I am miserable, I will try to make myself happy, but if I am happy, I will simply try to stay happy. Thus Happiness is a more stable state than Misery. If we have a large assemblage of people, there will always be instability as long as some are Miserable: all else being equal, the system will tend towards an aggregate state where everyone is reasonably Happy. Therefore, there is an objective difference between universal Happiness and any state which contains some admixture of Misery.
Note, I am not attempting to show that achieving a stable state is a moral good, only that there is an objective way to distinguish among the possible states of the system, i.e., that there exists a unique state objectively different from all the others. Granted, my model is abstract and simplistic: in the real world, individuals striving for their particular Happy Place frequently conflict with one another (often due to resource limitations). From the need to resolve such competing interests arises much of meta-ethics and politics.
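The stability argument above can be illustrated with a toy simulation. This is only a sketch of the abstract model as stated, not a claim about real societies: every name and parameter here (the number of agents, the escape probability, the step count) is an assumption invented for illustration. Each Miserable agent tries to become Happy, succeeding with some probability per step, while Happy agents simply stay Happy. Because Happiness is an absorbing state and Misery is not, the population drifts toward universal Happiness, which is what makes that one aggregate state objectively distinguishable from all the others.

```python
import random

def simulate(n_agents=100, p_escape=0.1, steps=200, seed=42):
    """Toy model of the stability argument.

    Misery is unstable: each Miserable agent becomes Happy with
    probability p_escape per step. Happiness is absorbing: a Happy
    agent never reverts. Returns the count of Happy agents per step.
    (All parameters are illustrative assumptions, not data.)
    """
    rng = random.Random(seed)
    happy = [False] * n_agents  # everyone starts out Miserable
    history = []
    for _ in range(steps):
        # Miserable agents try to improve their lot; Happy ones stay put.
        happy = [h or (rng.random() < p_escape) for h in happy]
        history.append(sum(happy))
    return history

counts = simulate()
print("Happy after 1 step:", counts[0])
print("Happy at the end:  ", counts[-1])
```

The count of Happy agents never decreases, so the all-Happy state is the only one the system settles into; adding conflicting interests or resource limits (as the caveat above notes) is exactly what would break this simple monotonic picture.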