The Ken Ham vs. Bill Nye Debate

I watched the debate between Bill Nye “The Science Guy” and Ken Ham, president of Answers in Genesis, the outfit that runs the Creation Museum in Kentucky.

When I heard that Nye had agreed to the debate, I thought it was a bad idea, for all the usual reasons, and in particular that it would give creationism too much credibility: if you put Neil deGrasse Tyson on stage with someone who thinks we can travel to Mars by growing pixie wings, the latter has a lot more to gain than the former. Pixie-wing guy gets to brag that he discussed issues with a prominent scientist, whereas Tyson has to admit that he wasn’t allowed to laugh pixie-wing man out of the room.

And so it was last night. A man who basically believes that, as Robin Ince put it, “Magic Man done it!” got to share a stage with a man who has mountains of real-world evidence behind his assertions.

Having said that, it didn’t turn out as badly as I feared. Not so much because Nye did well, though for the most part he did. Rather, because Ken Ham did a pretty good job of explaining what young-earth creationism is: it has nothing to do with evidence (he said in the Q&A that there was nothing that could change his mind) and everything to do with believing a particular interpretation of the Bible.

It’s traditional to say that no one’s mind is ever changed by such debates, but that’s not always the case. I don’t know how many people were on the fence last night. But if any of them didn’t know what creationism was before, they do now. As stealth-creationist Casey Luskin puts it:

People will walk away from this debate thinking, “Ken Ham has the Bible, Bill Nye has scientific evidence.”

I haven’t done an extensive search, but the consensus seems to be that Nye won the evening. Yes, that’s what you’d expect from sites like Pharyngula or Friendly Atheist or Daily Kos, but Uncommon Descent and Evolution News & Views seem to agree as well. Charisma News doesn’t have any comments, gloating or otherwise. The Blaze’s comments seem about evenly split between “Ham won” and “Nye won”; given its readership, I would’ve expected it to tilt much farther toward Ham’s side.

I’m also surprised at how big a deal this was. I’ve seen plenty of these sorts of debates over the years, but typically they don’t interest anyone except the sorts of wonks who actually follow this stuff. But this one was streamed live on CNN, and covered in the Washington Post and on NPR. So it’s possible that a lot of people who haven’t thought much about creationism have now been introduced to it, and hopefully shown that it’s not science, not even close.


Paul Taylor’s Fickle Exactness

Paul Taylor, who helps Eric Hovind run the family misinformation mill in the absence of Eric’s father Kent, accuses those of us who think Noah’s Ark was just a story of not having done our homework:

Question 1: How did they all fit on the boat and who put them in there? Don’t forget that there were two of each specie(sic) (male & female) 

[…] The questioner makes a more serious error, however, by not actually reading the Bible. If he had read the account in Genesis, then he would have realized that the biblical account does not even refer to “species.” Instead, it refers to kind. The Hebrew word for kind is mîn. For this reason, creation biologists have started to use their own technical term for this grouping of creatures—baramin. The Hebrew bara means “created,” so baramin is a created kind.

Got that, scoffers? Specific words have specific meanings, and unless you’re careful to use just the right word, you’re arguing against a straw man!

For example:

Question 4: Didn’t Noah have to wait for many years to get the snail on-board?

Noah did not take invertebrates onto the Ark, only animals with lungs (Genesis 7:15). Invertebrates can survive such conditions.

Spineless, lungless, eh, what’s the dif’?

Anatomical snail diagram. Source: Wikimedia Commons.
Aside from the absence of a spine, note the presence of a rather large lung, which Taylor says doesn’t exist.

Of course, that’s just a diagram. That snail lung could be as fictitious as dragons and unicorns. How about a photo?

Snail lung. Photo by salyangoz, from here.

Yeah, but that’s ‘shopped. I can tell by the pixels. Pixels are scoffers. Well-known fact, that.

Who Needs an Ultimate Source of Authority?

Over at Creation Today, Eric Hovind (son of Kent “Dr. Dino” Hovind) and Paul Taylor have a video entitled What Is Your Ultimate Source of Authority?. The blurb says,

Paul and Eric welcome guest Jay Seegert of the Creation Education Center to discuss the importance of world-views, historical science versus observational science, and the importance of the authority of Scripture.

As much as I love each and every one of you, I couldn’t bring myself to watch the video to critique its specific points for you, so what I say below may not have any bearing on their actual positions. If it helps, imagine I’m having a conversation with a fundie sockpuppet that bears only the most fleeting resemblance to any person or event, living or dead.

But presumably the point is that people are unreliable, observations are unreliable, historical records are unreliable, chains of reasoning are unreliable, and so you need some kind of pole star to guide you. And, of course, the only reliable guide is the word of God because we’ve made up our minds that God never lies; and that the Bible is the word of God because we’ve made up our minds that it is. QED.

But what if the Bible isn’t reliable? What if there aren’t any reliable pole stars by which we can unambiguously gauge the truth or falsity of a proposition? Would that mean that we can’t know anything? Do we, in short, need an ultimate source of authority?

Consider murder mysteries, an entire literary genre in which the story typically unfolds without a single 100% reliable witness. Any of the suspects might or might not be lying; any given clue may or may not have been planted; anyone might be concealing information, covering for someone, or lying for some other reason. And yet the detective usually manages to figure out whodunit.

The thing is that just because something isn’t 100% reliable doesn’t make it absolutely unreliable. The GPS unit in your car is only reliable to something like 7 feet (and it was worse back when they had Selective Availability turned on), so it may not be able to tell you whether you’re in the northbound or southbound lane, but it can tell you whether you’re in Washington or Baltimore. Weather forecasts are often wrong, but if you consistently bet even money on tomorrow’s forecast being wrong, you’re going to lose money.

You could, of course, ask how we can know that the weather report was wrong. For all we know, meteorologists are always right and it’s only our lying eyes that tell us it’s raining on a day that was supposed to be sunny. Except that when we see rain, we usually have multiple lines of evidence: we can hear the rain, feel it on our skin, hear from friends who also think it’s raining, etc. So you have a cluster of information sources (sight, hearing, touch, friends) that confirm each other, and one outlier (the weather report).
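To put rough numbers on that (these figures are my own invention, purely for illustration), suppose each independent check (sight, hearing, touch, a friend’s report) is wrong a full 10% of the time. The chance that all of them mislead you at once is minuscule:

# Toy sketch, not from the post: invented reliability figures.
p_wrong = 0.10                  # assume each source is wrong 10% of the time
independent_sources = 4         # sight, hearing, touch, a friend's report
print(p_wrong ** independent_sources)   # 0.0001 -- one chance in ten thousand

So even a handful of mediocre but independent sources that agree with each other are collectively far more trustworthy than any one of them alone.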

As we grow up and observe the world around us through our senses and other sources of information, we can figure out how reliable those sources are, and under which conditions. For instance, if it’s broad daylight and I don’t see a cat in front of me, it’s a safe bet that there’s no cat in front of me; if it’s dark, then the fact that I don’t see a cat is a far less reliable indicator of the absence of cat (sorry about your tail, kitty!).

In fact, we can look at the scientific method as an ongoing search to figure out which observations are reliable and which ones aren’t, one that has so far come up with hundreds or thousands of Ways of Being Wrong. All the business with lab coats and double-blind studies and such is secondary, in service of the primary goal of avoiding Ways of Being Wrong.

Everything I’ve said above also applies if one of our sources of information is 100% reliable. If, say, the Bible as interpreted by Eric Hovind were absolutely correct in all cases, we should be able to figure it out by comparing it to other sources of information that we’re pretty sure are pretty reliable, like scientific observation. But unfortunately for him, we have far too many cases where multiple independent lines of evidence (such as radiometric dating, dendrochronology, and historical records) agree with each other, and disagree with the Bible. That’s not what we’d expect to see if the Bible is 100% reliable and scientific investigation is 95% reliable.

But my broader point is that we don’t need to assume that there are any 100%-reliable sources of information or authority, so Hovind’s and Taylor’s question is premature: first we would need to establish that there is an ultimate source of knowledge at all. It’s also malformed: the word “your” implies (with the caveats noted above) that he uses the Bible, and that if I don’t, then I’m wrong. If the Bible really were the reliable source of information he thinks it is, he should be able to demonstrate as much. The fact that Hovind isn’t taken seriously even by a majority of other Christians tells me that he still has a lot of work to do in that regard.

I’d Rather Have a Long List of Scary Warnings than Nothing at All

I recently participated in a conversation—or maybe I’m conflating two or more conversations, but no matter—in which my interlocutor said that she prefers alt-med natural remedies because mainstream drugs all have a long list of scary potential side effects.

But when I asked whether alt-med drugs actually lower cholesterol or help prevent heart attacks or whatever they claim to do, she said that people who sell alternative medicines tend to avoid making medical claims. They’ll say the product “enhances well-being” or some such, but not “this product helps regulate LDL”.

Because what happens is this: if you make a specific claim about physiological effects or the like, that’s a medical claim, and the FDA expects you to back it up. So Pfizer comes along and says, “this new drug, XYZ, improves blood-clotting.” The FDA says, “Oh, yeah? Show me.” And so Pfizer performs studies, or cites independent studies, that show that yes, as a matter of fact, patients who receive XYZ tend to clot better than patients who don’t, even after taking into account other possible explanations, like luck or the placebo effect. And the FDA says, “All right, you’ve made your case. You can claim that XYZ improves blood-clotting in your advertisements.” At least, that’s how we hope it goes.

Unfortunately, the world is complicated, and it’s never as simple as “take this drug and you’ll get better.” Different people have different bodies and react to things differently—for instance, I have a friend who doesn’t drink caffeine because it puts him to sleep. So at best you’ll have “take this drug, and it’ll most likely help, but it might not do anything.” More often, you get a drug that does what it’s intended to do in the majority of cases, but also has a list of possible, hopefully rare, side effects. But the more participants in the study (which is good), the greater the chance that one of them will have a heart attack or something else that can plausibly be attributed to the drug being studied. So the Scary List O’ Adverse Effects grows.
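To see why a bigger trial practically guarantees a longer scary list, here is a toy calculation; the baseline event rate is an assumption I made up, not a figure from any real trial. Even if the drug causes nothing at all, the odds that somebody in the trial has a serious event climb quickly with the number of participants:

# Toy numbers, purely illustrative.
baseline_rate = 0.005   # assumed chance per participant of a serious event unrelated to the drug

for participants in (100, 1_000, 10_000):
    # Probability that at least one participant has such an event anyway.
    p_at_least_one = 1 - (1 - baseline_rate) ** participants
    print(f"{participants:>6} participants: {p_at_least_one:.1%}")
# Roughly 39%, 99%, and essentially 100%, respectively.

Every one of those events has to be investigated and, if it can’t be ruled out, reported, whether or not the drug had anything to do with it.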

So yeah, traditional herbal remedies that don’t have words like “vomiting” or “stroke” on the label look appealing by comparison. But that’s only because the people selling the herbs aren’t required to test them, or to publish the negative results. If someone out there did make a specific claim, like “echinacea helps relieve flu symptoms”, and the FDA said “Oh, yeah? Show me”, and they showed ’em, and ran tests and studies and such, there would almost certainly be some adverse side effects to report. If you’re not seeing any, then either someone’s hiding them, or else no one’s looked for them.

In the real world, everything has problems. Saying you prefer alternative remedies to conventional medicine because they don’t come with a scary list of adverse effects is like getting your financial advice from a psychic instead of an investment banker because, instead of scary disclaimers about lawsuits and patents and the possibility of losing all your money, she just has the friendly statement “For entertainment purposes only.”

Lens Flare in the Eye of the Beholder

We’re all familiar with lens flare, those circles of light that appear in a photo or video when the camera points too close to the sun. When the scene is too bright, light bounces off of camera parts that it shouldn’t, and reflections of the inner workings of the lens show up in the picture. (Paradoxically, even video games often include lens flare, because we’re so used to seeing the world through a camera that adding a camera defect is seen as making the scene more realistic, even though we’re supposedly seeing it through the protagonist’s usually-organic eyes.)

But still, there are people who get taken in by it. That is, they mistake something that appears in the photo because of a camera defect for something that’s actually in the scene.

This happens quite often, actually: people looking for evidence of aliens (I mean people who thought the face on Mars was an artificial construct, not the SETI institute people) will blow up or process an image until the JPEG artifacts become obvious, and then claim that these artifacts are alien constructs. Ghost hunters have been known to do the same thing with audio, claiming that MP3 lossy-encoding artifacts are evidence of haunting.

The common thread here is that these people are using their instrument (camera, audio recorder, etc.) in ways in which it’s known to be unreliable. Every instrument has limitations, so the best thing to do is to learn to recognize them so that you can work around them. If you see a bright green star in your photo of the night sky, check other photos taken with the same camera: if the star appears in different places in the sky, but always at the same x,y coordinates on the photo, then it’s likely a dead pixel in the camera, not a star or an alien craft.
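Here is roughly what that check looks like in code. This is my own sketch, not anything from the post: it assumes the Pillow imaging library, and the file names and brightness threshold are placeholders.

# Illustrative sketch of the stuck-pixel check; requires Pillow.
from PIL import Image

def bright_green_spots(path, threshold=250):
    """Return the (x, y) coordinates whose green channel looks suspiciously hot."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, height = img.size
    return {(x, y)
            for x in range(width)
            for y in range(height)
            if pixels[x, y][1] >= threshold}

# Photos of different parts of the sky: a real star lands at different
# coordinates in each frame, but a stuck pixel shows up at the same x,y every time.
photos = ["sky_001.jpg", "sky_002.jpg", "sky_003.jpg"]   # placeholder file names
suspects = set.intersection(*(bright_green_spots(p) for p in photos))
print("Probable stuck pixels:", suspects)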

But if this applies to instruments like cameras, JPEG and MP3 files, and so on, shouldn’t the same principle apply to our brains, which are after all the instrument we use to figure out what the world is like? What are the limitations of the brain? Under what circumstances does it give us wrong answers? And just as importantly, can we recognize those circumstances and work around them?

Yes, actually: every optical illusion ever exploits some problem in our brains, some set of circumstances in which they give the wrong answer.

The checker shadow illusion is among the most compelling ones I know. No matter how long I look at it, I can’t see squares A and B as being the same color. I accept that they are, because every technique for checking, be it connecting the squares or examining pixels with an image-viewing tool, says that they’re the same color. Yes, in this situation, I trust Photoshop more than my own eyes.
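If you want to run the pixel check yourself, a few lines of Python will do it. This is only an illustration: it assumes the Pillow library, that you have saved a copy of the illusion as checkershadow.png, and the sample coordinates are placeholders you would need to read off your own copy of the image.

# Illustrative only; file name and coordinates are placeholders.
from PIL import Image

img = Image.open("checkershadow.png").convert("RGB")
a = img.getpixel((130, 200))   # a point inside square A
b = img.getpixel((200, 280))   # a point inside square B
print("A:", a, "B:", b)        # the two RGB triples come out the same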

There are also auditory illusions, tactile illusions, and probably others.

So if we can’t always trust our eyes, or our ears, or our fingertips, why should we assume that our brain, the organ we use to process sensory data and make sense of the world around us, is infallible? It seems silly.

In fact, it’s beyond silly: it’s demonstrably wrong: stage magic is largely based on flaws in the mind. The magician picks up a ball with his left hand, moves his left hand toward his right hand, then away, and shows you a ball in his right hand. You then assume (perhaps incorrectly) that he showed you the same ball twice, and that his left hand is now empty. Gary Marcus talks a lot more about the kludginess of the brain in his book, Kluge: the Haphazard Construction of the Human Mind.

But the bottom line is that if we’re serious about wanting to figure out what the world is like, we need to be aware of the limitations of our equipment. This includes not only cameras and recorders, but also eyes and brain.

Catholic Church 99 44/100% Pure

BillDo has a post in which he plays down the Catholic priesthood’s image problem:

Catholic League president Bill Donohue comments on the findings of the 2011 Annual Report on priestly sexual abuse that was released by the bishops’ conference; the survey was done by a Georgetown institute:

The headlines should read, “Abuse Problem Near Zero Among Priests,” but that is not what is being reported.

According to the 2011 Official Catholic Directory, there are 40,271 priests in the U.S. The report says there were 23 credible accusations of the sexual abuse of a minor made against priests for incidences last year. Of that number, 9 were deemed credible by law enforcement. Which means that 99.98% of priests nationwide had no such accusation made against them last year. Nowhere is this being reported.

If that’s his standard of purity, then I’m sure Bill would have no problem drinking a glass of 99.98% water and only 0.02% urine, right?

The thing is that very few men in general are child abusers. The question (or one question) is, does the Catholic clergy contain more child abusers than the population at large?

I wasn’t able to quickly find child-abuse statistics for the United States, but I did find the FBI’s Uniform Crime Reporting statistics for violent crime in 2010, which show an aggregate of 27.8 forcible rapes per 100,000 inhabitants. The FBI defines “forcible rape” as:

The carnal knowledge of a female forcibly and against her will. Rapes by force and attempts or assaults to rape, regardless of the age of the victim, are included. Statutory offenses (no force used—victim under age of consent) are excluded.

So the numbers are not directly comparable: the report Donohue is quoting concerns itself only with sexual abuse of minors, while the FBI’s number covers all rape. The FBI’s 2010 number excludes sexual abuse of males, while BillDo emphasizes that in the report he’s quoting, “almost all the offenses involve homosexuality”. And BillDo calculates the rate per offender while the FBI counts the rate per victim, which means that BillDo’s number tends to undercount priests who abused multiple victims, compared to what the FBI counts.

Having said that, BillDo’s figure of 9 credible accusations and 40,271 priests works out to 22.3 per 100,000, compared to the FBI’s 27.8 per 100,000. So the number of pedophile priests seems to be in the same ballpark as the number of rapists in the US as a whole. That seems pretty bad, especially for a group that presents itself as the guardians of morality.
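For what it’s worth, here is the arithmetic behind those two figures, keeping in mind the caveats above about the rates being computed on different bases:

# BillDo's own numbers: credible accusations per 100,000 priests.
credible_accusations = 9
priests = 40_271
print(round(credible_accusations / priests * 100_000, 1))   # 22.3

# FBI UCR 2010 forcible-rape rate per 100,000 inhabitants, quoted above.
fbi_rate_per_100k = 27.8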

BillDo also ignores, as usual, that the Catholic church’s problem is not so much one of having rapists in its ranks — any large organization is bound to have some — but of covering up its members’ crimes. The abuse itself can be blamed on individual priests, sure. But the coverup is a problem for the organization.

What’s Three Orders of Magnitude Among Friends?

(Alternate title: “Numbers Mean Things”.)

The increasingly-irrelevant Uncommon Descent blag had a post today, commenting on an article in Science News.

Right now, UD’s post is entitled “Timing of human use of fire pushed back by 300,000 years”, but when it showed up in my RSS reader, it was “Timing of human use of fire pushed back by 300 million years“. This mistake survives in the post’s URL:
http://www.uncommondescent.com/human-evolution/timing-of-human-use-of-fire-pushed-back-by-300-million-years/

From skimming the Science News article, it looks as though a new study found evidence of fire being used one million years ago, pushing back the earliest-known use of fire by 300,000 years. So presumably the previous record-holder was 700,000 years ago.

The author at Uncommon Descent reported the 300,000-year difference as “300 million years”. But hey, what’s a factor of 1000 between friends?

To illustrate, imagine a student in school in 2012, writing a report about, say, e-commerce. At first, she dates the origin of e-commerce to 1994, when Amazon.com was founded. But upon further investigation, she finds an example of a company selling stuff on the Internet in 1987 and revises her report to say that e-commerce is 25 years old, not 18. That’s about the magnitude of what the scientists found.

Now, along comes UD and reports this as “Origin of e-commerce pushed back to 22,000 BC.” That’s the size of their mistake.
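In numbers (my own restatement of the post’s analogy, so the exact BC figure comes out slightly different from the headline’s loose rounding):

# The analogy in numbers: e-commerce correctly dated to 1987, i.e. 25 years old in 2012.
true_age = 2012 - 1987            # 25 years
# Inflate it by the same factor of 1,000 that turned 300,000 years into "300 million".
inflated_age = true_age * 1_000   # 25,000 "years"
print(2012 - inflated_age)        # about 23,000 BC -- the mock headline's "22,000 BC", give or take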

It’s easy to make fun of primitive people whose counting system goes “one, two, three, many”. But the truth is, we all do this to some extent. Imagine a newspaper headline that says, “Federal budget increases by $600 billion, including $300 million increase in NASA funding.” Did you think, “holy cow! NASA got half of that extra money!”? (In fact, $300 million is one two-thousandth of $600 billion.) If so, I’m talking to you: you’re not counting “one, two, three, many”, but you are counting “ten, hundred, thousand, illion”.

At any rate, I still question the numeracy of whoever wrote that UD headline. If you’re going to spell out “million” in letters, it should trigger a reality-check mechanism in your brain that makes you ask, “Wait a sec. 300 million years ago. That’s the age of dinosaurs or earlier.”

Natural Selection in the Fossil Record


For quite some time now, I’ve had a question:

We can see evolution in the present.
And we can see natural selection in the present.
And we can see lots of evolution in the fossil record.
But can we see natural selection in the fossil record?


How Not to Report Science

One of the stories in the news today is about a study showing that no, US presidents don’t have their lifespans shortened by the rigors of office. The AP writes:

Using life expectancy data for men the same age as presidents on their inauguration days, the study found that 23 of 34 presidents who died of natural causes lived several years longer than expected.

This set off little skeptical alarm bells in my head. And indeed, a few paragraphs later, we find:

Given that most of the 43 men who have served as president have been college-educated, wealthy and had access to the best doctors, their long lives are actually not that surprising, [study author S. Jay Olshansky] said.

I haven’t found the text of the study in question, but LiveScience writes:

“To me, it’s a classic illustration of the benefits of socioeconomic status,” Olshansky told LiveScience. “All but 10 of the presidents were college-educated, they were all wealthy, and they all had access to medical care.”

So yeah, maybe I’m jumping to conclusions, but I suspect that being able to afford living in a neighborhood where you’re not going to get shot by a drug dealer, and getting regular checkups at Walter Reed may have a teensy bit to do with one’s life expectancy.

So really, what this story tells us is that the stress of the presidency, when combined with good lifestyle and health care, is not enough to lower a man’s life expectancy to the national average. What it doesn’t say is what effect the presidential lifestyle has on people’s health. For that, it would be necessary to compare presidents’ life spans to those of people of comparable wealth and access to health care. From the remarks above, I suspect that Olshansky understands this perfectly well, but I don’t know whether that study has been done.

Learning to Learn

The Aug. 29, 2011 episode of 60-Second Science talks about a finding that drawing helps scientists develop their ideas.

I can’t say I’m terribly surprised at this. Drawing seems to me to be more concrete than speech (or raw thought). Just as a simple example, I can say “two circles”, or I can draw two circles. If I draw two circles, rather than just talking about them, I must necessarily place them next to each other, or one above the other; close together or far apart; of equal or different sizes; and so on. Depending what the circles represent, these small choices might matter, and force me to think about some aspect of the problem.

Chabris and Simons’s The Invisible Gorilla describes something similar: pick some object that you know well — the example they use is that of a bicycle — and draw a diagram of it. No need for artistic verisimilitude, just try to get all the important parts and how they relate to each other. Now, compare your drawing to the real thing. Are the pedals attached to the frame? Do the pedals go through the chain? Is the chain attached to both wheels, by any chance? According to the authors, a lot of people make glaring mistakes. I think it’s because, while people know how to use a bicycle (or a stove, or a TV set), we rarely if ever need to think about the way the parts have to fit together to actually work.

Which brings me to my own field:

It has often been said that a person does not really understand something until after teaching it to someone else. Actually a person does not really understand something until after teaching it to a computer, i.e., expressing it as an algorithm.

— Donald E. Knuth, in American Scientist 61(6), 1973, quoted here

What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve learned something about it yourself.

— Douglas Adams, Dirk Gently’s Holistic Detective Agency

Computers have a nasty habit of doing exactly what you tell them, and only what you tell them (or at least they did back when I learned programming; since then, they’ve occasionally attempted to be helpful, which usually means they’re not even doing what you tell them). This means that to write any kind of program, you have to think about absolutely every step, and make decisions about everything. And the machine isn’t at all shy about letting you know that YOU GOT IT WRONG HAHAHAHA LOSER!, although it usually lets you know through a cryptic error message like segmentation fault (core dumped) or dropping your Venus probe into the Atlantic.

But in most disciplines, we aren’t lucky enough to have such stupid students, or to receive the kind of feedback that programmers do, so we need to resort to other methods.

Explaining things to someone else helps, probably because it forces you to explicitly state a lot of the things that you can just gloss over when you’re thinking about it. John Cleese has talked about the importance of test audiences in improving movies: they’ll tell you about all sorts of problems with the film that you never would have noticed otherwise. One of the cornerstones of science is peer review, which basically means that you throw your ideas out there and let your colleagues and rivals take pot-shots at them. And the study I mentioned at the top of this post says that it helps to draw pictures of what you’re thinking about.

It seems to me that the common element is looking at every aspect of a design, the better to try and make its flaws evident. The human brain is a remarkable organ, but it’s also very good at rationalizing, at overlooking details, at making connections that aren’t there, and the like.

But the good news is that we do have techniques like doodling, explaining, soliciting feedback, and so on. And that suggests that we can learn to think better. Genius may not be something innate, something bestowed by whichever Fates decided your genetic makeup, but rather something that you can learn over time and improve through practice, like playing piano or baking a soufflé.

I hope this is the case. It would mean that our children can be better than we are, and there’s something we can do about it. Heck, it would mean that we can improve ourselves.