I’d Rather Have a Long List of Scary Warnings than Nothing at All

I recently participated in a conversation—or maybe I’m conflating two or more conversations, but no matter—in which my interlocutor said that she prefers alt-med natural remedies because mainstream drugs all have a long list of scary potential side effects.

But when I asked whether alt-med drugs actually lower cholesterol or help prevent heart attacks or whatever they claim to do, she said that people who sell alternative medicines tend to avoid making medical claims. They’ll say the product “enhances well-being” or some such, but not “this product helps regulate LDL”.

Because what happens is this: if you make a specific claim about physiological effects or the like, that’s a medical claim, and the FDA expects you to back it up. So Pfizer comes along and says, “this new drug, XYZ, improves blood-clotting.” The FDA says, “Oh, yeah? Show me.” And so Pfizer performs studies, or cites independent studies, that show that yes, as a matter of fact, patients who receive XYZ tend to clot better than patients who don’t, even after taking into account other possible explanations, like luck or the placebo effect. And the FDA says “All right, you’ve made your case. You can claim that XYZ improves blood-clotting in your advertisements.” At least, that’s how we want it to go; how we hope it goes.

Unfortunately, the world is complicated, and it’s never as simple as “take this drug and you’ll get better.” Different people have different bodies and react to things differently—for instance, I have a friend who doesn’t drink caffeine because it puts him to sleep. So at best you’ll have “take this drug, and it’ll most likely help, but it might not do anything.” More often, you get a drug that does what it’s intended to do in the majority of cases, but also has a list of possible, hopefully rare, side effects. But the more participants in the study (which is good), the greater the chance that one of them will have a heart attack or something else that can plausibly be attributed to the drug being studied. So the Scary List O’ Adverse Effects grows.
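To see why the list grows with study size, here’s a back-of-the-envelope sketch in Python. The 0.1% background rate and the trial sizes are numbers I made up for illustration; the point is just how fast “at least one participant had a scary event” becomes near-certain.

```python
# If some event (say, a heart attack) strikes 0.1% of people per year
# whether or not they take the drug, how likely is it that at least one
# trial participant has one anyway? (Rate and sizes are invented.)
base_rate = 0.001  # hypothetical yearly background rate

for n in (100, 1_000, 10_000):
    p_at_least_one = 1 - (1 - base_rate) ** n
    print(f"{n:>6} participants: {p_at_least_one:.1%} chance of at least one event")
```

With 10,000 participants, you’re all but guaranteed at least one plausible-sounding adverse event, drug or no drug.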

So yeah, traditional herbal remedies that don’t have words like “vomiting” or “stroke” on the label look appealing by comparison. But that’s only because the people selling the herbs aren’t required to test them, or to publish the negative results. If someone out there did make a specific claim, like “echinacea helps relieve flu symptoms”, and the FDA said “Oh, yeah? Show me”, and they showed ’em, and ran tests and studies and such, there would almost certainly be some adverse side effects to report. If you’re not seeing any, then either someone’s hiding them, or else no one’s looked for them.

In the real world, everything has problems. Saying you prefer alternative remedies to conventional medicine because they don’t have a scary list of adverse effects is like getting your financial advice from a psychic instead of an investment banker because, instead of scary disclaimers about lawsuits and patents and the possibility of losing all your money, she just has the friendly statement “For entertainment purposes only.”

Lens Flare in the Eye of the Beholder

We’re all familiar with lens flare, those circles of light that appear in a photo or video when the camera points too close to the sun. When the scene is too bright, light bounces off of camera parts that it shouldn’t, and reflections of the inner workings of the lens show up in the picture. (Paradoxically, even video games often include lens flare, because we’re so used to seeing the world through a camera that adding a camera defect is seen as making the scene more realistic, even though we’re supposedly seeing it through the protagonist’s usually-organic eyes.)

But still, there are people who get taken in by it. That is, they mistake what appears in the photo because of a camera defect for what’s actually in the scene.

This happens quite often, actually: people looking for evidence of aliens (I mean people who thought the face on Mars was an artificial construct, not the SETI institute people) will blow up or process an image until the JPEG artifacts become obvious, and then claim that these artifacts are alien constructs. Ghost hunters have been known to do the same thing with audio, claiming that MP3 lossy-encoding artifacts are evidence of haunting.

The common thread here is that these people are using their instrument (camera, audio recorder, etc.) in ways that are known to be unreliable. Every instrument has limitations, so the best thing to do is to learn to recognize them so that you can work around them. If you see a bright green star in your photo of the night sky, check other photos taken with the same camera: if the star appears in different places in the sky, but always at the same x,y coordinates on the photo, then it’s likely a dead pixel in the camera, not a star or an alien craft.
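Here’s a minimal sketch of that dead-pixel check in Python, assuming the Pillow imaging library; the filenames, the coordinate, and the brightness threshold are all hypothetical stand-ins.

```python
# Does a suspicious bright spot sit at the same (x, y) in every frame,
# no matter where the camera was pointed? If so, suspect the sensor,
# not the sky. (Filenames, coordinate, and threshold are made up.)
from PIL import Image

files = ["sky1.png", "sky2.png", "sky3.png"]  # shots of different sky regions
x, y = 1024, 768                              # where the "green star" showed up
THRESHOLD = 200                               # brightness we'll call "lit"

def brightness(pixel):
    r, g, b = pixel
    return (r + g + b) / 3

lit_in_every_frame = all(
    brightness(Image.open(f).convert("RGB").getpixel((x, y))) > THRESHOLD
    for f in files
)

print("Probably a stuck pixel" if lit_in_every_frame else "Something in the sky")
```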

But if this applies to instruments like cameras, JPEG and MP3 files, and so on, shouldn’t the same principle apply to our brains, which are after all the instrument we use to figure out what the world is like? What are the limitations of the brain? Under what circumstances does it give us wrong answers? And just as importantly, can we recognize those circumstances and work around them?

Yes, actually: every optical illusion ever exploits some problem in our brains, some set of circumstances in which they give the wrong answer.

The checker shadow illusion is among the most compelling ones I know. No matter how long I look at it, I can’t see squares A and B as being the same color. I accept that they are, because every technique for checking, be it connecting the squares, or examining pixels with an image-viewing tool, says that they’re the same color. Yes, in this situation, I trust Photoshop more than my own eyes.
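That pixel check is easy enough to script yourself. A quick sketch with the Pillow library; the filename and the coordinates inside squares A and B are hypothetical:

```python
# Sample one pixel from inside square A and one from inside square B
# and compare the raw RGB values, instead of trusting our eyes.
# (Filename and coordinates are made up for illustration.)
from PIL import Image

img = Image.open("checker_shadow.png").convert("RGB")
a = img.getpixel((110, 200))  # a point inside square A
b = img.getpixel((180, 310))  # a point inside square B

print(f"A = {a}, B = {b}, same color: {a == b}")
```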

There are also auditory illusions, tactile illusions, and probably others.

So if we can’t always trust our eyes, or our ears, or our fingertips, why should we assume that our brain, the organ we use to process sensory data and make sense of the world around us, is infallible? It seems silly.

In fact, it’s beyond silly: it’s demonstrably wrong. Stage magic is largely based on flaws in the mind. The magician picks up a ball with his left hand, moves his left hand toward his right hand, then away, and shows you a ball in his right hand. You then assume (perhaps incorrectly) that he showed you the same ball twice, and that his left hand is now empty. Gary Marcus talks a lot more about the kludginess of the brain in his book, Kluge: The Haphazard Construction of the Human Mind.

But the bottom line is that if we’re serious about wanting to figure out what the world is like, we need to be aware of the limitations of our equipment. This includes not only cameras and recorders, but also eyes and brain.

How Not to Report Science

One of the stories in the news today is about a study showing that no, US presidents don’t have their lifespans shortened by the rigors of office. The AP writes:

Using life expectancy data for men the same age as presidents on their inauguration days, the study found that 23 of 34 presidents who died of natural causes lived several years longer than expected.

This set off little skeptical alarm bells in my head. And indeed, a few paragraphs later, we find:

Given that most of the 43 men who have served as president have been college-educated, wealthy and had access to the best doctors, their long lives are actually not that surprising, [study author S. Jay Olshansky] said.

I haven’t found the text of the study in question, but LiveScience writes:

“To me, it’s a classic illustration of the benefits of socioeconomic status,” Olshansky told LiveScience. “All but 10 of the presidents were college-educated, they were all wealthy, and they all had access to medical care.”

So yeah, maybe I’m jumping to conclusions, but I suspect that being able to afford living in a neighborhood where you’re not going to get shot by a drug dealer, and getting regular checkups at Walter Reed may have a teensy bit to do with one’s life expectancy.

So really, what this story tells us is that the stress of the presidency, when combined with good lifestyle and health care, is not enough to lower a man’s life expectancy to the national average. What it doesn’t say is what effect the presidential lifestyle has on people’s health. For that, it would be necessary to compare presidents’ life spans to those of people of comparable wealth and access to health care. From the remarks above, I suspect that Olshansky understands this perfectly well, but I don’t know whether that study has been done.
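To make the methodological point concrete, here’s a toy sketch of the two comparisons; every number in it is invented, since I haven’t seen the study’s data. The observed lifespans stay the same; only the baseline changes, and so does the conclusion you’d draw.

```python
# Each president's lifespan vs. (a) general male life expectancy at his
# inauguration age, and (b) a baseline for wealthy, well-doctored men of
# the same age. All values are placeholders, not the study's data.
presidents = [
    # (age at inauguration, age at death) -- invented
    (57, 90),
    (51, 78),
    (54, 88),
]

def general_population(age):
    return age + 22  # invented stand-in for a life table

def wealthy_cohort(age):
    return age + 30  # invented baseline: rich, educated, good doctors

def mean_excess(baseline):
    """Average years lived beyond what the baseline predicts."""
    diffs = [died - baseline(inaug) for inaug, died in presidents]
    return sum(diffs) / len(diffs)

print(f"vs. general population: {mean_excess(general_population):+.1f} years")
print(f"vs. wealthy cohort:     {mean_excess(wealthy_cohort):+.1f} years")
```

Against the general population, the (made-up) presidents look impressively long-lived; against the richer baseline, the apparent bonus mostly evaporates.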

Craziness Loves Company

Recently Kent Hovind’s International House of Lunacy offered to send out free DVDs to anyone who asked. So naturally, I had to take them up on it. Yesterday, it was delivered to my… let’s say “imaginary roommate”, with the oh-so-subtle name “Sevil Natas” (thanks to Fez for suggesting that).

I haven’t watched the DVD yet. But it came with a bunch of ads for God and related products, including a CSE Ministries catalog. And that’s what I want to talk about. But I need to preface that with a bit of non-snark:

The insidious thing about HIV is that it doesn’t kill you. At least, not directly, by dissolving your cell membranes or anything like that. Rather, it weakens your immune system. This makes your body less able to fight off HIV itself, and also leaves you vulnerable to other diseases. So what kills you is not AIDS per se, but something unrelated that you normally would have been able to fight off easily.

I suspect that something similar goes on with woo: if you’re prone to hold one kind of irrational belief, then you’re probably prone to believing other kinds of irrational beliefs. If you don’t have the mental toolkit to recognize why astrology is bogus, then you might not recognize that dowsing or feng shui are also bogus.

But the thing about religion — certainly Christianity as it is widely practiced in the US and Europe — is that, like HIV, it actively attacks people’s mental defenses against bullshit, by teaching people that believing things without evidence is a virtue, or that religious ideas should be immune from criticism.

And now, on to the woo!

Three Different Things that Look Similar

Here are three statements:

  • St. Anselm says that no one really disbelieves in God.
  • Stephen Hawking says that spacetime is smooth at the Big Bang.
  • PZ Myers says that “The only appropriate responses should involve some form of righteous fury, much butt-kicking, and the public firing and humiliation of some teachers”

All three are of the form “person X says Y”, but they’re really three different types of statement. See if you can figure out the difference before meeting me after the jump.


Cheap Signaling

I talked about appropriating the biological concept of costly signaling for general skepticism, and it occurred to me to wonder whether there’s such a thing as cheap signaling.

Costly signaling is when the investment required to transmit a message, like “trust me” or “have sex with me”, is so high that only the worthy applicant (a trustworthy source, or a good mate) can send it.

[Image: Indiana driver’s license, 1940]
Cheap signaling, in contrast, would then be when the cost of transmitting a message is low enough that unworthy senders can afford it. So for instance, if your state’s driver’s licenses have a simple design, then anyone with a printer and a laminator can fake one, which allows sixteen-year-olds to get into bars.

Or, more generally, are there any cheap tricks that someone can use to sell you something you don’t want?

Hm. Put that way, I think it’s obvious that yes, there are. Even aside from outright lying, there are subtler tricks like acting friendly, offering you free stuff to instill a sense of obligation, and the like. Basically, just look up “sales tricks” (which is all I did).

(And just in passing, I notice that there’s a bit of an industry in sermon stories. I’m guessing that that’s because a story told in the first person is more convincing than one in the third person.)

Costly Signaling for Lay Skeptics

This was originally posted at Secular Perspectives.

Let’s say you’re an average person, of average intelligence, average education, with an average job, and you’ve run across several news articles.

One says that an asteroid has just been detected that will hit the earth in 2015. Another says that taking vitamin B3 daily can improve your cholesterol levels. A third says that increasing defense spending will help balance the budget. Another says that evidence of extraterrestrial life has been found in an Antarctic meteorite. A fifth one says that the Gospel of Mark has been dated as having been written between 40 and 50 CE. And finally, one says that people who prayed to a statue of Krishna have been cured of cancer and blindness.

How do you, as a lay person with a full-time day job, determine which ones to believe, and which ones to disregard?

I don’t have a good answer, by the way. I’m hoping you can suggest something in the comments.

All such articles are trying to “sell” you an idea, in a broad, general sense. Sometimes the selling is literal, as when a company tries to convince you that you’re a pathetic malodorous loser who’ll never be accepted by the in-crowd or find true love unless you buy their product. Other times, it’s metaphorical: “I want you to know this, because…” well, that’s the question, isn’t it? “Because we’ll all benefit if people who will implement these ideas get elected.” “Because I’ll make a ton of money if you help elect people who’ll implement these ideas.” “Because I care about you and your health.” “Because this will help save your soul from eternal damnation.” “Because this idea, while bland, is true, and I think it’s better if we know the truth.”

It would be great if there were a single source to which one could turn to get the truth, or if news articles came with a little checkmark, the way Twitter shows that “neilhimself” is the famous Neil Gaiman, while “NeilGaiman” is someone else. Unfortunately, that’s not the case. The problem is that true ideas and false ideas can look an awful lot like each other.

But it occurs to me that nature has come up with a solution to this problem. In sexual species, males often try to communicate that “you should mate with me; I’ll provide our offspring with plenty of food, and they’ll be resistant to parasites and predation.” In such cases, it’s often advantageous to lie: a male who convinces a female he’s in it for the long haul can impregnate her, then ditch her to impregnate someone else. Preferably while some other male sucker gets stuck caring for the liar’s offspring.

So what’s a female to do? How does she figure out who’s serious about helping to feed the kids, and who’s just trying to get inside her cloaca? One solution is known as costly signaling. “Signaling” refers to the “I’ve got great genes” message, above. The “costly” part means that the signal should be sent in a way that’s difficult or expensive (in time, effort, ability, etc.) to fake. The usual example is that of the peacock, who demonstrates his worth by the fact that he’s managed to survive despite having a huge, flashy tail that prevents him from flying, and hinders escape from predators. If he’s managed to overcome such a handicap, he must have superior genes indeed.

The idea of costly signaling is more general than that: it basically means that, to be taken seriously, the signaler has to invest enough effort or resources in the communication that cheating isn’t worth it.
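You can boil that condition down to one inequality. A sketch, with invented payoff numbers: the signal separates the honest from the cheaters exactly when its cost sits between what a worthy sender can afford and what a faker can.

```python
# A signal "works" when honest senders profit from sending it and
# cheaters don't. (All payoff numbers are invented for illustration.)
benefit = 10       # value of being believed (a mate, a sale, trust)
cost_honest = 4    # what the signal costs a genuinely worthy sender
cost_cheater = 15  # what faking the same signal costs an unworthy one

honest_signals = benefit - cost_honest > 0    # True: worth it
cheater_signals = benefit - cost_cheater > 0  # False: faking doesn't pay

print(f"Honest sender signals: {honest_signals}")
print(f"Cheater signals:       {cheater_signals}")
# Informative exactly when cost_honest < benefit < cost_cheater.
```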

(As an aside, I can think of a few possible instances in human society: an engagement ring sends the message that “I’m willing to spend a pile of money on a small rock; so I’m in this for the long haul, not just for a quick fling”. Taking a prospective client to dinner or to a ball game says “We don’t do this for just anyone; but we’re willing to do what it takes to get your business.” And an Italian sports car and designer clothes say “I have so much money that I can afford to waste it on an expensive logo. Of course I’ll be able to feed our family and send our kids to college.”)

So getting back to my original point, it might be possible to identify costly signals that distinguish trustworthy news sources from untrustworthy ones.

For instance, was the article published by a major news outlet, or by some local paper you’ve never heard of? In principle, the greater the reputation of the publication, the more editors and fact-checkers it has had to pass through to get published. Unfortunately, given the state of American journalism, this may not be as safe an assumption as one might hope.

A related criterion might be: do they have a fancy web site, or does it look like it was slapped together by someone’s kid in the 1990s? Unfortunately, this doesn’t work at all, since organizations like Americans for Prosperity, BP, and Answers in Genesis can easily afford good web designers.

Do the authors have letters after their name? An article on medicine written by an MD, or an article on science written by a Ph.D. is probably more trustworthy than one written by a beat reporter. The time and effort required to go through grad school or med school to obtain those letters should weed out the fakers.

Of course, the competence has to be in a relevant field: I tend to trust what Paul Krugman writes about the economy, because he has a degree and a Nobel prize in economics, but not if he writes about, say, medicine or geology.

And, of course, it’s very easy to just say that one has a Ph.D., or to buy a degree from a diploma mill, without putting in the effort to learn a subject well enough to speak authoritatively about it. To combat this, there are accreditation institutes that investigate schools and give their stamp of approval to the ones that require students to learn something before graduating. Of course, now that a lot of people have learned to ask “is your degree from an accredited school?”, there are accreditation mills, which will accredit any diploma mill for a fee.

Has the author published any peer-reviewed research? Peer review is intended as a filter to make sure that research journals don’t publish any old garbage. This criterion is probably pretty good, though not flawless. For one thing, it usually requires effort on the reader’s part to seek out the author’s publication record. For another, various creationist organizations publish cargo-cult “peer-reviewed” journals where articles are reviewed by a panel of fellow creationists before publication.

Trusted endorsements: this might be called the poor man’s peer review. When Phil Plait, an astronomer, writes a blog post that links to a post on astronomy, that’s a good sign. It means that the article on the other end of the link hasn’t set off Phil’s baloney-meter. That tends to make me trust the article more, because Phil would notice errors that I wouldn’t.

Does the site link to contrary views? In its heyday in the 1990s, one notable difference between the pro-evolution site talkorigins.org and anti-evolution sites was that talkorigins.org usually linked to the creationist sources they were discussing, and to creationist rebuttals of their articles. To me, this said “we’re going to make it easy for you to read the other side’s rebuttal, because we’re confident that the facts are on our side, and even if you read both sides, you’ll agree with us.”

Any others? Ideally, this sort of costly signal should be something hard for the writer to produce and easy for the reader to verify, without requiring too much effort (because we want to dismiss bogus claims quickly) and without requiring special knowledge. And if the criterion fits on a bumper sticker, so much the better.

Ionized Bracelets

This ad doesn’t actually say that the Q-Ray bracelet does a damn thing, but it sure as ~~hell~~ shit[*] implies it:

Thanks to the Ask an Atheist guys for the pointer.

Hey, the Q-Ray people aren’t saying the bracelet does anything. That would be an invitation to get sued. No, the athlete is saying it. And she’s not saying it, either; she’s just saying it might.

And her testimonial is filmed in what appears to be a doctor’s office (or, more likely, a doctor’s office set. At least, my doctor doesn’t have any anatomical charts on his wall).

So it’s not actionable. But if you should happen to get the impression that this magic bracelet is part of a medical regimen endorsed by the medical profession, well, they won’t try to disillusion you.

The narrator says that magnets have been “used for centuries to promote a healthy lifestyle”. Of course, the same could be said of leeches.

As far as I can tell, the only verifiable claims made by the Q-Ray people are 1) it has magnets, and 2) “beautifully crafted, with an expandable steel band”.

This Hour Has 22 Minutes says pretty much the same thing, but won’t let me embed the video.


[*] Text changed to refer to something for which there’s actual evidence.

Speaker-to-Volcanoes

The AP reports:

For 33 years, Maridjan spoke to Mount Merapi, believing he could appease its unpredictable spirits by throwing offerings of rice, clothes and chickens into the volcano’s gaping crater.

Maridjan was believed by many to have the ability to speak directly to the mountain and led ceremonies every year to hold back its lava flows by throwing rice, clothes and chickens into its dome.

(emphasis added.)

Well, duh. Of course he could speak to the volcano. Anyone can talk to a mountain, or a river, or dead ancestors. To quote Shakespeare’s Henry IV, Part I:

Glendower: I can call spirits from the vasty deep.

Hotspur: Why, so can I, or so can any man;
But will they come when you do call for them?

The real question is, does anything happen as a result of talking to a mountain?

Want to Restore Sanity? Join the Club

Do you want to restore sanity and rationality to political discourse? Sure, we all do!

But do you also want to promote sanity and rationality in general? Then you should join the Washington Coalition of Reason this Saturday on the National Mall as they participate in the Rally to Restore Reason.

Look for the #unitedcor hashtag on Twitter.

Oh, and the guys from the American Freethought podcast will be there as well. They’ll also be announcing their location on Twitter, so find out where they are, then stop by and say hi.