Lens Flare in the Eye of the Beholder
We’re all familiar with lens flare, those circles of light that appear in a photo or video when the camera points too close to the sun. When the scene is too bright, light bounces off of camera parts that it shouldn’t, and reflections of the inner workings of the lens show up in the picture. (Paradoxically, even video games often include lens flare, because we’re so used to seeing the world through a camera that adding a camera defect is seen as making the scene more realistic, even though we’re supposedly seeing it through the protagonist’s usually-organic eyes.)
But still, there are people who get taken in by it. That is, they mistake an artifact of the camera for something that's actually in the scene.
This happens quite often, actually: people looking for evidence of aliens (I mean the people who thought the face on Mars was an artificial construct, not the SETI Institute people) will blow up or process an image until the JPEG artifacts become obvious, and then claim that those artifacts are alien constructs. Ghost hunters have been known to do the same thing with audio, claiming that MP3 lossy-encoding artifacts are evidence of a haunting.
The common thread here is that these people are using their instrument (camera, audio recorder, etc.) in ways it's known to be unreliable. Every instrument has limitations, and the best thing to do is learn to recognize them so that you can work around them. If you see a bright green star in your photo of the night sky, check other photos taken with the same camera; if the star appears in different places in the sky but always at the same x,y coordinates on the photo, then it's likely a dead pixel in the camera, not a star or an alien craft.
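To make that check concrete, here's a minimal sketch of the idea, assuming Python with Pillow and NumPy and some hypothetical filenames; it just asks whether the brightest spot sits at the same sensor coordinates in every frame, which is the signature of a hot or stuck pixel rather than something in the sky.

```python
# Sketch: does the suspicious bright spot stay at the same pixel
# coordinates across frames? Filenames below are hypothetical.
import numpy as np
from PIL import Image

def brightest_pixel(path):
    """Return the (x, y) coordinates of the brightest pixel in an image."""
    gray = np.asarray(Image.open(path).convert("L"))      # luminance only
    y, x = np.unravel_index(np.argmax(gray), gray.shape)  # (row, col) -> (y, x)
    return int(x), int(y)

frames = ["sky_001.jpg", "sky_002.jpg", "sky_003.jpg"]  # hypothetical files
spots = [brightest_pixel(f) for f in frames]
print(spots)

if len(set(spots)) == 1:
    print("Same sensor coordinates in every frame: suspect a hot or stuck pixel.")
else:
    print("The spot moves with the sky: it may be a real object.")
```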
But if this applies to instruments like cameras, JPEG and MP3 files, and so on, shouldn’t the same principle apply to our brains, which are after all the instrument we use to figure out what the world is like? What are the limitations of the brain? Under what circumstances does it give us wrong answers? And just as importantly, can we recognize those circumstances and work around them?
Yes, actually: every optical illusion ever exploits some problem in our brains, some set of circumstances in which they give the wrong answer.
The checker shadow illusion is among the most compelling ones I know. No matter how long I look at it, I can't see squares A and B as being the same color. I accept that they are, because every technique for checking, be it connecting the squares or examining pixels with an image-viewing tool, says that they're the same color. Yes, in this situation, I trust Photoshop more than my own eyes.
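If you'd rather not take Photoshop's word for it either, here's a tiny sketch of the pixel-sampling check, assuming a local copy of the illusion image and hypothetical sample coordinates that you'd pick by eye inside squares A and B.

```python
# Sketch: compare the actual pixel values of squares A and B.
# The image path and the (row, column) sample points are hypothetical.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("checker_shadow.png").convert("RGB"))

a = img[110, 130]   # a pixel inside square A
b = img[210, 190]   # a pixel inside square B

print("A:", a, "B:", b)
print("identical:", bool(np.array_equal(a, b)))
```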
There are also auditory illusions, tactile illusions, and probably others.
So if we can’t always trust our eyes, or our ears, or our fingertips, why should we assume that our brain, the organ we use to process sensory data and make sense of the world around us, is infallible? It seems silly.
In fact, it's beyond silly; it's demonstrably wrong. Stage magic is largely built on flaws in the mind: the magician picks up a ball with his left hand, moves his left hand toward his right hand, then away, and shows you a ball in his right hand. You then assume (perhaps incorrectly) that he showed you the same ball twice, and that his left hand is now empty. Gary Marcus has a lot more to say about the kludginess of the brain in his book Kluge: The Haphazard Construction of the Human Mind.
But the bottom line is that if we’re serious about wanting to figure out what the world is like, we need to be aware of the limitations of our equipment. This includes not only cameras and recorders, but also eyes and brain.
[spoiler alert] To convince yourself that A and B are the same value, squint.
Good grief — the guy “adds color” back to a B&W photo? With what — a “sepia” tool? Then blows it up until you can see individual pixels? And thinks that means anything?
These people have watched too many episodes of CSI — you know, where they’re always running some magical image enhancement algorithm that can reconstruct facial details from, like, six blurry pixels.
Well, we know from Blade Runner that 21st-century cameras take photos with infinite resolution that you can magnify forever (or enhance) to get more and more detail, so it’s all good, right?