The Thing, and the Name of the Thing
Yesterday, during a routine medical examination, I found out that I have a dermatofibroma.
Don’t worry about me. My prognosis is very good. I should still have a few decades left. It means that at some point I was stung by an insect, a piece of the stinger was probably left behind, and scar tissue formed around it.
But if you thought, if only for a moment, that something with a big scary name like “dermatofibroma” must be a big scary thing, well, that’s what I want to talk about.
I’ve mentioned elsewhere that as far as I can tell, the human mind uses the same machinery to deal with abstract notions and patterns as it does with tangible objects like coins and bricks. That’s why we speak of taking responsibility, of giving life, of sharing our troubles, and so forth. (And I bet there’s research to back me up on this.)
A word is the handle we use to grab hold of an idea (see what I did there?), and sometimes we’re not very good at distinguishing between the word and the idea. I know that it’s a relief to go to the doctor with some collection of symptoms and find out that my condition has a name. Even if I don’t know anything about it, at least it’s a name. It’s something to hold on to. Likewise, I remember that back in the 80s, simply coming up with the name “AIDS” seemed to make the phenomenon more tractable than some unnamed disease.
I think a lot of deepities and other facile slogans work because people tend not to distinguish between a thing, and the word for that thing. Philosophers call this a use-mention error. C programmers know that it’s important to distinguish a variable, a pointer to that variable, a pointer to a pointer to the variable, and so forth1.
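To make those levels concrete, here’s a minimal C sketch of my own (the variable names are purely illustrative): the thing, a name for the thing, and a name for the name.

#include <stdio.h>

int main(void)
{
    int thing = 42;              /* the thing itself */
    int *name = &thing;          /* a "name": it refers to the thing */
    int **name_of_name = &name;  /* a name for the name */

    *name = 17;                  /* changing the thing through its name works */
    printf("thing is now %d\n", thing);

    /* But the name is not the thing: it holds an address, not 17. */
    printf("name holds address %p\n", (void *)name);
    printf("name_of_name holds address %p\n", (void *)name_of_name);

    return 0;
}

Mix up the levels (treat name as though it were thing, say) and the compiler objects, which is more policing than natural language ever gives us.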
The solution, I’ve found, is to keep a mental model of whatever the discussion is about, kind of like drawing a picture to help you think about a math problem. For instance, if a news report says that “seasonally-adjusted unemployment claims were up 1% in December” and I wonder why the qualifier “seasonally-adjusted” was thrown in there, I can think of department stores hiring lots of people for a few months to handle the Christmas rush.
Richard Feynman describes this process in Surely You’re Joking, Mr. Feynman! In the chapter “Would You Solve the Dirac Equation?”, he writes:
I can’t understand anything in general unless I’m carrying along in my mind a specific example and watching it go. Some people think in the beginning that I’m kind of slow and I don’t understand the problem, because I ask a lot of these “dumb” questions: “Is a cathode plus or minus? Is an an-ion this way, or that way?”
But later, when the guy’s in the middle of a bunch of equations, he’ll say something and I’ll say, “Wait a minute! There’s an error! That can’t be right!”
The guy looks at his equations, and sure enough, after a while, he finds the mistake and wonders, “How the hell did this guy, who hardly understood at the beginning, find that mistake in the mess of all these equations?”
He thinks I’m following the steps mathematically, but that’s not what I’m doing. I have the specific, physical example of what he’s trying to analyze, and I know from instinct and experience the properties of the thing. So when the equation says it should behave so-and-so, and I know that’s the wrong way around, I jump up and say, “Wait! There’s a mistake!”
This sort of thinking is a way to have the analytical and intuitive parts of your mind working in tandem. If you have an intuitive understanding of the system in question — be it computer code or preparing a Thanksgiving meal for twelve — you can apply that intuition toward understanding how everything is supposed to work. At the same time, your analytical mind can work out the numerical and logical parts. Normally, they should give the same result; if they don’t, then there’s probably an error either in your analysis or in your intuition.
The downside of this approach is that I tend to get very frustrated when I read theologians and philosophers — or at least the sorts of philosophers who give philosophy a bad reputation — because they tend to say things like “a lesser entity can never create something greater than itself” without saying how one can tell whether X is greater or lesser than Y, and without giving me anything to hang my intuition on. And if a discussion goes on for too long without some sort of anchor to reality, it becomes hard to get a reality check to correct any mistakes that may have crept in.
Since I started with jargon, I want to close with it as well. Every profession and field has its jargon, because it allows practitioners to refer precisely to specific concepts in that field. For instance, as a system administrator, I care whether an unresponsive machine is hung, wedged, angry, confused, or dead (or, in extreme cases, simply fucked). These all convey shades of meaning that the user who can’t log in and do her work doesn’t see or care about.
But there’s another, less noble purpose to jargon: showing off one’s erudition. This usage seems to be more prevalent in fields with more, let’s say, bullshit. If you don’t have anything to say, or if what you’re saying is trivial, you can paper over that inconvenient fact with five-dollar words.
In particular, I remember an urban geography text I was assigned in college that had a paragraph that went on about “pendular motion” and “central business district”s and so on. I had to read it four or five times before it finally dawned on me that what it was saying was “people commute between suburbs and downtown”.
If you’re trying to, you know, communicate with your audience, then it behooves you to speak or write in such a way that they’ll understand. That is, you have a mental model of whatever it is you’re talking about; and at the end of your explanation, your audience should have the same model in their minds. Effective communication is a process of copying data structures from one mind to another in the least amount of time.
That geography text seemed like a textbook example (if you’ll pardon the expression) of an author who knew that what he was saying was trivial, and wanted to disguise this fact. I imagined at the time that he wanted geography to be scientific, and was jealous of people in hard sciences, like physicists and astronomers, who can set up experiments and get clear results. A more honest approach, it seems to me, would have been to acknowledge from the start that while making geography scientific is a laudable goal, it is inherently a messy field; there are often many variables involved, and it is difficult to tease out each one’s contribution to the final result. Add to this the fact that it’s difficult or impossible to conduct rigorously controlled experiments (you can’t just build a second Tulsa, but without the oil industry, to see how it differs from the original), and each bit of solid data becomes a hard-won nugget of knowledge.
So yes, say that people commute. Acknowledge that it may seem trivial, but that in a field full of uncertainty, it’s a well-established fact because of X and Y and Z. That’s the more honest approach.
1: One of my favorite error messages was in a C compiler that used 16 bits for both integers and pointers. Whenever my code tried to dereference an int or do suspicious arithmetic with a pointer, the compiler would complain of “integer-pointer pun”.
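For the curious, here’s a sketch of my own (not the actual code from back then) of the sort of thing that would draw that complaint; a modern compiler flags the same lines, just with a less colorful message.

/* pun_example is never called, so nothing bad happens at run time;
 * the interesting part is the compile-time complaint. */
void pun_example(int n)
{
    char *p = n;          /* an int silently becoming a pointer: the pun */
    int   m = *(int *)n;  /* an int treated as an address and dereferenced */

    (void)p;              /* hush "unused variable" warnings */
    (void)m;
}

int main(void)
{
    return 0;
}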
(Update, 11:43: Typo in the Big Scary Word.)
Daniel Dennett refers to generating those kinds of concrete examples as priming an “intuition pump.”
RBH:
I’m not sure of that. It seems to me that when Dennett talks about an intuition pump, it’s a layer in front of the system he’s talking about.
For instance, in Elbow Room, he uses the example of a wind-up robot as a model for humans who behave in a deterministic manner (yes, he then tears that model to pieces, but it’s the only example I can come up with off the top of my head). The way this works is: we have a hard time thinking about deterministic humans, because humans are complicated and subtle. So let’s use the model of a wind-up robot, because that’s simple and we can easily imagine how it behaves. We can then intuitively see that if the initial state of the robot is known, then all of its future behavior can be predicted. We can then apply this model to humans, and say that if the initial state (a vast sea of data) of a deterministic human were known, then it would be in principle (given infinitely-fast computers and such) possible to predict their future behavior.
So given a system, he proposes a model that helps us build a separate model of the system. Hopefully that made sense.
What I’m talking about is imagining the system itself. If I’m planning Thanksgiving dinner or how to redecorate my bathroom, I don’t need to come up with a clever analogy: I can just think of my dinner table with twelve people around it and whether there are enough chairs for them and what a 10lb turkey would look like. There doesn’t need to be an intermediate layer to help me think about the system, because I can already intuit how its component parts work.
Granted, the distinction I’m trying to make is a small one. In cases where the system I’m thinking about doesn’t have a good representation, I’m still forced to resort to models. Heck, when I’m thinking of computer code, I tend to think in terms of boxes and arrows.
Maybe I’m just arguing for the sake of being contrarian. Make of it what you will.
arensb said: “…what a 10lb turkey would look like.”
I think it would look like a number of hungry people. Once you subtract the weight of the bones and the liquid that will be lost during roasting, that isn’t a whole lot of turkey for 12 people.
Equal parts breast and thigh meat for me, thanks.
Fez:
My point exactly.