Another Problem With Searle’s Chinese Room

(Update, Aug. 20: John Wilkins, an honest to God philosopher, tells me in the comments that I’m wrong. So take this with a grain of salt.)

For those not aware of it, John Searle’s Chinese room is an argument against the possibility of artificial intelligence; more specifically, against the idea that running the right program could, by itself, produce genuine understanding.

As recounted by Roger Penrose in The Emperor’s New Mind, it goes something like this: let’s say someone has written a program that understands natural language. This program reads a short story in a human language (e.g., “A man went to a restaurant. When his meal arrived, it was burned to a crisp. He stormed out without paying the bill or leaving a tip.”), then takes questions (e.g., “Did the man eat his meal?”) and answers them. Now, let’s make two changes: first of all, the program “understands” Chinese, rather than English. And secondly, instead of a computer, it is John Searle (who doesn’t speak a word of Chinese) who will be executing the program. He is sealed in a room, given detailed instructions (in English), and some Chinese text. The instructions explain what to do with the Chinese characters; eventually, the instructions have him draw other Chinese characters on a sheet of paper, and push them out through a slot in the wall. The instructions don’t include a dictionary or anything like that; he is never told what the characters mean. Searle’s argument, then, is that although to the people outside it appears that the room has read a story and answered questions about it, no actual understanding has taken place, since Searle still doesn’t speak Chinese, and has no idea what the story was about, or even that it was a story. It was all just clever symbol manipulation.
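
Just to make “clever symbol manipulation” concrete, here is a deliberately silly sketch of the kind of rulebook involved; the patterns and canned replies are invented for illustration (and in English rather than Chinese), and a real program would need something vastly more elaborate:

```python
# A toy "rulebook" in the spirit of the thought experiment (not Searle's own
# example): it maps patterns in the incoming symbols to canned outgoing
# symbols, without ever assigning a meaning to any of them.

RULES = [
    # (pattern to look for in the question, reply to push back out the slot)
    ("eat his meal", "No, he did not."),
    ("pay the bill", "No, he did not."),
    ("leave a tip", "No, he did not."),
]

def answer(question: str) -> str:
    """Follow the rulebook mechanically: find a matching pattern, emit its reply."""
    for pattern, reply in RULES:
        if pattern in question:
            return reply
    return "I cannot say."  # no rule matched

print(answer("Did the man eat his meal?"))  # -> "No, he did not."
```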

One objection that recently occurred to me is this: what if, instead of a natural-language understanding program, the Chinese researchers had given Searle a program that forecasts the weather, or finds a route from one address to another, or typesets Chinese text, or plays Go, or even one that does simple arithmetic (written out in Chinese, of course)?

I don’t see that this makes a significant difference to the argument, so if the Chinese room argument is sound, then its conclusion should stand. Let’s assume that Searle, a philosopher, knows absolutely nothing about meteorology, and is given a weather-forecasting program. To the people outside, it looks as though the room is predicting the weather, however well or poorly. But Searle, inside, has no understanding of what he’s doing. Therefore, by his earlier argument, there is no true weather forecasting, just clever symbol manipulation. Therefore, computers cannot forecast the weather.

I think we can all agree that this is nonsense: of course computers can forecast the weather; they do it all the time. They also find routes between addresses (surely no one thinks that Mapquest has a bunch of interns behind the scenes, frantically giving directions), and do all of the other things listed above. In short, if the Chinese room argument worked, it would prevent computers from doing a whole lot of things that we know perfectly well they can do. The programs may just be clever symbol manipulation, but if the solution can be implemented using sufficiently clever symbol manipulation, then what’s the problem? (BTW, I don’t imagine that I’m the first person to discover this flaw; I just happened to rediscover it independently.)

The real problem with the Chinese room argument, as I see it, is that in his analogy, Searle takes the place of the CPU (and associated hardware), and the detailed English instructions are analogous to software. While a statement like “my computer can play chess” is quite uncontroversial, if I were to say “my Intel Pentium can play chess”, people would think that I don’t know what I’m talking about (or at best, ask me to explain myself).
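
To put the distinction in toy code: the loop below plays the role of the CPU, and the rule tables play the role of the software. The example is entirely made up, but the point is that the abilities live in the tables, not in the loop that consults them:

```python
# A cartoon of the hardware/software split (everything here is invented for
# illustration). The "processor" is the same dumb loop no matter what it runs;
# whether the system plays chess or does arithmetic is a fact about the table
# of rules it is handed, not about the loop itself.

def processor(rules, symbol):
    """Blindly look up the incoming symbol and emit whatever the rules dictate."""
    return rules.get(symbol, "?")

chess_rules = {"e4": "c5", "d4": "Nf6"}   # a laughably tiny "chess program"
adder_rules = {"2+2": "4", "3+5": "8"}    # a different "program", same processor

print(processor(chess_rules, "e4"))   # the *system* answers 1. e4 with 1... c5
print(processor(adder_rules, "2+2"))  # the same processor now "does arithmetic"
```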

Of course, Searle came up with this argument in 1980, back before everyone had a computer, so perhaps he can be forgiven this misunderstanding. Or perhaps I’m misunderstanding some subtle aspect of his argument, though I don’t think so.

11 thoughts on “Another Problem With Searle’s Chinese Room”

  1. Wouldn’t it be enough to just switch that ‘non-natural-language-speaking computer’ with a person who just speaks Chinese? Give him the instructions to read the papers coming in, and if they are questions, answer them. If I understand the argument, that would give the same answers as the English-speaking person, so you’d conclude that the Chinese person doesn’t understand natural English? That would be… strange.

  2. Yes, you are misunderstanding his argument. I’m teaching this right now. Searle’s argument is that there is something minds do – understanding – that computers, or rather computer-based artificial intelligence based on what Searle calls “classical AI”, can’t do. Computers can forecast weather, because all that is involved is computation based on data. But making the data be about the weather is something the computer can’t do, says Searle.

    The argument is supposed to show that not all aspects of cognition can be run as programs by a computer, whether wet hardware or dry.

  3. I’m not entirely clear on the distinction between clever symbol manipulation and clever electric pulse manipulation by a neural network. Is the data really about the weather simply because neurons do the processing rather than transistors?

    To me, the more interesting question is whether the person in the box has a legitimate understanding of what he’s doing. Certainly, his understanding is not the same as the understanding of a native Chinese speaker, but if he can use it to construct answers to arbitrary questions with the same accuracy as a native speaker, he clearly has an understanding of some sort, even if the mechanisms don’t match. As I see it, it’s similar to mathematics: You may have an interpretation of what an operation does that’s completely correct and very useful for performing and applying that operation. Another person may have a different, but equally correct interpretation. Who understands and who doesn’t? Think of a Fourier transform: there are three different ways of looking at it, all independently correct and useful. Each one is complete on its own and allows the engineer to do useful work. If three different engineers adopt three different ways of looking at it, do two of them not really understand it?
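
    For concreteness, taking one common convention for the transform (sign and normalization conventions vary):

    \[
        \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
    \]

    The same formula can be read as correlating f with a complex sinusoid at each frequency, as projecting f onto a set of orthogonal basis functions, or as decomposing f into a continuous sum of sinusoids; all three readings are correct, and each supports a different engineering intuition.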

  4. Is the data really about the weather simply because neurons do the processing rather than transistors?

    As I understand Searle’s paper (which I probably should have read before posting this), it sounds as if the answer is yes. Or rather, that neurons in concert act as a non-algorithmic machine.

    He considers the idea that someone might simulate a brain using water pipes (to simulate the firing of interconnected neurons), but argues that this simulation doesn’t simulate the important parts of a human brain, “namely its causal properties, its ability to produce intentional states.” I have no idea what that means.

    To me, the more interesting question is whether the person in the box has a legitimate understanding of what he’s doing.

    Yes and no. Or, to steal from the eminently quotable Douglas Adams, he has no more idea of what he’s doing than a tea leaf has of the history of the East India Company.

    It’s possible to describe the operation of a brain in terms of chemistry: peptide bonds forming here, neurotransmitters flowing there, and so forth. This is certainly a valid—albeit horrendously unwieldy—description of brain operations.

    If we step back a bit, we lose sight of individual molecules, but we now see the brain in terms of cells, including neurons firing. This is also a valid description of the brain (and is the one seen by the “simulator” Chinese room man). However, at this level, a human brain is pretty much indistinguishable from a rat brain. It’s probably even hard to tell the difference between the neocortex and the spinal cord.

    If we step back some more, we lose fine details again, but can see neurons firing in patterns, in groups, perhaps reinforcing, triggering, and suppressing each other. Step back some more and we start seeing abstract symbols and associations being triggered by other symbols. (“Abstract” meaning that symbol A298676 is associated with symbol B9836362.) Somewhere in here may be the “default” Chinese room man’s understanding of what he’s doing.

    As we step back further and further, we lose more and more details, but we can see higher and higher levels of abstraction, until we eventually arrive at psychology. And somewhere along the way, we see that symbol A298676 corresponds to “the smell of peach cobbler” and symbol B9836362 corresponds to “Grandma”. (BTW, this theme is explored in Douglas Hofstadter’s I Am A Strange Loop.)

    So yes, the man in the box has a legitimate understanding of what’s going on, but at his level, he can’t tell whether he’s emulating a human brain, a rat brain, or, indeed, a World of Warcraft server farm. And I suspect that Searle’s objection may be that he can’t step back far enough from the details to see the high-level abstractions.

  5. John:

    Searle’s argument is that there is something minds do – understanding – that computers, or rather computer-based artificial intelligence based on what Searle calls “classical AI”, can’t do.

    Could you please elaborate, or perhaps point me at your class notes? Or is this some subtle point not understandable by a layman who hasn’t taken a few semesters of philosophy?

  6. As I understand Searle’s paper (which I probably should have read before posting this), it sounds as if the answer is yes. Or rather, that neurons in concert act as a non-algorithmic machine.

    Hmmmm… I haven’t dug into the paper yet, but this sounds an awful lot like an assertion based on intuition alone (or the assumption that there’s something other than physical mechanisms making our brains work). I’m way closer to being a microprocessor designer than a neurologist, but I’m just not seeing a major difference. Maybe there’s just something important that I don’t understand about the physiology of brains. Alternately, maybe he and I are just working on different assumptions and he’s bringing in some sort of spirit that’s independent of neurons. Either way, I think I’ll have to do a little reading before I’m able to buy into it.

  7. Troublesome Frog:

    this sounds an awful lot like an assertion based on intuition alone

    I agree. In one part, he allows that someone could build a simulation of a brain, and asks “but where’s the understanding?”. I wanted to say, “wherever it is in a real brain. Duh.”

    Maybe there’s just something important that I don’t understand about the physiology of brains. Alternately, maybe he and I are just working on different assumptions and he’s bringing in some sort of spirit that’s independent of neurons.

    I’m pretty sure he doesn’t believe in a supernatural soul or anything like that. There’s a bit where he says, basically, that if you took a pile of individual neurons and whatnot and stitched them together into a brain, the resulting brain would have real understanding and all that good stuff.

    As best I can tell, he thinks that the architecture of a brain is so different from that of a computer (i.e., there’s no direct mapping from neurons to registers, or something like that) that never the twain shall meet. But if that’s the case, then it’s nothing that can’t be fixed with a layer of indirection, e.g., defining a virtual machine (like the Java VM) with a more suitable architecture.
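
    As a very hand-wavy sketch of what I mean by that layer of indirection: imagine a toy “machine” whose only primitive is a neuron-like unit, implemented on top of perfectly ordinary software (everything below is invented purely for illustration):

    ```python
    # A toy "virtual machine" whose architecture has no registers or opcodes:
    # its only primitive is a neuron-like unit.  Invented purely for illustration.

    class Unit:
        def __init__(self, threshold):
            self.threshold = threshold
            self.inputs = []          # list of (source_unit, weight) pairs
            self.firing = False

        def connect(self, source, weight):
            self.inputs.append((source, weight))

    def step(units):
        """One tick of the 'machine': compute every unit's total input, then
        update all units at once.  No program counter anywhere in sight."""
        totals = {u: sum(w for (src, w) in u.inputs if src.firing) for u in units}
        for u in units:
            u.firing = totals[u] >= u.threshold

    # A two-unit toy: a fires unconditionally (threshold 0); b fires iff a did.
    a, b = Unit(threshold=0), Unit(threshold=1)
    b.connect(a, weight=1)

    for tick in range(3):
        step([a, b])
        print(tick, a.firing, b.firing)   # b starts firing one tick after a
    ```

    The point isn’t that this is how you’d build a brain; it’s just that the register-and-opcode architecture underneath is completely invisible one level up.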

  8. As best I can tell, he thinks that the architecture of a brain is so different from that of a computer (i.e., there’s no direct mapping from neurons to registers, or something like that) that never the twain shall meet. But if that’s the case, then it’s nothing that can’t be fixed with a layer of indirection, e.g., defining a virtual machine (like the Java VM) with a more suitable architecture.

    That’s basically what I was thinking. Digital computers do a great job of simulating all sorts of complex analog circuits, so given sufficient computing power, it should be possible to build a simulation of the pile of neurons in question. I doubt we can do it now (think of how long it takes SPICE to crunch through some complicated circuits and multiply that by an ungodly amount), but the net result should still be that you have a “brain” with “understanding” running in the simulation. I guess the question then becomes one of AI: Is it the computer that’s the intelligent entity, or is it the program? Bah.
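
    To make the analogy concrete, the core of that kind of simulation is just stepping the circuit equations forward in time. Here’s a drastically simplified toy version for a single RC stage (real SPICE is far more sophisticated than this, and the values here are made up):

    ```python
    # Toy time-stepping of an RC low-pass filter driven by a 5 V step input.
    # A forward-Euler update of dV/dt = (V_in - V) / (R*C); nothing like the
    # numerical machinery real circuit simulators use, but the same basic idea.

    R, C = 1e3, 1e-6            # 1 kilohm, 1 microfarad -> time constant RC = 1 ms
    V_in, V = 5.0, 0.0          # step input; capacitor starts discharged
    dt = 1e-5                   # 10 microsecond time step

    for _ in range(300):        # simulate 3 ms, i.e. three time constants
        V += dt * (V_in - V) / (R * C)

    print(round(V, 2))          # roughly 5 * (1 - e**-3), i.e. about 4.75 V
    ```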

  9. Troublesome Frog:

    think of how long it takes SPICE to crunch through some complicated circuits and multiply that by an ungodly amount

    You sound like someone who would enjoy Greg Egan’s Permutation City: think of a computer running a cellular automaton that grows an endless cube by adding “bricks” to the outer surfaces. Each “brick” is in turn a Turing machine running an emulator of a more conventional processor. These processors link up together to create an ever-expanding compute cluster, running lots and lots of simulations of people.

    Another objection that Searle brings up is that a simulation is sort of cheating, in that it doesn’t tell us how the brain works. I think that’s true up to a point: if someone gave you the plans for someone else’s chip, you could simulate it and see that it does, indeed, do what it’s supposed to do, but that wouldn’t tell you how it works.

    To this I’d reply that simulating a brain, if it were feasible, could be a useful tool in understanding human intelligence, and intelligence in general, if only because it forces us to decide which details actually matter. We tend to assume that things like blood type don’t affect the mind, so there’s no need to include them in our simulations. In fact, a neural network reduces the vast and complex molecular machinery of neurons to a fairly simple switch.

    It wouldn’t surprise me to learn that while loops and function calls are the wrong tools for writing programs that think. We probably need better tools. But I see no reason to believe that these tools can’t run on a Turing machine. It’s possible that when Searle wrote his article, he was thinking of the fairly straightforward mapping from languages like Fortran to machine code, and things like Prolog or the Java VM, or writing applications in Firefox, would have been quite alien to him.

    Of course, I just got out of a course on virtualization, so I don’t mind thinking about multiple levels of virtual machine.

  10. Andrew, I think Searle is indeed appealing to intuitions (this is the paper that occasioned Dennett’s phrase “intuition pump”). He broadly thinks that if we are thinking machines, we aren’t computable machines. “Understanding Chinese” is an unanalysed term that allows people to say, yes, the Room understands Chinese, or yes, the man in the machine will come to understand Chinese, or no, there is no understanding there at all.

    Connectionist models of competency tend to presume that these systems are subsymbolic, which means that there is no representation of the symbols being processed in the network. Hence the network functionally processes symbols without representing them, and hence it generates its own semantics. The classical AI approach assumes that by processing symbols according to syntactical rules, understanding (i.e., semantics) follows. The connectionist approach has ways in which context and semantics can emerge merely in virtue of processing similar inputs. Searle denies both, because as far as he sees it, connectionist neural nets are simply just busier Chinese Rooms, and the argument carries over.

  11. I suppose it’s relevant to point out here that the Krasnow Institute’s Decade of the Mind videos are available for viewing. Of the ones I managed to catch in person, I can recommend Nancy Kanwisher’s Functional Specificity in the Cortex: Selectivity, Experience and Generality; she’s entertaining and shows how to do good science. I was disappointed with John Holland’s presentation; he lectures for his allotted time and … just stops, as if he couldn’t be bothered to fit his talk within the given period.
