Wanted: Calendar Feature

PDAs have solved or simplified a lot of the problems I used to have
before I started carrying around a backup brain. But there’s one type
of reminder that they still can’t deal with: “do X when Y
happens”. E.g., “Return Paul’s book next time I see him” or “Look up
Janice if I’m ever in London.”
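
For concreteness, here’s a minimal sketch of what such a trigger-based reminder might look like in software. Everything in it is hypothetical: the hard part in practice is that a real PDA would need some actual way of detecting conditions like “I’m in London” or “Paul is nearby” (location data, contact proximity, or just a manual flag).

```python
# Hypothetical sketch: reminders keyed to a condition rather than a date.
# The context dict and the predicates are invented for illustration.

class Reminder:
    def __init__(self, condition, message):
        self.condition = condition  # predicate over the current context
        self.message = message

def due_reminders(reminders, context):
    """Return the messages whose trigger condition currently holds."""
    return [r.message for r in reminders if r.condition(context)]

reminders = [
    Reminder(lambda ctx: "Paul" in ctx["nearby_people"], "Return Paul's book"),
    Reminder(lambda ctx: ctx["city"] == "London", "Look up Janice"),
]

context = {"city": "London", "nearby_people": set()}
print(due_reminders(reminders, context))  # ['Look up Janice']
```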


Another Problem With Searle’s Chinese Room

(Update, Aug. 20: John Wilkins, an honest-to-God philosopher, tells me in the comments that I’m wrong. So take this with a grain of salt.)

For those not aware of it, John Searle’s Chinese room is an argument against the possibility of artificial intelligence (more precisely, against the claim that a program could ever genuinely understand anything).

As recounted by Roger Penrose in The Emperor’s New Mind, it goes something like this: let’s say someone has written a program that understands natural language. This program reads a short story in a human language (e.g., “a man went to a restaurant. When his meal arrived, it was burned to a crisp. He stormed out without paying the bill or leaving a tip.”), then takes questions (e.g., “did the man eat his meal?”) and answers them.

Now, let’s make two changes: first, the program “understands” Chinese rather than English. Second, instead of a computer, it is John Searle (who doesn’t speak a word of Chinese) who executes the program. He is sealed in a room and given detailed instructions (in English), along with some Chinese text. The instructions explain what to do with the Chinese characters; eventually, they have him draw other Chinese characters on a sheet of paper and push them out through a slot in the wall. The instructions don’t include a dictionary or anything like that; he is never told what the characters mean.

Searle’s argument, then, is that although to the people outside it appears that the room has read a story and answered questions about it, no actual understanding has taken place: Searle still doesn’t speak Chinese, and has no idea what the story was about, or even that it was a story. It was all just clever symbol manipulation.
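
To make “just clever symbol manipulation” concrete, here is a toy sketch of the room’s procedure, vastly simpler than the program Searle imagines. The symbols and the rule table are made up for this sketch; the point is only that the code produces an answer by matching opaque tokens, without anything in it knowing what any token means.

```python
# Toy "room": purely formal rules mapping a question to an answer, based
# only on which opaque symbols appear in it. Nothing here knows (or needs
# to know) what the symbols mean; the rule table is invented for this sketch.

RULES = [
    ("吃", "没有"),   # if the question contains this symbol, emit that one
    ("付钱", "没有"),
]

def room(question):
    for symbol, answer in RULES:
        if symbol in question:
            return answer
    return "不知道"  # fallback symbol, equally opaque to the room

# Outside observers see a sensible answer; inside, only rule-following.
print(room("他吃饭了吗"))  # -> 没有
```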

One objection that recently occurred to me is this: what if, instead of a natural-language understanding program, the Chinese researchers had given Searle a program that forecasts the weather, or finds a route from one address to another, or typesets Chinese text, or plays Go, or even one that does simple arithmetic (written out in Chinese, of course)?

I don’t see that this makes a significant difference to the argument, so if the Chinese room argument is sound, then its conclusion should stand. Let’s assume that Searle, a philosopher, knows absolutely nothing about meteorology, and is given a weather-forecasting program. To the people outside, it looks as though the room is predicting the weather, however well or poorly. But Searle, inside, has no understanding of what he’s doing. Therefore, by his earlier argument, there is no true weather forecasting, just clever symbol manipulation. Therefore, computers cannot forecast the weather.

I think we can all agree that this is nonsense: of course computers can forecast the weather; they do it all the time. They also find routes between addresses (surely no one thinks that Mapquest has a bunch of interns behind the scenes, frantically giving directions), and do all of the other things listed above. In short, if the Chinese room argument worked, it would prove that computers can’t do a whole lot of things that we know perfectly well they can do. The programs may be just clever symbol manipulation, but if the solution can be implemented using sufficiently clever symbol manipulation, then what’s the problem? (BTW, I don’t imagine that I’m the first person to discover this flaw; I just happened to rediscover it independently.)

The real problem with the Chinese room argument, as I see it, is that in the analogy, Searle takes the place of the CPU (and associated hardware), while the detailed English instructions are analogous to software. A statement like “my computer can play chess” is quite uncontroversial, but if I were to say “my Intel Pentium can play chess”, people would think that I don’t know what I’m talking about (or at best, ask me to explain myself). By the same token, the fact that Searle-as-CPU doesn’t understand Chinese tells us nothing about whether the room as a whole, executing the program, does.
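
The distinction is easy to see in code. In the sketch below (the opcodes are invented for illustration), the interpreter loop plays Searle’s role: it only shuffles values according to fixed rules, and any “ability”, here doing arithmetic, belongs to the program it executes.

```python
# A minimal stack machine. The run() loop (the "CPU") understands nothing
# about what the program computes; it just follows the rule for each opcode.

def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# It's this *program* that computes 3 * (4 + 5); the interpreter is oblivious.
prog = [("PUSH", 3), ("PUSH", 4), ("PUSH", 5), ("ADD", None), ("MUL", None)]
print(run(prog))  # 27
```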

Of course, Searle came up with this argument in 1980, back before everyone had a computer, so perhaps he can be forgiven this misunderstanding. Or perhaps I’m misunderstanding some subtle aspect of his argument, though I don’t think so.