Bill Dembski Gets A Paper Published

Huh. Looks like Bill Dembski got a paper published in IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Volume 39, Issue 5, Sept. 2009.

I haven’t had a chance to read this yet, but here’s the abstract:

Abstract—Conservation of information theorems indicate that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Computers, despite their speed in performing queries, are completely inadequate for resolving even moderately sized search problems without accurate information to guide them. We propose three measures to characterize the information required for successful search: 1) endogenous information, which measures the difficulty of finding a target using random search; 2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and 3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.
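The three measures are just log-probabilities, so they’re easy to play with. Here’s a minimal sketch in Python (my own toy construction, not code from the paper): endogenous information is −log₂ of the probability that blind search hits the target within a query budget, exogenous information is −log₂ of the probability that an assisted search does, and active information is the difference. The bitstring target, the Hamming-distance hill-climber, and every parameter below are hypothetical choices of mine:

```python
import math
import random

N = 20            # bitstring length (arbitrary choice)
TARGET = tuple(random.randint(0, 1) for _ in range(N))
BUDGET = 100      # fitness queries allowed per run
TRIALS = 10_000   # Monte Carlo runs to estimate q

def hamming_to_target(x):
    return sum(a != b for a, b in zip(x, TARGET))

def assisted_success_probability():
    # q: estimated chance that a simple Hamming-distance
    # hill-climber finds TARGET within BUDGET queries.
    hits = 0
    for _ in range(TRIALS):
        x = [random.randint(0, 1) for _ in range(N)]
        d = hamming_to_target(x)
        queries = 1
        while d > 0 and queries < BUDGET:
            i = random.randrange(N)
            x[i] ^= 1                 # propose a one-bit flip
            d_new = hamming_to_target(x)
            queries += 1
            if d_new < d:
                d = d_new             # keep improvements
            else:
                x[i] ^= 1             # revert the flip
        if d == 0:
            hits += 1
    return hits / TRIALS

# p: chance that blind search (uniform sampling without
# replacement, single target) succeeds within the same budget.
p = min(1.0, BUDGET / 2 ** N)
q = assisted_success_probability()

endogenous = -math.log2(p)       # difficulty of blind search
exogenous = -math.log2(q)        # difficulty left for the assisted search
active = endogenous - exogenous  # what the fitness function contributed

print(f"endogenous: {endogenous:5.2f} bits")
print(f"exogenous:  {exogenous:5.2f} bits")
print(f"active:     {active:5.2f} bits")
```

With these numbers the hill-climber nearly always succeeds, so almost all of the roughly 13 bits of endogenous difficulty show up as active information, i.e., information supplied by the Hamming-distance fitness function rather than by the search itself.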

From a quick glance, it looks like he’s still on his No Free Lunch Theorem kick (a toy demonstration of that theorem appears below). In the conclusion, the authors write:

To have integrity, search algorithms, particularly computer simulations of evolutionary search, should explicitly state as follows: 1) a numerical measure of the difficulty of the problem to be solved, i.e., the endogenous information, and 2) a numerical measure of the amount of problem-specific information resident in the search algorithm, i.e., the active information.

which sounds to me like an accusation that people who use evolutionary algorithms are cheating by picking a search method that performs well on the problem at hand.
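Since the whole framework leans on No Free Lunch, here’s what that theorem says at toy scale (again a minimal sketch of my own, nothing from the paper): averaged over all 16 functions from a four-point domain to {0, 1}, any fixed query order needs exactly the same expected number of queries to find a point where the function equals 1.

```python
from itertools import product

def queries_to_hit(order, f):
    # number of queries until f(x) == 1 is observed,
    # or the whole domain if no such point exists
    for i, x in enumerate(order, start=1):
        if f[x] == 1:
            return i
    return len(order)

def average_cost(order):
    # average over all 2**4 = 16 possible objective
    # functions f: {0, 1, 2, 3} -> {0, 1}
    functions = list(product([0, 1], repeat=4))
    return sum(queries_to_hit(order, f) for f in functions) / len(functions)

print(average_cost([0, 1, 2, 3]))  # 1.875
print(average_cost([3, 1, 0, 2]))  # 1.875: identical, as NFL predicts
```

Both orders average 1.875 queries, and the same holds for any permutation of the domain: with no assumptions about the function’s structure, no search outperforms any other on average.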

In any case, I’m sure this paper will be bandied about as a sterling example of the research cdesign proponentsists are doing.

Update, Aug. 21: Mark Chu-Carroll has weighed in on this paper, and pretty much confirms my suspicion: at the core of the paper is a moderately interesting idea (that it’s possible to quantify the amount of information in a search algorithm, i.e., how much it knows about the search space in order to produce quick results), along with some fluff that allows Dembski to brag that he got a “peer-reviewed pro-ID article in mainstream […] literature”.