From its inception in the 1930s, the rich and vigorous field of computer science
has been concerned with the resources, both in time and in memory, needed to
carry out a computation. A number of fundamental theorems were discovered
that resorted to a worst-case analysis. The central question was whether a given
algorithm could be guaranteed to terminate a computation in finite time whatever
the inputs, and, if so, in which class of complexity it lay, given the control
parameters: polynomial, exponential, and so on. Therefore, in 1991, a paper by
Cheeseman, Kaneefsky, and Taylor came as a bolt from the blue. Indeed, while
its title, “Where the really hard problems are”, was not altogether disturbing, its
content was. Broadly speaking, the authors argued that, even though it was
important to analyze worst cases, it was just as essential to look for the typical
complexity of computations, the complexity encountered when solving typical problems.
And therein lies a gem: the transition from the region of problems that are hard,
in terms of algorithmic complexity, to the region of problems that are easy can be
quite sharp. Moreover, these regions and transitions bear no relation to the worst
cases.
We remember that this 1991 paper, presented at the International Joint Conference
on Artificial Intelligence (IJCAI), started a commotion, though how profound
this would be was not at first apparent. We were among those who felt that
this paper and others that promptly followed, from physicists in particular, were
significant beyond the obvious. However, this event did not alter the course of
machine learning, our field, for many years. In machine learning too, the theoretical
analysis then taking shape rested on a type of worst-case
study; this new statistical theory of learning was sweeping the field and gaining
momentum as new learning algorithms, inspired in part by its lessons, were
devised.