Randomization has become a standard approach in algorithm design. Efficiency
and simplicity are the main features of randomized algorithms that
often make randomization a miraculous springboard for solving complex problems
in various applications. Especially in the areas of communication, cryptography,
data management, and discrete optimization, randomization tends
to be an indispensable part of the development of software systems. We know
several situations and algorithmic tasks for which any deterministic approach
leads to so much computer work that it is impossible to perform in practice,
but for which randomized methods can be successfully applied. This huge
gain of going from an intractable amount of computer work (computational
complexity) to short computations of randomized algorithms is paid for by
the risk of computing a wrong output. But the probability that a randomized
algorithm executes an erroneous computation can often be reduced below the
probability of winning the first prize in a lottery on the first attempt.
This means that one pays for saving a huge amount of computing time
with a very small loss in the degree of reliability. How to orchestrate such
huge quantitative jumps in computational complexity through small relaxations
of the reliability (correctness) constraints, and why such big gains are
possible at all for such small “prices”, are the topics of this book.
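
To get a feeling for how cheap this reliability really is, consider the
standard amplification argument: if a randomized algorithm with one-sided
error answers incorrectly with probability at most $1/2$, then running it
$k$ times on independent random bits yields a wrong answer only if all $k$
runs err, so
\[
\Pr[\text{error after } k \text{ independent runs}]
\;\le\; \left(\frac{1}{2}\right)^{k} \;=\; 2^{-k}.
\]
Already for $k = 30$ this error probability drops below $10^{-9}$, far
smaller than typical lottery odds, while the running time has grown only
by the modest factor $k$.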