Digital electronic computer systems have gone through several generations,
and many changes, since they were first built just before and during World
War II. Machines that were originally implemented with electromechanical
relays and vacuum tubes gave way to those constructed with solid-state
devices and, eventually, integrated circuits containing thousands or millions
of transistors. Systems that cost millions of dollars and took up large rooms
(or even whole floors of buildings) decreased in price by orders of magnitude
and shrank, in some cases, to single chips smaller than a postage
stamp. CPU clock speeds increased from kilohertz to megahertz to gigahertz,
and computer storage capacity grew from kilobytes to megabytes to
gigabytes and beyond.
While most people have noticed the obvious changes in modern computer
system implementation, not everyone realizes how much has remained
the same, architecturally speaking. Many of the basic design concepts and
even the advanced techniques used to enhance performance have not
changed appreciably in 30, 40, or even 50 years or longer. Most modern
computers still use the sequential, von Neumann programming paradigm
that dates to the 1940s; they accept hardware interrupts that have been a
standard system design feature since the 1950s; and they store programs and
data in hierarchical memories that are, at least conceptually, very similar to
storage systems built in the 1960s. While computing professionals obviously
need to stay abreast of today’s cutting-edge architectural breakthroughs and
the latest technical wizardry, it is just as important that they study historical
computer architectures, not only because doing so gives a valuable appreciation
for how things were done in the past, but also because, in many cases,
the same or similar techniques are still being used in the present and may
persist into the future.