Windows Azure was officially announced at PDC 2008, but looking back, I had
already had a quick look in the kitchen of Windows Azure in 2007, while visiting
Redmond during the Lead Enterprise Architect Program (LEAP) sessions. Pat Helland,
a senior architect at Microsoft, gave a talk titled The Irresistible Forces Meet the
Movable Objects. Pat described the nature of these forces, pitting big servers with
fast CPUs against commodity hardware (ordinary machines you can buy anywhere).
Moore's Law (the number of transistors on a chip doubles roughly every two years)
applies to many hardware components. Though the law still holds, doubling CPU
speed is getting more and more expensive. Increasing CPU speed is still possible,
but at a price. The costs of scaling up a single server are generally higher than
those of scaling out to multiple commodity processors or servers. If we look solely
at the speed of the CPU, we can conclude that its growth is flattening. Parallel
computing on commodity hardware is cheaper than scaling up single servers.
Looking back at the history of Windows Azure, Pat Helland actually argued that
there should be something like low-cost, highly available, high-bandwidth,
high-storage, high-compute datacenters, all around the world, that can run both
existing and new applications.
Guess what? The concept envisioned was officially announced at PDC 2008!
Windows Azure was born, and this very first release of the platform actually
contained everything that was envisioned during that talk at LEAP in 2007.
Datacenters all around the globe run lots of cheap hardware, offering massive
computing power, storage, and bandwidth. All of these resources are available
like electricity: you pay from the moment you start using them. Operational
expenses (OpEx) instead of capital expenses (CapEx) enable you to experiment
more easily, since you do not need to buy hardware but simply take it from
Windows Azure. When your experiment is successful and you need more computing
power or storage to serve all your customers, you can easily scale out.