Cloud computing has recently emerged as one of the buzzwords in the ICT
industry. Numerous IT vendors promise to offer computation, storage,
and application hosting services across several continents,
with service-level agreement (SLA)-backed guarantees of performance and uptime
for their services. While these “clouds” are the natural evolution of
traditional data centers, they are distinguished by exposing resources (computation,
data/storage, and applications) as standards-based Web services and
following a “utility” pricing model where customers are charged based on their
utilization of computational resources, storage, and transfer of data. They offer
subscription-based access to infrastructure, platforms, and applications that
are popularly referred to as IaaS (Infrastructure as a Service), PaaS (Platform
as a Service), and SaaS (Software as a Service). While these emerging services
have increased interoperability and usability and reduced the cost of computation,
application hosting, and content storage and delivery by several orders of
magnitude, there is significant complexity involved in ensuring that applications
and services can scale as needed to achieve consistent and reliable
operation under peak loads.
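The "utility" pricing model mentioned above can be illustrated with a minimal sketch: the customer is metered per unit of compute time, storage, and data transfer, and the bill is simply the sum of rate times usage for each resource. All rates and resource names below are hypothetical placeholders, not any vendor's actual prices.

```python
# Sketch of utility-style pricing: charge = sum of (rate x metered usage).
# All rates here are hypothetical, chosen only for illustration.
HYPOTHETICAL_RATES = {
    "compute_hours": 0.10,     # $ per instance-hour of computation
    "storage_gb_month": 0.02,  # $ per GB-month of storage
    "transfer_gb": 0.09,       # $ per GB of outbound data transfer
}

def monthly_bill(usage: dict) -> float:
    """Return the total charge for one month of metered resource usage."""
    return sum(HYPOTHETICAL_RATES[item] * amount for item, amount in usage.items())

# Example: one instance running the whole month (720 hours),
# 500 GB stored, and 100 GB transferred out.
bill = monthly_bill({"compute_hours": 720,
                     "storage_gb_month": 500,
                     "transfer_gb": 100})
print(f"${bill:.2f}")  # -> $91.00
```

The key property of this model, as the text notes, is that cost tracks utilization: an idle customer pays little, while a customer under peak load pays proportionally more, which is what makes subscription-based IaaS, PaaS, and SaaS offerings economically attractive.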
Currently, expert developers are required to implement cloud services. Cloud
vendors, researchers, and practitioners alike are working to ensure that potential
users are educated about the benefits of cloud computing and the best way to
harness the full potential of the cloud. However, because cloud computing is a
new and popular paradigm, its very definition depends on which computing
expert is asked. So, while the realization of true utility computing appears closer
than ever, its acceptance is currently restricted to cloud experts due to the
perceived complexities of interacting with cloud computing providers.