Cloud computing refers to the use of Internet ("cloud") based computer technology for a variety of services. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. (Wikipedia)
Utility computing is the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility (such as electricity, water, natural gas, or telephone network).
“Utility computing” usually envisions some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the “back end” to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing.
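The distributed-computing idea mentioned above can be sketched in a few lines: a single calculation (here, summing a large range) is split into chunks, the chunks are handed to several workers, and the partial results are combined. In this sketch threads on one machine stand in for separate back-end servers; the chunking scheme and worker count are illustrative choices, not part of any particular utility-computing product.

```python
# Sketch: split one calculation across several workers, then combine results.
# Threads stand in for the separate back-end machines described in the text.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def distributed_sum(n, workers=4):
    # Split [0, n) into roughly equal chunks, one per worker.
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(distributed_sum(1000))  # same result as sum(range(1000)), i.e. 499500
```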
Autonomic Computing is an initiative started by IBM in 2001. Its ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.
An autonomic system makes decisions on its own, using high-level policies; it will constantly check and optimize its status and automatically adapt itself to changing conditions.
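The self-management loop described above can be illustrated with a minimal sketch: the system repeatedly checks a metric against a high-level policy and adapts itself (here, a worker count) without human intervention. The metric and the policy thresholds are invented for illustration; they are not taken from any IBM specification.

```python
# Hedged sketch of an autonomic control loop: check status, compare against a
# high-level policy, adapt automatically. Thresholds are illustrative only.

class AutonomicScaler:
    def __init__(self, min_workers=1, max_workers=10):
        self.workers = min_workers
        self.min_workers = min_workers
        self.max_workers = max_workers

    def adapt(self, load_per_worker):
        # High-level policy: keep per-worker load between 0.3 and 0.7.
        if load_per_worker > 0.7 and self.workers < self.max_workers:
            self.workers += 1          # scale up under heavy load
        elif load_per_worker < 0.3 and self.workers > self.min_workers:
            self.workers -= 1          # scale down when under-utilized
        return self.workers

scaler = AutonomicScaler()
for load in [0.9, 0.8, 0.75, 0.2, 0.5]:  # constantly check and adapt
    scaler.adapt(load)
print(scaler.workers)  # settled at 3 workers for this load trace
```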
The majority of cloud computing infrastructure consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible anywhere in the world, with The Cloud appearing as a single point of access for all the computing needs of consumers.
Many cloud computing deployments depend on grids, have autonomic characteristics, and bill like utilities, but cloud computing can be seen as a natural next step from the grid-utility model.
Amazon Elastic Compute Cloud (Amazon EC2) – A web service that provides resizable compute capacity in the cloud. One can configure an Amazon Machine Image (AMI) and load it into the Amazon EC2 service. It allows you to scale capacity quickly, both up and down, as your computing requirements change.
Amazon SimpleDB – A web service for running queries on structured data in real time. This service works in close conjunction with Amazon S3 and Amazon EC2, collectively providing the ability to store, process and query data sets in the cloud
Amazon Simple Storage Service (Amazon S3) – A simple web services interface that can be used to store and retrieve large amounts of data, at any time, from anywhere on the web.
Amazon CloudFront – A web service for content delivery. It integrates with other Amazon Web Services to provide an easy way to distribute content to end users with low latency and high data transfer speeds.
Amazon Simple Queue Service (Amazon SQS) – A reliable, highly scalable, hosted queue for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed components of their applications that perform different tasks, without losing messages or requiring each component to be always available.
Amazon EC2 is a web service that enables you to launch and manage server instances in Amazon's data centers using APIs or available tools and utilities
Instances are available in different sizes and configurations
For example, one can use an m1.small instance (one Amazon EC2 Compute Unit) as a web server, an m1.xlarge instance (eight Amazon EC2 Compute Units) as a database server, or an extra large High-CPU instance (twenty Amazon EC2 Compute Units) for processor intensive applications
All the instances can be managed using the Web Service APIs
This is akin to the virtualization concept (albeit exposed as a web service)
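The instance sizes quoted above can be captured as a simple lookup table. The compute-unit counts are the ones given in the text; `c1.xlarge` is assumed as the name of the High-CPU extra large type, and `choose_instance` is a hypothetical helper that picks the smallest type meeting a requirement, not part of the EC2 API.

```python
# EC2 Compute Unit counts from the examples above; c1.xlarge is an assumed
# name for the High-CPU extra large instance type.
EC2_COMPUTE_UNITS = {
    "m1.small": 1,    # e.g. a web server
    "m1.xlarge": 8,   # e.g. a database server
    "c1.xlarge": 20,  # for processor-intensive applications
}

def choose_instance(required_units):
    # Hypothetical helper: smallest instance type with enough compute units.
    for name, units in sorted(EC2_COMPUTE_UNITS.items(), key=lambda kv: kv[1]):
        if units >= required_units:
            return name
    raise ValueError("no instance type large enough")

print(choose_instance(5))  # m1.xlarge
```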
Amazon SimpleDB is a web service for running queries on structured data in real time
Provides the core functionality of a database - real-time lookup and simple querying of structured data - without the operational complexity
Amazon SimpleDB requires no schema, automatically indexes your data and provides a simple API for storage and access. This eliminates the administrative burden of data modeling, index maintenance, and performance tuning
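The SimpleDB model described above can be mimicked in memory to make the ideas concrete: items are bags of attributes with no fixed schema, every attribute is indexed automatically on write, and queries are simple attribute comparisons. This sketch imitates the interface only; it is not the Amazon SimpleDB API.

```python
# In-memory sketch of a schemaless, auto-indexed store with simple querying.
from collections import defaultdict

class SimpleDomain:
    def __init__(self):
        self.items = {}                    # item name -> attribute dict
        self.index = defaultdict(set)      # (attribute, value) -> item names

    def put(self, name, **attributes):
        self.items[name] = attributes
        for attr, value in attributes.items():
            self.index[(attr, value)].add(name)   # automatic indexing

    def query(self, **conditions):
        # Intersect the index entries for each attribute=value condition.
        matches = None
        for attr, value in conditions.items():
            found = self.index[(attr, value)]
            matches = found if matches is None else matches & found
        return sorted(matches or [])

books = SimpleDomain()
books.put("item1", author="Smith", year="2008")
books.put("item2", author="Smith", year="2009", format="ebook")  # no schema
print(books.query(author="Smith", year="2009"))  # ['item2']
```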
Objects - Objects are the files you want CloudFront to deliver. This typically includes web pages, images, and digital media files
Origin Server - An origin server is the location where you store the original, definitive version of your objects. Any objects that are to be delivered through CloudFront are placed in an Amazon S3 bucket, which acts as the origin server.
Distributions - After the objects are stored in the origin server, a distribution creates a link between the Amazon S3 bucket (the origin server) and a domain name (which CloudFront automatically assigns)
Edge Locations - An edge location is a geographical site where CloudFront caches copies of your objects.
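The four terms above fit together as follows: an origin server holds the definitive objects, and each edge location serves cached copies, going back to the origin only on a cache miss. This is a toy model of that relationship, not CloudFront's actual behavior or API; the class and object names are invented.

```python
# Toy model: edge locations cache copies of objects held by an origin server.

class OriginServer:                    # stands in for the S3 bucket
    def __init__(self, objects):
        self.objects = objects
        self.fetches = 0               # count trips back to the origin

    def get(self, key):
        self.fetches += 1
        return self.objects[key]

class EdgeLocation:
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def get(self, key):
        if key not in self.cache:      # cache miss: fetch from the origin
            self.cache[key] = self.origin.get(key)
        return self.cache[key]         # cache hit: serve the local copy

origin = OriginServer({"/logo.png": b"...image bytes..."})
edge = EdgeLocation(origin)
edge.get("/logo.png")                  # miss: fetched from origin
edge.get("/logo.png")                  # hit: served from the edge cache
print(origin.fetches)                  # origin was contacted only once
```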
A queue is a temporary repository for messages that are awaiting processing
Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component
The queue acts as a buffer between the component producing and saving data, and the component receiving the data for processing. This means the queue resolves issues that arise if the producer is producing work faster than the consumer can process it, or if the producer or consumer are only intermittently connected to the network
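The buffering role just described can be shown with Python's thread-safe standard-library queue standing in for SQS: a fast producer enqueues messages, a slower or independent consumer drains them, and nothing is lost even though the two run at different rates. This is a single-process sketch of the idea, not the SQS service itself.

```python
# Sketch: a queue decouples a producer from a consumer, as described above.
import queue
import threading

buffer = queue.Queue()           # stands in for the SQS queue
received = []

def producer():
    for i in range(10):          # may run faster than the consumer
        buffer.put(f"message-{i}")
    buffer.put(None)             # sentinel: nothing more to send

def consumer():
    while True:
        msg = buffer.get()
        if msg is None:
            break
        received.append(msg)     # "process" the message

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(received))  # all 10 messages arrived despite the rate mismatch
```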
SQS ensures delivery of each message at least once, and supports multiple readers and writers interacting with the same queue.
A single queue can be used simultaneously by many distributed application components, with no need for those components to coordinate with each other to share the queue
Redundant infrastructure— Guarantees delivery of the messages at least once, highly concurrent access to messages, and high availability for sending and retrieving messages
Multiple writers and readers— Multiple parts of the system can send or receive messages at the same time. SQS locks the message during processing, keeping other parts of the system from processing the message simultaneously.
Configurable settings per queue— Queues don't all have to be exactly alike
For example, one queue can be optimized for messages that require a longer processing time than others.
Variable message size— Messages can be up to 8 KB in size
Unlimited queues and messages— One can have as many queues and messages in the Amazon SQS system as needed
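The locking behavior described under "Multiple writers and readers" can be simulated: receiving a message hides (locks) it for a visibility period so no other reader processes it concurrently, and if the reader never deletes it, the message becomes visible again, which is exactly how at-least-once delivery arises. This is a toy simulation with a manual clock, not the Amazon SQS API; the class and method names are invented.

```python
# Toy simulation of per-message locking and at-least-once redelivery.
import itertools

class ToyQueue:
    def __init__(self):
        self.messages = {}                 # id -> [body, visible_at]
        self.clock = 0                     # simulated time, in seconds
        self.ids = itertools.count()

    def send(self, body):
        self.messages[next(self.ids)] = [body, 0]

    def receive(self, visibility_timeout=30):
        for mid, entry in self.messages.items():
            if entry[1] <= self.clock:     # message is visible to readers
                entry[1] = self.clock + visibility_timeout  # lock it
                return mid, entry[0]
        return None                        # everything is currently locked

    def delete(self, mid):
        self.messages.pop(mid, None)       # processing finished successfully

q = ToyQueue()
q.send("job-1")
mid, body = q.receive()
assert q.receive() is None        # locked: a second reader sees nothing
q.clock += 31                     # the first reader crashed; lock expires
mid2, body2 = q.receive()         # redelivered: at-least-once, not exactly-once
q.delete(mid2)
print(body2)
```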
There are three main actors in the overall system:
The components of your distributed system
The queues
Messages in the queues
In the following diagram, your system has several components that send messages to the queue and receive messages from the queue. The diagram shows that a single queue, which has its messages (labeled A-E), is redundantly saved across multiple SQS servers.