This document discusses cloud computing and grid computing. It begins with an introduction to cloud computing technologies and their history. Grid computing is then introduced as a type of distributed computing in which resources from multiple computers are shared and presented as a single system. Key requirements for grid computing include administrative software to manage the system and middleware to enable resource sharing. Example applications include projects that search for extraterrestrial intelligence and analyze protein structures.
3. Outline
• How cloud computing works & the technology behind it
• History of cloud computing development
• Grid Computing Background
• Grid Computing Introduction
• Grid Computing Requirements
• Grid Computing Applications
5. History of cloud computing development
• Distributed systems
• Virtualization
• Web 2.0
• Service-oriented computing
• Utility-oriented computing
6. History of cloud computing development (cont.)
• Three major milestones in distributed systems have led to cloud computing:
• Mainframe computing
• Cluster computing
• Grid computing
7. Grid Computing Background
• "The Grid" takes its name from an analogy with the
electrical "power grid": the idea was that accessing
computer power from a computer grid would be as
simple as accessing electrical power from an electrical
grid.
8. Grid Computing introduction
• Grid Computing is a computer network in which each
computer's resources are shared with every other
computer in the system. Processing power, memory and
data storage are all community resources that
authorized users can tap into and leverage for specific
tasks. A grid computing system can be as simple as a
collection of similar computers running on the same
operating system or as complex as inter-networked
systems comprising every computer platform you can
think of.
9. Grid Computing introduction cont.
• Grid computing is a special kind of distributed
computing. In distributed computing, different
computers within the same network share one or more
resources. In the ideal grid computing system, every
resource is shared, turning a computer network into a
powerful supercomputer.
• Computer resources:
• Central processing unit (CPU)
• Memory
• Storage
10. Grid Computing introduction cont.
• Some key terms in Grid Computing
• Cluster
• Extensible Markup Language (XML)
• Hubs
• Integrated Development Environment (IDE)
• Interoperability
• Open standards
• Parallel processing
• Platform
• Server farm
• Server virtualization
• Service
• Simple Object Access Protocol (SOAP)
• State
• Transience
11. Grid Computing Requirements
• At least one computer, usually a server, which handles
all the administrative duties for the system
• A network of computers running special grid computing
network software
• A collection of computer software called middleware
13. Assignment
• Join the Facebook group
(Konsep Cloud Computing - S.2.2)
14. Referensi
• Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, Mastering Cloud
Computing: Foundations and Applications Programming, Morgan Kaufmann,
2013
• Lee Newcombe, Securing Cloud Services, Capgemini, 2012
• S. Srinivasan, Cloud Computing Basics, Springer, 2014
• http://www.youtube.com/watch?v=UorIwPZU_eg
• http://www.youtube.com/watch?v=ae_DKNwK_ms
• http://www.youtube.com/watch?v=uYGQcmZUTaw
• http://www.youtube.com/watch?v=SgujaIzkwrE
• http://www.youtube.com/watch?v=F7i7K_ecayg
Editor's Notes
Class leader: Rara Mutiara (Nanda Elya 135410053)
Vice: Revan Agung Saputra (Mochamad Agung Saputra 135410039)
http://computer.howstuffworks.com/cloud-computing/cloud-computing.htm
Rajkumar, Christian, S. Thamarai, Mastering Cloud Computing, page 11-12
http://www.diceitwise.com/how-does-cloud-computing-work-and-technology-behind-it/
======================
First image:
cloud computing services offerings into three major categories: Infrastructure-as-a-Service(IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
Infrastructure-as-a-Service solutions deliver infrastructure on demand in the form of virtual hardware, storage, and networking. Virtual hardware is utilized to provide compute on demand in the form of virtual machine instances. These are created at users’ request on the provider’s infrastructure, and users are given tools and interfaces to configure the software stack installed in the virtual machine.
Platform-as-a-Service solutions deliver scalable and elastic runtime environments on demand and host the execution of applications. These services are backed by a core middleware platform that is responsible for creating the abstract environment where applications are deployed and executed.
Software-as-a-Service solutions provide applications and services on demand. Most of the common functionalities of desktop applications, such as office automation and document management, are replicated on the provider's side and accessed through a Web browser.
Second image:
Cloud computing reference model
Third image:
On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.
The front end includes the client's computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.
Rajkumar, Christian, S. Thamarai, Mastering Cloud Computing, page 15-22
======================
A distributed system is a collection of independent computers that appears to its users as a single coherent system.
Virtualization encompasses a collection of solutions allowing the abstraction of some of the fundamental elements of computing, such as hardware, runtime environments, storage, and networking.
Web 2.0: the Web encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition. This evolution has transformed the Web into a rich platform for application development. Web 2.0 brings interactivity and flexibility into Web pages, providing an enhanced user experience by giving Web-based access to all the functions that are normally found in desktop applications. These capabilities are obtained by integrating a collection of standards and technologies such as XML, Asynchronous JavaScript and XML (AJAX), Web Services, and others. These technologies allow us to build applications leveraging the contribution of users, who now become providers of content.
Service-oriented computing: a service is an abstraction representing a self-describing and platform-agnostic component that can perform any function, anything from a simple task to a complex business process.
Utility-oriented computing, Utility computing is a vision of computing that defines a service-provisioning model for compute services in which resources such as storage, compute power, applications, and infrastructure are packaged and offered on a pay-per-use basis.
Rajkumar, Christian, S. Thamarai, Mastering Cloud Computing, page 15-17
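The pay-per-use model behind utility-oriented computing can be sketched as a simple metering calculation. The resource names and rates below are made-up illustration values, not any provider's actual pricing; a real billing system would also meter usage continuously rather than take a one-shot report.

```python
# Hypothetical pay-per-use meter illustrating utility-oriented computing:
# resources are billed by consumption, like electricity, gas, or water.
# Resource names and rates are illustration values only.
RATES = {
    "cpu_hours": 0.05,         # price per CPU-hour consumed
    "gb_storage_month": 0.02,  # price per GB stored per month
    "gb_transfer": 0.09,       # price per GB transferred over the network
}

def bill(usage):
    """Compute the charge for a usage report mapping resource -> amount."""
    return round(sum(RATES[res] * amount for res, amount in usage.items()), 2)
```

For example, a month of 100 CPU-hours plus 50 GB of storage, `bill({"cpu_hours": 100, "gb_storage_month": 50})`, comes to 6.00 under these rates: the user pays only for what was consumed.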
======================
Mainframes. Even though mainframes cannot be considered distributed systems, they offered large computational power by using multiple processors, which were presented as a single entity to users. These were the first examples of large computational facilities leveraging multiple processing units. Mainframes were powerful, highly reliable computers specialized for large data movement and massive input/output (I/O) operations. They were mostly used by large organizations for bulk data processing tasks such as online transactions, enterprise resource planning, and other operations involving the processing of significant amounts of data.
Cluster computing [3][4] started as a low-cost alternative to the use of mainframes and supercomputers. The technology advancement that created faster and more powerful mainframes and supercomputers eventually generated an increased availability of cheap commodity machines as a side effect. These machines could then be connected by a high-bandwidth network and controlled by specific software tools that manage them as a single system. Starting in the 1980s, clusters became the standard technology for parallel and high-performance computing. Built from commodity machines, they were cheaper than mainframes and made high-performance computing available to a large number of groups, including universities and small research labs.
Grid computing [8] appeared in the early 1990s as an evolution of cluster computing. In an analogy to the power grid, grid computing proposed a new approach to accessing large computational power, huge storage facilities, and a variety of services. Users can "consume" resources in the same way as they use other utilities such as power, gas, and water. Grids initially developed as aggregations of geographically dispersed clusters by means of Internet connections. A grid was a dynamic aggregation of heterogeneous computing nodes, and its scale was nationwide or even worldwide.
http://www.gridcafe.org/what-is-the-grid.html
======================
Electrical power grid
Whether the power comes from a coal or nuclear plant, you simply know that when you plug your toaster into the wall socket, it will get the electricity it needs to do the job.
The power grid links together power plants of many different kinds with your home, through transmission stations, power stations, transformers, power lines and so forth.
You ask for electricity, and you get it. You also pay for what you get.
The Grid
Wherever the supercomputing resources come from, you simply know that when you plug your computer into the Internet, it will get the computing power it needs to do the job.
The Grid links together computing resources such as PCs, workstations, servers and storage elements, and provides the mechanisms needed to access them.
You ask for computing power or storage capacity, and you get it. You also pay for what you get.
http://computer.howstuffworks.com/grid-computing.htm
======================
With the right user interface, accessing a grid computing system would look no different than accessing a local machine's resources. Every authorized computer would have access to enormous processing power and storage capacity.
Grid computing systems link computer resources together in a way that lets someone use one computer to access and leverage the collected power of all the computers in the system. To the individual user, it's as if the user's computer has transformed into a supercomputer.
http://computer.howstuffworks.com/grid-computing2.htm
======================
Cluster: A group of networked computers sharing the same set of resources.
Extensible Markup Language (XML): A computer language that describes other data and is readable by computers. Control nodes (a node is any device connected to a network that can transmit, receive and reroute data) rely on XML languages like the Web Services Description Language (WSDL). The information in these languages tells the control node how to handle data and applications.
Hubs: A point within a network where various devices connect with one another.
Integrated Development Environment (IDE): The tools and facilities computer programmers need to create applications for a platform. The term for an application testing ground is sandbox.
Interoperability: The ability for software to operate within completely different environments. For example, a computer network might include both PCs and Macintosh computers. Without interoperable software, these computers wouldn't be able to work together because of their different operating systems and architecture.
Open standards: A technique of creating publicly available standards. Unlike proprietary standards, which can belong exclusively to a single entity, anyone can adopt and use an open standard. Applications based on the same open standards are easier to integrate than ones built on different proprietary standards.
Parallel processing: Using multiple CPUs to solve a single computational problem. This is closely related to shared computing, which leverages untapped resources on a network to achieve a task.
Platform: The foundation upon which developers can create applications. A platform can be an operating system, a computer's architecture, a computer language or even a Web site.
Server farm: A cluster of servers used to perform tasks too complex for a single server.
Server virtualization: A technique in which a software application divides a single physical server into multiple exclusive server platforms (the virtual servers). Each virtual server can run its own operating system independently of the other virtual servers. The operating systems don't have to be the same system -- in other words, a single machine could have a virtual server acting as a Linux server and another one running a Windows platform. It works because most of the time, servers aren't running anywhere close to full capacity. Grid computing systems need lots of servers to handle various tasks and virtual servers help cut down on hardware costs.
Service: In grid computing, a service is any software system that allows computers to interact with one another over a network.
Simple Object Access Protocol (SOAP): A set of rules for exchanging messages written in XML across a network. Microsoft is responsible for developing the protocol.
State: In the IT world, a state is any kind of persistent data. It's information that continues to exist in some form even after being used in an application. For example, when you select books to go into an Amazon.com shopping cart, the information is stateful -- Amazon keeps track of your selection as you browse other areas of the Web site. Stateful services make it possible to create applications that have multiple steps but rely on the same core data.
Transience: The ability to activate or deactivate a service across a network without affecting other operations.
http://computer.howstuffworks.com/grid-computing3.htm
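The "parallel processing" entry above, using multiple CPUs to solve a single computational problem, can be sketched in a few lines of Python. The sum-of-squares task and the chunking scheme below are illustrative choices standing in for real grid work units, not part of any grid standard.

```python
from multiprocessing import Pool

def analyze(chunk):
    # Stand-in "work unit": compute the sum of squares of one slice of the data.
    return sum(x * x for x in chunk)

def split(data, n):
    # Divide the data into roughly n equal work units for the workers.
    k = max(1, len(data) // n)
    return [data[i:i + k] for i in range(0, len(data), k)]

if __name__ == "__main__":
    data = list(range(100))
    chunks = split(data, 4)
    with Pool(4) as pool:
        # Each worker process handles one chunk; results come back in order.
        partials = pool.map(analyze, chunks)
    print(sum(partials))  # same total as a serial sum of squares
```

The key property, shared with grid systems, is that the partial results combine into the same answer a single machine would produce, so the work can be spread over as many processors as are available.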
======================
Sharing Resource: Several companies and organizations are working together to create a standardized set of rules called protocols to make it easier to set up grid computing environments. It's possible to create a grid computing system right now and several already exist. But what's missing is an agreed-upon approach. That means that two different grid computing systems may not be compatible with one another, because each is working with a unique set of protocols and tools.
At least one computer, usually a server, which handles all the administrative duties for the system. Many people refer to this kind of computer as a control node. Other application and Web servers (both physical and virtual) provide specific services to the system.
A network of computers running special grid computing network software. These computers act both as a point of interface for the user and as the resources the system will tap into for different applications. Grid computing systems can either include several computers of the same make running on the same operating system (called a homogeneous system) or a hodgepodge of different computers running on every operating system imaginable (a heterogeneous system). The network can be anything from a hardwired system where every computer connects to the system with physical wires to an open system where computers connect with each other over the Internet.
A collection of computer software called middleware. The purpose of middleware is to allow different computers to run a process or application across the entire network of machines. Middleware is the workhorse of the grid computing system; without it, communication across the system would be impossible. As with software in general, there's no single format for middleware. Middleware can keep personal information private by protecting data with encryption, and it can control who can access the system and use its resources. The middleware and control node of a grid computing system are responsible for keeping the system running smoothly. Together, they control how much access each computer has to the network's resources and vice versa.
http://computer.howstuffworks.com/grid-computing5.htm
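The control-node/middleware division of labor described above can be sketched as a toy scheduler: a control node keeps a queue of submitted tasks and hands each one to whichever worker asks next. Worker threads here stand in for networked machines; real grid middleware would add networking, authentication, and encryption on top of this pattern.

```python
import queue
import threading

class ControlNode:
    """Toy control node: queues tasks and collects results from workers."""

    def __init__(self):
        self.tasks = queue.Queue()
        self.results = []
        self.lock = threading.Lock()

    def submit(self, func, arg):
        # A "task" is just a function plus its input data.
        self.tasks.put((func, arg))

    def worker(self):
        # Each worker thread stands in for one machine on the grid:
        # it pulls tasks until the queue is empty, then exits.
        while True:
            try:
                func, arg = self.tasks.get_nowait()
            except queue.Empty:
                return
            out = func(arg)
            with self.lock:
                self.results.append(out)

    def run(self, n_workers=3):
        threads = [threading.Thread(target=self.worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return self.results

def double(x):
    return 2 * x

if __name__ == "__main__":
    node = ControlNode()
    for i in range(10):
        node.submit(double, i)
    print(sorted(node.run()))
```

Note that results arrive in whatever order the workers finish, which is why grid applications must be decomposable into independent work units.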
======================
Search for Extraterrestrial Intelligence (SETI). The mission of the SETI project is to analyze data gathered by radio telescopes in search of evidence for intelligent alien communications. There's far too much information for a single computer to analyze effectively. The SETI project created a program called SETI@home, which networks computers together to form a virtual supercomputer instead.
Pande Group. The Folding@home project is administered by the Pande Group, a nonprofit institution in Stanford University's chemistry department. The research includes the way proteins take certain shapes, called folds, and how that relates to what proteins do. Scientists believe that protein "misfolding" could be the cause of diseases like Parkinson's or Alzheimer's. It's possible that by studying proteins, the Pande Group may discover new ways to treat or even cure these diseases.