1. Presentation Topic:
Top Cloud Computing Technologies
Course Code: CSE 6145
Course Title: Cloud Computing
Course Teacher:
Dr. A.K.M. Muzahidul Islam,
Professor,
United International University.
Department: Computer Science & Engineering
Program: MSCSE
Section: M
3. Types of Cloud Computing Technologies:
These technologies are different innovations within cloud computing; they work behind the cloud platform to make it flexible, reliable, and usable.
Cloud Computing Technology (CCT):
Cloud computing is a next-generation technology, based on the internet and networks, that provides services to users in multiple ways. It is a simple data-outsourcing resource that can also be used temporarily, and it is cost-effective because clients pay only for what they use. It offers clients scalable, on-demand access instantly by sharing its pooled resources with client web pages or IPs.
Cloud computing technology is expanding very quickly and is a purposeful concept. It can be used for private cloud implementation, either on-premises or in a data centre of the client's choice.
Reference: (i) https://data-flair.training/blogs/cloud-computing-technology/ (ii) https://www.tutorialride.com/cloud-computing/cloud-computing-technologies.htm (iii) https://www.educba.com/cloud-computing-technologies/
4. A. Virtualization:
Virtualization in cloud computing is the creation of virtual resources, such as turning a desktop operating system or physical storage into virtual form. It is the ability to share a single physical instance of an application or resource among multiple organizations or users. This is done by logically naming the physical resources and providing pointers to them on demand.
Virtualization also manages workloads by transforming traditional computing to make it more scalable, economical, and efficient. With the help of virtualization, customers can maximize their resources and reduce the number of physical systems they need.
Working of Virtualization:
Following are a couple of ways to enable virtualization in the cloud:
OS-Level Virtualization: Multiple instances of an application run in a single OS.
Hypervisor-Based Virtualization: The OS shares the hardware of the host computer, allowing multiple operating systems to run on a single host.
Grid Approach: Processing workloads are distributed among different physical servers, and their results are then collected as one.
Reference: (i) https://data-flair.training/blogs/virtualization-in-cloud-computing/ (ii) https://www.educba.com/virtualization-in-cloud-computing/ (iii) https://software.intel.com/content/www/us/en/develop/articles/the-advantages-of-using-virtualization-technology-in-the-enterprise.html
5. Types of Virtualizations:
Virtualization in the cloud can be categorized into four types based on their characteristics:
Hardware Virtualization
Operating System Virtualization
Server Virtualization
Storage Virtualization
Benefits of Virtualization:
Firewalls and encryption ensure that everything inside the virtualization cloud is kept protected and that unauthorized access is prevented.
It saves the cost of physical machines, such as servers and other hardware.
Users are not required to locate hard drives or storage devices for data transfer or retrieval.
It enables far more flexible, efficient, and agile operation.
Data stored in the cloud can be retrieved or transferred at any time from any device.
Reference: (i) https://www.educba.com/virtualization-in-cloud-computing/ (ii) https://data-flair.training/blogs/virtualization-in-cloud-computing/ (iii) https://www.w3schools.in/cloud-computing/cloud-virtualization/
6. Hardware Virtualization:
In hardware virtualization, the virtual machine manager (VMM) is installed directly on the hardware system; installing the VMM as software on the hardware enables hardware virtualization. The main role of the hypervisor here is to monitor and control the memory, processor, and other hardware resources. Once hardware virtualization is enabled, one can install different operating systems on it and run many applications on those installed operating systems.
Types of Hardware Virtualization:
o Full Virtualization: The hardware architecture is completely simulated. Guest software doesn't need any modification to run applications.
o Emulation Virtualization: The virtual machine simulates the hardware and is therefore independent of it. The guest OS doesn't require any modification.
o Para-Virtualization: The hardware is not simulated; instead, the guest software runs in its own isolated domain.
Operating System Virtualization:
In operating system virtualization, the virtual machine manager or virtual machine software is installed on the host's operating system (OS) rather than on the hardware. Operating system virtualization is mainly used for testing applications on different operating systems, i.e., across different OS platforms.
Types of Operating System Virtualization:
o Linux OS Virtualization: To virtualize Linux systems, VMware Workstation software is used. To install any software virtually, users need to install VMware software first.
o Windows OS Virtualization: Users need to install VMware first in order to install Windows OS virtually.
Reference: (i) https://www.w3schools.in/cloud-computing/cloud-virtualization/ (ii) https://en.wikipedia.org/wiki/Hardware_virtualization (iii) https://www.virtuatopia.com/index.php?title=An_Overview_of_VirtualBox_2
7. Types of Hypervisor:
Type-1: A bare-metal hypervisor is installed directly on top of the host
hardware. It manages all the hardware resources installed inside the
physical machine, and those resources are then allocated to the virtual machines.
Example: VMware vSphere ESXi
Type-2: A hosted hypervisor runs on top of a conventional
operating system. Type-2 hypervisors have some architectural limitations
and are quite popular in non-production environments.
Example: VMware Workstation, Oracle VirtualBox
Server Virtualization:
In server virtualization, the virtual machine manager or virtual machine software is installed directly on the server system, where one physical server can be divided into many servers based on resource usage, with the help of load balancing. This is done to meet the demand for resources, and the server administrator carries out the division of a physical server into many virtual servers.
Reference: (i) https://www.atlantic.net/what-is-server-virtualization/ (ii) https://data-flair.training/blogs/storage-virtualization-in-cloud-computing/ (iii) https://www.educba.com/virtualization-in-cloud-computing/
8. Storage Virtualization:
In storage virtualization, physical storage from different servers and different network devices/places is grouped together. Once this is done, it appears as a single storage device, and everything is managed by the virtual storage system. It can also be implemented using software applications. In storage virtualization in cloud computing, the servers are not aware of the location of the data storage. The main use of storage virtualization is to support backup and recovery processes.
Types of Storage Virtualization:
Block Virtualization:
This separates logical storage from physical storage so that the user/administrator can access it without having to access the physical storage directly.
File Virtualization:
This removes the dependency between data accessed at the file level and the location where the files are physically stored.
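As a rough illustration of the file-virtualization idea above, the sketch below pools several physical "devices" behind one logical namespace, so callers never learn where a file physically lives. The `VirtualStorage` class, its hash-based placement policy, and the dict-based devices are all invented for this sketch:

```python
# Minimal sketch of file-level storage virtualization. The "devices" are
# plain dicts standing in for physical stores; the placement policy and
# class name are illustrative, not any real product's API.
class VirtualStorage:
    def __init__(self, devices):
        self.devices = devices   # pooled physical backing stores
        self.catalog = {}        # logical file name -> device index

    def write(self, name, data):
        idx = hash(name) % len(self.devices)  # pick a physical device
        self.devices[idx][name] = data
        self.catalog[name] = idx              # remember where it went

    def read(self, name):
        # callers use only the logical name; the physical location is hidden
        return self.devices[self.catalog[name]][name]

pool = VirtualStorage([{}, {}, {}])
pool.write("report.txt", b"quarterly numbers")
print(pool.read("report.txt"))  # prints b'quarterly numbers'
```

Because the catalog decouples logical names from physical placement, a real implementation could migrate files between devices (e.g., for backup or recovery) without clients noticing.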
Reference: (i) https://en.vcenter.ir/storage/storage-virtualization/ (ii) https://data-flair.training/blogs/storage-virtualization-in-cloud-computing/ (iii) https://www.educba.com/storage-virtualization/
9. Features of Virtualization:
Partitioning: Multiple virtual servers can run on one physical server at the same time.
Encapsulation of data: All data on the virtual server, including boot disks, is encapsulated in a file format.
Isolation: The virtual servers running on a physical server are safely separated and don't affect each other.
Hardware Independence: A running virtual server can migrate to a different hardware platform.
Different Virtualization Platforms:
These are the top five virtualization platforms that can be used and implemented by anyone, whether a small business or a large company.
Reference: (i) https://www.w3schools.in/cloud-computing/cloud-virtualization/ (ii) https://www.educba.com/virtualization-platforms/
10. B. Service-Oriented Architecture (SOA):
SOA is an application framework that takes everyday business applications and
divides them into separate business functions and processes, called services. This
component of cloud applications enables cloud-related arrangements that can
be modified and adjusted on demand as business needs change.
A service-oriented system comprises two major components: Quality of Service and
Software as a Service.
Quality of Service identifies the functions and behavior of a service
from different viewpoints.
Software as a Service provides a new software delivery model, inherited
from the world of application service providers.
It has four properties:
i. It defines a business activity with a specific result, logically.
ii. It is self-contained.
iii. For its customers, it is a black box, meaning the consumer does not have to
be aware of the service's inner workings.
iv. It may consist of other underlying services.
Example:
Web services
- A web page can call multiple loosely coupled systems, such as a payment system.
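The "black box" property (iii) can be sketched in code: the consumer depends only on a service contract, never on the provider's internals. The `PaymentService` interface, `MockPaymentProvider`, and `checkout` function below are all hypothetical names invented for this sketch:

```python
from abc import ABC, abstractmethod

# Hypothetical service contract: consumers see only this interface,
# never the provider's inner workings (the "black box" property of SOA).
class PaymentService(ABC):
    @abstractmethod
    def charge(self, account: str, amount: float) -> bool: ...

class MockPaymentProvider(PaymentService):
    """Stand-in provider; a real one would call a remote web service."""
    def charge(self, account: str, amount: float) -> bool:
        return amount > 0  # pretend every positive charge succeeds

def checkout(payment: PaymentService, account: str, total: float) -> str:
    # The web page depends only on the service contract, so the
    # provider can be swapped without changing this code.
    return "paid" if payment.charge(account, total) else "declined"

print(checkout(MockPaymentProvider(), "alice", 19.99))  # prints "paid"
```

Swapping `MockPaymentProvider` for another `PaymentService` implementation changes nothing in `checkout`, which is exactly the loose coupling the slide describes.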
Reference: (i) https://www.tutorialandexample.com/cloud-computing-technologies/ (ii) https://www.tutorialride.com/cloud-computing/cloud-computing-technologies.htm (iii) https://medium.com/@SoftwareDevelopmentCommunity/what-is-service-oriented-architecture-fa894d11a7ec
11. C. Grid Computing
• Grid computing is a processor architecture that
combines computer resources from various
domains to reach a main objective. In grid
computing, the computers on the network can
work on a task together, thus functioning as a
supercomputer.
• A grid is connected by parallel nodes that form
a computer cluster, which runs on an operating
system such as Linux or other free software. The technology
is applied to a wide range of applications, such
as mathematical, scientific, or educational tasks,
through several computing resources. It is often
used in structural analysis, Web services such
as ATM banking, back-office infrastructures,
and scientific or marketing research.
12. History
• The idea of grid computing was first established in the early 1990s by Carl Kesselman, Ian
Foster and Steve Tuecke. They developed the Globus Toolkit standard, which included
grids for data storage management, data processing and intensive computation
management.
Grid vs Conventional
• "Distributed" or "grid" computing is, in general, a special type of parallel computing that
relies on complete computers (with onboard CPUs, storage, power supplies, network
interfaces, etc.) connected to a network (private, public, or the Internet) by a conventional
network interface. This commodity-hardware approach contrasts with the lower efficiency of
designing and constructing a small number of custom supercomputers.
• There are also some differences in programming and deployment. It can be costly and difficult to
write programs that run in the environment of a supercomputer, which may have a
custom operating system or require the program to address concurrency issues.
13. How does it work?
• Grid computing works by running
specialized software on every computer
that participates in the data grid. The
software acts as the manager of the entire
system and coordinates various tasks across
the grid. Specifically, the software assigns
subtasks to each computer so they can
work simultaneously on their respective
subtasks. After the completion of subtasks,
the outputs are gathered and aggregated to
complete a larger-scale task. The software
lets each computer communicate over the
network with the other computers so they
can share information on what portion of
the subtasks each computer is running, and
how to consolidate and deliver outputs.
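The assign-subtasks, run-simultaneously, then-aggregate cycle described above can be sketched in miniature. Threads here stand in for the separate computers of a real grid, and all function names are invented for the sketch:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "grid manager": split a large task into subtasks, hand each to a
# worker, then aggregate the partial results. Threads stand in for the
# networked computers of a real grid; names are illustrative.
def subtask(chunk):
    # each worker computes its share (here: a partial sum of squares)
    return sum(x * x for x in chunk)

def run_on_grid(data, workers=4):
    # split the large task into roughly equal subtasks
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # hand each subtask to a worker; workers run simultaneously
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(subtask, chunks))
    # gather and aggregate the outputs into the final result
    return sum(partials)

# splitting the work must give the same answer as doing it all in one place
print(run_on_grid(list(range(1000))) == sum(x * x for x in range(1000)))  # prints True
```

A real grid manager (such as the Globus Toolkit mentioned earlier) adds scheduling, fault tolerance, and network communication on top of this same split/compute/aggregate pattern.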
14. How is it used?
• Grid computing is especially useful when
different subject matter experts need to
collaborate on a project but do not
necessarily have the means to immediately
share data and computing resources in a
single site. By joining forces despite the
geographical distance, the distributed
teams are able to leverage their own
resources that contribute to a bigger effort.
This means that all computing resources do
not have to work on the same specific task,
but can work on sub-tasks that collectively
make up the end goal. For example, a
research team might analyze weather
patterns in the North Atlantic region, while
another team analyzes the South Atlantic
region, and both results can be combined
to deliver a complete picture of Atlantic
weather patterns.
15. Projects and Applications
• Grid computing offers a way to solve Grand Challenge problems such as protein folding,
financial modeling, earthquake simulation, and climate/weather modeling, and was
integral in enabling the Large Hadron Collider at CERN. Grids offer a way of using the
information technology resources optimally inside an organization. They also provide a
means for offering information technology as a utility for commercial and noncommercial
clients, with those clients paying only for what they use, as with electricity or water.
• As of October 2016, over 4 million machines running the open-source Berkeley Open
Infrastructure for Network Computing (BOINC) platform are members of the World
Community Grid. One of the projects using BOINC is SETI@home, which was using more
than 400,000 computers to achieve 0.828 TFLOPS as of October 2016. As of October 2016
Folding@home, which is not part of BOINC, achieved more than 101 x86-equivalent
petaflops on over 110,000 machines.
• Besides these, the European Union's Sixth Framework Programme, the NASA Advanced
Supercomputing facility (NAS), and the United Devices Cancer Research Project have used grid
computing.
16. Advantages of Grid Computing
Grid computing provides a framework and deployment platform that enables resource
sharing, access, aggregation, and management in a distributed computing environment,
based on system performance, users' quality of service, and emerging open
standards such as Web services. This makes possible functionality that was previously
unimaginable: near-real-time portfolio-rebalancing scenario analysis; risk-analysis models
of seemingly limitless complexity; and content distribution with heretofore unparalleled
speed and efficiency.
Can solve larger, more complex problems in a shorter time
Easier to collaborate with other organizations
Makes better use of existing hardware
References:
i. https://hazelcast.com/glossary/grid-computing/
ii. https://www.techopedia.com/definition/87/grid-computing
iii. http://ecomputernotes.com/fundamental/introduction-to-computer/grid-computing
iv. https://en.wikipedia.org/wiki/Grid_computing
v. http://azhar-paperpresentation.blogspot.com/2010/04/grid-computing_5337.html
vi. http://www.dartmouth.edu/~rc/classes/intro_grid/Grid-Advantages.html
17. D. Utility Computing
• Organizations pay only for the computing they have used:
• processing power
• network bandwidth
• software applications
• Utility computing uses a virtualized
infrastructure
• With a virtualized infrastructure:
• people, process and technology are focused on service levels
• capacity is allocated dynamically
• the entire infrastructure is simplified and flexible
• enables a utility or pay-per-use model for IT services
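The pay-per-use model above amounts to metering each resource and billing at a unit rate. The sketch below shows the idea; the `UsageMeter` class, resource names, and rates are all invented for illustration:

```python
# Hypothetical pay-per-use meter: usage is recorded per resource and the
# bill is computed from unit rates, so clients pay only for what they use.
RATES = {"cpu_hours": 0.05, "gb_bandwidth": 0.02}  # illustrative $/unit

class UsageMeter:
    def __init__(self):
        self.usage = {resource: 0.0 for resource in RATES}

    def record(self, resource, amount):
        # called each time the client consumes some of a resource
        self.usage[resource] += amount

    def bill(self):
        # total owed = sum over resources of (units used * unit rate)
        return sum(self.usage[r] * RATES[r] for r in RATES)

meter = UsageMeter()
meter.record("cpu_hours", 100)     # 100 * 0.05 = 5.00
meter.record("gb_bandwidth", 50)   #  50 * 0.02 = 1.00
print(f"{meter.bill():.2f}")       # prints "6.00"
```

Contrast this with the traditional model, where the full cost of hardware and licenses is paid up front regardless of how much is actually consumed.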
18. Properties of Utility Computing
Although there are many different definitions of utility computing, most include the
following five characteristics.
oScalability
Utility computing must ensure that sufficient IT resources are available under all conditions. Even when
demand for a service increases, its quality (e.g., response time) must not suffer.
oDemand pricing
Traditionally, companies have had to buy their own hardware and software when they needed computing
power, and this IT infrastructure usually has to be paid for in advance, regardless of how intensively the
company later uses it. Technology vendors link cost to usage, for example by making the lease rate for their
servers depend on how many CPUs the customer has enabled. If the computing power actually consumed
by individual departments can be measured, IT costs can be attributed directly to those departments in
internal cost accounting. Other ways of linking IT costs to usage are also possible.
19. oStandardized Utility Computing Services
The utility computing service provider offers its customers a catalog of standardized services. These may
have different service level agreements (agreements on the quality and price of an IT service). The
customer has no influence on the underlying technologies, such as the server platform.
oUtility Computing and Virtualization
Virtualization technologies can be used to share the web and other resources in a shared pool of
machines. They divide the network into logical resources instead of the available physical resources: an
application is assigned no specific pre-determined server or storage, but rather a free server or memory
from the pool at runtime.
oAutomation
Repetitive management tasks, such as setting up a new server or installing updates, can be automated.
Moreover, resources can be allocated to services automatically and the management of IT services can be
optimized, taking into account service level agreements and the operating costs of the IT resources.
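Automatic, SLA-aware resource allocation can be sketched as a simple control loop: when a service's measured load per server exceeds a service-level threshold, spare capacity is assigned to it without an administrator's intervention. The threshold, service names, and `rebalance` function are all invented for this sketch:

```python
# Toy automation sketch: servers are assigned to services automatically
# whenever measured load exceeds a service-level threshold, instead of an
# administrator doing it by hand. Numbers and names are illustrative.
SLA_MAX_LOAD = 0.75   # target utilisation per allocated server

def rebalance(services, free_servers):
    """Give overloaded services extra servers from the free pool."""
    for name, svc in services.items():
        # keep adding servers until the service meets its SLA
        # or the free pool runs dry
        while svc["load"] / svc["servers"] > SLA_MAX_LOAD and free_servers > 0:
            svc["servers"] += 1
            free_servers -= 1
    return free_servers

services = {"web": {"load": 3.0, "servers": 2},   # 1.5 load/server: over SLA
            "db":  {"load": 0.5, "servers": 1}}   # 0.5 load/server: within SLA
remaining = rebalance(services, free_servers=4)
print(services["web"]["servers"], remaining)  # prints "4 2"
```

The "web" service grows from 2 to 4 servers (bringing its per-server load down to the 0.75 SLA), while "db" is untouched; two servers stay in the free pool for future demand.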
20. Types of Utility Computing
Utility computing is of two types:
i. Internal Utility
ii. External Utility
Internal utility means that the computer network is shared only within a company.
External utility means that several different companies share the services pooled by a
dedicated service provider.
21. Advantages of Utility Computing
i. The client doesn't have to buy all the hardware, software and licenses needed to do
business. Instead, the client relies on another party to provide these services. The
burden of maintaining and administering the system falls to the utility computing
company, allowing the client to concentrate on other tasks.
ii. Another advantage is compatibility. In a large company with many departments,
problems can arise with computing software: each department might depend on a
different software suite, and files used by employees in one part of the company
might be incompatible with the software used in another. Utility computing gives
companies the option to subscribe to a single service and use the same suite of
software throughout the entire client organization.
22. Disadvantages of Utility Computing
i. A potential disadvantage is reliability. If a utility computing company is in financial
trouble or has frequent equipment problems, clients could be cut off from the
services they are paying for.
ii. Utility computing systems can also be attractive targets for hackers. A hacker might
want to access services without paying for them, or snoop around and investigate
client files. Much of the responsibility for keeping the system safe falls to the provider.
References:
i. https://www.utilitydive.com/news/how-the-cloud-can-change-the-utility-business-model/438611/
ii. http://utilitygridcomputing.blogspot.com/2008/10/advantages-disadvantages-of-uc.html
iii. https://www.hcltech.com/technology-qa/what-is-utility-computing
iv. https://www.techopedia.com/definition/14622/utility-computing
v. https://www.slideshare.net/asmitamtarar/cloud-computing-and-utility-computing