Energy Efficient Resource Allocation
ABSTRACT
Cloud computing has rapidly emerged over the past few years as a successful paradigm for providing IT
infrastructure, resources and services on a pay-per-use basis.
As the wider adoption of Cloud and virtualization technologies has led to the establishment
of large-scale data centres that consume excessive energy and have significant carbon
footprints, energy efficiency is becoming increasingly important for data centres and the Cloud.
Today, data centre energy consumption represents about 3 per cent of all global electricity
production and is estimated to rise further in the future.
Cloud computing infrastructures are designed to support
the accessibility and deployment of various service-oriented applications by users. Cloud
computing services are made available through server farms or data centres. These
servers, along with air-conditioning and cooling equipment, are the major source of
power consumption in data centres. Moreover, energy consumption in the cloud is
proportional to resource utilization, and data centres are among the world's largest
consumers of electricity. The complexity of the resource allocation problem increases with
the size of the cloud infrastructure and becomes difficult to solve effectively. This thesis presents
the resource allocation problem in cloud computing as a linear programming problem with
the objective of minimizing the energy consumed in computation. The problem
has been treated using heuristic and meta-heuristic approaches: several heuristic techniques are
adopted, implemented and analysed under one set of common assumptions, using the
Expected Time to Compute (ETC) task model for resource allocation. Green Cloud
computing solutions that not only minimize operational costs but also reduce
environmental impact are therefore essential.
TABLE OF CONTENTS
INTRODUCTION
1. CHAPTER 1
1.1 Cloud Computing
1.1.1 What Is Cloud Computing?
1.1.2 Cloud Computing Actors
1.1.3 Cloud Services Overview
1.1.3.1 Classic Cloud Service Models
2. CHAPTER 2
2.1 Need for Energy Management in Cloud
2.2 Virtualization
2.3 Objective
2.4 Literature Survey
3. CHAPTER 3
3.1 Green Cloud Computing Architecture
4. CHAPTER 4
4.1 Algorithms
4.1.1 Modified Best Fit Heuristic Algorithm
4.1.2 Energy-Aware Job Scheduling
APPLICATIONS
ADVANTAGES
CONCLUSION
REFERENCES
INTRODUCTION
Cloud computing is emerging as a new paradigm of large-scale distributed computing. It has
moved computing and data away from desktop and portable PCs into large data centres. It
provides scalable IT resources such as applications and services, as well as the
infrastructure on which they operate, over the Internet on a pay-per-use basis, allowing
capacity to be adjusted quickly and easily. This helps to accommodate changes in demand and helps
organizations avoid the capital costs of software and hardware. Thus, cloud computing is
a framework for enabling convenient, on-demand network access to a shared pool of
computing resources (e.g. networks, servers, storage, applications and services). These
resources can be provisioned and deprovisioned quickly with minimal management effort or
service provider interaction, which further promotes availability. Due to its
exponential growth, cloud computing has been widely adopted by industry, and there
has been a rapid expansion of data centres. This expansion has caused a dramatic increase in
energy use and in environmental impact in terms of carbon footprint. The link between
energy consumption and carbon emissions has given rise to an energy management issue:
improving energy efficiency in cloud computing to achieve Green computing.
The fact that electricity consumption is set to rise 76%
from 2007 to 2030, with data centres contributing an important portion of this increase,
emphasizes the importance of reducing energy consumption in Clouds. According to a
Gartner report, the average data centre is estimated to consume as much energy as 25,000
households, and according to another report, "the total estimated energy bill for data centres in 2010
is $11.5 billion, and energy costs in a typical data centre double every five years". Faced with this
electronic waste and the huge amounts of energy used to power data centres, energy-efficient
data centre solutions have become one of the greatest challenges.
A major cause of energy inefficiency in data centres is
the idle power wasted when resources are underused. In addition to this problem of low
resource utilization, servers are often left permanently switched on even when they are not in use,
and an idle server can still consume up to 70% of its peak power. To address these problems, it is necessary to
eliminate power waste, improve efficiency and change the way resources are used.
This can be done by designing energy-efficient resource allocation solutions at different
Cloud levels, which is the focus of this thesis.
Energy efficiency is one of the critical issues in cloud
computing. To achieve energy efficiency in a cloud environment, tasks need to be scheduled
efficiently. In cloud computing, the underlying large-scale computing infrastructure is often
heterogeneous, not only because it is neither economical nor reliable to procure all the servers,
network devices and power supply devices in one size and at one time, but also because different
applications require different hardware: workflow-intensive computing might
need standard, cheap hardware, while scientific computing might need specialized hardware
beyond the CPU, such as GPUs or ASICs. Many kinds of resources in the large-scale computing
infrastructure need to be managed: CPU load, network bandwidth, disk quota, and even the type
of operating system. To provide better quality of service, resources are provisioned to
users or applications via load-balancing, high-availability, and security and authorization
mechanisms. To maximize cloud utilization, the capacity of application
requirements should be estimated so that a minimal set of cloud computing infrastructure devices
needs to be procured and maintained. Given access to the cloud computing infrastructure, applications
should be allocated the proper resources to perform their computation with both time cost and
infrastructure cost minimized, and the proper resources should be selected for each specific application.
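The allocation view sketched above can be made concrete with a toy instance. The thesis formulates the problem as a linear program; as a minimal stand-in for an LP solver, the snippet below finds the minimum-energy task-to-server assignment by brute force. The energy matrix, server capacities and task counts are illustrative assumptions only (for example, values an ETC task model might produce), not data from this work.

```python
# Sketch: minimum-energy task-to-server assignment, solved by brute force
# for a tiny instance. The energy matrix is hypothetical (e.g. derived from
# an ETC task model combined with per-server power figures).
from itertools import product

energy = [            # energy[i][j]: energy cost of running task i on server j
    [3.0, 5.0, 4.0],
    [2.0, 6.0, 3.0],
    [4.0, 2.0, 5.0],
]
n_tasks, n_servers = len(energy), len(energy[0])
capacity = [2, 2, 2]  # maximum number of tasks per server (hypothetical)

best_cost, best_assignment = float("inf"), None
for assignment in product(range(n_servers), repeat=n_tasks):
    # Respect server capacities.
    if any(assignment.count(s) > capacity[s] for s in range(n_servers)):
        continue
    cost = sum(energy[i][assignment[i]] for i in range(n_tasks))
    if cost < best_cost:
        best_cost, best_assignment = cost, assignment

print(best_cost, best_assignment)  # minimum total energy and chosen servers
```

Brute force is only viable for toy sizes; as the abstract notes, the problem's complexity grows with the infrastructure, which is why the thesis turns to heuristic and meta-heuristic methods.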
CHAPTER 1
1.1 CLOUD COMPUTING
1.1.1 WHAT IS CLOUD COMPUTING?
Cloud computing has become one of the fastest growing paradigms in computer science. It is
a model for providing IT resources as a service in a cost-efficient, pay-per-use way. By
adopting Cloud services, companies and individual users can externalize their
hardware resources, services, applications and other IT functions.
A commonly cited definition is: "Cloud computing is a pay-per-use model for enabling
convenient, on-demand network access to a shared pool of configurable computing
resources such as networks, servers, storage, applications, and services. It can be
rapidly provisioned and released with minimal management effort or service provider
interaction." From this definition we can identify the following key features of Cloud
computing:
On-demand self-service: automated on-demand resource provisioning.
Broad network access: Resources can be accessed remotely over the network.
Resource pooling: Resources are pooled and dynamically assigned independently
from their physical location.
Rapid elasticity: Capability can scale to cope with demand peaks.
Measured Service: Resource usage is metered to enable the pay-per-use model.
Different approaches can be used to deploy Cloud infrastructures:
Private cloud:
Refers to cloud infrastructures owned and managed by a single company, used in a private
network and not available for public use.
Community cloud:
Refers to shared cloud infrastructures for specific communities composed of multiple users.
Public cloud:
Refers to high-performance and large infrastructures operated by external companies that
provide IT services for many consumers via the Internet.
Hybrid cloud:
As the name already indicates, a hybrid cloud is a combination of both a private and public
cloud. Parts of the service run on the company's private cloud, and parts are outsourced to an
external public cloud.
1.1.2 CLOUD COMPUTING ACTORS
Cloud computing involves three main actors that have distinct roles and interactions inside
the Cloud environment: providers, brokers and users.
Cloud Provider:
The provider possesses the Cloud infrastructure on which Cloud services are deployed.
This actor is responsible for the management and the control of cloud resources and for
handling users' requests.
Cloud user:
A Cloud user is a person or an organization that consumes Cloud services.
Cloud Broker:
The Broker is an intermediate player between Cloud users and providers. It is responsible for
distributing incoming requests among the different providers based on users'
requirements. To make a simple analogy, a Cloud broker is like a travel agency acting as an
intermediary between clients and service providers.
1.1.3 CLOUD SERVICES OVERVIEW
Cloud service models describe how services are made available to users. We distinguish
between two different types of models: classic Cloud service models and newer hybrid ones.
1.1.3.1 CLASSIC CLOUD SERVICE MODELS
Classic Cloud service models can be categorized into three types: Infrastructure as a Service
(IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
Software as a Service (SaaS). The capability provided to the consumer is to use the
provider’s applications running on a cloud infrastructure. The applications are accessible
from various client devices through either a thin client interface, such as a web browser (e.g.,
web-based email), or a program interface. The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems, storage, or
even individual application capabilities, with the possible exception of limited user-specific
application configuration settings.
Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the
cloud infrastructure consumer-created or acquired applications created using programming
languages, libraries, services, and tools supported by the provider. The consumer does not
manage or control the underlying cloud infrastructure including network, servers, operating
systems, or storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment.
Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include operating systems
and applications. The consumer does not manage or control the underlying cloud
infrastructure but has control over operating systems, storage, and deployed applications; and
possibly limited control of select networking components (e.g., host firewalls).
CHAPTER 2
2.1 NEED FOR ENERGY MANAGEMENT IN CLOUD
Energy efficiency is increasingly important due to the increasing energy costs and the need to
reduce greenhouse gas emissions and also to decrease the overall energy consumption,
storage and communications.
Servers are unfriendly to environment and IT industry contributes to 2% of worlds total CO2
emissions (eg: 2.8 tons from US power plants). A typical datacentre consumes as much as
energy as 25000 households. Servers consume 0.5% of the world’s total electricity usage.
More than 15% of the servers are running without being used actively. So we need energy
efficient resource management methods that minimize energy consumption at the same time
meet the job deadlines[2].
The energy consumed by computing can be divided, according to its use, into two "edges": the
first is the energy consumed by clients, comprising PCs, peripherals and all
types of mobile devices; the second is the energy consumed by servers, networks
and cooling systems in data centres. Due to the need to maintain the quality of service that
customers expect and the continuous expansion of the industry, energy consumption at the
"data centre edge" is increasing along with its performance. With the aim of
minimizing the negative environmental impact of ICT, a different perspective on building
and using computing infrastructure has emerged, named "Green Computing" or "Green IT".
Different approaches to energy efficiency are [1]:
Energy Efficient Hardware
Virtualization
Dynamic Voltage and Frequency Scaling
Energy-aware job Scheduling
Request Batching
Multi-speed Disks
Server Consolidation
2.2 VIRTUALIZATION
Virtualization is the key factor that allows Cloud Computing to attain sustainability from
a cost and energy-efficiency point of view.
Virtualization is a technology that is rapidly transforming the IT landscape and has
changed the way people compute. It reduces the amount of hardware required, saves energy and costs, and
makes it possible to run multiple applications and various operating systems on the same
server at the same time. It thereby increases the utilization, efficiency and flexibility of existing
computer hardware.
Why do we need virtualization?
Virtualization provides various benefits including saving time and energy, decreasing
costs and minimizing overall risk.
Provides ability to manage resources effectively.
Provides for data loss prevention.
Hardware Independence: Virtual machines run independently of underlying hardware.
Portability: Virtual machines can be migrated between different hosts.
2.3 OBJECTIVE
Energy-efficient Cloud resource allocation consists in identifying and assigning resources to
each incoming user request in such a way that user requirements are met, the smallest
possible number of resources is used, and data centre energy efficiency is optimized.
The main focus of this thesis is on the design and development of models and
algorithms for energy efficient resource allocation in Cloud data centres [1].
The first goal of this work is to propose, develop and evaluate optimization algorithms
of resource allocation for traditional IaaS architectures that are widely used to manage
clouds [2].
2.4 LITERATURE SURVEY
According to the Report to Congress on Server and Data Centre Energy, servers consume
59% of the total IT load and 41% of total data centre power consumption; the rest of the power
is consumed by other devices such as transformers, distribution wiring, air conditioners, pumps
and lighting [5].
CHAPTER 3
3.1 GREEN CLOUD COMPUTING ARCHITECTURE
The Cloud IaaS manager (e.g. OpenStack, OpenNebula or Eucalyptus) controls
and manages cloud resources; it handles client requests, schedules VMs, and
fetches and stores images in storage spaces.
The Energy Estimation Module is an intermediate module between the cloud
infrastructure manager and the energy-aware scheduler. It can rely,
for example, on an energy estimation tool such as Joulemeter [46], which uses
power models to infer the power consumption of VMs or servers from their
resource usage.
The Energy-aware VM Scheduler, responsible for energy-aware VM
placement in the data centre, is the focus of our energy consumption
optimization model. This green scheduler is composed of two
modules: an allocation module and a migration module. The
allocation module performs the initial VM placement using our exact VM
allocation algorithm, while the migration module handles the dynamic consolidation
of virtual machines and minimizes the number of active servers.
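Estimation tools like Joulemeter typically fit a model in which server power grows roughly linearly with CPU utilisation between an idle floor and a peak value. The sketch below illustrates that idea; the wattages are hypothetical (the idle figure at roughly 70% of peak echoes the observation in the introduction), not measurements from this work.

```python
# Sketch of the linear power model commonly used by energy estimation tools:
# power grows linearly with CPU utilisation between an idle floor and a peak.
# The wattages below are illustrative assumptions only.
P_IDLE = 175.0   # watts drawn by an idle server (~70% of an assumed peak)
P_PEAK = 250.0   # watts at full utilisation

def server_power(cpu_utilisation: float) -> float:
    """Estimated power draw in watts for a CPU utilisation in [0, 1]."""
    return P_IDLE + (P_PEAK - P_IDLE) * cpu_utilisation

print(server_power(0.0))   # idle floor
print(server_power(0.5))   # mid-load
print(server_power(1.0))   # peak
```

A scheduler only needs such a model to compare candidate placements; the absolute calibration can come from a tool like Joulemeter at run time.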
This paper proposed an architectural framework and principles for energy-efficient Cloud
computing. The figure below shows the high-level architecture for supporting energy-efficient
service allocation in a Green Cloud computing infrastructure.
CHAPTER 4
4.1 ALGORITHMS
4.1.1 MODIFIED BEST FIT HEURISTIC ALGORITHM
Virtual machine migration and placement is a technique for minimizing energy consumption.
VM placement involves two problems:
i. placing the VMs on hosts, and
ii. optimizing the current VM allocation.
The first problem can be solved by the Modified Best Fit Decreasing (MBFD) algorithm: sort all VMs in
decreasing order of their current CPU utilization, and allocate each VM to the host that
yields the least increase in power consumption from that allocation. The second problem is
carried out in two steps:
i. select the VMs that need to be migrated, and
ii. place the chosen VMs on hosts using the MBFD algorithm.
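The MBFD steps above can be sketched as follows. The power model here is a hypothetical one in which a switched-off host draws nothing and an active host pays an idle floor plus a utilisation-proportional term, so placing a VM on an already-active host is cheaper than waking a new one; all wattages, capacities and VM demands are illustrative assumptions.

```python
# Sketch of Modified Best Fit Decreasing (MBFD): VMs are sorted by
# decreasing CPU demand and each is placed on the host whose power draw
# increases least. Power model and numbers are illustrative assumptions.
def host_power(util):
    if util == 0:
        return 0.0                # host is switched off
    return 175.0 + 75.0 * util    # idle floor + utilisation-proportional term

def mbfd(vm_demands, n_hosts, host_capacity=1.0):
    load = [0.0] * n_hosts        # current CPU load per host
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        best_host, best_increase = None, float("inf")
        for h in range(n_hosts):
            if load[h] + demand > host_capacity:
                continue          # host cannot fit this VM
            increase = host_power(load[h] + demand) - host_power(load[h])
            if increase < best_increase:
                best_host, best_increase = h, increase
        if best_host is None:
            raise RuntimeError(f"no host can fit VM {vm}")
        load[best_host] += demand
        placement[vm] = best_host
    return placement

print(mbfd({"vm1": 0.6, "vm2": 0.5, "vm3": 0.3}, n_hosts=2))
```

Because waking a new host costs the full idle floor, the heuristic naturally consolidates VMs onto already-active hosts, which is exactly the energy-saving behaviour described above.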
To determine when and which VMs should be migrated, three double-threshold VM selection
policies are used:
i. the Minimization of Migrations policy,
ii. the Highest Potential Growth policy, and
iii. the Random Choice policy.
The Minimization of Migrations (MM) policy selects the minimum number of VMs that need
to migrate from a host to bring its CPU utilization below the upper utilization threshold whenever
that threshold is violated. The algorithm sorts the list of VMs in decreasing order of
CPU utilization, then repeatedly scans the list to find the VM that is
best to migrate from the host. The Highest Potential Growth (HPG) policy migrates the VMs
that have the lowest CPU usage relative to their CPU capacity. The Random Choice
(RC) policy relies on randomly selecting the number of VMs needed to bring the host's CPU
utilization below the upper utilization threshold.
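The MM policy can be sketched with a simple greedy, largest-first selection: keep removing the most CPU-hungry VM until the host drops below the threshold. This greedy loop is an approximation of the minimum-count selection described above; the threshold and per-VM loads are illustrative assumptions.

```python
# Sketch of the Minimization of Migrations (MM) selection policy: when a
# host exceeds the upper utilisation threshold, migrate as few VMs as the
# greedy largest-first scan needs to bring it back below the threshold.
def mm_select(vm_utils, upper_threshold):
    """Return the VMs to migrate off an overloaded host (largest first)."""
    total = sum(vm_utils.values())
    if total <= upper_threshold:
        return []                 # threshold not violated: migrate nothing
    to_migrate = []
    for vm, u in sorted(vm_utils.items(), key=lambda kv: -kv[1]):
        to_migrate.append(vm)
        total -= u
        if total <= upper_threshold:
            break
    return to_migrate

# Host at 0.90 total utilisation against an upper threshold of 0.70:
print(mm_select({"a": 0.30, "b": 0.25, "c": 0.20, "d": 0.15}, upper_threshold=0.70))
```

Removing the largest VM first tends to need the fewest migrations, which is the stated goal of the MM policy; the selected VMs would then be re-placed with MBFD.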
4.1.2 ENERGY-AWARE JOB SCHEDULING [5]
First Come First Served
First come, first served (FCFS) is an operating-system process scheduling algorithm and a
network routing management mechanism that executes queued requests and
processes in order of arrival: what comes first is handled first, and
the next request in line is executed once the one before it is complete.
The proposed modification is that, in pre-emptive scheduling, FCFS also keeps track of
tasks' energy needs: if a task needs 10 units of energy while the system has fewer than 10 units
remaining, there is no point in scheduling that task, as it will eventually lead to failure.
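The proposed FCFS modification can be sketched as a queue that skips any task whose energy requirement exceeds the remaining budget; the task names and energy figures below are illustrative assumptions.

```python
# Sketch of the energy-aware FCFS modification described above: queued
# tasks run in arrival order, but a task whose energy requirement exceeds
# the remaining budget is skipped rather than scheduled to fail.
from collections import deque

def energy_aware_fcfs(tasks, energy_budget):
    """tasks: list of (name, energy_needed) tuples in arrival order."""
    queue, executed, skipped = deque(tasks), [], []
    while queue:
        name, needed = queue.popleft()
        if needed <= energy_budget:
            energy_budget -= needed
            executed.append(name)
        else:
            skipped.append(name)  # would fail: not enough energy remaining
    return executed, skipped

# t2 needs 10 units but only 8 remain after t1, so it is skipped:
print(energy_aware_fcfs([("t1", 4), ("t2", 10), ("t3", 3)], energy_budget=12))
```

Skipping the over-budget task preserves arrival order for everything else, keeping the spirit of FCFS while avoiding wasted energy on a doomed execution.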
Round Robin
Round-robin scheduling (RRS) is a job-scheduling algorithm that is considered very
fair, as it assigns a time slice to each process in the queue. Each process
is allowed to use the CPU for a given amount of time, and if it does not finish within the
allotted time, it is pre-empted and moved to the back of the queue so that the next process
in line can use the CPU for the same amount of time.
Because this scheduling leaves processors in idle states, it is not considered energy efficient.
The proposed solution is to assign processors only to tasks whose energy consumption the
system can sustain. Example: the system charges at 5 units per minute and holds 10 units when
task T1 enters the ready queue. T1 needs 10 units of power per minute for 3 minutes. If the
scheduler assigns the processor directly, without checking power needs, the system will fail.
When the processor is free, it should enter a power-saving mode.
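The admission check implied by the worked example above can be sketched as a simple energy-balance test: before assigning the processor, verify that stored energy plus energy harvested over the task's runtime covers the task's total demand (assuming constant charge and drain rates). The units mirror the example and are hypothetical.

```python
# Sketch of the admission check implied by the round-robin example above:
# admit a task only if stored energy plus charging over its runtime covers
# its demand. Constant charge and drain rates are assumed.
def can_admit(stored, charge_rate, power_needed, minutes):
    """True if the system can power the task for its whole duration."""
    demand = power_needed * minutes
    supply = stored + charge_rate * minutes
    return supply >= demand

# T1 needs 10 units/min for 3 min; the system holds 10 units, charging at 5/min.
print(can_admit(stored=10, charge_rate=5, power_needed=10, minutes=3))
```

With constant rates and drain above the charge rate, the energy level is lowest at the end of the task, so checking the end-of-run balance is sufficient; the check returns False for T1, matching the example's conclusion that scheduling it would fail.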
APPLICATIONS
1. Data centres where servers are powered on all the time.
The biggest impact of this framework is on data centres, where huge amounts
of power are continuously consumed. Even small energy savings here would have a
huge overall impact.
2. In software industries.
Every software organization has its own database, stored either in the Cloud or
in a private data centre. These systems could be improved by applying this
framework.
3. In making public systems more energy efficient.
As discussed, networking is a basic computing element; people would be
willing to upgrade their systems at very low cost, and it would also help save
costs in the long run.
4. In embedded systems.
Embedded systems face the biggest limitation in power supply, and all their
networking is wireless. Power consumption matters greatly in such appliances and
would be reduced.
ADVANTAGES
1. Reduce energy consumption of computing resources during peak operation.
2. Save energy during idle operation.
3. Use eco-friendly sources of energy.
4. Reduce the harmful effects of computing resources.
5. Reduce computing waste.
CONCLUSION
Cloud computing, a pool of virtualized computer resources, is a new concept, and green cloud
computing is its future development trend and a main research focus. Reducing energy
consumption is an increasingly important issue in cloud computing, especially when
dealing with large-scale clouds. In this paper, we propose an improved clonal selection
algorithm based on time-cost and energy-consumption models in a cloud computing
environment. The experimental results show that our approach has immense potential: it
offers significant improvement in average execution time, demonstrates high potential for
improving the energy efficiency of the data centre, and can effectively meet the service-level
agreements requested by users. In future work, we will improve the proposed algorithm by
considering other operators and its computational complexity, to make further work more
practical for green cloud computing.
REFERENCES
[1] Shahin Vakilinia, Behdad Heidarpour and Mohamed Cheriet, "Energy Efficient
Resource Allocation in Cloud Computing Environments",
doi: 10.1109/ACCESS.2016.2633558.
[2] Mehiar Dabbagh, Bechir Hamdaoui, Mohsen Guizani and Ammar Rayes,
"Energy-Efficient Resource Allocation and Provisioning Framework for Cloud Data
Centers", Cisco Systems, San Jose, CA 95134.
[3] Dang Minh Quan, Robert Basmadjian, Hermann De Meer, Ricardo Lent,
Toktam Mahmoodi, Domenico Sannelli, Federico Mezza and Corentin Dupont,
"Energy Efficient Resource Allocation Strategy for Cloud Data Centres".
[4] Vinisha Sasidharan, "Survey on Energy Efficient Resource Allocation Methods in
Cloud Environment", International Journal of Computer Applications (0975-8887),
International Conference on Innovation in Communication, Information and
Computing (ICICIC), 2013.
[5] Chaima Ghribi, "Energy Efficient Resource Allocation in Cloud Computing
Environments", joint doctoral thesis, Télécom SudParis and Université Pierre et
Marie Curie.