This document summarizes an article from the Annals of Emerging Technologies in Computing (AETiC) journal that models and simulates power dissipation control techniques in internet data centers. It begins with background on internet data centers and the need to reduce power consumption and cooling costs. It then describes three control techniques - CRACs ON/OFF control, multi-step ON/OFF control, and CRACs step-3 ON/OFF control - and finds through simulation that the CRACs step-3 ON/OFF control provides the smoothest power variations and is the best option. The document also includes details on modeling the data center, server racks, and CRAC units used to simulate the different control techniques.
[Meetup] A successful migration from Elasticsearch to ClickHouse - Vianney FOUCAULT
Paris ClickHouse meetup 2019: how Contentsquare successfully migrated to ClickHouse.
Discover the subtleties of a migration to ClickHouse: what to check beforehand, then how to operate ClickHouse in production.
Presto talk @ Global AI Conference 2018 Boston - kbajda
Presented at Global AI Conference in Boston 2018:
http://www.globalbigdataconference.com/boston/global-artificial-intelligence-conference-106/speaker-details/kamil-bajda-pawlikowski-62952.html
Presto, an open source distributed SQL engine, is widely recognized for its low-latency queries, high concurrency, and native ability to query multiple data sources. Proven at scale in a variety of use cases at Facebook, Airbnb, Netflix, Uber, Twitter, LinkedIn, Bloomberg, and FINRA, Presto has experienced unprecedented growth in popularity in both on-premises and cloud deployments in the last few years. Presto is truly a SQL-on-Anything engine: a single query can access data from Hadoop, S3-compatible object stores, RDBMSs, NoSQL stores and custom data stores. This talk will cover some of the best use cases for Presto and recent advancements in the project, such as the Cost-Based Optimizer and geospatial functions, as well as discuss the roadmap going forward.
Presto @ Treasure Data - Presto Meetup Boston 2015 - Taro L. Saito
Treasure Data simplifies event analytics for the complex digital world. Our customers send us 1,000,000 events per second and issue 30,000+ Presto queries every day to understand their customers better. One of the challenges is designing a cloud database with zero downtime to support a global customer base. We have achieved this goal by developing several open-source technologies; Fluentd and Embulk enable seamless log collection from stream/batch sources, and with MessagePack we can provide an extensible columnar store that accommodates future schema changes. Finally, Presto allows us to serve the wide variety of data processing our customers perform on our service. In this talk, I will present an overview of our system, and how our customers keep using Presto while collecting and extending their data set.
Visualize some of Austin's open source data using Elasticsearch with Kibana. ObjectRocket's Steve Croce presented this talk on 10/13/17 at the DBaaS event in Austin, TX.
Stream Processing Live Traffic Data with Kafka Streams - Tim Ysewyn
In this workshop we will set up a streaming framework which will process real-time data from traffic sensors installed within the Belgian road system.
Starting with the intake of the data, you will learn best practices and the recommended approach to split the information into events in a way that won't come back to haunt you.
With some basic stream operations (count, filter, ... ) you will get to know the data and experience how easy it is to get things done with Spring Boot & Spring Cloud Stream.
But since simple data processing is not enough to fulfill all your streaming needs, we will also let you experience the power of windows. After this workshop, tumbling, sliding and session windows hold no more mysteries and you will be a true streaming wizard.
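The tumbling windows the workshop covers can be illustrated outside Kafka Streams: a tumbling window buckets each event into the fixed, non-overlapping time interval its timestamp falls in, and aggregates per bucket. Below is a minimal sketch of that idea (this is not the Kafka Streams API; the event tuples and window size are invented for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Group (timestamp_ms, sensor_id) events into fixed, non-overlapping
    windows of width window_ms and count readings per sensor per window."""
    counts = defaultdict(int)
    for ts, sensor in events:
        # Each timestamp belongs to exactly one window in a tumbling scheme.
        window_start = (ts // window_ms) * window_ms
        counts[(window_start, sensor)] += 1
    return dict(counts)

events = [(0, "s1"), (400, "s1"), (900, "s2"), (1100, "s1")]
print(tumbling_window_counts(events, 1000))
# {(0, 's1'): 2, (0, 's2'): 1, (1000, 's1'): 1}
```

Sliding windows differ in that one event can land in several overlapping windows, and session windows are bounded by inactivity gaps rather than fixed widths.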
Iceberg: a modern table format for big data (Ryan Blue & Parth Brahmbhatt, Netflix)
Presto Summit 2018 (https://www.starburstdata.com/technical-blog/presto-summit-2018-recap/)
Building Pinterest Real-Time Ads Platform Using Kafka Streams - Confluent
Building Pinterest Real-Time Ads Platform Using Kafka Streams (Liquan Pei + Boyang Chen, Pinterest) Kafka Summit SF 2018
In this talk, we share the experience of building Pinterest’s real-time Ads Platform utilizing Kafka Streams. The real-time budgeting system is the most mission-critical component of the Ads Platform, as it controls how each ad is delivered to maximize user, advertiser and Pinterest value. The system needs to handle impressions at over 50,000 queries per second (QPS), requires less than five seconds of end-to-end latency, and must recover within five minutes during outages. It also needs to be scalable to handle the fast growth of Pinterest’s ads business.
The real-time budgeting system is composed of a real-time stream-stream joiner, a real-time spend aggregator and a spend predictor. At Pinterest’s scale, we need to overcome quite a few challenges to make each component work. For example, the stream-stream joiner needs to maintain terabyte-size state while supporting fast recovery, and the real-time spend aggregator needs to publish to thousands of ads servers while supporting over one million read QPS. We chose Kafka Streams as it provides millisecond latency guarantees, scalable event-based processing and easy-to-use APIs. In the process of building the system, we performed extensive tuning of RocksDB and the Kafka producer and consumer, and pushed several open source contributions to Apache Kafka. We are also working on adding a remote checkpoint for Kafka Streams state to reduce cold-start time when adding more machines to the application. We believe that our experience can be beneficial to people who want to build real-time streaming solutions at large scale and deeply understand Kafka Streams.
Do you gather metrics from your application? Can you combine them and easily generate custom graphs out of them? Can your developers measure whatever they want at any point of your application without breaking it or making it slower?
In our next itnig friday, Víctor Martínez will show us how easy it is to roll your own Graphite installation and how to use Etsy's statsd collector to flush your metrics. You will learn what Graphite is, how all of its components work, how to get your real-time and historic metrics into Carbon, Graphite's database, and how to plot them in different manners. Víctor will show us some Graphite dashboards, alternative statsd implementations, detailed common Graphite configuration gotchas, design limitations and how to deal with them.
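The plaintext statsd wire format behind this workflow is simple enough to sketch: a metric is a UDP datagram of the form `name:value|type` (counter `c`, gauge `g`, timer `ms`), optionally suffixed with a sample rate. The sketch below builds and sends such datagrams; it is not a statsd client library, and the metric names, host and port are illustrative (8125 is statsd's conventional port):

```python
import socket

def statsd_packet(name, value, metric_type, sample_rate=None):
    """Build a plaintext statsd datagram: '<name>:<value>|<type>[|@<rate>]'."""
    packet = f"{name}:{value}|{metric_type}"
    if sample_rate is not None:
        packet += f"|@{sample_rate}"
    return packet

def send_metric(packet, host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send; statsd aggregates and flushes to Carbon."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet.encode("ascii"), (host, port))
    sock.close()

print(statsd_packet("app.signups", 1, "c"))          # app.signups:1|c
print(statsd_packet("app.latency", 320, "ms", 0.1))  # app.latency:320|ms|@0.1
```

Because the transport is UDP, instrumented application code never blocks on the metrics backend, which is why developers can "measure whatever they want" without slowing the application down.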
Bonnier News is the largest news organisation in Sweden, publishing Dagens Nyheter and Expressen, two of the country’s largest newspapers. When we needed to build a new data processing platform that could accommodate the needs of many different, competing brands, we turned to Openshift and Kubernetes. In this presentation, we will describe the architectural tradeoffs and choices we made, and how we have been able to deploy data flows at a high rate by focusing on technical simplicity.
This presentation is an attempt to demystify the practice of building reliable data processing pipelines. We go through the necessary pieces needed to build a stable processing platform: data ingestion, processing engines, workflow management, schemas, and pipeline development processes. The presentation also includes component choice considerations and recommendations, as well as best practices and pitfalls to avoid, most of them learnt through expensive mistakes.
Optimization of energy consumption in cloud computing datacenters - IJECEIAES
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters with substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenters using energy-efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources, and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers’ CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance compared to SAT-based ILP solvers, especially as the number of tasks and resources grows.
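The objective described above can be made concrete with a toy sketch (this is not the paper's ILP formulation or its solvers): on small instances the optimal assignment can be found by brute force, and even that already shows why consolidating tasks beats round-robin spreading when idle cores of powered-on servers still draw power. The per-core power figures are invented for illustration.

```python
from itertools import product

def server_power(used_cores, total_cores, p_active=10.0, p_idle=4.0):
    """Power of one server: active cores draw p_active W, idle cores of a
    powered-on server draw p_idle W; a fully unused server is switched off."""
    if used_cores == 0:
        return 0.0
    return used_cores * p_active + (total_cores - used_cores) * p_idle

def min_power_assignment(n_tasks, servers):
    """Exhaustively search task->server assignments (one core per task) and
    return the minimum total power; a stand-in for the ILP on toy sizes."""
    best = float("inf")
    for assign in product(range(len(servers)), repeat=n_tasks):
        load = [assign.count(s) for s in range(len(servers))]
        if any(load[s] > servers[s] for s in range(len(servers))):
            continue  # capacity constraint violated
        power = sum(server_power(load[s], servers[s]) for s in range(len(servers)))
        best = min(best, power)
    return best

# 3 one-core tasks on two 4-core servers: packing all three on one server
# (3*10 + 1*4 = 34 W) beats a round-robin split (2*10+2*4 + 1*10+3*4 = 50 W).
print(min_power_assignment(3, [4, 4]))  # 34.0
```

A real ILP solver expresses the same capacity constraints and objective over binary assignment variables and scales far beyond what brute force can handle.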
A hybrid algorithm to reduce energy consumption management in cloud data centers - IJECEIAES
There are several physical data centers in the cloud environment with hundreds or thousands of computers. Virtualization is the key technology that makes cloud computing feasible. It separates virtual machines in such a way that each of these so-called virtualized machines can be configured on a number of hosts according to the type of user application. It is also possible to dynamically alter the allocated resources of a virtual machine. Methods of energy saving in data centers can be divided into three general categories: 1) methods based on load balancing of resources; 2) using hardware facilities for scheduling; 3) considering thermal characteristics of the environment. This paper focuses on load balancing methods, as they act dynamically because of their dependence on the current behavior of the system. By taking a detailed look at previous methods, we provide a hybrid method which enables us to save energy by finding a suitable configuration for virtual machine placement and by considering special features of virtual environments for scheduling and balancing dynamic loads via live migration.
A survey on energy efficient with task consolidation in the virtualized cloud... - eSAT Journals
Abstract: Cloud computing is a new model of computing that is widely used in today’s industry, organizations and society for information technology service delivery as a utility. It enables organizations to reduce operational and capital expenditure. However, cloud computing with underutilized resources still consumes an unacceptable amount of energy compared to fully utilized resources. Many techniques for optimizing energy consumption in the virtualized cloud have been proposed. This paper surveys different energy-efficient models with task consolidation in the virtualized cloud computing environment. Keywords: Cloud computing, Virtualization, Task consolidation, Energy consumption, Virtual machine
A SURVEY ON REDUCING ENERGY SPRAWL IN CLOUD COMPUTING - aciijournal
Cloud computing is the cluster of autonomic computing, grid computing and utility computing. Cloud providers are there to rescue their customers from the problem of dynamism. The providers focus on resource sharing and on improving performance. Energy consumption is the major factor that degrades performance, and reducing energy sprawl will boost it. This paper delineates the different techniques involved in scheduling the workload of the servers in order to minimize energy sprawl.
Service oriented cloud architecture for improved performance of smart grid ap... - eSAT Journals
Abstract An effective and flexible computational platform is needed for the data coordination and processing associated with real time operational and application services in smart grid. A server environment where multiple applications are hosted by a common pool of virtualized server resources demands an open source structure for ensuring operational flexibility. In this paper, open source architecture is proposed for real time services which involve data coordination and processing. The architecture enables secure and reliable exchange of information and transactions with users over the internet to support various services. Prioritizing the applications based on complexity enhances efficiency of resource allocation in such situations. A priority based scheduling algorithm is proposed in the work for application level performance management in the structure. Analytical model based on queuing theory is developed for evaluating the performance of the test bed. The implementation is done using open stack cloud and the test results show a significant gain of 8% with the algorithm. Index Terms: Service Oriented Architecture, Smart grid, Mean response time, Open stack, Queuing model
PROCESS OF LOAD BALANCING IN CLOUD COMPUTING USING GENETIC ALGORITHM - ecij
In the current generation, cloud computing has become a powerful, chief and fast-moving technology. IT companies have already changed the way they buy and design hardware through this technology, and it is a highly useful technology which can also make software more attractive. Load balancing research in cloud technology is one of the most active topics of modern times. In this paper, pointing to various proposed algorithms, the topic of load balancing in cloud computing is researched and compared to provide a gist of the latest work in this research area. The genetic-algorithm-based balancing presented here is the most flexible.
Energy-Aware Adaptive Four Thresholds Technique for Optimal Virtual Machine P... - IJECEIAES
With the increasing expansion of cloud data centers and the demand for cloud services, one of the major problems facing these data centers is the increasing growth in energy consumption. In this paper, we propose a method to balance the load on virtual machine resources in order to reduce energy consumption. The proposed technique is based on a four-adaptive-threshold model to reduce energy consumption in physical servers and minimize SLA violations in cloud data centers. Based on the proposed technique, hosts are grouped into five clusters: hosts with low load, light load, middle load, high load and, finally, heavy load. Virtual machines are transferred from the hosts with high and heavy load to the hosts with light load. Also, the VMs on low-load hosts are migrated to the hosts with middle load, while the hosts with light load and the hosts with middle load remain unchanged. The values of the thresholds are obtained on the basis of a mathematical modeling approach, and the K-Means clustering algorithm is used for clustering the hosts. Experimental results show that applying the proposed technique improves load balancing, reduces the number of VM migrations and reduces energy consumption.
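The five-way host classification described above can be sketched with four fixed thresholds. Note the paper derives its threshold values analytically and via K-Means clustering; the threshold numbers, host names and loads below are invented for illustration:

```python
from bisect import bisect_right

# Illustrative, hand-picked thresholds; the paper computes adaptive ones.
THRESHOLDS = [0.2, 0.4, 0.6, 0.8]
CLASSES = ["low", "light", "middle", "high", "heavy"]

def classify(load):
    """Map a host's CPU utilization in [0, 1] to one of five load classes."""
    return CLASSES[bisect_right(THRESHOLDS, load)]

def plan_migrations(hosts):
    """Per the scheme: VMs leave 'high'/'heavy' hosts for lighter ones, and
    'low' hosts are drained toward 'middle' hosts so they can be switched off;
    'light' and 'middle' hosts stay put."""
    bands = {name: [] for name in CLASSES}
    for host, load in hosts.items():
        bands[classify(load)].append(host)
    return {"sources": bands["high"] + bands["heavy"] + bands["low"],
            "targets": bands["light"] + bands["middle"]}

hosts = {"h1": 0.05, "h2": 0.3, "h3": 0.5, "h4": 0.7, "h5": 0.95}
print(classify(0.7))  # high
print(plan_migrations(hosts))
```

Draining low-load hosts entirely is what lets them be powered off, which is where the bulk of the energy saving comes from.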
An optimized cost-based data allocation model for heterogeneous distributed ... - IJECEIAES
Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high-performance processing components, and storage helps to improve results. However, concerns such as slowness in response, long execution time, and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase the total system cost and make the data allocation procedure for a geographically dispersed setup difficult. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. The jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is calculated by summing the costs of communication, computation, and network. The total cost of the system is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation.
In this research, the simulation process was based on the cost of the proposed algorithms: LA, CA and LOAC. The NetBeans software was employed to implement these algorithms, and the simulation results were validated against a benchmark. The results cover two aspects: cost, for four scenarios, and processing time, for seven different data centers. For cost, the lowest cost occurred in the first scenario and the maximum cost in the third scenario. For processing time, the maximum delay occurred in data center No. 6, while the minimum processing time occurred in data center No. 2.
A Survey on Virtualization Data Centers For Green Cloud Computing - IJTET Journal
Abstract: Due to trends like cloud computing and green cloud computing, virtualization technologies are gaining increasing importance. Cloud is a typical model for computing resources which aims to move the computing framework to the network in order to cut down the costs of software and hardware resources. Nowadays, power is one of the big issues of IDCs and has huge impacts on society, and researchers are seeking solutions that let IDCs reduce power consumption. These IDCs (Internet Data Centers) consume large amounts of energy to process cloud services, incur high operational cost, and affect the lifespan of hardware equipment. The field of green computing is also becoming more and more important in a world with a finite number of energy resources and rising demand. The Virtual Machine (VM) mechanism has been broadly applied in data centers, offering flexibility, reliability, and manageability. This research survey presents virtualization of IDCs in the green cloud; it covers key features of the green cloud: cloud computing, data centers, virtualization, data centers with virtualization, and power-aware, thermal-aware, network-aware, resource-aware and migration techniques. In this paper, the several methods that are utilized to achieve virtualization of IDCs in green cloud computing are discussed.
Welcome to International Journal of Engineering Research and Development (IJERD) - IJERD Editor
Interplay of Communication and Computation Energy Consumption for Low Power S... - ijasuc
The sensor network design approach normally considers only the communication energy consumption when evaluating a communication protocol. This is true for low-power devices such as MICAz/MICA2, which do not consume much energy for data treatment. However, recently developed sensor devices for multimedia applications, such as the iMote2, do consume a considerable amount of energy for data processing. In this article, we consider various scenarios for routing data in wireless multimedia sensor networks by considering the local design parameters of devices such as the PXA27x and BeagleBoard. The proposed routing solution considers node-level optimizations such as data compression and dynamic voltage and frequency scaling (DVFS) when making a routing decision. The proposed approaches have been simulated to prove the effectiveness of the approach.
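The DVFS optimization mentioned above rests on the CMOS dynamic-power relation P ≈ αCV²f: lowering the clock frequency by itself leaves the dynamic energy per task unchanged (E = P·t = αCV²·cycles), but it permits a lower supply voltage, and energy then falls with V². A small sketch with illustrative capacitance, voltage and frequency values (not taken from the article):

```python
def dynamic_power(c_eff, voltage, freq, alpha=1.0):
    """CMOS dynamic power: P = alpha * C * V^2 * f (watts)."""
    return alpha * c_eff * voltage**2 * freq

def task_energy(cycles, c_eff, voltage, freq, alpha=1.0):
    """Energy for a task of `cycles` CPU cycles: E = P * t, t = cycles / f.
    Note f cancels: E = alpha * C * V^2 * cycles, so the saving comes
    from the voltage reduction that the lower frequency enables."""
    return dynamic_power(c_eff, voltage, freq, alpha) * (cycles / freq)

# Halving frequency (600 -> 300 MHz) lets the voltage drop (1.2 V -> 0.9 V,
# illustrative), cutting per-task energy at the cost of doubled runtime.
full = task_energy(1e9, c_eff=1e-9, voltage=1.2, freq=6e8)
scaled = task_energy(1e9, c_eff=1e-9, voltage=0.9, freq=3e8)
print(round(full, 3), round(scaled, 3))  # 1.44 0.81 (joules)
```

This is the trade-off a DVFS-aware router must weigh: slower, lower-voltage processing saves energy per bit treated but lengthens end-to-end latency.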
Optimization of power consumption in data centers using machine learning bas... - IJECEIAES
Data center hosting is in higher demand to fulfill the computing and storage requirements of information technology (IT) and cloud services platforms, which need more electricity to power the IT devices and to meet data center cooling requirements. Because of the increased demand for data center facilities, optimizing power usage while ensuring that data center energy quality is not compromised has become a difficult task. As a result, various machine learning-based optimization approaches for enhancing overall power effectiveness have been outlined. This paper aims to identify and analyze the key ongoing research conducted between 2015 and 2021 to evaluate the types of approaches being used by researchers in data center energy consumption optimization using machine learning algorithms. It is discussed how machine learning can be used to optimize data center power. A potential future scope is proposed based on the findings of this review, combining bioinspired optimization and neural networks.
Cloud computing offers users worldwide low-cost, on-demand services according to their requirements. In recent years, the rapid growth and service quality of cloud computing have made it an attractive technology for different tech companies. However, with the growing number of data center resources, high levels of energy are being consumed, with more carbon emissions in the air. For instance, the estimated electric power consumption of a Google data center is equivalent to the energy requirement of a small-sized city. Also, even if the virtualization of resources in cloud computing datacenters may reduce the number of physical machines and the cost of hardware equipment, it is still restrained by the energy consumption issue. Energy efficiency has become a major concern for today’s cloud datacenter researchers, alongside simultaneously improving cloud service quality and reducing operating cost. This paper analyses and discusses the literature on works related to the contribution of energy efficiency enhancement in cloud computing datacenters. The main objective is the best possible management of the involved physical machines which host the virtual ones in the cloud datacenters.
Similar to empirical analysis modeling of power dissipation control in internet data centers (20)
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
The Indian economy is classified into different sectors to simplify the analysis and understanding of economic activities. For Class 10, it's essential to grasp the sectors of the Indian economy, understand their characteristics, and recognize their importance. This guide will provide detailed notes on the Sectors of the Indian Economy Class 10, using specific long-tail keywords to enhance comprehension.
For more information, visit-www.vavaclasses.com
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
MARUTI SUZUKI- A Successful Joint Venture in India.pptx
Annals of Emerging Technologies in Computing (AETiC)
Vol. 5, No. 3, 2021
Firstnames Lastname, Abcd E. Ghij and Klmn Opqr and Stuv Wx Yz, "This is the Title of the Article: without Any Line Break", Annals of Emerging Technologies in Computing (AETiC), Print ISSN: 2516-0281, Online ISSN: 2516-029X, pp. 1-7, Vol. 5, No. 3, 1st July 2021, Published by International Association of Educators and Researchers (IAER), DOI: 10.33166/AETiC.2021.03.001, Available: http://aetic.theiaer.org/archive/v5/v5n3/p1.html.
Review Article
Empirical Analysis Modeling of Power
Dissipation Control in Internet Data
Centers
Rahila Batool1, Mutiullah Jamil2, Ayesha Waheed3, Hafeez ur Rehman4*, Sabi Zahra5
1 The Islamic University Bahawalpur (IUB), Pakistan.
2,3,4,5 Khwaja Fareed University of Engineering & Information Technology (KFUEIT), Pakistan.
rahila_batool@hotmail.com ; mutiullahj@gmail.com ; ayeshawaheeed@gmail.com ; hinagillani16 @yahoo.com
*Correspondence: siddiqov@gmail.com
Received: 8th January 2021; Accepted: 17th March 2021; Published: 1st July 2021
Abstract: Large-scale data centers comprise sets of server racks for storage and computation, which require a massive amount of power and dedicated cooling arrangements. The literature reports that internet data centers have grown tenfold in size over the last ten years, with energy costs rising at a similar rate, so proper power management is needed to reduce their consumption. This paper focuses on the modeling and simulation of internet data centers and compares three control techniques under varying server workloads. The first is a CRACs ON-OFF method, in which the power of the computer room air conditioning (CRAC) units is switched automatically based on the servers' output temperature: if the outlet temperature of a server rack in the internet data center exceeds a fixed threshold, the CRACs are turned on; otherwise they stay off. The second is a multi-step ON/OFF control, in which the CRACs are partially turned on and off based on the outer air temperature of the servers; we vary the number of intermediate steps between 1 and 3. Both kinds of control can maintain the desired output temperature of the server racks, but the CRACs ON-OFF control produces more numerous and sharper power peaks, which can disturb the operation of IDCs (internet data centers). The third technique, CRACs step-3 ON/OFF control, yields smooth power variations and can therefore be considered a better option than the CRACs ON-OFF method. Various experiments in MATLAB Simulink show that the control system's behavior is almost identical under different workload conditions, and the proposed CRACs step-3 ON/OFF control model minimizes power consumption to a large extent. Future work will consider state estimation in the modeling and control strategy under different workloads.
Keywords: CRAC; Datacenter; Modelling; Power Dissipation
1. Introduction
The number of IoT devices in use is increasing exponentially, and approximately 75.44 billion devices become part of the IoT network every year [1]. The internet, or network of networks, has two main parts: the hardware, and the protocols or rules for its functionality. An essential part of the hardware is the data center, where racks of servers store, retrieve, and transmit data to clients. Since these servers perform large computations, they require dedicated cooling systems for their operation. With the increase in such computing services, the power consumption of internet data centers is rising rapidly. The servers and the cooling system both consume immense power and incur a high cost. According to Computerworld, the power requirements of existing data centers amount to 34 dedicated power plants, each capable of generating 500 megawatts of electricity. It is essential to highlight that most data centers use more power than they require [2]. Energy can also be saved by exploiting the topology of IoT devices and the placement of the most influential nodes [3]. In this paper, our focus is on analyzing and controlling power consumption in the internet data center. We begin with a brief introduction to the data center and then present the statement of our research problem, highlight the importance of the research
problem, and give our contributions. Nowadays, most organizations keep their data stored online in the form of archive directories or websites. All online data is stored on powerful computers called web servers. These web servers require air conditioning to keep them fully functional and safe around the clock, and in cloud computing, load balancing is required for efficient performance with minimum heat generation [4]. This infrastructure is called an internet data center. Internet data centers provide the virtual back-end environment so that data can be accessed when required. This requires a reliable infrastructure with high security standards and routine IT operations so that users have constant access to their stored data. The internet data center consists of server racks and cooling equipment. Each server rack in turn holds a number of servers for computation; some of them are active, carrying workload, while others are inactive. For a better understanding, consider a simple example from the literature [5]: a network of two front-end web portals and two internet data centers (IDCs) located in different regions. Each web portal receives user requests and distributes the tasks between the IDCs; the respective IDC then divides each task between its servers. The purpose of this task division is to decrease the overall computation time. Figure 1 shows the architecture of such a data center, with different numbers of active servers at the two IDCs.
Figure 1. The architecture of Internet Data Centers [MATLAB Simulink]
In Figure 1, there are four servers at IDC 1, of which two are active (servers with workload) while the others are inactive. Similarly, IDC 2 has three servers, of which only one is active.
1.1 Research Problem
Due to inefficient cooling systems, cooling accounts for up to 40% of the total operational cost of an IDC [6]. There is therefore a need to formulate this problem as a mathematical model and to reduce the operational cost of IDCs by reducing the power consumption and temperature of internet data centers, thereby saving cooling costs.
Due to advances in technology and the rapid increase in demand for internet resources, IDCs have come under pressure in terms of workload. According to the published reviews of Jin et al. and others, this work can be classified into two categories: thermal environment and energy efficiency. Table 1 summarizes the existing work on the thermal environment, energy efficiency, and power models for data centers [7].
Table 1. A summary of the literature review

Lu et al. [8], 2018: Row- and rack-based solutions with different combinations of air distribution.
Alkharabsheh et al. [9], 2015: Numerical modeling of experimental measurements and recent cooling techniques, including device-level liquid cooling systems.
Chu and Wang [10], 2019: Experiments on long-distance and short-distance cooling and airflow management for rack-level cooling.
Rambo and Joshi [11], 2007: (1) Data center modeling objectives; (2) numerical modeling; (3) model validation; (4) rack-level compact modeling; (5) data center dynamics.
Ge et al. [12], 2013: Various power-saving strategies.
Mittal [13], 2014: Techniques for managing the power consumption of embedded systems, and the need for power management.
Orgerie et al. [14], 2014: Studies and models for estimating the energy consumption of these resources.
Shuja et al. [15], 2016: Computing systems, including server architectures, power distribution, and cooling.
Mobius et al. [16], 2013: (1) Essential steps of estimation models: model inputs and training with benchmarks; (2) CPU models; (3) virtual machine models; (4) server models.
2. Materials and Methods
2.1. Modelling of Data Center
In Figure 2, there is a network of C front-end web portals and N internet data centers (IDCs) located in different regions [5]. Each front-end web portal i has a workload $L_i$, $i = 1,\dots,C$, assigned by client requests, which is subdivided into workloads $\lambda_{ij} \geq 0$ forwarded by web portal i to IDC j. Thus, we have

$$L_i = \sum_{j=1}^{N} \lambda_{ij}, \quad \forall\, i = 1,\dots,C. \qquad (1)$$

There are $M_j$ servers in total in each IDC j, with $m_j$ active servers (blue) serving a total workload of $\lambda_j$. This means that

$$\lambda_j = \sum_{i=1}^{C} \lambda_{ij}, \quad \forall\, j = 1,\dots,N. \qquad (2)$$

The power consumption $p_{jk}$ of an individual active server k ($k = 1,\dots,m_j$) in IDC j depends on the CPU utilization $U_{jk}$ and frequency f of the server. To map these two parameters into power consumption, the curve-fitting method is often utilized [17] through a set of experiments. The derived power-consumption model for an active server k with workload $\lambda_{jk}$ becomes

$$p_{jk} = b_1 \lambda_{jk} + b_0, \quad \forall\, k = 1,\dots,m_j, \qquad (3)$$

Figure 2. The architecture and simulation of C front ends and N Internet Data Centers [18]

where $b_1$ and $b_0$ are fitting parameters, and the CPU utilization $U_{jk}$ is approximated by $\lambda_{jk}$. Assuming that each IDC j has servers of fixed and equal frequency, the total power consumption $P_j$ for IDC j is
$$P_j = b_1 \lambda_j + b_0 M_j. \qquad (4)$$
To process the incoming workload from the front-end web portals, each IDC can be modeled with an M/M/n queuing model, in which the average service latency is $D = P_Q/(n\mu - \lambda)$, where n is the number of active servers, $\lambda$ the workload arrival rate, $\mu$ the service rate, and $P_Q$ the probability of clients waiting in the queue. It is assumed that there are always client requests waiting in the queue, i.e., $P_Q = 1$, so the actual average latency for IDC j becomes

$$D_j^{a} = \frac{1}{m_j \mu_j - \lambda_j}. \qquad (5)$$
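Under the $P_Q = 1$ assumption, Eq. (5) can be inverted to size a rack: given a latency budget, it yields the minimum number of active servers. A minimal Python sketch (the paper's own implementation is in MATLAB; the function name here is illustrative):

```python
import math

def active_servers_needed(lam, mu, d_max):
    """Minimum number of active servers m so that the average latency
    D = 1/(m*mu - lam) from Eq. (5) stays at or below d_max.
    Assumes P_Q = 1 (there is always a request waiting in the queue)."""
    # D <= d_max  <=>  m*mu - lam >= 1/d_max  <=>  m >= (1/d_max + lam)/mu
    return math.ceil((1.0 / d_max + lam) / mu)

# Example: 500 jobs/sec arriving, mu = 2 jobs/sec per server,
# latency budget D_j = 10 ms (the paper's tolerance level).
m = active_servers_needed(lam=500.0, mu=2.0, d_max=0.010)  # -> 300 servers
```

With 300 active servers the resulting latency is 1/(300·2 − 500) = 10 ms, exactly at the budget.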
In general, each IDC has thousands of servers mounted in racks that can be treated as discrete thermal nodes. If we assume that there are N racks of servers in a single IDC connected to C front-end web portals, the framework will be similar to Figure 2. The dynamic thermal model for rack j can be written as [19]

$$\frac{dT_{out}^{j}}{dt} = -c_j\, T_{out}^{j} + k_j\, T_{in}^{j} + \ell_j\, p_j, \qquad (6)$$
where $T_{in}^{j}$ and $T_{out}^{j}$ are the ambient (inlet) air temperature and the outlet air temperature of server rack j, respectively, $p_j$ is the total power consumption of rack j, and $c_j$, $k_j$, $\ell_j$ are constant coefficients. The inlet air temperature $T_{in}^{j}$ is a mix of the outlet air temperatures, weighted by nonnegative coefficients for rack j whose sum equals 1:

$$T_{in}^{j} = \sum_{\ell \in M} G_{j,\ell}\, T_{out}^{\ell} + \sum_{h \in F} H_{j,h}\, T_{out}^{h}. \qquad (7)$$
We define $F = \{1, 2, \dots, F\}$ as the set of CRACs in an internet data center. Analogous to the thermal model of the racks, the dynamics of the CRACs can be written as

$$\frac{dT_{out}^{h}}{dt} = -A_h\, T_{out}^{h} + A_h\, T_{in}^{h} + \mathcal{B}_h\, \mathcal{P}_h, \qquad (8)$$
where $T_{in}^{h}$ and $T_{out}^{h}$ are the inlet air temperature and outlet air temperature of CRAC h, respectively, $\mathcal{P}_h$ is the total power consumption of CRAC h, and $A_h$, $\mathcal{B}_h$ are constant coefficients [20]. The inlet air temperature $T_{in}^{h}$ is represented as

$$T_{in}^{h} = \sum_{g \in F} G_{h,g}\, T_{out}^{g} + \sum_{j \in M} H_{h,j}\, T_{out}^{j}, \qquad (9)$$

where G and H map outlet air temperatures to inlet air temperatures, with nonnegative coefficients for CRAC h whose sum equals 1.
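To make the mixing in Eq. (7) concrete, the following Python sketch computes the inlet temperature of rack 1, reading the rack-1 rows of Table 3 (G, recirculation from rack outlets) and Table 4 (H, supply from CRAC outlets) as the weights. The outlet temperatures are assumed example values, not measurements from the paper:

```python
# Rack-1 mixing weights (our reading of Tables 3 and 4):
G1 = [0.01, 0.02, 0.06]   # G_{1,l}: weight of each rack outlet
H1 = [0.80, 0.07, 0.04]   # H_{1,h}: weight of each CRAC outlet

def rack_inlet_temp(g_row, h_row, t_out_racks, t_out_cracs):
    """Eq. (7): T_in^j = sum_l G_{j,l} T_out^l + sum_h H_{j,h} T_out^h."""
    return (sum(g * t for g, t in zip(g_row, t_out_racks))
            + sum(h * t for h, t in zip(h_row, t_out_cracs)))

t_in_1 = rack_inlet_temp(G1, H1,
                         t_out_racks=[30.0, 31.0, 32.0],   # hot-aisle temps (assumed)
                         t_out_cracs=[18.0, 19.0, 20.0])   # CRAC supply temps (assumed)
# Most of rack 1's inlet air (weight 0.80) comes from CRAC 1's cold supply.
```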
2.2. Internet Data Center Configuration and Limitations
This section presents the details of a specific internet data center taken from the literature [20]. The data center comprises three server racks and three CRAC units, as shown in Figure 2. The total numbers of servers in racks j = 1, 2, 3 are M1 = 300, M2 = 400, and M3 = 200. The tolerance level for each rack's latency or queuing delay is fixed at Dj = 10 ms. It is also observed that a single server at maximum utilization consumes 285 Watts, while a completely idle server consumes 150 Watts. This holds for all servers in each rack, so the high power is represented by PjH = 285 W and the low power by PjL = 150 W. The service rate for each rack is constant at µj = 2 jobs/sec. The configuration of the racks is summarized in Table 2. Regarding the CRAC units, it is assumed that the power consumption of each CRAC is constant, that is, P1 = P2 = P3, and its value is either 0 or 100 kW. The dynamics relating the ambient and output temperatures of both the server racks and the CRAC units are governed by the parameters given in Tables 3 and 4.
Table 2. Configuration of Racks in IDCs

j   µj (jobs/s)   PjH (W)   PjL (W)   Mj    Dj (ms)
1   2             285       150       300   10
2   2             285       150       400   10
3   2             285       150       200   10
5. AETiC 2021,Vol. 5, No. 3 5
www.aetic.theiaer.org
Table 3. Parameters of Racks in IDCs

Node     Rack 1        Rack 2        Rack 3
Rack 1   G11 = 0.01    G12 = 0.02    G13 = 0.06
Rack 2   G21 = 0.03    G22 = 0.01    G23 = 0.05
Rack 3   G31 = 0.04    G32 = 0.04    G33 = 0.84
CRAC 1   H11 = 0.85    H12 = 0.07    H13 = 0.03
CRAC 2   H21 = 0.04    H22 = 0.88    H23 = 0.02
CRAC 3   H31 = 0.07    H32 = 0.00    H33 = 0.81
Table 4. Parameters of CRACs in IDCs

Node     CRAC 1        CRAC 2        CRAC 3
Rack 1   H11 = 0.80    H12 = 0.07    H13 = 0.04
Rack 2   H21 = 0.04    H22 = 0.85    H23 = 0.02
Rack 3   H31 = 0.04    H32 = 0.03    H33 = 0.84
CRAC 1   G11 = 0.01    G12 = 0.01    G13 = 0.04
CRAC 2   G21 = 0.01    G22 = 0.01    G23 = 0.04
CRAC 3   G31 = 0.05    G32 = 0.04    G33 = 0.01
Before going into the details of the control techniques for this internet data center, we discuss some elements of the control input (the power dissipation of the racks and CRACs) and its relationship with the job arrival rate, or workload, on the server racks. The parameters of the racks and CRACs are given in Tables 3 and 4, respectively [21].
2.3 Assumptions of the Environment
The problem is studied dynamically (transient) under the following assumptions:
• The room transfers no heat to the outside.
• Air flows only through the servers and heat exchangers.
• Heat conduction is allowed through the aisle-containment walls.
• The power consumption of each CRAC is constant, that is, P1 = P2 = P3, and its value is either 0 or 100 kW.
• The whole environment and simulation are implemented in MATLAB.
3. Power Dissipation and Workload
Each server rack and CRAC unit's power consumption is used as a control input in the state-space model. Since the power consumption of the server racks is related to the workload of the servers, the total power consumed by the $m_j$ active servers in a rack is expressed as [7][22]

$$P_j(\lambda) = b_1 \lambda_j + b_0 m_j. \qquad (10)$$

The total power consumed by all servers (active and idle) in a rack [23] is therefore

$$P_j^{a}(\lambda) = b_1 \lambda_j + b_0 M_j. \qquad (11)$$
To identify the parameters $b_1$ and $b_0$ for the internet data center, note that for a single server the power consumption is related to the CPU utilization of the server as

$$P = (P_i^{H} - P_i^{L})\, U_{cpu} + P_i^{L}. \qquad (12)$$

This means that if a server is 100% utilized, $U_{cpu} = 1$ and therefore $P = P_i^{H}$; similarly, if $U_{cpu} = 0$, we get $P = P_i^{L}$.
The relationship between CPU utilization and power consumption is shown in Figure 3. The CPU utilization is related to the arrival rate $\lambda$ and service rate $\mu$ of a server, that is, $U_{cpu} = \lambda/\mu$. If the arrival rate, or workload, of the ith server in rack j is represented by $\lambda_{ij}$, then the power consumed by the ith server in rack j is [24]

$$P_{ij} = (P_j^{H} - P_j^{L})\, \frac{\lambda_{ij}}{\mu_j} + P_j^{L}, \qquad (13)$$
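For illustration, the utilization-to-power map of Eqs. (12)-(13) can be sketched in Python (the paper works in MATLAB; the function names below are ours), using the paper's server data P^H = 285 W and P^L = 150 W:

```python
P_H, P_L = 285.0, 150.0   # busy / idle power per server (W), from the paper

def server_power_from_util(u_cpu):
    """Eq. (12): P = (P^H - P^L) * U_cpu + P^L."""
    return (P_H - P_L) * u_cpu + P_L

def server_power_from_load(lam_ij, mu_j=2.0):
    """Eq. (13): U_cpu approximated by lambda_ij / mu_j (mu_j = 2 jobs/sec)."""
    return server_power_from_util(lam_ij / mu_j)

# A fully utilised server draws 285 W, an idle one 150 W, and a server
# handling 1 job/sec at mu = 2 sits halfway up the linear map (217.5 W).
```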
where it is assumed that the frequency/service rate µ is constant for each server in rack j and is therefore represented by $\mu_j$.

Figure 3. Relationship between CPU Utilization and Power Consumption

The total power consumption of the $m_j$ active servers in rack j can be written as [25]

$$p_j = \sum_{i=1}^{m_j} P_{ij} = (P_j^{H} - P_j^{L})\, \frac{\sum_{i=1}^{m_j} \lambda_{ij}}{\mu_j} + P_j^{L} M_j. \qquad (14)$$
Since the workload assigned to rack j is $\sum_{i=1}^{m_j} \lambda_{ij} = \lambda_j$, we have

$$p_j = \frac{(P_j^{H} - P_j^{L})}{\mu_j}\, \lambda_j + P_j^{L} M_j. \qquad (15)$$
Comparing the above equation with (11), we obtain

$$b_1 = \frac{(P_j^{H} - P_j^{L})}{\mu_j}, \qquad b_0 = P_j^{L}. \qquad (16)$$

This means that for the internet data center discussed in the previous section, the parameters are $b_1 = (285 - 150)/2 = 67.5$ and $b_0 = 150$, so the power consumption of $m_j$ active servers becomes

$$p_j = 67.5\, \lambda_j + 150\, m_j. \qquad (17)$$

Notice that the total power consumption of rack j, including active and idle servers, becomes $P_j^{a} = 67.5\, \lambda_j + 150\, M_j$, with $P_j \leq P_j^{a}$ [26].
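The parameter identification in Eqs. (16)-(17) is a two-line computation; the following Python sketch (function name ours) reproduces it from the rack data above:

```python
# Identifying the fit parameters of Eq. (16) from the paper's server data:
# P^H = 285 W, P^L = 150 W, mu = 2 jobs/sec.
p_high, p_low, mu = 285.0, 150.0, 2.0
b1 = (p_high - p_low) / mu    # slope per unit workload -> 67.5
b0 = p_low                    # idle power per server   -> 150.0

def rack_power_active(lam_j, m_j):
    """Eq. (17): p_j = b1*lambda_j + b0*m_j, in watts, for m_j active servers."""
    return b1 * lam_j + b0 * m_j
```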
We consider the small internet data center with three racks whose configuration is summarized in Table 2, and observe the effect of different workload levels on the power consumption of the IDC. In the first case, we consider a linear workload percentage: every rack utilizes its CPUs 100% [27], meaning each rack carries its complete workload without any distribution.
Figure 4. Case 1: Relationship of Power Consumption and Full Workload of each rack without any Distribution
When the workload percentage is zero, all servers are idle but still consume P_1a = 150 × 300 = 45 kW, P_2a = 150 × 400 = 60 kW, and P_3a = 150 × 200 = 30 kW, respectively. During this experimental study, it is observed that power consumption increases as the workload rises from 0 to 100%. When the workload percentage is one, all servers in a rack are active, consuming P_1a = 285 × 300 = 85.5 kW, P_2a = 285 × 400 = 114 kW, and P_3a = 285 × 200 = 57 kW, respectively. We then assume the workload distribution has a linear range: rack 1 has a 33% to 66% workload, rack 2 has 0% to 33%, while rack 3 has 67% down to 0%, as shown in Figure 5.
Figure 5 (a) Workload distribution in 3 racks (b) Power Consumption in 3 racks
Putting these values into Equation (14), when the workload is 33% on rack 1, 0% on rack 2, and 67% on rack 3, the total power consumed by racks 1, 2, and 3 is

P_1a = ((285 − 150) × 0.33 + 150) × 300 / 1000 = 58.36 kW,
P_2a = ((285 − 150) × 0 + 150) × 400 / 1000 = 60 kW, and
P_3a = ((285 − 150) × 0.67 + 150) × 200 / 1000 = 47.82 kW, respectively.
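The per-rack totals above can be checked with a short Python helper (the function name and structure are ours, not the paper's):

```python
def rack_power_kw(u, M, p_high=285.0, p_low=150.0):
    """Total rack power ((P^H - P^L)*u + P^L) * M at utilisation u,
    for M servers, returned in kW."""
    return ((p_high - p_low) * u + p_low) * M / 1000.0

p1 = rack_power_kw(0.33, 300)   # -> 58.365 kW (58.36 in the text)
p2 = rack_power_kw(0.00, 400)   # -> 60.0 kW
p3 = rack_power_kw(0.67, 200)   # -> 48.09 kW; the text reports 47.82 kW,
                                #    which is what u = 0.66 would give
```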
3.1. Methodology
The above mathematical model provides the maximum power consumption, which is evaluated by performing an empirical analysis with three techniques, computer room air-conditioning (CRACs) ON/OFF, CRACs 1-step ON/OFF, and multi-step-3 CRACs ON/OFF, at the parameters and configurations given in Tables 2 to 4, under the two cases below. Detailed descriptions of the three techniques follow.

Case 1: The workload distribution percentage is 33%, 33%, and 34% on racks one, two, and three, respectively.
Case 2: The workload distribution percentage is 0%, 33%, and 67% on racks one, two, and three, respectively.
3.2. CRACs ON/OFF Control Method
In CRACs ON/OFF control there are only two possibilities: if a CRAC is turned off, it consumes no power (0 kW); if it is turned on, it consumes 100 kW. As a result, the total power shows larger peaks, ranging between 175.36 kW and 475.36 kW. The maximum outer air temperature of server rack 1 is 25.19°C, rack 2 reaches 25.21°C, and rack 3 reaches 25.11°C. In this case, the ambient temperature shows sharp peak fluctuations (Figures 6 and 7).
3.3. CRACs 1-Step ON/OFF Control Method
In 1-step control, the CRACs can be partially turned ON/OFF, taking one of three power values: 0, 50, or 100 kW. When the ambient temperature of the servers rises above 25°C, the controller checks the current control input of the CRACs: if the CRACs were completely off (0 kW), they are turned 50% on (50 kW); otherwise they are turned fully on (100 kW). Similarly, when the ambient temperature of the servers drops below 25°C, the controller checks the control input: if the CRACs were completely on (100 kW), they are turned down to 50 kW; otherwise they are switched off (0 kW). The benefit of the 1-step ON/OFF control is that the maximum power consumption is 325.36 kW, less than the 475.36 kW of CRACs ON/OFF control. Comparing the ambient air temperature, in this case it stays closer to 25°C, showing smaller peaks than CRACs ON/OFF control.
3.4. CRACs 3-Step ON/OFF Control Method
The third technique turns the CRACs ON/OFF in three intermediate steps. The controller checks the ambient temperature of the racks: if it is greater than 25°C, it checks the value of the controlled input. If the CRACs were turned off (0 kW), they are partially turned on at 25 kW; if at 25 kW, they are raised to 50 kW; if at 50 kW, they are raised to 75 kW; and otherwise to 100 kW. If the ambient air temperature is less than 25°C, the procedure is reversed.
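The step-3 rule described above can be sketched as a one-step-at-a-time ramp over the levels 0, 25, 50, 75, 100 kW (again a Python sketch of the paper's MATLAB logic; names are ours):

```python
def crac_step3(t_out, p_current, setpoint=25.0, step=25.0):
    """CRACs step-3 ON/OFF control: raise the CRAC power command one
    25 kW step when the rack runs above the setpoint, lower it one
    step otherwise, saturating at 0 and 100 kW."""
    if t_out > setpoint:
        return min(p_current + step, 100.0)
    return max(p_current - step, 0.0)
```

Because the command can only move 25 kW per decision, the resulting power profile is far smoother than the 0-to-100 kW jumps of plain ON/OFF control, which is the behavior reported in Section 4.3.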
4. Results and Discussion
4.1. CRACs simple ON/OFF method
Figure 6 Power consumption and ambient temperature in centigrade of case 1
Figure 6 shows the result of the CRACs ON/OFF control method in case 1, in which there are only two possibilities: a CRAC that is turned off consumes no power (0 kW), and one that is turned on consumes 100 kW. As a result, the total power shows larger peaks, ranging between 175.36 kW and 475.36 kW. The maximum outer air temperature of server rack 1 is 25.19°C, rack 2 reaches 25.21°C, and rack 3 reaches 25.11°C. In this case, the ambient temperature shows sharp peak fluctuations.
Figure 7 Power consumption and ambient temperature in centigrade of case 2
We now compare the effect of a different workload (case 2) under the same technique. Again, the CRACs are either completely ON at 100 kW or completely OFF at 0 kW, with no partial ON/OFF. From Figure 7, rack 1 has 0% workload but still reaches a maximum output air temperature of 25.1463°C. In this case, rack 2 has the highest air temperature of the three racks, at 25.2179°C, since it has the most workload and the highest number of active servers; rack 3 reaches 25.1425°C. The maximum power consumption is 470.91 kW and the minimum is 170.91 kW, which differs from case 1, showing that the workload affects power consumption. The ambient temperature again shows fluctuations.
4.2. CRACs 1-Step ON/OFF Control Method
For case 1, we also consider CRACs 1-step ON/OFF control (Figure 8). In 1-step control, the CRACs can be partially turned ON/OFF, taking one of three values: 0, 50, or 100 kW. When the ambient temperature of the servers rises above 25°C, the controller checks the control input of the CRACs: if the CRACs were completely off (0 kW), they are turned 50% on (50 kW); otherwise they are turned fully on (100 kW). Similarly, when the ambient temperature drops below 25°C, if the CRACs were completely on (100 kW), they are turned down to 50 kW; otherwise they are switched off (0 kW). The benefit of the 1-step ON/OFF control is that the maximum power consumption is 325.36 kW, less than the 475.36 kW of CRACs ON/OFF control. Comparing the ambient air temperature, in this case it stays closer to 25°C, showing smaller peaks than CRACs ON/OFF control.
Figure 8 Case 1: CRACs 1-Step ON/OFF Method power consumption and ambient temperature in centigrade
Similarly, Figure 9 shows the case 2 result of the CRACs 1-step method, which switches in 50 kW steps instead of directly ON/OFF. It reduces the ambient air-temperature peaks relative to CRACs ON/OFF control, and the power consumption varies between 170.91 kW and 320.91 kW, far less than under CRACs ON/OFF control.
Figure 9 Case 2: CRACs 1-Step ON/OFF Method power consumption and ambient temperature in centigrade
4.3. CRACs 3-Steps ON/OFF Control method
Figure 10 shows the result of our third technique with the case 1 configuration, in which CRACs are turned ON or OFF in three steps. This controller checks the ambient temperature of the racks. If it is greater than 25 °C, the controller checks the value of the control input: if the CRACs were off (0 kW), they are partially turned on at 25 kW; if at 25 kW, they are raised to 50 kW; if at 50 kW, to 75 kW; otherwise to 100 kW. If the ambient air temperature is below 25 °C, the same steps are applied in reverse. The maximum power consumed by CRACs and servers is 250.36 kW, which is less than with either of the other two techniques. The ambient temperature under this technique is also the closest to 25 °C of the three.
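The 3-step rule differs from the 1-step rule only in its increment, as this hypothetical Python sketch shows (again not the authors' MATLAB code; names and values follow the description above):

```python
def crac_3step_control(ambient_temp, crac_power, setpoint=25.0):
    """Three-step CRAC control: power moves through 0, 25, 50, 75, 100 kW
    in 25 kW increments, giving smoother transitions than 1-step control."""
    if ambient_temp > setpoint:
        # Too warm: raise cooling by one 25 kW step (cap at 100 kW).
        return min(crac_power + 25, 100)
    if ambient_temp < setpoint:
        # Too cool: lower cooling by one 25 kW step (floor at 0 kW).
        return max(crac_power - 25, 0)
    return crac_power

# Warming from fully off: power ramps through each intermediate level.
power = 0
levels = []
for _ in range(4):
    power = crac_3step_control(26.0, power)
    levels.append(power)
# levels == [25, 50, 75, 100]
```

Halving the step size again halves the largest possible power jump per sampling interval, which is why this variant produces the smoothest power trace of the three techniques.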
Figure 10 Case 1: CRACs 3-Step ON/OFF Method power consumption and ambient temperature in centigrade
The result of CRACs 3-step control with the case 2 configuration, the last case, is shown in Figure 11. Our goal was to reduce power consumption, and under CRACs 3-step control the power consumption varies between 170.91 kW and 245.91 kW, the minimum of all the cases.
Figure 11 Case 2: CRACs 3-Step ON/OFF Method power consumption and ambient temperature
The ambient air temperature is also close to 25 °C. All three CRAC control approaches follow the constraints given below:
• The temperature of the IDC is kept nearly equal to 25 °C.
• The power consumed by racks and CRACs must be positive.
• The total consumed power equals the sum of the power consumed by racks and CRACs.
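The simulation loop all three techniques share (the state-space update of Appendix 1 with the plain ON/OFF rule) can be sketched in Python. Everything numeric here is an illustrative placeholder, not the paper's Equation (14) model: a 2-state system instead of the full rack model, an assumed 30 kW rack load, and made-up Ad, Bd, Cd entries chosen only so the ON/OFF rule visibly regulates around 25 °C.

```python
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

# Placeholder 2-state model (NOT the paper's Equation (14) matrices).
Ad = [[0.90, 0.05], [0.05, 0.90]]    # state transition
Bd = [[0.10, -0.025], [0.10, -0.025]]  # columns: rack load, CRAC cooling
Cd = [[1.0, 0.0], [0.0, 1.0]]        # outputs are the rack temperatures

x = [25.0, 25.0]                     # initial ambient temperatures (degC)
for k in range(100):
    y = matvec(Cd, x)                          # y(k) = Cd x(k)
    crac = 100.0 if max(y) > 25.0 else 0.0     # ON/OFF rule from Appendix 1
    u = [30.0, crac]                           # [rack power, CRAC power] kW
    x = vadd(matvec(Ad, x), matvec(Bd, u))     # x(k+1) = Ad x(k) + Bd u(k)
```

With these placeholder values the bang-bang rule holds both temperatures in a narrow band around 25 °C; substituting the 1-step or 3-step rule for the `crac` line reproduces the smoother power traces discussed above.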
5. Conclusions
The internet data center has been represented mathematically as a state-space model and simulated in MATLAB; both algorithms are given in Appendix 1 and Appendix 2.
Three control techniques have been used to minimize the power consumption of IDCs, and the response of each has been observed under different workload conditions. Based on these observations, we conclude that CRACs multi-step (1-step and 3-step) ON/OFF control, and especially CRACs 3-step ON/OFF control, addresses our problem statement well and offers the benefits listed below:
• CRACs 3-step ON/OFF control minimizes power consumption more than the other two techniques.
• CRACs 3-step ON/OFF control shows smooth power variations, while CRACs 1-step ON/OFF and plain CRACs ON/OFF control show sharp power peaks.
6. Future Work
The proposed CRACs 3-step ON/OFF control has significantly reduced heat emission. The experimental phenomena still need to be captured in a mathematical interpretation; this would subsequently help implement the above model at large scale and address the limitations noted in the manuscript.
Appendix 1
Algorithm CRACs ON/OFF Control
Input: Ad, Bd, and Cd matrices are given.
Calculate P1, P2, P3 by using Equation (14).
Initialization:
u(:,1) = [P1; P2; P3; 0; 0; 0]
x(:,1) = [25; 25; 25; 25; 25; 25]
Loop:
For k = 1, ..., 100
x(:,k+1) = Ad*x(:,k) + Bd*u(:,k)
y(:,k+1) = Cd*x(:,k+1)
If (y(1,k+1) > 25) || (y(2,k+1) > 25) || (y(3,k+1) > 25)
u(4:6,k+1) = [100; 100; 100]
else