International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
Computational Grids (CGs) provide a large, heterogeneous, distributed paradigm for managing and executing computationally intensive applications. In grid scheduling, tasks are assigned to appropriate processors in the grid system for execution, subject to the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and jointly optimized. Since grid scheduling is NP-hard, meta-heuristic evolutionary techniques are often used to find a solution; we propose an NSGA-II based approach for this purpose. The performance of the proposed Fault-Tolerance Aware NSGA-II (FTNSGA-II) has been estimated with a Matlab implementation. The simulation results evaluate the performance of the proposed algorithm, and comparison with the existing Min-Min and Max-Min algorithms demonstrates the effectiveness of the model.
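To make the two objectives concrete, here is a minimal Python sketch (not the paper's Matlab implementation) of how a multi-objective scheduler in the NSGA-II family might evaluate a candidate schedule on makespan and fault tolerance, and compare candidates by Pareto dominance. All execution times and node failure probabilities below are illustrative assumptions.

```python
def makespan(schedule, exec_time):
    """schedule: task -> node; exec_time[task][node]: run time in seconds."""
    load = {}
    for task, node in schedule.items():
        load[node] = load.get(node, 0.0) + exec_time[task][node]
    return max(load.values())

def failure_prob(schedule, node_fail):
    """Probability that at least one used node fails (independent failures)."""
    p_ok = 1.0
    for node in set(schedule.values()):
        p_ok *= (1.0 - node_fail[node])
    return 1.0 - p_ok

def dominates(a, b):
    """a, b: objective tuples to minimize. True if a Pareto-dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

exec_time = {0: [4.0, 6.0], 1: [3.0, 2.0]}   # 2 tasks x 2 nodes (illustrative)
node_fail = [0.05, 0.20]                      # per-node failure probabilities
s1 = {0: 0, 1: 1}                             # spread tasks across nodes
s2 = {0: 0, 1: 0}                             # pack onto the reliable node
f1 = (makespan(s1, exec_time), failure_prob(s1, node_fail))
f2 = (makespan(s2, exec_time), failure_prob(s2, node_fail))
```

Here s1 is faster but less reliable and s2 slower but safer, so neither dominates the other; both would sit on the Pareto front NSGA-II maintains.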
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...ijgca
The ever-increasing status of the cloud computing hypothesis and the budding concept of federated cloud computing have enthused research efforts towards intellectual cloud service selection, aimed at developing techniques for enabling cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems vitally provide access to large pools of resources. Resources provided by cloud computing systems hide a great deal of services from the user through virtualization. In this paper, the cloud data center is modelled as a queuing system with single task arrivals and a task request buffer of infinite capacity.
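For an M/G/1 queue with single arrivals and an infinite buffer, the mean queueing delay is given by the Pollaczek-Khinchine formula, W_q = λE[S²] / (2(1 − ρ)) with utilization ρ = λE[S]. A small sketch with illustrative rates (not the paper's parameters):

```python
def mg1_wait(lam, es, es2):
    """Mean queueing delay W_q of an M/G/1 queue (Pollaczek-Khinchine).
    lam: arrival rate, es: E[S], es2: E[S^2]; requires rho = lam*es < 1."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("unstable: utilization must be < 1")
    return lam * es2 / (2.0 * (1.0 - rho))

# Illustrative numbers: exponential service with rate mu = 5, arrivals lam = 2,
# so E[S] = 1/mu and E[S^2] = 2/mu^2 (the M/M/1 special case).
lam, mu = 2.0, 5.0
wq = mg1_wait(lam, 1.0 / mu, 2.0 / mu**2)
```

In the M/M/1 special case this reduces to W_q = ρ/(μ − λ), which gives the same 0.1333 s for these rates and is a handy sanity check.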
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Journals
Abstract: This paper concentrates on methods which provide efficient processing time and CPU utilization time for a virtual machine. As the number of users increases, performance may degrade significantly if tasks are not scheduled in a proper order. In this paper, the performance of two existing algorithms, the DSP (Dependency Structural Prioritization) algorithm and the credit scheduling algorithm, is analyzed and compared. A single virtual machine's processing time and CPU utilization time are measured, and satisfactory results are achieved in the comparison. This study concludes that the DSP algorithm performs more efficiently than the credit scheduling algorithm. Keywords: Virtual Machine, DSP algorithm, credit scheduling algorithm
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
The paper presents a nature-inspired algorithm modelled on the big bang theory of evolution. The algorithm is simple with regard to its number of parameters. Embedded systems are powered by batteries, so extending battery operating time by reducing power consumption is vital. Embedded systems consume power while accessing memory during operation. An efficient method for power management is proposed in this work. The proposed method reduces the energy consumption in memories by 76% to 98% compared to other methods reported in the literature.
Comparative Analysis of Job Scheduling for Grid Environment
Neeraj Pandey, Ashish Arya and Nitin Kumar Agrawal
Hackers Portfolio and its Impact on Society
Dr. Adnan Omar and Terrance Sanchez, M.S.
Ontology Based Multi-Viewed Approach for Requirements Engineering
R. Subha and S. Palaniswami
Modified Colonial Competitive Algorithm: An Approach for Graph Coloring Problem
Hojjat Emami and Parvaneh Hasanzadeh
Security and Privacy in E-Passport Scheme using Authentication Protocols and Multiple Biometrics Technology
V. K. Narendira Kumar and B. Srinivasan
Comparative Study of WLAN, WPAN, WiMAX Technologies
Prof. Mangesh M. Ghonge and Prof. Suraj G. Gupta
A New Method for Web Development using Search Engine Optimization
Chutisant Kerdvibulvech and Kittidech Impaiboon
A New Design to Improve the Security Aspects of RSA Cryptosystem
Sushma Pradhan and Birendra Kumar Sharma
A Hybrid Model of Multimodal Approach for Multiple Biometrics Recognition
P. Prabhusundhar, V.K. Narendira Kumar and B. Srinivasan
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING (ijdkp)
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present a more accurate, simple process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The proposed process consists of two steps. In the first step, training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster; the Dynamic Time Warping technique is used as the dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while producing the same initial, transition and emission probability matrices as the classical HMM training algorithm, so classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2,200 times are possible when the set contains about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
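The dissimilarity function named above can be illustrated with the standard dynamic-programming formulation of Dynamic Time Warping; this is the textbook O(nm) version, not the paper's code:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two numeric sequences.
    d[i][j] = local cost + min(insertion, deletion, match) over the
    alignment lattice; runs in O(len(a) * len(b)) time and space."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # a[i-1] matched again
                                 d[i][j - 1],      # b[j-1] matched again
                                 d[i - 1][j - 1])  # one-to-one match
    return d[n][m]
```

Because DTW allows one element to align with several, time-stretched copies of the same shape (e.g. [1, 2, 3] vs [1, 2, 2, 3]) get distance 0, which is exactly the behavior wanted when clustering variable-rate training sequences.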
Cell Charge Approximation for Accelerating Molecular Simulation on CUDA-Enabl...ijcax
Methods for Molecular Dynamics (MD) simulations are investigated. MD simulation is a widely used computer simulation approach for studying the properties of molecular systems. Force calculation in MD is computationally intensive, and parallel programming techniques can be applied to improve those calculations. The major aim of this paper is to speed up MD simulation calculations using the General Purpose Graphics Processing Unit (GPGPU) computing paradigm, an efficient and economical way to do parallel computing. To that end, we propose a method called cell charge approximation, which treats the electrostatic interactions in MD simulations and reduces the complexity of the force calculations.
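A rough CPU-side sketch of the cell-charge idea (the paper targets CUDA-enabled GPUs): particles in distant cells are replaced by one aggregate charge at the cell's charge centroid, so far-field work scales with the number of occupied cells rather than the number of particles. The 1D geometry, unit Coulomb constant and all values are illustrative assumptions, not the paper's method in detail.

```python
def cell_of(x, cell_size):
    return int(x // cell_size)

def aggregate_cells(positions, charges, cell_size):
    """Total charge and charge-weighted centroid per occupied cell."""
    cells = {}
    for x, q in zip(positions, charges):
        c = cells.setdefault(cell_of(x, cell_size), [0.0, 0.0])
        c[0] += q          # total cell charge
        c[1] += q * x      # accumulator for the charge centroid
    return {k: (q, qx / q) for k, (q, qx) in cells.items() if q != 0.0}

def force_on(i, positions, charges, cell_size):
    """1D Coulomb force on particle i: exact pairwise terms for the same and
    adjacent cells, one aggregated monopole term per far cell."""
    xi, qi, ci = positions[i], charges[i], cell_of(positions[i], cell_size)
    cells = aggregate_cells(positions, charges, cell_size)
    f = 0.0
    for j, (xj, qj) in enumerate(zip(positions, charges)):
        if j != i and abs(cell_of(xj, cell_size) - ci) <= 1:
            r = xi - xj
            f += qi * qj * (1.0 if r > 0 else -1.0) / r**2
    for k, (q, xc) in cells.items():
        if abs(k - ci) > 1:                    # far cell: single aggregate term
            r = xi - xc
            f += qi * q * (1.0 if r > 0 else -1.0) / r**2
    return f
```

The approximation error comes only from the far cells, where 1/r² varies little across a cell, which is why complexity drops without much accuracy loss.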
Competitive Demand Response Trading in Electricity Markets: Aggregator and En...Nur Mohammad
Competitive Demand Response Trading Mechanism using Bi-level Optimization Framework.
Keywords: Demand-side management, electricity pricing, auction-based demand response mechanism design, bi-level optimization, game theory, power system network economics.
Goal: The research project aims to identify the day-ahead operational behavior of market participants on both the supply and demand sides of a transactive market mechanism. We want to provide a coordinated DR exchange (DRX) mechanism and integrate it into a market clearing model to suppress locational marginal price (LMP) spikes and operating cost.
Max-Min Fair Scheduling Algorithm in Grid Scheduling with Load Balancing IJORCS
This paper shows the importance of fair scheduling in a grid environment, such that all tasks get an equal amount of time for their execution and none is starved. Load balancing of the available resources in the computational grid is another important factor; this paper considers a uniform load to be given to the resources, and to achieve this, load balancing is applied after scheduling the jobs. It also considers the execution cost and bandwidth cost of the algorithms used here, because in a grid environment the resources are geographically distributed. In the implementation of this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution cost and bandwidth cost.
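For reference, the generic Max-Min heuristic the title builds on can be sketched as follows: among the unscheduled tasks, repeatedly take the one whose best (minimum) completion time is largest and bind it to the machine achieving that minimum. This sketch omits the paper's fairness, load-balancing and cost extensions.

```python
def max_min(tasks, exec_time, n_machines):
    """Max-Min scheduling heuristic.
    exec_time[t][m]: run time of task t on machine m."""
    ready = [0.0] * n_machines           # machine availability times
    assignment = {}
    unscheduled = set(tasks)
    while unscheduled:
        best = None                      # (completion, task, machine)
        for t in unscheduled:
            # best machine for t given current availabilities
            ct, m = min((ready[m] + exec_time[t][m], m)
                        for m in range(n_machines))
            if best is None or ct > best[0]:   # Max over tasks of the Min times
                best = (ct, t, m)
        ct, t, m = best
        assignment[t] = m
        ready[m] = ct
        unscheduled.remove(t)
    return assignment, max(ready)        # schedule and resulting makespan
```

Scheduling the "hardest" task first tends to keep one long task from being stranded at the end, which is why Max-Min often beats Min-Min on makespan for skewed task sizes.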
Optimized Assignment of Independent Task for Improving Resources Performance ...ijgca
Grid computing has emerged from the category of distributed and parallel computing, where heterogeneous resources from different networks are used simultaneously to solve a particular problem that needs a huge amount of resources. The potential of grid computing depends on many issues, such as security of resources, heterogeneity of resources, fault tolerance, resource discovery and job scheduling. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous distributed computing resources, and it is an NP-complete problem. To achieve the promising potential of grid computing, an effective and efficient job scheduling algorithm is proposed which optimizes two important criteria for improving the performance of resources: makespan and resource utilization. In addition, we have classified various task scheduling heuristics for the grid on the basis of their characteristics.
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budget of fine-grained power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade off cycles with power. We study the design space defined by this trade-off and present a branch-and-bound (B&B) algorithm to explore the state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and attains significant area and power gains for complex benchmarks when timing constraints are sufficiently relaxed. Experimental results show that the algorithm solves the problem for most available benchmarks without any user constraint (area/power), and that the use of a power budget or area budget constraint leads to significant performance gain.
DYNAMIC VOLTAGE SCALING FOR POWER CONSUMPTION REDUCTION IN REAL-TIME MIXED TA...cscpconf
Reducing energy consumption without any deadline miss is one of the main challenges in real-time embedded systems. Dynamic voltage scaling (DVS) is a technique that reduces the power consumption of processors by utilizing the various operating points provided by a DVS processor. These operating points consist of pairs of voltage and frequency, and an operating point can be selected based on the load on the system at a particular point in time. In this work DVS is applied to both periodic and sporadic tasks, and an average of 40% of energy consumption is saved. The energy consumption of the processor is further reduced by 2-10% by reducing the number of pre-emptions and frequency switches.
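The operating-point selection just described can be sketched as picking the lowest voltage/frequency pair whose frequency still covers the current load. The point table and the quadratic dynamic-power model below are illustrative assumptions, not values from the paper.

```python
# Illustrative DVS operating points (voltage in volts, frequency as a fraction
# of f_max), sorted by increasing frequency; made-up, not a real processor's.
POINTS = [(0.9, 0.25), (1.0, 0.50), (1.1, 0.75), (1.2, 1.00)]

def pick_point(utilization):
    """Lowest operating point whose frequency covers the load (0..1)."""
    for v, f in POINTS:
        if f >= utilization:
            return v, f
    raise ValueError("load exceeds capacity even at f_max")

def rel_power(v, f, v_max=1.2, f_max=1.0):
    """Dynamic power relative to the top operating point: P ~ C * V^2 * f,
    so (v^2 * f) / (v_max^2 * f_max) after the capacitance term cancels."""
    return (v * v * f) / (v_max * v_max * f_max)
```

For example, at 40% load the second point suffices, and because voltage drops along with frequency, dynamic power falls to roughly a third of the full-speed value rather than just half.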
FAST ALGORITHMS FOR UNSUPERVISED LEARNING IN LARGE DATA SETS (csandit)
The ability to automatically mine and extract useful information from large datasets has been a common concern for organizations holding such datasets over the last few decades. On the internet, data is increasing vastly, and consequently the capacity to collect and store very large data is significantly increasing. Existing clustering algorithms are not always efficient and accurate in solving clustering problems for large datasets, and the development of accurate and fast data classification algorithms for very large scale datasets remains a challenge. In this paper, various algorithms and techniques, especially an approach using a non-smooth optimization formulation of the clustering problem, are proposed for solving the minimum sum-of-squares clustering problem in very large datasets. This research also develops an accurate and real-time L2-DC algorithm based on the incremental approach to solve the minimum sum-of-squares clustering problem.
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...ijassn
Macro-programming is the new-generation advanced method of using Wireless Sensor Networks (WSNs), where application developers can extract data from sensor nodes through a high-level abstraction of the system. Instead of developing the entire application, a task graph representation of the WSN model presents a simplified approach to data collection. However, mapping of tasks onto sensor nodes raises several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid approach of task mapping for WSNs, a Hybrid Genetic Algorithm, considering multiple optimization objectives: energy consumption, routing delay and soft real-time requirements. We also present a method to configure the algorithm to the user's needs by changing the heuristics used for optimization. A trade-off analysis between energy consumption and delivery delay was performed, and simulation results are presented. The algorithm is applicable during macro-programming, enabling developers to choose a better mapping according to their application requirements.
Operation cost reduction in unit commitment problem using improved quantum bi...IJECEIAES
Unit Commitment (UC) is a nonlinear mixed-integer programming problem. UC is used to minimize the operational cost of the generation units in a power system by scheduling some generators in the ON state and the others in the OFF state, according to the total power output of the generation units, the load demand and the constraints of the power system. This paper proposes an Improved Quantum Binary Particle Swarm Optimization (IQBPSO) algorithm. Tests have been made on a 10-unit simulation system, and the results show an improvement in operating cost reduction with the proposed algorithm compared with the ordinary Quantum Binary Particle Swarm Optimization (QBPSO) algorithm.
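For context, the plain binary PSO update that QBPSO builds on maps a real-valued velocity through a sigmoid to the probability that each commitment bit (unit ON/OFF) is 1; the quantum-inspired variant replaces this update rule. A generic sketch with illustrative parameters, not the paper's algorithm:

```python
import math
import random

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One binary-PSO update (Kennedy & Eberhart's discrete PSO).
    x, v, pbest, gbest: equal-length lists; x holds 0/1 commitment bits.
    Velocity stays real-valued; each bit is resampled with probability
    sigmoid(velocity)."""
    nx, nv = [], []
    for i in range(len(x)):
        vi = (w * v[i]
              + c1 * rng.random() * (pbest[i] - x[i])   # pull toward own best
              + c2 * rng.random() * (gbest[i] - x[i]))  # pull toward swarm best
        prob = 1.0 / (1.0 + math.exp(-vi))              # sigmoid to [0, 1]
        nx.append(1 if rng.random() < prob else 0)
        nv.append(vi)
    return nx, nv
```

In a UC setting each position would be one ON/OFF pattern for the generator fleet, scored by a cost function that adds fuel cost plus penalties for violated demand and reserve constraints.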
PROCESS OF LOAD BALANCING IN CLOUD COMPUTING USING GENETIC ALGORITHM (ecij)
In the current generation, cloud computing has become a powerful and prominent technology. IT companies have already changed the way they buy and design hardware through it, and it is a high-value utility that can also make software more attractive. Load balancing is one of the most active research topics in cloud technology. In this paper, surveying various proposed algorithms, approaches to load balancing in cloud computing are reviewed and compared to provide a gist of the latest work in this research area, and a flexible balancing approach using a Genetic Algorithm is presented.
Task scheduling is an important aspect of improving resource utilization in cloud computing. This paper proposes a divide-and-conquer based approach to the heterogeneous earliest finish time (HEFT) algorithm. The proposed system works in two phases: in the first phase it ranks the incoming tasks with respect to their size, and in the second phase it assigns and manages each task on a virtual machine, taking the idle time of the respective virtual machine into consideration. This helps achieve more effective resource utilization in cloud computing. Experimental results using the CyberShake scientific workflow show that the proposed divide-and-conquer HEFT (DCHEFT) performs better than HEFT in terms of task finish time and response time.
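The two phases described can be sketched as follows. This is a simplified interpretation under stated assumptions (independent tasks, no precedence edges, size-based ranking, earliest-idle VM choice), not the paper's implementation:

```python
def dcheft_like(task_sizes, vm_speeds):
    """Two-phase sketch: (1) rank tasks by size, largest first;
    (2) place each task on the VM that becomes idle earliest.
    task_sizes: work units per task; vm_speeds: units/second per VM."""
    idle = [0.0] * len(vm_speeds)                    # per-VM availability time
    finish = {}
    # Phase 1: rank by size (descending).
    ranked = sorted(range(len(task_sizes)), key=lambda t: -task_sizes[t])
    # Phase 2: assign each task to the earliest-idle VM.
    for t in ranked:
        m = min(range(len(vm_speeds)), key=lambda m: idle[m])
        idle[m] += task_sizes[t] / vm_speeds[m]
        finish[t] = idle[m]
    return finish, max(idle)                         # finish times, makespan
```

With tasks of size [4, 2, 6] on VMs of speed [1, 2], the big task lands on the first free VM while the smaller ones share the other, giving a makespan of 6 time units in this toy case.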
Multi-Objective Optimization Based Design of High Efficiency DC-DC Switching ...IJPEDS-IAES
In this paper we explore the feasibility of applying multi-objective stochastic optimization algorithms to the optimal design of switching DC-DC converters, allowing the direct determination of the Pareto-optimal front of the problem. This approach provides the designer, at affordable computational cost, a complete optimal set of choices and a more general insight into the objective and parameter spaces, as compared to other design procedures. As a simple but significant study case we consider a low-power DC-DC hybrid-control buck converter. Its optimal design is fully analyzed based on a public-domain Matlab implementation of the considered algorithms, the GODLIKE package, implementing a Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Simulated Annealing (SA). In this way, three different optimization approaches are easily implemented and compared in a unique optimization environment. Basic assumptions for the Matlab model of the converter are briefly discussed, and the optimal design choice is validated a posteriori with SPICE simulations.
Temporal workload analysis and its application to power aware schedulingijesajournal
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). The basic idea of power-aware scheduling is to find slacks available to tasks and reduce the CPU's frequency or lower its voltage using the found slacks. In this paper, we introduce the temporal workload of a system, which specifies how busy its CPU is completing the tasks at the current time. Analyzing temporal workload provides a sufficient condition for schedulability of preemptive earliest-deadline-first scheduling and an effective method to identify and distribute slacks generated by early-completed tasks. The simulation results show that the proposed algorithm reduces energy consumption by 10-70% over the existing algorithm, and its complexity is O(n). Thus, a practical on-line scheduler can be devised using the proposed algorithm.
Security Constrained UCP with Operational and Power Flow ConstraintsIDES Editor
An algorithm to solve security constrained unit
commitment problem (UCP) with both operational and power
flow constraints (PFC) have been proposed to plan a secure
and economical hourly generation schedule. This proposed
algorithm introduces an efficient unit commitment (UC)
approach with PFC that obtains the minimum system
operating cost satisfying both unit and network constraints
when contingencies are included. In the proposed model
repeated optimal power flow for the satisfactory unit
combinations for every line removal under given study period
has been carried out to obtain UC solutions with both unit and
network constraints. The system load demand patterns have
been obtained for the test case systems taking into account of
the hourly load variations at the load buses by adding
Gaussian random noises. In this paper, the proposed
algorithm has been applied to obtain UC solutions for IEEE
30, 118 buses and Indian utility practical systems scheduled
for 24 hours. The algorithm and simulation are carried
through Matlab software and the results obtained are quite
encouraging.
Proposing a Scheduling Algorithm to Balance the Time and Energy Using an Impe...Editor IJCATR
Computational grids have become an appealing research area as they solve compute-intensive problems within the scientific
community and in industry. A grid computational power is aggregated from a huge set of distributed heterogeneous workers; hence, it
is becoming a mainstream technology for large-scale distributed resource sharing and system integration. Unfortunately, current grid
schedulers suffer from the haste problem, which is the schedule inability to successfully allocate all input tasks. Accordingly, some tasks
fail to complete execution as they are allocated to unsuitable workers. Others may not start execution as suitable workers are previously
allocated to other peers. This paper presents an imperialist competition algorithm (ICA) method to solve the grid scheduling problems.
The objective is to minimize the makespan and energy of the grid. Simulation results show that the grid scheduling problem can be
solved efficiently by the proposed method.
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems iosrjce
Imprecise computation model is used in dynamic scheduling algorithm having heuristic function to
schedule task sets. A task is characterized by ready time, worst case computation time, deadline and resource
requirements. A task failing to meet its deadline and resource requirements on time is split into mandatory part
and optional part. These sub-tasks of a task can execute concurrently on multiple cores, thus achieving
parallelization provided by the multi-core system. Mandatory part produces acceptable results while optional
part refines the result further. To study the effectiveness of proposed scheduling algorithm, extensive simulation
studies have been carried out. Performance of proposed scheduling algorithm is compared with myopic and
improved myopic scheduling algorithm. The simulation studies shows that schedulability of task split myopic
algorithm is always higher than myopic and improved myopic algorithm.
An Improved Particle Swarm Optimization for Proficient Solving of Unit Commit...IDES Editor
This paper presents a new approach to solving the
short-term unit commitment problem using an improved
Particle Swarm Optimization (IPSO). The objective of this
paper is to find the generation scheduling such that the total
operating cost can be minimized, when subjected to a variety
of constraints. This also means that it is desirable to find the
optimal generating unit commitment in the power system for
the next H hours. PSO, which happens to be a Global
Optimization technique for solving Unit Commitment
Problem, operates on a system, which is designed to encode
each unit’s operating schedule with regard to its minimum
up/down time. In this, the unit commitment schedule is coded
as a string of symbols. An initial population of parent
solutions is generated at random. Here, each schedule is
formed by committing all the units according to their initial
status (“flat start”). Here the parents are obtained from a predefined
set of solutions, i.e., each and every solution is adjusted
to meet the requirements. Then, a random decommitment is
carried out with respect to the unit’s minimum down times. A
thermal Power System in India demonstrates the effectiveness
of the proposed approach; extensive studies have also been
performed for different power systems consisting of 10, 26, and 34
generating units. Numerical results are shown comparing the
cost solutions and computation time obtained by using the
IPSO and other conventional methods like Dynamic
Programming (DP) and Lagrangian Relaxation (LR) in reaching
proper unit commitment.
A survey on energy efficient with task consolidation in the virtualized cloud...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
A survey on energy efficient with task consolidation in the virtualized cloud...eSAT Journals
Abstract Cloud computing is a new model of computing that is widely used in today’s industry, organizations and society in information technology service delivery as a utility. It enables organizations to reduce operational expenditure and capital expenditure. However, cloud computing with underutilized resources still consumes an unacceptable amount of energy than fully utilized resource. Many techniques for optimizing energy consumption in virtualized cloud have been proposed. This paper surveys different energy efficient models with task consolidation in the virtualized cloud computing environment. Keywords: Cloud computing, Virtualization, Task consolidation, Energy consumption, Virtual machine
World Academy of Science, Engineering and Technology
International Journal of Computer, Information Science and Engineering Vol:3 No:8, 2009
On the Joint Optimization of Performance and
Power Consumption in Data Centers
Samee Ullah Khan and Cemal Ardil
International Science Index 32, 2009 waset.org/publications/9996745
Abstract—We model the process of a data center as a multi-objective problem of mapping independent tasks onto a set of data center machines that simultaneously minimizes the energy consumption and response time (makespan) subject to the constraints of deadlines and architectural requirements. A simple technique based on multi-objective goal programming is proposed that guarantees a Pareto optimal solution with an excellent convergence process. The proposed technique is also compared with a traditional approach. The simulation results show that the proposed technique achieves superior performance compared to the min-min heuristic, and competitive performance relative to the optimal solution implemented in LINDO for small-scale problems.
Keywords—Energy-efficient computing, distributed systems, multi-objective optimization.
I. INTRODUCTION
Data centers are huge structures that house services for customers. Owing to their structural services, they face continuous demand for increased throughput and reduced energy consumption. Energy-efficient techniques for managing a system at runtime can bring down the amount of energy it consumes. These management techniques mostly aim at
• reducing the energy wasted by transitioning a system to its sleep mode when it is idle, and
• reducing the energy consumed by slowing down the system during lean (but not idle) periods.
The former technique is called Dynamic Power Management
(DPM) [9], while the latter is called Dynamic Voltage Scaling
(DVS) [12] (or Speed Scaling in the more theoretical literature [2]).
DPM considers a system (in the simplest case a processor)
that can be in one of the two states, which we call the active
state and the sleep state. The system can handle requests only
in its active state, but the active state consumes far more energy
per unit time compared to the sleep state. However, when a
request arrives while the system is in the sleep state, it must
“wake up” and assume the active state before the request can
be served. This transition from sleep to active state has a high
transition cost, and is therefore not an approach favored by
researchers and vendors [4].
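As a numeric illustration of this trade-off, the sketch below compares staying idle in the active state against paying the sleep/wake transition cost; all power and energy figures are assumed values for illustration, not taken from the paper.

```python
# Illustrative DPM break-even check: sleeping only pays off when the idle
# interval is long enough to amortize the sleep/wake transition cost.
# All numbers below are assumptions for illustration.

def dpm_saves_energy(idle_s: float,
                     p_active_idle_w: float = 10.0,   # idle power, active state
                     p_sleep_w: float = 0.5,          # power in sleep state
                     e_transition_j: float = 40.0) -> bool:
    """Return True if sleeping for `idle_s` seconds saves energy."""
    e_stay_active = p_active_idle_w * idle_s
    e_sleep = p_sleep_w * idle_s + e_transition_j
    return e_sleep < e_stay_active

# Break-even idle time here: 40 / (10 - 0.5) ≈ 4.2 s.
print(dpm_saves_energy(2.0))   # short idle: transition cost dominates -> False
print(dpm_saves_energy(10.0))  # long idle: sleeping pays off -> True
```

The break-even point is why frequent sleep/active transitions are unattractive: short idle intervals never recover the transition energy.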
DVS, on the other hand, seeks to exploit the convex relationship between the CPU supply voltage (which impacts the speed of execution) and the power consumption. The power consumption in CMOS circuits is given by P = V^2 × f × C_EFF, where V, f, and C_EFF are the supply voltage, clock frequency, and effective switched capacitance of the circuits, respectively. Moreover, we also know that the time to finish an operation is inversely proportional to the frequency. Furthermore, power is the rate at which energy is consumed. Therefore, the energy per operation is proportional to V^2, which implies that lowering the supply voltage quadratically decreases the energy. However, lowering the supply voltage reduces the maximum allowable clock speed (or frequency) in an approximately linear manner. This leads us to the cube rule in CMOS circuits, which states that the instantaneous power is roughly proportional to the clock speed cubed. The main objective, therefore, is to keep the supply voltage (or clock speed) as low as possible so that the power consumption is minimal, but without compromising QoS measures [22].
S. U. Khan is with the Department of Electrical and Computer Engineering, North Dakota State University, Fargo, ND 58108, E-mail: samee.khan@ndsu.edu.
C. Ardil is with the National Academy of Aviation, Baku, Azerbaijan, E-mail: cemalardil@gmail.com.
In this paper, we will investigate the joint optimization of energy consumption and response time. Because response time improves when the makespan improves, we use makespan as the primary criterion to determine improvement in response time. Moreover, because power is simply the rate at which energy is consumed, we optimize the instantaneous power.
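The CMOS relations quoted above can be checked numerically. In the sketch below, C_EFF and the nominal operating point are assumed values; halving the voltage and frequency together illustrates the cube rule for instantaneous power and the quadratic saving in energy per operation.

```python
# Numeric sketch of P = V^2 * f * C_EFF with f roughly proportional to V.
# Halving the clock (and hence the voltage) cuts instantaneous power ~8x
# (the cube rule) and energy per operation ~4x (proportional to V^2).
# C_EFF and the operating point are assumed values for illustration.

def power_w(v: float, f_hz: float, c_eff_f: float) -> float:
    return v * v * f_hz * c_eff_f

C_EFF = 1e-9          # effective switched capacitance (assumed)
V, F = 1.2, 2.0e9     # nominal supply voltage and frequency (assumed)

p_full = power_w(V, F, C_EFF)
p_half = power_w(V / 2, F / 2, C_EFF)      # V scales linearly with f

print(p_full / p_half)                      # ~8: the cube rule
# Energy per operation: E = P / f, proportional to V^2.
print((p_full / F) / (p_half / (F / 2)))    # ~4: quadratic energy saving
```

This is exactly the lever DVS pulls: lower the voltage as far as deadlines (the QoS measure here) allow.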
The remainder of this paper is organized as follows. The system model and problem formulation are discussed in Section 2. Section 3 provides some essential information pertaining to goal programming and details our proposed approach. Simulation results and related work are provided in Sections 4 and 5, respectively. Finally, in Section 6, we summarize our investigation.
II. SYSTEM MODEL AND PROBLEM DESCRIPTION
A. The System Model
Consider a data center comprising a set of machines, M = {m_1, m_2, ..., m_m}. Assume that each machine is equipped with a DVS module and is characterized by:
1) The frequency of the CPU, f_j, given in cycles per unit time. With the help of DVS, f_j can vary from f_j^min to f_j^max, where 0 < f_j^min < f_j^max. From the frequency, it is easy to obtain the speed of the CPU, S_j, which is approximately proportional to the frequency of the machine [14], [23].
2) The specific machine architecture, A(m_j). The architecture would include the type of CPU, bus types and speeds in GHz, I/O, and memory in bytes.
Consider a metatask, T = {t1 , t2 , · · · , tn }. Each task is
characterized by:
1) The computational cycles, ci , that it needs to complete.
The assumption here is that the ci is known a priori.
2) The specific machine architecture, A(t_i), that it needs to complete its execution.
3) The deadline, d_i, before which it has to complete its execution. Moreover, we also assume that the metatask, T, has a deadline, D, which is met if and only if the deadlines of all its tasks are met.
The number of computational cycles required by t_i to execute on m_j is assumed to be a finite positive number, denoted by c_ij. The execution time of t_i under a constant speed S_ij, given in cycles per second, is t_ij = c_ij / S_ij. For the associated data and instructions of a task, we assume that the processor always retrieves them from the level-1 (primary) data cache. A task, t_i, when executed on machine m_j, draws p_ij amount of instantaneous power. Lowering the instantaneous power will lower the CPU frequency, consequently decrease the speed of the CPU, and hence may cause t_i to miss its deadline.
The architectural requirements of each task are recorded as
a tuple with each element bearing a specific requirement. We
assume that the mapping of architectural requirements is a
Boolean operation. That is, the architectural mapping is only
fulfilled when all of the architectural constraints are satisfied,
otherwise not.
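A minimal sketch of this system model might look as follows; the field names are illustrative (assumed), and speed is taken equal to frequency for simplicity, consistent with the approximate proportionality noted above.

```python
# Minimal sketch of the system model: machines with a DVS frequency range
# and an architecture tuple A(m_j); tasks with cycles c_i, an architecture
# requirement A(t_i), and a deadline d_i.  Names are assumptions.
from dataclasses import dataclass

@dataclass
class Machine:
    f_min: float          # minimum DVS frequency (cycles/s)
    f_max: float          # maximum DVS frequency (cycles/s)
    arch: frozenset       # A(m_j): architecture features offered

@dataclass
class Task:
    cycles: float         # c_i, known a priori
    arch: frozenset       # A(t_i): architecture features required
    deadline: float       # d_i

def exec_time(task: Task, machine: Machine, f: float) -> float:
    """t_ij = c_ij / S_ij, with speed S taken proportional to frequency f."""
    assert machine.f_min <= f <= machine.f_max
    return task.cycles / f

def arch_matches(task: Task, machine: Machine) -> bool:
    """Boolean mapping: fulfilled only when ALL requirements are satisfied."""
    return task.arch <= machine.arch

m = Machine(f_min=1e9, f_max=3e9, arch=frozenset({"x86", "64GB"}))
t = Task(cycles=6e9, arch=frozenset({"x86"}), deadline=4.0)
print(arch_matches(t, m))            # True
print(exec_time(t, m, m.f_max))      # 2.0 s at full speed
```

The subset test mirrors the all-or-nothing architectural matching described above: one missing requirement makes the mapping infeasible.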
B. Problem Formulation
Find the task-to-machine mapping for which the cumulative instantaneous power consumed by the data center, M, and the makespan of the metatask, T, are minimized.
Mathematically, we can say

minimize ( Σ_{i=1}^{n} Σ_{j=1}^{m} p_ij x_ij  and  max_j Σ_{i=1}^{n} t_ij x_ij ),   (1)

subject to

x_ij ∈ {0, 1},   (2)

t_i → m_j; if A(t_i) = A(m_j) then x_ij = 1,   (3)

t_ij x_ij ≤ d_i | x_ij = 1,   (4)

(t_ij x_ij ≤ d_i) ∈ {0, 1},   (5)

Π_{i=1}^{n} (t_ij x_ij ≤ d_i) = 1 | x_ij = 1.   (6)

Constraint (2) is the mapping constraint. When x_ij = 1, a task, t_i, is mapped to machine, m_j, and x_ij = 0 otherwise. Constraint (3) elaborates on this mapping in conjunction with the architectural requirements, and it states that a mapping can only exist if the architecture is matched. Constraint (4) relates to the fulfillment of the deadline of each task, and constraint (5) tells us about the Boolean relationship between the deadline and the actual time of execution of the tasks. Constraint (6) relates to the deadline constraint of the metatask, which holds if all of the deadlines of the tasks, d_i, are satisfied.
The above problem formulation is in the form of a multi-objective optimization problem. In the literature, there are two standard ways to tackle such multi-objective problems: (a) optimize the objectives concurrently, or (b) optimize one objective first, then make it a constraint for the rest of the objectives. To optimize one objective first and then make it a constraint for the other objectives, the only plausible framework is one where the objective functions have an acceptable overlap [8]. Because the multi-objective problem (described in this paper) has the objectives of optimizing instantaneous power and makespan, which oppose each other, we choose to optimize both objectives concurrently.
III. GOAL PROGRAMMING
Goal programming implicitly assumes that a desired goal is obtainable that can be used during each iteration of the solution convergence process through some high-level intervention [24]. The information attainable during each iteration is the current best compromise solution, referred to as the Main Solution (MS), and a set of Possible Solutions (PS) that are the compromise solutions obtainable if each of the goals is satisfied serially [10]. Iteratively, goal programming identifies non-inferior solutions and refines them to achieve the best possible compromise solution. An iteration can be classified as a two-step calculation and evaluation process. During the calculation step the MS and PS are obtained, which are analyzed to proceed towards a compromise solution during the evaluation process. During the iterative process, if the evaluation procedure determines that either the MS or one of the PS is the best compromise solution, then the procedure terminates [11]. Below we describe a generalized procedure for goal programming.
A. A Generalized Goal Programming Procedure
Let the multi-objective problem to be solved be

min ( f_1(x), f_2(x), ..., f_k(x) ),   (7)

such that g_j(x) ≤ 0, j = 1, 2, ..., m,
where x is an n-dimensional decision variable vector. The following steps must be present in a generalized goal programming approach.
Step 0: Determine f_i^* and f_*i, as follows:
1) min f_i(x) such that g(x) ≤ 0. The solution to the above, referred to as x_*i and f_*i, is known as the ideal solution [11]. Let f_i^j = f_i(x_*j); then
2) f_i^* = max_j f_i^j.
The values f_i^* and f_*i provide the upper and lower bounds on the objective function values, respectively. Such values are important to guide the solution towards the desirable compromise solution. Moreover, they also determine the feasibility of the solution space.
Step 1: Set initial goals, b = {b_1, b_2, ..., b_k}. As mentioned previously, a high-level intervention must determine a desirable goal for each and every objective function. However, one can approximate these goals by determining the necessary and sufficient conditions of optimality, i.e., the Kuhn-Tucker conditions. It should be clear that f_*i < b_i ≤ f_i^*.
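On a toy discrete problem (all candidate solutions and objective values assumed for illustration), Step 0 and Step 1 can be sketched as follows.

```python
# Sketch of Step 0 on a toy discrete problem: for each objective f_i,
# minimise it alone to get the ideal value f_*i and its minimiser x_*i,
# then set the upper bound f_i^* = max_j f_i(x_*j).  The candidate set
# and objective values are made up for illustration.

candidates = ["x1", "x2", "x3"]
objectives = [
    {"x1": 4.0, "x2": 9.0, "x3": 6.0},   # f_1 (e.g. power)
    {"x1": 8.0, "x2": 3.0, "x3": 5.0},   # f_2 (e.g. makespan)
]

# Ideal solutions: minimiser and minimum of each objective in isolation.
x_star = [min(candidates, key=f.get) for f in objectives]
f_lower = [f[x] for f, x in zip(objectives, x_star)]       # f_*i (lower bounds)

# Upper bounds: worst value each objective takes over the ideal solutions.
f_upper = [max(f[x] for x in x_star) for f in objectives]  # f_i^*

print(f_lower)   # [4.0, 3.0]
print(f_upper)   # [9.0, 8.0]

# Step 1: goals b_i must satisfy f_*i < b_i <= f_i^*; weights w_i = b_i - f_*i.
b = [6.0, 5.0]
w = [bi - lo for bi, lo in zip(b, f_lower)]
print(w)         # [2.0, 2.0]
```

The bracket [f_*i, f_i^*] is what makes the later normalisation well defined: every goal b_i sits strictly above the ideal value and no higher than the upper bound.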
Input: T and M.
Output: Task-to-machine mapping.
Initialize: ∀j, DVS_j is set to the highest level.
while T ≠ ∅ do
    I ← argmin_i (d_i);
    M′ ← ∅;
    foreach m_j ∈ M do
        if A(t_I) = A(m_j) then
            M′ ← M′ ∪ {m_j};
        end
    end
    if M′ = ∅ then
        EXIT;
    end
    Δ ← ∅;
    foreach m_j ∈ M′ do
        Δ_Ij ← d_I − t_Ij;
        if Δ_Ij > 0 then
            Δ ← Δ ∪ {Δ_Ij};
        end
    end
    if Δ = ∅ then
        foreach m_j ∈ M′ do
            Reset DVS_j to the highest level;
            Δ_Ij ← d_I − t_Ij;
            if Δ_Ij > 0 then
                Δ ← Δ ∪ {Δ_Ij};
            end
        end
        if Δ = ∅ then
            EXIT;
        end
    end
    J ← argmin_j (Δ);
    while {t_IJ x_IJ ≤ d_I | x_IJ = 1} do
        Reduce DVS_J by one level;
    end
    i ← I; j ← J; x_ij ← 1;
    T ← T − {t_i};
end
Algorithm 1: The goal programming based task-to-machine mapping technique (GP).
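Under the simplifying assumptions that DVS levels are a discrete list of frequencies, that t_Ij = c_I / f_j at the machine's current level, and that accumulated machine load is not modeled (the algorithm as stated checks each task's deadline in isolation), Algorithm 1 can be sketched in Python as follows; all data structures and names are illustrative, not from the paper.

```python
# Python sketch of Algorithm 1 (GP heuristic): pick the earliest-deadline
# task, keep only architecture-matching machines, compute slacks at the
# machines' current DVS levels (resetting DVS to the highest level if no
# slack is positive), map to the min-slack machine, then lower that
# machine's DVS level while the deadline is still met.

def gp_schedule(tasks, machines, levels):
    """tasks: {id: (cycles, arch, deadline)}; machines: {id: arch_frozenset};
    levels: per-machine list of frequencies, highest first (assumed model)."""
    dvs = {j: 0 for j in machines}            # index into levels[j], 0 = highest
    t_exec = lambda i, j: tasks[i][0] / levels[j][dvs[j]]
    mapping = {}
    pending = dict(tasks)
    while pending:
        I = min(pending, key=lambda i: pending[i][2])          # earliest deadline
        elig = [j for j in machines if pending[I][1] <= machines[j]]
        if not elig:
            return None                                        # EXIT: unmappable
        slack = {j: pending[I][2] - t_exec(I, j) for j in elig}
        if not any(s > 0 for s in slack.values()):
            for j in elig:                                     # reset DVS, retry
                dvs[j] = 0
            slack = {j: pending[I][2] - t_exec(I, j) for j in elig}
            if not any(s > 0 for s in slack.values()):
                return None                                    # EXIT: infeasible
        J = min((j for j in slack if slack[j] > 0), key=slack.get)
        # Lower the DVS level while the deadline is still met.
        while (dvs[J] + 1 < len(levels[J])
               and pending[I][0] / levels[J][dvs[J] + 1] <= pending[I][2]):
            dvs[J] += 1
        mapping[I] = J
        del pending[I]
    return mapping

tasks = {"t1": (2e9, frozenset({"x86"}), 3.0), "t2": (1e9, frozenset({"x86"}), 1.0)}
machines = {"m1": frozenset({"x86"})}
levels = {"m1": [2e9, 1e9, 0.5e9]}
print(gp_schedule(tasks, machines, levels))   # {'t2': 'm1', 't1': 'm1'}
```

Note how the DVS level only ever drops as far as the current task's deadline allows, which is the power-minimising side of the bi-objective trade-off.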
Step 2: Solve for the MS:

min a = ( Σ_i d_i^−, Σ_i −d_i^+ ),   (8)

such that g(x) ≤ 0,

f_i(x) + w_i × d_i^− − w_i × d_i^+ = b_i,   (9)

d_i^− × d_i^+ = 0,
d_i^− ≤ 1,
d_i^−, d_i^+ ≥ 0,

where w_i = b_i − f_*i. The optimization of the MS results in x^0 and f^0, which is known in the literature as the core solution [11]. The weight w has been derived from a normalizing scheme that makes the variation between f_*i and b_i equal to one. That is,

(f_i(x) − f_*i)/(b_i − f_*i) + d_i^− − d_i^+ = (b_i − f_*i)/(b_i − f_*i) = 1,   (10)

where f_*i < b_i ≤ f_i^*; hence, we obtain the following:

f_i(x) + (b_i − f_*i) d_i^− − (b_i − f_*i) d_i^+ = b_i,   (11)

which is the same as (9) after substituting w_i = b_i − f_*i. Moreover, the constraint d_i^− ≤ 1 ensures that f_i(x) does not violate the lower bound, f_*i. Furthermore, the weights w have the following two additional properties: (a) the value of the weight is dynamically adjusted when the value of a goal changes between iterations, and (b) the weight increases when the value of the goal decreases, and vice versa.
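The MS can be illustrated by brute force on a toy discrete candidate set (all values assumed): for each candidate x, the normalised deviations d_i^− (underachievement) and d_i^+ (overachievement) follow from the goal constraints with w_i = b_i − f_*i, and the MS minimises total underachievement first, breaking ties toward larger overachievement.

```python
# Brute-force sketch of the Main Solution on a toy discrete problem.
# For each candidate x: d-_i = max(0, (b_i - f_i(x))/w_i) and
# d+_i = max(0, (f_i(x) - b_i)/w_i), with w_i = b_i - f_*i, so at most
# one of d-_i, d+_i is nonzero.  All values are illustrative.

objectives = [
    {"x1": 4.0, "x2": 9.0, "x3": 6.0},   # f_1
    {"x1": 8.0, "x2": 3.0, "x3": 5.0},   # f_2
]
f_lower = [4.0, 3.0]                      # ideal values f_*i
b = [6.0, 5.0]                            # goals, with f_*i < b_i
w = [bi - lo for bi, lo in zip(b, f_lower)]

def deviations(x):
    d_minus = [max(0.0, (bi - f[x]) / wi) for f, bi, wi in zip(objectives, b, w)]
    d_plus = [max(0.0, (f[x] - bi) / wi) for f, bi, wi in zip(objectives, b, w)]
    return d_minus, d_plus

# Minimise (sum d-, -sum d+), the lexicographic reading of the MS objective.
ms = min(objectives[0],
         key=lambda x: (sum(deviations(x)[0]), -sum(deviations(x)[1])))
print(ms, deviations(ms))   # x3 ([0.0, 0.0], [0.0, 0.0])
```

Here x3 hits both goals exactly, so all deviations vanish; x1 and x2 each underachieve one goal by a normalised unit, illustrating how the d_i^− ≤ 1 bound keeps candidates above the ideal values.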
Step 3: Solve for PS.

    min a_r = ( Σ_{i, i≠r} d_i^− , −Σ_i d_i^+ ),                 (12)

    such that g(x) ≤ 0,
              f_r(x) − w_r · d_r^+ = b_r,
              f_i(x) + w_i · d_i^− − w_i · d_i^+ = b_i,  i ≠ r,
              d_i^− · d_i^+ = 0,
              d_i^− ≤ 1,
              d_i^−, d_i^+ ≥ 0,
where wi = bi − f∗i . The optimization of the PS would result
in xr and f r , which is known in the literature as the achievable
solution [24].
The goal programming approach must iterate between Steps 2 and 3 to arrive at a compromise solution. The question of how one obtains proper weights for a given optimization problem is the topic of the subsequent section. Moreover, by deriving the necessary and sufficient conditions of optimality, one ensures that the optimization process is convergent. Furthermore, the solution space is reduced to the Pareto frontier [6].
B. Conditions of Optimality
Because we must have an upper bound on the power
consumption, we introduce power conservation conditions to
the set of constraints (2), (3), (4), (5), and (6).
    Σ_{i=1}^{n} Σ_{j=1}^{m} p_ij ≤ P,                            (13)

    p_ij ≥ 0,  i = 1, 2, · · · , n;  j = 1, 2, · · · , m.        (14)
The power conservation condition of (13) states that the instantaneous power allocation is bounded. That is, at any given instance, the total power consumption of all of the machines must not exceed their combined peak power consumption. Clearly, the instantaneous power consumption must be a non-negative number, as in (14). These constraints make the multi-objective problem convex, which in turn makes the optimization problem tractable [20]. Moreover, because the instantaneous power regulates the time to complete a task, it is sufficient to utilize power as the only tunable variable to derive the conditions of optimality. Let α ≤ 0 and η_j ≤ 0 denote the Lagrange multipliers, and let γ_j be the gradient of the binding constraints [13]. Then, the Lagrangian is
    L(γ_j, α, η_j) = Σ_{i=1}^{n} Σ_{j=1}^{m} ln(γ_j(α) − p_ij) + α Σ_{i=1}^{n} Σ_{j=1}^{m} (γ_j − P) + Σ_{i=1}^{n} Σ_{j=1}^{m} η_j · (γ_j − p_ij).

The first-order Kuhn-Tucker conditions are given in (15) and (16), with a constraint given in (17):

    δL/δγ_j = −1/(γ_j − p_ij) + α + η_j = 0,                     (15)

    δL/δα = Σ_{j=1}^{m} γ_j − P = 0,                             (16)

    γ_j − p_ij ≥ 0,  η_j · (γ_j − p_ij) = 0,  η_j ≤ 0.           (17)

If γ_j − p_ij = 0, then the current instantaneous power consumption is the best instantaneous power. If γ_j − p_ij > 0, then η_j = 0. The solution (or the derivative) of (15) and (16) is given in (18) and (19), respectively:

    1/(γ_j − p_ij) − α = 0,                                      (18)

    Σ_{j=1}^{m} γ_j = P.                                         (19)

It then follows that

    γ_j = (P − Σ_{i=1}^{n} Σ_{j=1}^{m} p_ij) / m.                (20)

Because γ_j by definition is the gradient of the binding constraints, we can replace γ_j with p_ij. That gives us

    p_ij = (P − Σ_{i=1}^{n} Σ_{j=1}^{m} p_ij) / m.               (21)

Now, for a specific machine j, the optimality must oscillate between the instantaneous power consumed by machine j and the rest of the m − 1 machines. Therefore, the following must hold:

    p_ij = (P − Σ_{i=1}^{n} Σ_{∀k∈M, k≠j} p_ik) / m.             (22)

The Kuhn-Tucker conditions verify the following: (a) The non-inferior solutions form the Pareto frontier when the instantaneous power consumption of a machine j (that has been mapped a task) is below the peak power consumption of machine j. (b) The goal is achieved when machine j is operating at an instantaneous power that is scaled as the m-th lowest power consumption of machine j. It is also worth noting that, due to the linear relationship between power consumption and the associated task completion time, the conditions of optimality are sufficient when considering only a single constraint: instantaneous power. Utilizing both of the constraints would have resulted in similar conditions of optimality; however, the derivation would have been more complicated. In the next section, we outline our goal programming based task to machine mapping technique.

C. Goal Programming Based Technique (GP)

We now have all the necessary components to propose a goal programming based task to machine mapping technique, with the acronym GP. The GP technique takes as input the sets M and T, with all machines initialized to their corresponding highest DVS level, and produces a task to machine mapping.
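The fixed point in (21) can be sanity-checked numerically: under the simplifying assumption that every p_ij takes a common value p (our assumption, not the paper's general case), (21) reduces to the closed form p = P/(m(n+1)). A short illustrative script:

```python
# Illustrative consistency check of the fixed point in (21), under the
# simplifying assumption that every p_ij equals a common value p.
n, m, P = 4, 2, 100.0

# With p_ij = p for all i, j, the double sum is n*m*p, and (21) becomes
# p = (P - n*m*p)/m, whose closed-form solution is:
p = P / (m * (n + 1))

# Verify that p indeed satisfies (21).
residual = p - (P - n * m * p) / m
assert abs(residual) < 1e-9
print(p)  # 10.0
```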
To derive an upper and lower bound on the desired goal (corresponding to Step 0 of Section III-A), we utilize the earliest deadline first approach. This ensures that the classical claim of the earliest deadline first approach is satisfied: if T can be scheduled (by an optimal algorithm) such that constraint (4) is satisfied, then the earliest deadline first approach will schedule T such that constraint (6) is satisfied. The earliest deadline first approach will also ensure that the GP technique has a corresponding upper and lower bound on instantaneous power consumption and makespan, given the deadlines of the metatask. The corresponding bounds will be dictated by the tightness of the associated deadlines. That is, the tighter the deadline for a given task, the more instantaneous power a mapped machine will consume, and vice versa.
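The earliest deadline first ordering that seeds these bounds amounts to a single sort; a toy sketch with invented task data:

```python
# Earliest-deadline-first ordering used to seed the GP bounds; the
# task tuples below are invented for illustration.
tasks = [("t1", 8.0), ("t2", 3.0), ("t3", 5.0)]  # (task id, deadline d_i)

# EDF processes tasks in non-decreasing deadline order; a tighter
# deadline forces a mapped machine onto a higher DVS level (more
# instantaneous power), while a looser one permits a lower level.
edf_order = [t for t, _ in sorted(tasks, key=lambda x: x[1])]
print(edf_order)  # ['t2', 't3', 't1']
```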
The Kuhn-Tucker conditions derived in Section III-B set the
initial goals corresponding to Step 1 of Section III-A. They are
not depicted in Algorithm 1 that describes the GP technique.
Instead, implicitly, Steps 2 and 3 guide the solution towards a
best possible compromise [21].
To develop a MS (corresponding to Step 2 of Section III-A), we must satisfy constraint (3). First, we limit our solution space to only those machines, M, that can satisfy the architectural constraint. To ensure a feasible MS, we must identify machines that, without altering their current DVS level, can finish the task within the specified deadline. Such an assurance is also known as laxity [16]. A laxity set, Δ, is constructed. Using Δ, we determine the best possible task to machine mapping without any alteration to the DVS levels. This is accomplished by picking the machine that exhibits the minimum laxity. The MS is not complete until an optimum DVS level is determined. The DVS level of the chosen machine j is lowered for as long as constraint (4) remains satisfied. This ensures that the mapped task runs on a machine that can fulfill all of the constraints while consuming an instantaneous power that results in a compromise solution.
The MS will be stable as long as Δ can be constructed. However, mapped tasks stack up on machines, thereby reducing the laxity, possibly to a level at which Δ is empty. Once that happens, the PS must be constructed, corresponding to Step 3 of Section III-A. Because only the set M can potentially satisfy constraint (4), the PS must be chosen from within M. To increase laxity, the machines must operate at their corresponding highest speed levels (or highest DVS levels). These new DVS levels will pad down the stacked tasks on machines to levels that ensure a feasible (positive) laxity. (The pad down is achieved by running all the mapped tasks at the highest speed; this lowers the makespan, hence ensuring a feasible laxity.) The set Δ is then reconstructed, and the machine that ensures the minimum laxity is chosen as the PS.
The GP heuristic, as mandated in Section III-A, oscillates between the MS and the PS. A number of important conclusions can also be deduced from the GP technique. Namely,
1) A feasible solution, if it exists, is always identified; otherwise, the EXIT statements identify infeasible instances based on constraint (3) or the laxity criterion.
2) If the algorithm maps all of the tasks to machines within the MS construction, then the solution is optimal. Moreover, the deadlines must be very loose and, consequently, the laxity very high.
3) If the algorithm constructs the PS, then the solution is on the Pareto frontier (by the Kuhn-Tucker conditions of Section III-B). Moreover, the PS ensures that an optimum solution is identified (by the definition of the PS, Section III-A). Furthermore, the PS revisits the MS to rectify anomalies by altering the corresponding DVS levels such that the result is a feasible optimal compromise.
Finally, to ensure that the GP technique is tractable, we analyze its termination time. It is easy to show that the exact worst-case bound is O(n² log n + 3mn + mn log m). Because it is assumed that m ≪ n, the worst-case bound reduces to O(n² log n).
IV. SIMULATIONS, RESULTS, AND DISCUSSION
We set forth two major goals for our simulation study: (a) to measure and compare the performance of the proposed technique against the optimal solution and the min-min heuristic [23], and (b) to measure the impact of system parameter variations. Based on the size of the problems, the experiments were divided into two parts.
For small size problems, we used an Integer Linear Programming tool called LINDO [19]. LINDO is useful for obtaining optimal solutions, provided the problem size is relatively small. Hence, for small problem sizes, the performance of the proposed technique is compared against 1) the optimal solution obtained using LINDO and 2) the min-min heuristic. The LINDO implementation and the min-min heuristic do not consider power as an optimization constraint; however, they are very effective for the optimization of the makespan. Thus, the comparison provides us with a wide range of results: on one extreme we have the optimal algorithm, and on the other a technique that scales well with the corresponding increase in problem size. For large size problems, it becomes impractical to compute the optimal solution with LINDO. Hence, we only consider comparisons against the min-min heuristic.
The system heterogeneity is captured by the distribution of
the number of CPU cycles, cij , on different mj s. Let C denote
the matrix composed by cij . The C matrix was generated
using the coefficient of variation method described in [23].
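A common way to realize a coefficient-of-variation based C matrix is with gamma-distributed draws, whose coefficient of variation is 1/√α. The sketch below should be read as our assumption about the method of [23], not a verbatim reproduction; all names and parameter values are illustrative:

```python
import random

# Sketch of a coefficient-of-variation style generator for the C matrix
# (CPU cycles c_ij). The gamma-distribution construction is a common way
# to realize such heterogeneity; treat it as an assumption about [23]'s
# method rather than its exact procedure.
def make_c_matrix(n_tasks, n_machines, mean, v_task, v_mach, seed=0):
    rng = random.Random(seed)
    # For a gamma distribution, CV = 1/sqrt(alpha), so alpha = 1/CV^2.
    alpha_t, alpha_m = 1.0 / v_task**2, 1.0 / v_mach**2
    # Task heterogeneity: one gamma draw per task with mean `mean`.
    task_means = [rng.gammavariate(alpha_t, mean / alpha_t)
                  for _ in range(n_tasks)]
    # Machine heterogeneity: per-machine gamma draws around each task mean.
    return [[rng.gammavariate(alpha_m, task_means[i] / alpha_m)
             for _ in range(n_machines)]
            for i in range(n_tasks)]

C = make_c_matrix(5, 3, mean=1000.0, v_task=0.5, v_mach=0.5)
```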
The deadline, di , of task ti was generated using the method
Fig. 1. Simulation results: (a) makespan ratio over the optimal; (b) makespan ratio over the optimal; (c) makespan; (d) makespan.
described in [23]. For this study, we keep the architectural affinity requirements confined to memory. (Adding other requirements, such as I/O, processor type, etc., will bear no effect on our experimental setup or theoretical results.) Each machine is randomly assigned a memory capacity from within the range [500–5000] GB, while each task is randomly associated with a corresponding memory requirement from within the range [20–50] MB.
For small size problems, the number of machines was fixed at 5, while the number of tasks varied from 20 to 40. The number of DVS levels per machine was set to 4. The frequencies of the machines were randomly mapped from within the range [200 MHz–2000 MHz]. We assumed that a potential difference of 1 mV across a CMOS circuit generates a frequency of 1 MHz. For large size problems, the number of machines was fixed at 16, while the number of tasks varied from 1000 to 5000. The number of DVS levels per m_j was set to 8. Other parameters were the same as those for small size problems.
The experimental results for small size problems with K equal to 1.5 and 1.0 are reported in Figs. 1(a) and 1(b). These figures show the ratio of the makespan obtained by the two techniques to the optimal. The plots clearly show that the GP (proposed) technique performs extremely well, staying within 10%–15% of the optimal even when K was set to the very tight bound of 1.0.
For large problem instances, we first compare the makespan identified by the min-min and the GP technique. Since the min-min heuristic does not optimize power consumption, we compared the min-min with a version of GP that ran on full power, and also with the (original) version that optimized power. Figs. 1(c) and 1(d) show the relative performance of the techniques with various values of K, Vtask, and Vmach. The results indicate that GP outperforms the min-min technique in identifying a smaller makespan when power is not considered as an optimization criterion. The performance of GP is notably superior to the min-min technique when the deadline constraints are relatively loose. It can also be observed that GP, when considering power as an optimization resource, identifies a task to machine mapping that produces a makespan within 5%–10% of the min-min technique.
It was noticed that the relative performance of the min-min technique was much better for large size problems than for small size problems, because with the increase in the size of the C matrix, the probability of obtaining larger values of w_i also increases. Moreover, the relative performance of GP was also much better for large size problems, because the DVS levels for the large problem size are twice those for the small problem size.
Next, we compare the power consumption of both the
techniques. Figs. 2(a) and 2(b) reveal that on average the GP
technique utilizes 60%–65% less power as compared to the
min-min technique. That is a significant amount of savings
considering that the makespan identified by GP is within 5%–
10% of the makespan identified by the min-min technique.
Fig. 2. Power consumption simulation results ((a) and (b): power consumption).
V. RELATED WORK
Most DPM techniques utilize instantaneous power management features supported by hardware. For example, Ref. [1]
extends the operating system’s power manager by an adaptive power manager (APM) that uses the processor’s DVS
capabilities to reduce or increase the CPU frequency, thereby
minimizing the overall energy consumption [3]. The DVS
technique at the processor-level together with a turn on/off
technique at the cluster-level to achieve high-power savings
while maintaining the response time is proposed in [18].
In [17], the authors introduce a scheme to concentrate the
workload on a limited number of servers in a cluster such that
the rest of the servers can remain switched-off for a longer
period of time.
While the technique closest to ours, combining device power models to build a whole-system model, has been presented in [5], our approach aims at building a general framework for autonomic power and performance management. Furthermore, while most power management techniques are either heuristic-based approaches [7], [15] or stochastic optimization techniques [22], we use goal programming to seek radically faster and more efficient solutions compared to the traditional approaches.
VI. CONCLUSIONS
This paper presented an energy optimizing power-aware resource allocation strategy in data centers. A solution from goal
programming was proposed for this multi-objective problem.
The solution quality of the proposed technique was compared
against the optimal for small-scale problems, and greedy and
linear relaxation heuristics for large-scale problems. The simulation results confirm superior performance of the proposed
scheme in terms of reduction in energy consumption and
makespan compared to the heuristics and the optimal solution
obtained using LINDO.
REFERENCES
[1] T. F. Abdelzaher and C. Lu. Schedulability analysis and utilization bounds
for highly scalable real-time services. In 7th Real-Time Technology and
Applications Symposium, p. 15, 2001.
[2] N. Bansal, T. Kimbrel, and K. Pruhs. Dynamic speed scaling to manage
energy and temperature. In 45th Annual IEEE Symposium on Foundations
of Computer Science, pp. 520–529, 2004.
[3] R. Bianchini and R. Rajamony. Power and energy management for server
systems. IEEE Computer, 37(11):68–74, 2004.
[4] D. P. Bunde. Power-aware scheduling for makespan and flow. In 8th ACM
Symposium on Parallelism in Algorithms and Architectures, pp. 190–196,
2006.
[5] J. Chen, M. Dubois, and P. Stenström. SimWattch: Integrating complete-system and user-level performance and power simulators. IEEE Micro,
27(4):34–48, 2007.
[6] J. S. Dyer. Interactive goal programming. Operations Research, 19:62–
70, 1972.
[7] T. Heath, B. Diniz, E. V. Carrera, W. M. Jr., and R. Bianchini. Energy
conservation in heterogeneous server clusters. In 10th ACM SIGPLAN
Symposium on Principles and Practice of Parallel Programming, pp. 186–
195, 2005.
[8] C. L. Hwang and A. S. M. Masud. Multiple Objective Decision Making -
Methods and Applications: A State-of-the-Art Survey. Springer-Verlag,
Berlin, 1979.
[9] S. Irani, R. Gupta, and S. Shukla. Competitive analysis of dynamic power
management strategies for systems with multiple power savings states. In
Conference on Design, Automation and Test in Europe, p. 117, 2002.
[10] L. Li and K. K. Lai. A fuzzy approach to the multiobjective transportation problem. Computers and Operations Research, 27(1):43–57,
2000.
[11] T.-F. Liang. Fuzzy multi-objective production/distribution planning
decisions with multi-product and multi-time period in a supply chain.
Computers in Industrial Engineering, 55(3):676–694, 2008.
[12] J. R. Lorch and A. J. Smith. Improving dynamic voltage scaling algorithms with pace. In 2001 ACM SIGMETRICS International Conference
on Measurement and Modeling of Computer Systems, pp. 50–61, 2001.
[13] D. Luenberger. Linear and Nonlinear Programming. Addison-Wesley,
1984.
[14] P. Mejia-Alvarez, E. Levner, and D. Mossé. Adaptive scheduling
server for power-aware real-time tasks. IEEE Transactions on Embedded
Computing Systems, 3(2):284–306, 2004.
[15] R. Nathuji, C. Isci, and E. Gorbatov. Exploiting platform heterogeneity
for power efficient data centers. In 4th International Conference on
Autonomic Computing, p. 5, 2007.
[16] P. A. Laplante. Real-Time System Design and Analysis. John Wiley &
Sons, 2004.
[17] E. Pinheiro, R. Bianchini, E. V. Carrera, and T. Heath. Load balancing
and unbalancing for power and performance in cluster-based systems. In
Workshop on Compilers and Operating Systems for Low Power, 2001.
[18] C. Rusu, A. Ferreira, C. Scordino, and A. Watson. Energy-efficient
real-time heterogeneous server clusters. In 12th IEEE Real-Time and
Embedded Technology and Applications Symposium, pp. 418–428, 2006.
[19] L. Schrage. Linear, Integer, and Quadratic Programming with LINDO.
Scientific Press, 1986.
[20] A. Stefanescu and M. Stefanescu. The arbitrated solution for multiobjective convex programming. Revue Roumaine de Mathematical Pures
et Appliquees, 29:593–598, 1984.
[21] J. Wallenius. Comparative evaluation of some interactive approaches to
multicriterion optimization. Management Sciences, 21:1387–1396, 1975.
[22] M. Weiser, B. Welch, A. Demers, and S. Shenker. Scheduling for
reduced CPU energy. In 1st USENIX Conference on Operating Systems
Design and Implementation, p. 2, 1994.
[23] Y. Yu and V. K. Prasanna. Power-aware resource allocation for independent tasks in heterogeneous real-time systems. In 9th International
Conference on Parallel and Distributed Systems, p. 341, 2002.
[24] M. Zangiabadi and H. R. Maleki. Fuzzy goal programming for
multiobjective transportation problems. Journal of Applied Mathematical
Computing, 24(1):449–460, 2007.