The document discusses energy efficient resource allocation in distributed computing systems. It proposes using a solution from cooperative game theory called the Nash Bargaining Solution (NBS) to allocate tasks to machines in a computational grid in a way that minimizes both power consumption and makespan.
The key points are:
- It formulates the problem of mapping tasks to machines as a multi-constrained multi-objective extension of the Generalized Assignment Problem (GAP) called Power-aware Task Allocation (PATA).
- It models a cooperative game where each machine is a player, and the goal is to minimize both the total power consumed and the overall makespan when allocating tasks.
- The proposed NBS-based scheme computes a task assignment under which both the total power consumed and the overall makespan are reduced relative to the disagreement outcome.
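The NBS selects the outcome that maximizes the product of the players' gains over a disagreement point. Below is a minimal sketch with the two objectives from the summary; the candidate allocations and their (power, makespan) costs are invented for illustration, not taken from the paper.

```python
# Nash Bargaining sketch over two objectives (power, makespan).
# Allocations and their costs are invented, not from the paper.

def nbs_choice(allocations, disagreement):
    """Pick the allocation maximizing the Nash product of gains
    over the disagreement point (power, makespan)."""
    d_power, d_makespan = disagreement
    best_name, best_product = None, float("-inf")
    for name, (power, makespan) in allocations.items():
        gain_p = d_power - power          # power saved vs. no agreement
        gain_m = d_makespan - makespan    # time saved vs. no agreement
        if gain_p <= 0 or gain_m <= 0:
            continue                      # NBS requires every player to gain
        if gain_p * gain_m > best_product:
            best_name, best_product = name, gain_p * gain_m
    return best_name

allocations = {
    "all-on-fast-machine": (90.0, 10.0),   # (total power, makespan)
    "balanced-split":      (70.0, 12.0),
    "all-on-slow-machine": (50.0, 30.0),
}
print(nbs_choice(allocations, disagreement=(100.0, 25.0)))
```

Here the balanced split wins: each extreme allocation favours one objective so heavily that its product of gains drops, or one gain turns negative and the candidate is rejected outright.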
On the Joint Optimization of Performance and Power Consumption in Data Centers (Cemal Ardil)
The document summarizes research on jointly optimizing performance and power consumption in data centers. It models the process of mapping tasks in a data center onto machines as a multi-objective problem to minimize both energy consumption and response time (makespan), subject to deadline and architectural constraints. It proposes using a simple goal programming technique that guarantees Pareto optimal solutions with good convergence. Simulation results show the technique achieves superior performance compared to other approaches and is competitive with optimal solutions for small-scale problems.
A Weighted Sum Technique for the Joint Optimization of Performance and Power-... (Cemal Ardil)
The document presents a self-adaptive weighted sum technique for jointly optimizing performance and power consumption in data centers. It formulates the problem as a multi-objective optimization to minimize total power consumption and task completion time. The proposed technique adapts weights during optimization to better explore non-convex regions of the solution space, unlike traditional weighted sum methods. It was tested on data from a satellite control network and showed improved results over greedy heuristics and competitive performance against optimal solutions for smaller problems.
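The scalarization step of a weighted-sum method can be sketched as follows. The self-adaptive weight update that distinguishes the paper's technique is not reproduced here; the candidate (power, time) points and the fixed weight grid are invented for illustration.

```python
# Weighted-sum scalarization sketch: sweep the weight w and record which
# candidate (power, time) point minimizes w*power + (1-w)*time.
# Candidate points and the weight grid are invented, not from the paper.

def weighted_sum_front(points, weights):
    front = set()
    for w in weights:
        best = min(points, key=lambda p: w * p[0] + (1 - w) * p[1])
        front.add(best)
    return sorted(front)

points = [(10.0, 50.0), (20.0, 30.0), (40.0, 15.0), (60.0, 10.0)]
weights = [i / 10 for i in range(11)]
print(weighted_sum_front(points, weights))
```

With a convex candidate set like this one, every point is picked up by some weight; the paper's motivation is that a fixed grid can miss points in non-convex regions, which is what adapting the weights is meant to fix.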
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY (ijccsa)
This document summarizes dynamic energy management techniques in cloud data centers. It discusses that cloud data centers consume enormous amounts of energy, resulting in high costs and environmental impacts. It then categorizes energy management approaches as either static or dynamic. Dynamic energy management techniques dynamically reconfigure hardware and software systems based on workload variability to optimize energy usage. The document surveys techniques at the hardware level including for processors, networks, and storage, and also virtualization-assisted techniques like server consolidation and virtual machine placement and migration. The goal of these dynamic techniques is to improve energy efficiency in cloud data centers through runtime adaptation.
IRJET - Demand Response for Energy Management using Machine Learning and LSTM... (IRJET Journal)
1) The document proposes an hour-ahead demand response prediction algorithm for energy management systems using machine learning and LSTM neural networks.
2) It aims to more accurately forecast electricity demand by developing a stable price prediction model based on LSTM neural networks.
3) The algorithm would allow energy management systems to control load across multiple units based on hourly predicted electricity prices from the service provider, in order to minimize energy bills for users.
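The load-control step described in point 3 can be sketched as a simple rule: run deferrable load in the hours the forecaster predicts to be cheapest. The prices and horizon below are invented; a real system would take hourly prices from the LSTM predictor.

```python
# Sketch of hour-ahead load control: given predicted hourly prices,
# schedule a deferrable load into the cheapest hours of the horizon.
# Prices are invented for illustration.

def schedule_load(prices, hours_needed):
    """Return the (sorted) indices of the cheapest `hours_needed` hours."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

predicted = [0.30, 0.28, 0.25, 0.24, 0.26, 0.35, 0.45, 0.50]
print(schedule_load(predicted, hours_needed=3))  # cheapest three hours
```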
ENERGY PERFORMANCE OF A COMBINED HORIZONTAL AND VERTICAL COMPRESSION APPROACH... (IJCNC Journal)
Energy efficiency is an essential issue to be reckoned with in wireless sensor network development. Since low-powered sensor nodes deplete their energy in transmitting the collected information, several strategies have been proposed to investigate communication power consumption, in order to reduce the amount of transmitted data without affecting information reliability. Lossy compression is a promising solution recently adopted to address the energy consumption challenge by exploiting data correlation and discarding redundant information. In this paper, we propose a hybrid compression approach based on two dimensions, horizontal compression (HC) and vertical compression (VC), typically implemented in a cluster-based routing architecture. The proposed scheme considers two key performance metrics, energy expenditure and data accuracy, to decide on an HC-VC or VC-HC configuration according to each WSN application's requirements. Simulation results exhibit the performance of both proposed approaches in extending the clustering network lifetime.
ENERGY EFFICIENT SCHEDULING FOR REAL-TIME EMBEDDED SYSTEMS WITH PRECEDENCE AN... (IJCSEA Journal)
Energy consumption is a critical design issue in real-time systems, especially battery-operated ones. Maintaining high performance while extending the battery life between charges is an interesting challenge for system designers. Dynamic Voltage Scaling and Dynamic Frequency Scaling allow us to adjust supply voltage and processor frequency to adapt to the workload demand for better energy management. Usually, higher processor voltage and frequency lead to higher system throughput, while energy reduction can be obtained using lower voltage and frequency. Many real-time scheduling algorithms have been developed recently to reduce energy consumption in portable devices that use voltage-scalable processors. For a real-time application comprising a set of real-time tasks with precedence and resource constraints executing on a distributed system, we propose a dynamic energy-efficient scheduling algorithm with a weighted First Come First Served (WFCFS) scheme, which also considers the run-time behaviour of tasks to further exploit the idle periods of processors for energy saving. Our algorithm is compared with the existing Modified Feedback Control Scheduling (MFCS), First Come First Served (FCFS), and Weighted Scheduling (WS) algorithms that use the Service-Rate-Proportionate (SRP) Slack Distribution Technique, and achieves about 5 to 6 percent more energy savings and increased reliability over the existing ones.
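The voltage/frequency trade-off described above can be illustrated numerically. This is a minimal sketch, not the WFCFS algorithm itself: it picks the lowest frequency that still meets a deadline and compares relative dynamic energy under the common E ~ f^2 * cycles approximation (supply voltage scaling roughly with frequency). All numbers are invented.

```python
# DVFS sketch: slack lets a task run slower; dynamic energy for a task of
# W cycles at frequency f is approximated as E ~ f^2 * W (relative units).
# Numbers are invented for illustration.

def lowest_feasible_frequency(work_cycles, deadline, f_max):
    """Slowest frequency (cycles per time unit) finishing by the deadline."""
    f = work_cycles / deadline
    if f > f_max:
        raise ValueError("deadline infeasible even at f_max")
    return f

def relative_energy(f, work_cycles):
    return f ** 2 * work_cycles        # constant factors dropped

f = lowest_feasible_frequency(work_cycles=100.0, deadline=50.0, f_max=4.0)
print(f, relative_energy(f, 100.0) / relative_energy(4.0, 100.0))
```

Halving the frequency here meets the deadline exactly and cuts the approximated dynamic energy to a quarter, which is the basic effect slack-reclamation schedulers exploit.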
A Dynamic Programming Approach to Energy-Efficient Scheduling on Multi-FPGA b... (TELKOMNIKA JOURNAL)
This paper studies an important issue: energy-efficient scheduling on multi-FPGA systems. The main challenges are integral allocation, reconfiguration overhead and exclusiveness, and energy minimization under a deadline constraint. To tackle these challenges, we have designed and implemented an energy-efficient scheduler for multi-FPGA systems based on the theory of dynamic programming, and we present an MLPF algorithm for task placement on FPGAs. The experimental results demonstrate that the proposed algorithm accommodates all tasks without violating the deadline constraint, and achieves 13.3% and 26.3% higher energy reduction than Particle Swarm Optimization and a fully balanced algorithm, respectively.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ... (ijccsa)
The fast development of knowledge and communication has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is very important to achieving this goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one challenge is selecting the placement approach that maps migrated virtual machines onto appropriate nodes. In this paper, an approach to reduce energy consumption in cloud data centers is proposed. The approach adapts the harmony search algorithm to migrate virtual machines: it performs placement by sorting the nodes and virtual machines by priority in descending order, where priority is calculated from the workload. The proposed approach is simulated, and the evaluation results show a reduction in virtual machine migrations, increased efficiency and reduced energy consumption.
KEYWORDS
Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
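The priority-ordered placement step described in the abstract can be sketched as follows; the harmony-search tuning itself is omitted, and the VM workloads and node capacities are invented for illustration.

```python
# Priority-ordered placement sketch: nodes and migrating VMs are sorted by
# workload in descending order, and each VM goes to the first node with
# enough free capacity. Workloads and capacities are invented.

def place(vms, nodes):
    """vms: {name: load}; nodes: {name: free capacity}. Returns {vm: node}."""
    placement = {}
    for vm in sorted(vms, key=vms.get, reverse=True):        # heaviest VM first
        for node in sorted(nodes, key=nodes.get, reverse=True):
            if nodes[node] >= vms[vm]:
                placement[vm] = node
                nodes[node] -= vms[vm]                       # consume capacity
                break
    return placement

vms = {"vm1": 4, "vm2": 2, "vm3": 3}
nodes = {"n1": 5, "n2": 6}
result = place(vms, nodes)
print(result)
```

Sorting both sides before placing is a greedy first-fit-decreasing heuristic; in the paper, harmony search would explore variations of this placement rather than accept the first feasible one.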
IRJET - A Statistical Approach Towards Energy Saving in Cloud Computing (IRJET Journal)
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
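The predict-then-consolidate idea can be sketched with the forecasting and convex-optimization stages replaced by a fixed prediction and a ceiling rule. The function name and the headroom margin below are illustrative assumptions, not the paper's method.

```python
# Sketch: given a predicted aggregate workload, keep only as many physical
# machines as needed (plus headroom) and power the rest off. The GPR
# forecaster and convex solver from the paper are replaced by a fixed
# prediction and a ceiling rule; all values are invented.
import math

def machines_needed(predicted_load, capacity_per_machine, headroom=0.1):
    target = predicted_load * (1 + headroom)   # keep a safety margin for QoS
    return math.ceil(target / capacity_per_machine)

print(machines_needed(predicted_load=42.0, capacity_per_machine=10.0))
```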
This summarizes a research paper that proposes a utilization-based virtual machine (VM) consolidation scheme to improve power efficiency in cloud data centers. The scheme aims to reduce the number of VM migrations and associated overhead by migrating VMs to stable hosts with higher utilization. It defines a performance function that considers power consumption and quality of service violations. The proposed Utilization-based Migration Algorithm (UMA) classifies hosts as overloaded, fully loaded, or underloaded. It computes migration probabilities for VMs on overloaded hosts and consolidates VMs from underloaded hosts to improve overall utilization and minimize the performance function value. The results show UMA reduces migrations by 77.5-82.4% and saves 39.3-42
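The host-classification step of such a utilization-based scheme can be sketched as follows; the thresholds are illustrative assumptions, not the values used by UMA.

```python
# Host classification sketch for utilization-based consolidation.
# Thresholds are invented, not the paper's values.

def classify(utilization, low=0.3, high=0.85):
    if utilization > high:
        return "overloaded"      # candidate source: migrate VMs away
    if utilization < low:
        return "underloaded"     # candidate for draining and switch-off
    return "fully loaded"        # stable host: preferred migration target

print([classify(u) for u in (0.95, 0.10, 0.60)])
```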
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR... (ijassn)
Macro-programming is the new-generation method of using Wireless Sensor Networks (WSNs), where application developers extract data from sensor nodes through a high-level abstraction of the system. Instead of developing the entire application, a task-graph representation of the WSN model offers a simplified approach to data collection. However, mapping tasks onto sensor nodes raises several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid task-mapping approach for WSNs, a Hybrid Genetic Algorithm, considering multiple optimization objectives: energy consumption, routing delay and soft real-time requirements. We also present a method to configure the algorithm to the user's needs by changing the heuristics used for optimization. A trade-off analysis between energy consumption and delivery delay was performed, and simulation results are presented. The algorithm is applicable during macro-programming, enabling developers to choose a better mapping according to their application requirements.
The document proposes an approach to optimize energy consumption in automotive networks based on exploiting pretended networking. The approach performs static scheduling of tasks to allocate slack time for ECUs to enter a lower power pretended mode. It further optimizes by considering varying message response times in different vehicle states. An online algorithm then dynamically puts ECUs into pretended mode based on actual execution times, achieving additional energy savings compared to the static approach. Experimental results show the static approach can save up to 31.3% energy, and considering varying response times provides an additional 8.2% savings.
This document discusses adaptive system-level scheduling under fluid traffic flow conditions in multiprocessor systems. It proposes a scheduling mechanism that accounts for traffic-centric system design. The mechanism evaluates scheduling methods based on effectiveness, robustness, and flexibility. It also introduces a processor-FPGA scheduling approach that reduces schedule length by taking advantage of FPGA reconfiguration. Simulation results show that processor-FPGA scheduling outperforms multiprocessor-only scheduling under certain traffic conditions. Future work will focus on formulating a traffic-centric scheduling method.
Design and Implementation of a Programmable Truncated Multiplier (ijsrd.com)
Truncated multiplication reduces part of the power required by multipliers by computing only the most significant bits of the product. The most common approach to truncation physically reduces the partial product matrix and compensates for the removed bits via hardware compensation sub-circuits. However, this results in fixed systems optimized for a given application at design time. A novel approach to truncation is proposed, where a full-precision multiplier is implemented but the active section of the partial product matrix is selected dynamically at run time. This allows a trade-off between power reduction and signal degradation that can be modified at run time. Such an architecture brings together the power reduction benefits of truncated multipliers and the flexibility of reconfigurable and general-purpose devices. An efficient implementation of such a multiplier is presented in a custom digital signal processor, where the concept of software compensation is introduced and analysed for different applications. Experimental results and power measurements are studied, including power measurements from both post-synthesis simulations and a fabricated IC implementation. This is the first system-level DSP core using a fine-grain truncated multiplier. Results demonstrate the effectiveness of the programmable truncated MAC (PTMAC) in achieving power reduction with minimal impact on functionality for a number of applications. Software compensation is also shown to be effective when deploying truncated multipliers in a system.
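The numeric effect of truncation can be sketched in software. The hardware avoids computing the discarded partial products; the sketch below models only the arithmetic, with an optional mid-point compensation constant standing in for the compensation sub-circuits. Widths and operands are invented for illustration.

```python
# Behavioural sketch of truncated multiplication: compute the full product,
# keep only the most significant bits, and optionally add a mid-point
# compensation constant for the discarded low part. Parameters are invented.

def truncated_mul(a, b, width=16, kept=16, compensate=True):
    """Multiply two `width`-bit unsigned ints, keep the top `kept` bits."""
    full = a * b                                   # 2*width-bit full product
    drop = 2 * width - kept                        # low bits to discard
    comp = (1 << (drop - 1)) if compensate else 0  # average-case compensation
    return (full + comp) >> drop

print(truncated_mul(40000, 50000), truncated_mul(40000, 50000, compensate=False))
```

With compensation the result rounds to nearest instead of truncating toward zero, which is roughly the error the hardware compensation circuits try to cancel.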
Design and Implementation of Low Power DSP Core with Programmable Truncated V... (ijsrd.com)
Programmable truncated Vedic multiplication uses a Vedic multiplier with programmable truncation control bits, reducing part of the area and power required by multipliers by computing only the most significant bits of the product. The basic process of truncation physically reduces the partial product matrix and compensates for the removed bits via hardware compensation sub-circuits, which results in fixed systems optimized for a given application at design time. A novel approach to truncation is proposed, where a full-precision Vedic multiplier is implemented but the active section of the truncation is selected dynamically at run time by truncation control bits. Such an architecture brings together the power reduction benefits of truncated multipliers and the flexibility of reconfigurable and general-purpose devices. An efficient implementation of such a multiplier is presented in a custom digital signal processor, where the concept of software compensation is introduced and analysed for different applications. Experimental results and power measurements are studied, including power measurements from both post-synthesis simulations and a fabricated IC implementation. This is the first system-level DSP core using a high-speed Vedic truncated multiplier. Results demonstrate the effectiveness of the programmable truncated MAC (PTMAC) in achieving power reduction with minimal impact on functionality for a number of applications. Compared with previous parallel multipliers, the Vedic multiplier is expected to be faster and to occupy less area; the programmable truncated Vedic multiplier (PTVM) is the basic building block of the arithmetic and PTMAC units.
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTING (ijdpsjournal)
Cloud computing has become an ideal computing paradigm for scientific and commercial applications, and the increased availability of cloud models and allied development models makes cloud environments easier to build. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency: an optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize overall energy consumption while meeting the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration and communication. Our proposed model confirms the effectiveness of its implementation, scalability, power consumption and execution time with respect to other existing approaches.
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A... (VLSICS Design)
The stringent power budget of fine-grained power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, the traditional cost criteria for circuit optimization. This emerging scenario motivates us to revisit the classical operator scheduling problem given DVFS-enabled functional units that can trade cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore the state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and attains sufficient area and power gains for complex benchmarks when timing constraints are relaxed sufficiently. Experimental results show that the algorithm solves the problem for most available benchmarks without any user constraint (area/power), and that using a power or area budget constraint leads to significant performance gain.
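The dominance filter behind an (area, power) Pareto-optimal front can be sketched as follows; the candidate points are invented, and the branch-and-bound pruning itself is not shown.

```python
# Pareto dominance filter sketch for two minimisation objectives
# (area, power). Candidate points are invented for illustration.

def pareto_front(points):
    """Keep the points not dominated in both objectives by another point."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

candidates = [(5, 9), (6, 7), (7, 8), (8, 4)]
print(pareto_front(candidates))
```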
Optimal Placement and Sizing of Distributed Generation Units Using Co-Evoluti... (Radita Apriana)
Today, with the increase of distributed generation sources in power systems, it is important to locate these sources optimally. Determining the number, location, size and type of distributed generation (DG) units in a power system reduces losses and improves system reliability. In this paper, a Co-evolutionary Particle Swarm Optimization algorithm (CPSO) is used to determine the optimal values of these parameters. Results obtained through simulations in MATLAB are presented as figures and tables, which show how system losses and reliability change with parameters such as the location, size, number and type of DG. Finally, the results of this method are compared with those of a Genetic Algorithm (GA) to assess the performance of each method.
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin... (csandit)
Computational Grid (CG) creates a large heterogeneous and distributed paradigm to manage and execute applications that are computationally intensive. In grid scheduling, tasks are assigned to the proper processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault-tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is NP-hard, meta-heuristic evolutionary techniques are often used to find a solution, and we propose an NSGA-II for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA-II) has been estimated with a program written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and the results of the proposed model are compared with the existing Min-Min and Max-Min algorithms, which demonstrates the effectiveness of the model.
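The two objectives such a scheduler trades off can be evaluated per candidate schedule as below: makespan as the finish time of the busiest node, and a simple fault-tolerance score as the product of the chosen nodes' reliabilities. Execution times and reliabilities are invented for illustration, and this is only the objective evaluation, not the NSGA-II search.

```python
# Objective evaluation sketch for a task-to-node schedule: makespan plus a
# simple reliability score. Times and reliabilities are invented.

def evaluate(schedule, exec_time, reliability):
    """schedule: {task: node}. Returns (makespan, reliability_score)."""
    load = {}
    for task, node in schedule.items():
        load[node] = load.get(node, 0.0) + exec_time[task][node]
    score = 1.0
    for node in set(schedule.values()):
        score *= reliability[node]      # all used nodes must stay up
    return max(load.values()), score

exec_time = {"t1": {"n1": 4.0, "n2": 6.0}, "t2": {"n1": 3.0, "n2": 2.0}}
reliability = {"n1": 0.99, "n2": 0.95}
result = evaluate({"t1": "n1", "t2": "n2"}, exec_time, reliability)
print(result)
```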
The document discusses using optimized neural networks for short-term wind speed forecasting. It proposes using parametric recurrent neural networks (PRNNs) with an improved activation function that includes a logarithmic parameter "p" to optimize the network size. The PRNNs are trained to predict wind speed using historical wind farm data. Simulation results show the PRNNs more accurately predict wind speed up to 180 minutes in the future compared to numerical methods using polynomials. The value of the "p" parameter can identify linearly dependent neurons that can be combined to reduce the optimized network size.
IRJET - Comparative Analysis of Unit Commitment Problem of Electric Power Syste... (IRJET Journal)
The document presents a comparative analysis of solving the unit commitment problem (UCP) of electric power systems using dynamic programming technique. It formulates the UCP considering fuel costs, voltage stability, and imbalance limits. Dynamic programming is described as a conventional algorithm for solving deterministic problems optimally through sequential decision making. The developed algorithm is implemented on 4-unit and 10-unit power systems. Results found the dynamic programming approach provides satisfactory solutions by minimizing total generation costs while meeting operational constraints.
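With start-up and shut-down costs omitted, the hours of the UCP decouple and the dynamic-programming recursion reduces to an independent per-hour minimisation over feasible unit subsets, sketched below. The units, costs and demand profile are invented, and the paper's stability and imbalance constraints are not modelled.

```python
# Compact unit-commitment sketch: each hour, pick the cheapest subset of
# units whose combined capacity covers demand. Start-up costs and the
# paper's extra constraints are omitted; all values are invented.
from itertools import chain, combinations

units = {"U1": (100, 20.0), "U2": (60, 12.0)}   # name: (capacity MW, cost/h)
demand = [50, 120, 80]                            # MW per hour

def subsets(names):
    return chain.from_iterable(combinations(names, r)
                               for r in range(len(names) + 1))

def ucp_cost(units, demand):
    total, plan = 0.0, []
    for d in demand:
        feasible = [s for s in subsets(units)
                    if sum(units[u][0] for u in s) >= d]
        best = min(feasible, key=lambda s: sum(units[u][1] for u in s))
        plan.append(best)
        total += sum(units[u][1] for u in best)
    return total, plan

print(ucp_cost(units, demand))
```

Reintroducing start-up costs couples consecutive hours, which is exactly when the full DP over (hour, committed-subset) states becomes necessary.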
Comparative Analysis of Job Scheduling for Grid Environment
Neeraj Pandey, Ashish Arya and Nitin Kumar Agrawal
Hackers Portfolio and its Impact on Society
Dr. Adnan Omar and Terrance Sanchez, M.S.
Ontology Based Multi-Viewed Approach for Requirements Engineering
R. Subha and S. Palaniswami
Modified Colonial Competitive Algorithm: An Approach for Graph Coloring Problem
Hojjat Emami and Parvaneh Hasanzadeh
Security and Privacy in E-Passport Scheme using Authentication Protocols and Multiple Biometrics Technology
V. K. Narendira Kumar and B. Srinivasan
Comparative Study of WLAN, WPAN, WiMAX Technologies
Prof. Mangesh M. Ghonge and Prof. Suraj G. Gupta
A New Method for Web Development using Search Engine Optimization
Chutisant Kerdvibulvech and Kittidech Impaiboon
A New Design to Improve the Security Aspects of RSA Cryptosystem
Sushma Pradhan and Birendra Kumar Sharma
A Hybrid Model of Multimodal Approach for Multiple Biometrics Recognition
P. Prabhusundhar, V.K. Narendira Kumar and B. Srinivasan
An enhanced adaptive scoring job scheduling algorithm with replication strate...eSAT Publishing House
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
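The abstract above does not give the scoring formula, but the idea of ranking clusters by a weighted score, with the weight shifted toward bandwidth for data-intensive jobs, can be sketched as follows. The weights and cluster attributes here are illustrative assumptions, not the algorithm's actual model:

```python
def cluster_score(cpu_speed, load, bandwidth, data_intensive,
                  w_cpu=0.5, w_bw=0.5):
    """Score a cluster; data-intensive jobs weight bandwidth higher.

    Weight values are hypothetical, chosen only for illustration.
    """
    if data_intensive:
        w_cpu, w_bw = 0.3, 0.7
    # Lower current load raises the effective compute contribution.
    return w_cpu * cpu_speed * (1.0 - load) + w_bw * bandwidth

def allocate_replicas(subtasks, clusters, data_intensive):
    """Assign each subtask replica to clusters in descending score order."""
    ranked = sorted(clusters,
                    key=lambda c: cluster_score(c["cpu"], c["load"],
                                                c["bw"], data_intensive),
                    reverse=True)
    return {t: ranked[i % len(ranked)]["name"]
            for i, t in enumerate(subtasks)}
```

Spreading replicas round-robin over the ranked clusters is one simple way to realize the replication strategy; the actual paper evaluates its policy in GridSim.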
ENERGY-EFFICIENT ADAPTIVE RESOURCE MANAGEMENT FOR REAL-TIME VEHICULAR CLOUD S...Nexgen Technology
The document discusses trends in cloud computing towards finer granularity of resources and pricing, including renting resources by the second. It describes a prototype called Ginseng that implements a market-driven approach for dynamically allocating memory resources in a cloud using an auction mechanism. Testing showed Ginseng improved social welfare by 6.2 to 15.8 times compared to other approaches. Future work is needed to expand Ginseng to support multiple resources and improve its mechanisms.
UnaCloud is an opportunistic cloud infrastructure (IaaS) that provides on-demand computing capabilities using commodity desktops. Although UnaCloud tries to maximize the use of idle resources to deploy virtual machines on them, it does not use energy-efficient resource allocation algorithms. In this paper, we design and implement different energy-aware techniques that operate in an energy-efficient way while guaranteeing performance to users. Performance tests with different algorithms and scenarios, using real trace workloads from UnaCloud, show how different policies can change energy consumption patterns and reduce energy consumption in opportunistic cloud infrastructures. The results show that some algorithms can reduce energy consumption by up to 30% beyond the savings already obtained from the opportunistic environment.
Michael enescu keynote chicago2014_from_cloud_to_fog_and_iotMichael Enescu
Michael Enescu gave a keynote at LinuxCon 2014 about fog computing and the Internet of Things (IoT). He discussed how IoT has grown significantly in recent years and will continue growing rapidly, with over 50 billion devices expected by 2020. This growth will produce immense amounts of data that cannot all be sent to the cloud for analysis. As a result, fog computing is emerging as a way to perform analytics and actions closer to the data source using distributed computing at the network edge. Open source software will play a major role in fog computing and IoT, as it has in previous technology shifts, due to its credibility and dominance in development.
Green cloud computing using heuristic algorithmsIliad Mnd
Green computing is defined as the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems, such as monitors, printers, storage devices, and networking and communications systems, efficiently and effectively with minimal or no impact on the environment. Research continues into key areas such as making the use of computers as energy efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
IRJET- A Statistical Approach Towards Energy Saving in Cloud ComputingIRJET Journal
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
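A minimal sketch of the prediction step using a numpy-only Gaussian process regression with an RBF kernel. The kernel, its length scale, and the per-machine capacity model are illustrative assumptions; the paper pairs the workload prediction with convex optimization rather than the simple ceiling used here:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """RBF (squared-exponential) kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-3):
    """GP posterior mean: k(X*, X) (K + sigma^2 I)^-1 y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

def machines_needed(predicted_load, capacity_per_machine):
    """Smallest machine count covering the predicted load (toy stand-in
    for the paper's convex-optimization subset selection)."""
    return int(np.ceil(predicted_load / capacity_per_machine))
```

Machines outside the selected subset would then receive migrated VMs' workloads and the idle remainder could be powered off, as described above.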
This summarizes a research paper that proposes a utilization-based virtual machine (VM) consolidation scheme to improve power efficiency in cloud data centers. The scheme aims to reduce the number of VM migrations and associated overhead by migrating VMs to stable hosts with higher utilization. It defines a performance function that considers power consumption and quality of service violations. The proposed Utilization-based Migration Algorithm (UMA) classifies hosts as overloaded, fully loaded, or underloaded. It computes migration probabilities for VMs on overloaded hosts and consolidates VMs from underloaded hosts to improve overall utilization and minimize the performance function value. The results show UMA reduces migrations by 77.5-82.4% and saves 39.3-42
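The utilization thresholds UMA uses are not given in the summary above; a sketch of the host classification and the choice of stable, higher-utilization migration targets might look like this (the threshold values are assumptions):

```python
def classify_host(utilization, low=0.3, high=0.85):
    """Classify a host as overloaded, fully loaded, or underloaded.

    The 0.3 / 0.85 thresholds are hypothetical placeholders.
    """
    if utilization > high:
        return "overloaded"
    if utilization < low:
        return "underloaded"
    return "fully loaded"

def consolidation_targets(hosts, low=0.3, high=0.85):
    """Prefer stable hosts with higher utilization as migration
    destinations, so fewer migrations are needed overall."""
    return sorted((h for h in hosts if low <= h["util"] <= high),
                  key=lambda h: h["util"], reverse=True)
```

VMs from overloaded hosts (by migration probability) and from underloaded hosts (for consolidation) would then be placed on these targets.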
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...ijassn
Macro-programming is the new generation advanced method of using Wireless Sensor Network (WSNs), where application developers can extract data from sensor nodes through a high level abstraction of the system. Instead of developing the entire application, task graph representation of the WSN model presents simplified approach of data collection. However, mapping of tasks onto sensor nodes highlights several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid approach of task mapping for WSN – Hybrid Genetic Algorithm, considering multiple objectives of optimization – energy consumption, routing delay and soft real time requirement. We also present a method to configure the algorithm as per user's need by changing the heuristics used for optimization. The trade-off analysis between energy consumption and delivery delay was performed and simulation results are presented. The algorithm is applicable during macro-programming enabling developers to choose a better mapping according to their application requirements.
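The configurable multi-objective fitness described above can be approximated by a weighted sum over energy and delay costs, with the weight as the user-tunable knob. A toy sketch, in which the cost matrices and the mutation operator are illustrative rather than the paper's actual heuristics:

```python
import random

def fitness(assignment, energy_cost, delay_cost, w_energy=0.5):
    """Weighted-sum fitness for a task-to-node assignment.

    assignment[t] is the node index for task t; energy_cost[t][n] and
    delay_cost[t][n] are hypothetical per-task, per-node cost tables.
    Lower fitness is better.
    """
    e = sum(energy_cost[t][n] for t, n in enumerate(assignment))
    d = sum(delay_cost[t][n] for t, n in enumerate(assignment))
    return w_energy * e + (1.0 - w_energy) * d

def mutate(assignment, n_nodes, rate=0.1, rng=random):
    """GA mutation: reassign each task to a random node with prob `rate`."""
    return [rng.randrange(n_nodes) if rng.random() < rate else n
            for n in assignment]
```

Changing `w_energy` shifts the search between energy-optimal and delay-optimal mappings, mirroring the trade-off analysis the paper reports.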
The document proposes an approach to optimize energy consumption in automotive networks based on exploiting pretended networking. The approach performs static scheduling of tasks to allocate slack time for ECUs to enter a lower power pretended mode. It further optimizes by considering varying message response times in different vehicle states. An online algorithm then dynamically puts ECUs into pretended mode based on actual execution times, achieving additional energy savings compared to the static approach. Experimental results show the static approach can save up to 31.3% energy, and considering varying response times provides an additional 8.2% savings.
This document discusses adaptive system-level scheduling under fluid traffic flow conditions in multiprocessor systems. It proposes a scheduling mechanism that accounts for traffic-centric system design. The mechanism evaluates scheduling methods based on effectiveness, robustness, and flexibility. It also introduces a processor-FPGA scheduling approach that reduces schedule length by taking advantage of FPGA reconfiguration. Simulation results show that processor-FPGA scheduling outperforms multiprocessor-only scheduling under certain traffic conditions. Future work will focus on formulating a traffic-centric scheduling method.
Design and Implementation of a Programmable Truncated Multiplierijsrd.com
Truncated multiplication reduces part of the power required by multipliers by computing only the most-significant bits of the product. The most common approach to truncation includes physical reduction of the partial product matrix and compensation for the reduced bits via different hardware compensation sub-circuits. However, this results in fixed systems optimized for a given application at design time. A novel approach to truncation is proposed, where a full-precision multiplier is implemented, but the active section of the partial product matrix is selected dynamically at run-time. This allows a power-reduction trade-off against signal degradation which can be modified at run time. Such an architecture brings together the power-reduction benefits of truncated multipliers and the flexibility of reconfigurable and general-purpose devices. An efficient implementation of such a multiplier is presented in a custom digital signal processor, where the concept of software compensation is introduced and analysed for different applications. Experimental results and power measurements are studied, including power measurements from both post-synthesis simulations and a fabricated IC implementation. This is the first system-level DSP core using a fine-grain truncated multiplier. Results demonstrate the effectiveness of the programmable truncated MAC (PTMAC) in achieving power reduction, with minimum impact on functionality for a number of applications. Software compensation is also shown to be effective when deploying truncated multipliers in a system.
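The core idea, dropping partial-product columns below a cutoff and adding a constant compensation, can be sketched in software. The cutoff and the compensation constant below are assumptions; real designs derive the compensation from the expected value of the truncated bits:

```python
def truncated_multiply(a, b, n=8, cutoff=8):
    """Truncated multiply of two n-bit operands.

    Partial-product bits in columns below `cutoff` are discarded (as the
    hardware would physically omit them), then a constant compensation is
    added. Both n=8 and the compensation formula are illustrative.
    """
    acc = 0
    for i in range(n):
        if (b >> i) & 1:
            # Row i of the partial-product matrix, low columns dropped.
            acc += (a << i) & ~((1 << cutoff) - 1)
    compensation = (n // 2) << (cutoff - 1)  # hypothetical constant
    return acc + compensation
```

The result approximates the exact product to within the weight of the discarded columns, which is the signal-degradation/power trade-off the abstract describes.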
Design and Implementation of Low Power DSP Core with Programmable Truncated V...ijsrd.com
Programmable truncated Vedic multiplication uses a Vedic multiplier with programmable truncation control bits, reducing part of the area and power required by multipliers by computing only the most-significant bits of the product. The basic process of truncation includes physical reduction of the partial product matrix and compensation for the reduced bits via different hardware compensation sub-circuits. This results in fixed systems optimized for a given application at design time. A novel approach to truncation is proposed, where a full-precision Vedic multiplier is implemented, but the active section of the truncation is selected by truncation control bits dynamically at run-time. Such an architecture brings together the power-reduction benefits of truncated multipliers and the flexibility of reconfigurable and general-purpose devices. An efficient implementation of such a multiplier is presented in a custom digital signal processor, where the concept of software compensation is introduced and analyzed for different applications. Experimental results and power measurements are studied, including power measurements from both post-synthesis simulations and a fabricated IC implementation. This is the first system-level DSP core using a high-speed Vedic truncated multiplier. Results demonstrate the effectiveness of the programmable truncated MAC (PTMAC) in achieving power reduction, with minimum impact on functionality for a number of applications. Compared with previous parallel multipliers, the Vedic multiplier is expected to be faster and to require less area. The programmable truncated Vedic multiplier (PTVM) serves as the basic unit implemented for the arithmetic and PTMAC units.
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied development models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize the overall consumption of energy and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which causes performance degradation. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration, and communication. Our proposed model confirms its effectiveness in terms of implementation, scalability, power consumption, and execution time with respect to other existing approaches.
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budget of fine-grained power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem under the availability of DVFS-enabled functional units that can trade off cycles with power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore this state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and is able to attain sufficient area and power gains for complex benchmarks when timing constraints are relaxed by a sufficient amount. Experimental results show that the algorithm, operating without any user constraint (area/power), is able to solve the problem for most available benchmarks, and the use of power or area budget constraints leads to significant performance gain.
Optimal Placement and Sizing of Distributed Generation Units Using Co-Evoluti...Radita Apriana
Today, with the increase of distributed generation sources in power systems, it is important to locate these sources optimally. Determining the number, location, size, and type of distributed generation (DG) units on power systems reduces losses and improves the reliability of the system. In this paper, a co-evolutionary particle swarm optimization algorithm (CPSO) is used to determine the optimal values of the listed parameters. Results obtained through simulations in MATLAB are presented in the form of figures and tables. These tables and figures show how system losses change and reliability improves when parameters such as location, size, number, and type of DG are varied. Finally, the results of this method are compared with those of the genetic algorithm (GA) method to assess the performance of each.
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
Computational Grid (CG) creates a large heterogeneous and distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to the proper processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is considered NP-hard, meta-heuristic evolutionary techniques are often used to find a solution. We have proposed an NSGA-II for this purpose. The performance of the proposed Fault-Tolerance-Aware NSGA-II (FTNSGA II) has been estimated through a program written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and the results of the proposed model are compared with the existing Min-Min and Max-Min algorithms, which demonstrates the effectiveness of the model.
The document discusses using optimized neural networks for short-term wind speed forecasting. It proposes using parametric recurrent neural networks (PRNNs) with an improved activation function that includes a logarithmic parameter "p" to optimize the network size. The PRNNs are trained to predict wind speed using historical wind farm data. Simulation results show the PRNNs more accurately predict wind speed up to 180 minutes in the future compared to numerical methods using polynomials. The value of the "p" parameter can identify linearly dependent neurons that can be combined to reduce the optimized network size.
IRJET-Comparative Analysis of Unit Commitment Problem of Electric Power Syste...IRJET Journal
The document presents a comparative analysis of solving the unit commitment problem (UCP) of electric power systems using dynamic programming technique. It formulates the UCP considering fuel costs, voltage stability, and imbalance limits. Dynamic programming is described as a conventional algorithm for solving deterministic problems optimally through sequential decision making. The developed algorithm is implemented on 4-unit and 10-unit power systems. Results found the dynamic programming approach provides satisfactory solutions by minimizing total generation costs while meeting operational constraints.
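The sequential decision structure of the dynamic programming approach can be sketched on a toy system: states are on/off combinations of units, transitions charge start-up costs, and each hour's running cost comes from merit-order dispatch. The cost model here is illustrative and omits the paper's voltage-stability and imbalance constraints:

```python
from itertools import product

def dispatch_cost(state, units, demand):
    """Merit-order dispatch among committed units; None if infeasible.

    units is a list of hypothetical (capacity_MW, cost_per_MW) pairs.
    """
    remaining, cost = demand, 0.0
    for on, (cap, price) in sorted(zip(state, units), key=lambda x: x[1][1]):
        if on and remaining > 0:
            take = min(cap, remaining)
            cost += take * price
            remaining -= take
    return None if remaining > 1e-9 else cost

def unit_commitment(demands, units, startup):
    """DP over hours: best[state] = min total cost ending in that state."""
    states = list(product((0, 1), repeat=len(units)))
    prev = {(0,) * len(units): 0.0}  # all units off before hour 0
    for d in demands:
        cur = {}
        for s in states:
            run = dispatch_cost(s, units, d)
            if run is None:
                continue  # committed capacity cannot meet demand
            for ps, pc in prev.items():
                starts = sum(1 for a, b in zip(ps, s) if b and not a)
                total = pc + run + starts * startup
                if s not in cur or total < cur[s]:
                    cur[s] = total
        prev = cur
    return min(prev.values())
```

Exhaustive state enumeration is only practical for small systems like the 4-unit and 10-unit cases mentioned above; larger systems need pruning.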
Comparative Analysis of Job Scheduling for Grid Environment
Neeraj Pandey, Ashish Arya and Nitin Kumar Agrawal
fog computing provide security to the data in cloudpriyanka reddy
Fog computing extends cloud computing by providing security and data processing capabilities at the edge of the network, close to end users and devices. It aims to address issues like high latency and bandwidth usage that can occur when all data processing is done in the cloud. Fog computing deploys computing, storage, and applications closer to end devices and users in order to improve response times for latency-sensitive applications like smart grids and connected vehicles. It creates a distributed network that balances resources between the cloud and edge devices.
In today’s world, the growing demand for knowledge has made cloud computing a center of attraction. Cloud computing provides utility-based services to users worldwide. It enables the provisioning of applications from consumer, scientific, and business domains. However, data centers created for cloud computing applications consume huge amounts of energy, contributing to high operational costs and a large amount of carbon dioxide emitted into the environment. As data centers grow, power consumption is increasing at such a rate that it has become a key concern, ultimately leading to energy shortages and global climate change. Therefore, we need green cloud computing solutions that can not only save energy but also reduce operational costs.
Fog computing, also known as fogging or edge computing, is a model in which data, processing, and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud. The term "fog computing" was introduced by Cisco Systems. It is an extension of the cloud.
Fog computing is a model that processes data closer to IoT devices rather than in the cloud. It addresses the limitations of cloud like high latency and bandwidth issues. Fog extends cloud services by providing computation, storage and applications at the edge of the network. Key applications of fog include connected vehicles, smart grids, smart buildings and healthcare. Fog computing supports mobility, location awareness, low latency and real-time interactions between heterogeneous edge devices and sensors.
Fog computing is a model that processes data and applications at the edge of the network, rather than sending all data to the cloud. It helps address issues with IoT networks like high latency and bandwidth usage. Fog computing can overcome cloud limitations by keeping data local, reducing congestion and improving security. It is well-suited for applications that require real-time, localized processing like connected vehicles, smart grids, smart cities, and healthcare. Fog computing lowers costs and improves efficiencies compared to relying solely on cloud infrastructure.
Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. The motivation of Fog computing lies in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks.
Virtualization allows multiple operating systems and applications to run on a single server at the same time, improving hardware utilization and flexibility. It reduces costs by consolidating servers and enabling more efficient use of resources. Key benefits of VMware virtualization include easier manageability, fault isolation, reduced costs, and the ability to separate applications.
Achieving Energy Proportionality In Server ClustersCSCJournals
Energy efficiency in server clusters has attracted a great amount of interest in the past few years. Energy proportionality is a principle ensuring that energy consumption is proportional to the system workload. Energy-proportional design can effectively improve the energy efficiency of computing systems. In this paper, an energy-proportional model is proposed based on queuing theory and service differentiation in server clusters, which can provide controllable and predictable quantitative control over power consumption with theoretically guaranteed service performance. A further study of the transition overhead is carried out, and a corresponding strategy is proposed to compensate for the performance degradation caused by transition overhead. The model is evaluated via extensive simulations and is justified by a real workload data trace. The results show that the model can achieve satisfactory service performance while preserving energy efficiency in the system.
A Brief Survey of Current Power Limiting StrategiesIRJET Journal
This document provides an overview of current power limiting strategies used in computing systems. It begins with background on power capping capabilities in Intel processors using Running Average Power Limit (RAPL). The document then reviews over 15 strategies that directly limit power consumption while minimizing performance impact, including feedback controllers, dynamic voltage and frequency scaling, task scheduling policies, and power budgeting techniques. It concludes that power limiting has become important for maximizing exascale system performance within power constraints.
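Many of the surveyed strategies amount to feedback controllers that steer frequency to hold measured power at a budget. A toy proportional controller over an assumed cubic power model; both the model coefficients and the gain are illustrative, and a real system would read power from an interface such as RAPL rather than a formula:

```python
def power_model(freq, alpha=0.5, p_static=10.0):
    """Toy power model: static power plus cubic dynamic term.

    alpha and p_static are hypothetical coefficients for illustration.
    """
    return p_static + alpha * freq ** 3

def cap_power(budget, freq=3.0, k=0.05, steps=50, f_min=0.5, f_max=3.0):
    """Proportional feedback: lower frequency while measured power
    exceeds the budget, raise it again when there is headroom."""
    for _ in range(steps):
        error = power_model(freq) - budget
        freq = min(f_max, max(f_min, freq - k * error))
    return freq
```

The gain `k` trades convergence speed against oscillation, the same stability concern the feedback-controller strategies in the survey address.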
An optimized cost-based data allocation model for heterogeneous distributed ...IJECEIAES
The document presents an optimized cost-based data allocation model for heterogeneous distributed computing systems. It aims to reduce the total system cost by optimizing how data is partitioned and allocated across different processors. The proposed approach uses an artificial bee colony algorithm to determine the allocation that minimizes the total cost, which is calculated by summing the costs of communication, computation, and network usage. Simulation results show the technique is able to efficiently lower the total system cost compared to existing methods and optimize the partitioned data allocation in heterogeneous distributed computing systems.
Two-way Load Flow Analysis using Newton-Raphson and Neural Network MethodsIRJET Journal
The document presents a study comparing two-way load flow analysis using the Newton-Raphson method and a neural network method for networked microgrids. The optimal power flow problem is solved using both a conventional Newton-Raphson method and an artificial intelligence neural network method. Results show that the neural network method achieves minimum losses and higher efficiency compared to the Newton-Raphson method, with efficiencies of 99.3% and 97% respectively for the test networked microgrid system.
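For reference, the conventional Newton-Raphson method the study compares against is the standard root-finding update x ← x − f(x)/f′(x), applied in load flow to the power mismatch equations. A generic scalar sketch:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for f(x) = 0 with derivative df.

    In load flow analysis, f would be the vector of power mismatches and
    df the Jacobian; this scalar version shows only the iteration itself.
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

The quadratic convergence of this iteration is what makes it the conventional baseline, while the neural-network method above trades it for learned, faster evaluation.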
DYNAMIC VOLTAGE SCALING FOR POWER CONSUMPTION REDUCTION IN REAL-TIME MIXED TA...cscpconf
The reduction of energy consumption without any deadline miss is one of the main challenges in real-time embedded systems. Dynamic voltage scaling (DVS) is a technique that reduces the power consumption of processors by utilizing the various operating points provided by a DVS processor. These operating points consist of pairs of voltage and frequency. The selection of operating points can be based on the load on the system at a particular point in time. In this work, DVS is applied to both periodic and sporadic tasks, and an average of 40% of energy is saved. The energy consumption of the processor is further reduced by 2-10% by reducing the number of pre-emptions and frequency switches.
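Load-based operating-point selection can be sketched as picking the lowest voltage/frequency pair whose normalized frequency still covers the current utilization. The operating-point table below is hypothetical; real DVS processors expose platform-specific pairs:

```python
# Hypothetical (voltage V, frequency GHz) operating points for a DVS CPU.
OPERATING_POINTS = [(0.9, 0.6), (1.0, 0.8), (1.1, 1.0), (1.2, 1.2)]

def select_operating_point(load, points=OPERATING_POINTS):
    """Pick the lowest (voltage, frequency) pair whose frequency,
    normalized to the maximum, still covers the load in [0, 1]."""
    f_max = max(f for _, f in points)
    for v, f in sorted(points, key=lambda p: p[1]):
        if f / f_max >= load:
            return (v, f)
    return max(points, key=lambda p: p[1])  # saturate at the top point
```

Because dynamic power scales roughly with V²f, running at the lowest sufficient pair is what yields the energy savings reported above without missing deadlines.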
This document proposes using a novel dynamic fuzzy c-means (dFCM) clustering technique combined with an artificial neural network (ANN) approach to reconfigure power distribution networks and minimize active power loss. The dFCM clustering is used to preprocess the input data for the ANN by transforming the input space into kernels, reducing the size of the ANN needed. The proposed framework combines dFCM clustering and ANN and is tested on two power networks, showing benefits like very short processing time, high accuracy, and a simple ANN structure with few neurons compared to other traditional reconfiguration methods.
IRJET- An Optimal Algorithm for Data Centres to Minimize the Power SupplyIRJET Journal
This document summarizes research on developing an optimal algorithm for data centers to minimize power supply. It discusses using data centers to provide ancillary services like voltage stability and optimal power flow calculations to power systems in exchange for compensation. The document reviews several related works on using genetic algorithms and geographic load balancing to optimize data center power management. It proposes a mutually beneficial ancillary services model and service level agreement between data centers and power systems to maximize profits for both while ensuring reliable power system operation.
This document proposes a method for optimizing power-normalized performance at runtime for heterogeneous application scenarios on many-core platforms. The method uses linear regression to build state-space models of power-normalized performance (IPS/Watt) based on feedback from hardware performance counters. It dynamically adapts DVFS and thread-to-core allocations to find the most energy-efficient performance point for single applications or combinations of CPU/memory-intensive applications executing concurrently. Experiments show the method improves power-normalized performance by up to 23% compared to standard Linux governors.
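The feedback loop described above can be reduced to a toy sketch: the paper builds linear state-space models from hardware counters, while here a simple running per-level estimate of IPS/Watt stands in for the model (class name, frequency levels, and all numbers are illustrative assumptions):

```python
# Illustrative sketch of runtime selection of the most energy-efficient
# DVFS level from performance-counter feedback. The paper builds linear
# state-space models; here a running per-level average of IPS/Watt
# (instructions per second per watt) is a simplified stand-in.

class EfficiencyGovernor:
    def __init__(self, freq_levels):
        # running IPS/Watt estimate and sample count per frequency level
        self.stats = {f: [0.0, 0] for f in freq_levels}

    def update(self, freq, instructions, seconds, watts):
        """Feed one counter sample taken while running at `freq` (MHz)."""
        ips_per_watt = instructions / seconds / watts
        avg, n = self.stats[freq]
        self.stats[freq] = [(avg * n + ips_per_watt) / (n + 1), n + 1]

    def best_level(self):
        """Return the frequency with the highest observed IPS/Watt."""
        return max(self.stats, key=lambda f: self.stats[f][0])

gov = EfficiencyGovernor([800, 1600, 2400])
gov.update(800, 4e8, 1.0, 2.0)     # 2.0e8 IPS/Watt
gov.update(1600, 9e8, 1.0, 4.0)    # 2.25e8 IPS/Watt
gov.update(2400, 1.2e9, 1.0, 8.0)  # 1.5e8 IPS/Watt
print(gov.best_level())  # -> 1600
```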
JAVA 2013 IEEE NETWORKING PROJECT Harvesting aware energy management for time...IEEEGLOBALSOFTTECHNOLOGIES
This document proposes energy management algorithms called Harvesting Aware Speed Selection (HASS) for time-critical wireless sensor networks that use energy harvesting. HASS aims to maximize the minimum energy reserve across all nodes to ensure resilient performance during emergencies or faults. Both a centralized optimal solution and distributed efficient solution are presented. Simulations show HASS achieves significantly higher energy reserves than baseline methods and improves systems' ability to handle emergencies while meeting performance requirements.
This document proposes algorithms for dynamic task scheduling on a DVS system to minimize total system energy consumption. It presents two algorithms:
1) duSYS, which uses optimal speed setting and limited preemption to reduce device standby energy.
2) duSYS_PC, which further reduces preemptions compared to duSYS to achieve additional energy savings.
The algorithms achieve up to 43% energy savings compared to prior work and up to 30% savings compared to a CPU-energy efficient algorithm, showing their effectiveness in minimizing total system energy.
A survey on energy efficient with task consolidation in the virtualized cloud...eSAT Publishing House
Abstract: Cloud computing is a new model of computing that is widely used in today's industry, organizations and society for delivering information technology services as a utility. It enables organizations to reduce both operational and capital expenditure. However, a cloud with underutilized resources still consumes an unacceptable amount of energy compared with fully utilized resources. Many techniques for optimizing energy consumption in the virtualized cloud have been proposed. This paper surveys different energy-efficient models with task consolidation in the virtualized cloud computing environment.
Keywords: Cloud computing, Virtualization, Task consolidation, Energy consumption, Virtual machine
Power consumption and energy management for edge computing: state of the artTELKOMNIKA JOURNAL
Edge computing aims to bring internet-based services and remote computing power close to the user by placing information technology (IT) infrastructure at the network edges. This proximity provides data centers with low-latency and context-aware services. Edge computing power consumption is mainly caused by data centers, network equipment, and user equipment. With edge computing (EC), energy management platforms for the residential, industrial, and commercial sectors are built. Energy efficiency is considered one of the key aspects of edge power constraints. This paper provides the state of the art of power consumption and energy management for edge computing and the computation offloading methods, and, more importantly, highlights the power efficiency of edge computing systems. Furthermore, renewable energy and related concepts are also explored, since no human participation is required to replace or recharge batteries when such energy sources are used. Based on this study, we recommend developing a dynamic system for real-time energy management with assessment of local renewable energy, so that the system is reliable with minimum power consumption. Regarding energy management, we also recommend providing backup energy sources, using more than one energy source, or adopting a hybrid technique.
A probabilistic multi-objective approach for FACTS devices allocation with di...IJECEIAES
This study presents a probabilistic multi-objective optimization approach to obtain the optimal locations and sizes of the static var compensator (SVC) and thyristor-controlled series capacitor (TCSC) in a power transmission network with a high level of wind generation. The uncertainties of wind power generation and correlated load demand are considered and modeled using the point estimate method (PEM). The optimization problem is solved using the multi-objective particle swarm optimization (MOPSO) algorithm to find the best position and rating of the flexible AC transmission system (FACTS) devices. The objective is to maximize system loadability while minimizing power losses and FACTS installation cost. Additionally, a technique based on a fuzzy decision-making approach is employed to extract one of the Pareto optimal solutions as the best compromise. The proposed approach is applied to the modified IEEE 30-bus system. The numerical results demonstrate the effectiveness of the proposed approach and show the economic benefits that can be achieved when the FACTS controllers are considered.
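The fuzzy decision-making step that extracts a best compromise from the Pareto front is commonly implemented with linear membership functions; a sketch with made-up objective values (the loss/cost points below are illustrative, not results from the study):

```python
# Sketch of the fuzzy decision-making step used to pick one "best
# compromise" solution from a Pareto front. Objective values below are
# made up; the linear membership function is the standard choice.

def fuzzy_best_compromise(front):
    """front: list of objective-value tuples (all to be minimized)."""
    n_obj = len(front[0])
    f_min = [min(sol[k] for sol in front) for k in range(n_obj)]
    f_max = [max(sol[k] for sol in front) for k in range(n_obj)]

    def membership(sol):
        mu = []
        for k in range(n_obj):
            if f_max[k] == f_min[k]:
                mu.append(1.0)  # objective is constant over the front
            else:
                mu.append((f_max[k] - sol[k]) / (f_max[k] - f_min[k]))
        return sum(mu)  # ranking only needs the (un-normalized) sum

    return max(front, key=membership)

# hypothetical (power loss in MW, installation cost in M$) points
pareto = [(4.2, 9.0), (5.0, 6.5), (6.1, 5.8)]
print(fuzzy_best_compromise(pareto))  # -> (5.0, 6.5)
```

Each membership value is 1 at an objective's best value on the front and 0 at its worst, so the maximizer balances all objectives rather than favoring one extreme.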
This document provides a review of evolutionary algorithms that have been used to optimize wireless sensor networks (WSNs). It begins with background on WSNs and discusses common issues such as energy efficiency, then reviews heuristic and metaheuristic approaches that have been used for clustering and routing in WSNs. The main part of the document focuses on four commonly used evolutionary algorithms: genetic algorithms, particle swarm optimization, the harmony search algorithm, and the flower pollination algorithm. For each algorithm, it provides an overview, details of how the algorithm works, and pseudo-code. It concludes that these nature-inspired metaheuristic techniques can address WSN challenges such as cluster formation and energy consumption better than classical algorithms.
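As an illustration of the genetic-algorithm pattern the review covers, here is a toy GA that selects cluster heads on a bitstring chromosome; the fitness function and all parameters are illustrative assumptions, not details from the survey:

```python
import random

# Toy genetic algorithm in the style reviewed for WSN clustering: a
# bitstring marks which nodes act as cluster heads; the fitness rewards
# a target head ratio (the 20% target is an illustrative assumption).

def fitness(bits, target_ratio=0.2):
    ratio = sum(bits) / len(bits)
    return -abs(ratio - target_ratio)  # best when ratio hits the target

def evolve(n_nodes=20, pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_nodes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_nodes)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_nodes)        # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children              # parents survive (elitism)
    return max(pop, key=fitness)

best = evolve()
print(sum(best) / len(best))  # head ratio near the 0.2 target
```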
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications, and the increased availability of cloud models and allied development models makes the cloud computing environment easier to use. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency: an optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize overall energy consumption while meeting the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption and execution time with respect to other existing approaches.
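A minimal sketch of the DVFS principle this abstract relies on - choosing the lowest frequency that still meets a deadline, since dynamic power grows superlinearly with frequency (the frequency list, cubic power curve, and constants are illustrative, not from the paper):

```python
# Sketch of DVFS-based allocation of one computationally intensive task:
# pick the lowest frequency that still meets the deadline. The cubic
# power curve (V scaling roughly linearly with f) is an assumption.

FREQS_GHZ = [1.0, 1.5, 2.0, 2.5]

def power_watts(f_ghz):
    return 3.0 * f_ghz ** 3  # assumed cubic power curve

def schedule(cycles_giga, deadline_s):
    """Return (frequency in GHz, energy in joules) meeting the deadline."""
    for f in FREQS_GHZ:  # ascending: first feasible is most efficient
        runtime = cycles_giga / f
        if runtime <= deadline_s:
            return f, power_watts(f) * runtime
    raise ValueError("deadline cannot be met at any frequency")

print(schedule(3.0, 2.5))  # -> (1.5, 20.25)
```

Running the same 3.0 Gcycles at 2.5 GHz would cost 3.0 * 2.5**3 * 1.2 = 56.25 J, so slowing down to the deadline saves well over half the energy.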
Optimal power flow with distributed energy sources using whale optimization a...IJECEIAES
Renewable energy generation is increasingly attractive since it is non-polluting and viable. Recently, the technical and economic performance of power system networks has been enhanced by integrating renewable energy sources (RES). This work focuses on sizing solar and wind production to replace thermal generation, decreasing cost and losses in a large electrical power system. The Weibull and lognormal probability density functions are used to calculate the deliverable power of the wind and solar energy to be integrated into the power system. Due to the uncertain and intermittent nature of these sources, their integration complicates the optimal power flow problem. This paper proposes an optimal power flow (OPF) formulation using the whale optimization algorithm (WOA) to solve for the power system with integrated stochastic wind and solar power. The ideal capacity of the RES along with the thermal generators is determined by taking total generation cost as the objective function. The proposed methodology is tested on the IEEE 30-bus system to ensure its usefulness. The obtained results show the effectiveness of WOA compared with other algorithms such as the non-dominated sorting genetic algorithm (NSGA-II), grey wolf optimization (GWO) and particle swarm optimization-GWO (PSOGWO).
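A small sketch of the stochastic wind model mentioned above: wind speed is drawn from a Weibull distribution and mapped through an assumed piecewise-linear turbine power curve (the Weibull parameters and turbine ratings are illustrative, not values from the paper):

```python
import random

# Sketch of the stochastic wind-power model: wind speed is drawn from a
# Weibull distribution and mapped through a turbine power curve. The
# Weibull parameters and turbine ratings are illustrative assumptions.

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=100.0):
    """Piecewise-linear power curve (MW) common in OPF studies."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v - v_in) / (v_rated - v_in)

def expected_wind_power(scale=9.0, shape=2.0, samples=20000, seed=7):
    """Monte Carlo estimate of mean deliverable wind power (MW)."""
    rng = random.Random(seed)
    total = sum(
        turbine_power(rng.weibullvariate(scale, shape)) for _ in range(samples)
    )
    return total / samples

print(round(expected_wind_power(), 1))  # mean deliverable power in MW
```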
This document summarizes a research paper that proposes using a Real-Coded Genetic Algorithm to design Unified Power Flow Controller (UPFC) damping controllers. The goal is to damp low frequency oscillations in power systems. The paper models a single-machine infinite-bus power system installed with a UPFC. It linearizes the system equations and formulates the controller design as an optimization problem to minimize oscillations. Simulation results comparing the proposed RCGA approach to conventional tuning are presented to demonstrate its effectiveness and robustness in damping power system oscillations.
The main-principles-of-text-to-speech-synthesis-systemCemal Ardil
This document discusses text-to-speech synthesis systems. It provides background on the history and development of such systems over three generations from 1962 to the present. It describes some of the main challenges in developing speech synthesis for different languages. The document then focuses on specifics of the Azerbaijani language and outlines the approach used in the text-to-speech synthesis system developed by the authors, which combines concatenative synthesis and formant synthesis methods.
The feedback-control-for-distributed-systemsCemal Ardil
The document summarizes a study on feedback control synthesis for distributed systems. The study proposes a zone control approach, where the state space is partitioned into zones defined by observable points. Control actions are piecewise constant functions that only change when the system transitions between zones. An optimization problem is formulated to determine the optimal constant control value for each zone. Gradient formulas are derived to solve this using numerical optimization methods. The zone control approach was tested on heat exchanger process control problems and showed more robust performance than alternative methods.
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...Cemal Ardil
This document summarizes a research paper that analyzes the transient behavior of the overflow probability in a queuing system with fixed-size batch arrivals of size B, arrival rate λ, and service rate μ. It introduces a family of polynomials, defined piecewise over the ranges 0 ≤ k ≤ B and k ≥ B + 1, that generalize the Chebyshev polynomials of the second kind and can be used to assess the transient behavior. In the special case B = 1, after the substitution x → 2x in the paper's equation (9), the generating function of the family reduces to that of the Chebyshev polynomials of the second kind.
Sonic localization-cues-for-classrooms-a-structural-model-proposalCemal Ardil
The document describes a proposed structural model for sonic localization cues in classrooms. It discusses the two primary cues for localization - interaural time difference (ITD) and interaural level difference (ILD) - created by sounds reaching each ear. While these cues provide azimuth information, they do not provide elevation information. Elevation information is provided by spectral filtering effects of the head, torso and outer ears (pinnae), known as the head-related transfer function (HRTF). The proposed structural model aims to produce well-controlled horizontal and vertical localization cues through a signal-processing model of the HRTF that mimics how sounds interact with the body. The effectiveness of the model is tested through synthesized spatial audio experiments with human subjects.
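Of the two primary cues, the ITD is often approximated with Woodworth's classic spherical-head formula; a small sketch (head radius and speed of sound are typical textbook values, not figures from the document):

```python
import math

# Sketch of the interaural time difference (ITD) cue using Woodworth's
# spherical-head approximation: ITD = (a/c) * (sin(theta) + theta).
# Head radius and speed of sound are typical textbook values.

HEAD_RADIUS_M = 0.0875   # average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C

def itd_seconds(azimuth_deg):
    """ITD for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source at 90 degrees yields the maximum ITD, roughly 0.66 ms.
print(round(itd_seconds(90) * 1000, 2))  # -> 0.66
```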
This document summarizes a new method for designing robust fuzzy observers for nonlinear systems based on Takagi-Sugeno fuzzy models. The method uses linear matrix inequalities to design observers that minimize the H-norm of the closed loop system, providing a measure of robustness and disturbance attenuation. The observer design method is similar to existing parallel distributed compensation controller design methods, making it possible to adapt controller design techniques for observer design. The observer estimates system states and outputs based on measured outputs and system inputs while attenuating effects of disturbances and uncertainties.
This document discusses evaluating the response quality of heterogeneous question answering systems. It begins by noting the lack of standard evaluation metrics for systems that use natural language understanding and reasoning to answer questions, as opposed to just information retrieval. It proposes a "black-box" approach to evaluate response quality by observing system responses, developing a classification scheme to categorize responses, and assigning scores. As a demonstration, it applies this approach to evaluate three example systems (AnswerBus, START, and NaLURI) on a set of questions about cyberlaw.
The document describes two methods for reducing the order of linear time-invariant systems: Routh approximation and particle swarm optimization (PSO). Routh approximation determines the denominator of the reduced order model using a Routh array, while retaining time moments or Markov parameters to determine the numerator. PSO reduces order by minimizing the integral squared error between responses of the original and reduced models, adjusting numerator and denominator coefficients. The methods are illustrated on examples, with Routh approximation providing stability guarantees when applied to stable systems.
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-designCemal Ardil
This document summarizes a research paper that uses a real-coded genetic algorithm to optimize the design of power system stabilizers. The algorithm is applied to both single-machine and multi-machine power systems. The goal is to minimize rotor speed deviations and improve stability under disturbances. Simulation results show the proposed controller provides effective damping of low frequency oscillations across different operating conditions.
The document presents a method for obtaining the exact probability of error for block codes using soft-decision decoding and the eigenstructure of the code correlation matrix. It shows that under a unitary transformation, the performance evaluation of a block code becomes a one-dimensional problem involving only the dominant eigenvalue and its corresponding eigenvector. Simulation results demonstrate good agreement with the analysis, validating the method for computing the bit error rate of block codes based on the properties of the code correlation matrix.
This document presents an optimal supplementary damping controller design for Thyristor Controlled Series Compensator (TCSC) using Real-Coded Genetic Algorithm (RCGA). TCSC is capable of improving power system stability by modulating reactance during disturbances. The document proposes using a multi-objective fitness function consisting of damping factors and real parts of eigenvalues to optimize the parameters of a TCSC-based supplementary damping controller using RCGA. Simulation results presented show the effectiveness of the proposed controller over a wide range of operating conditions and disturbances.
This document presents a method for generating optimal straight line trajectories in 3D space using an algorithm called the Bounded Deviation Algorithm (BDA). BDA approximates a straight line trajectory between two points by iteratively inserting knot points to minimize the deviation between the actual trajectory and the joint space trajectory. The document provides the mathematical formulation and simulation results of applying BDA to generate a straight line trajectory for a 5-axis articulated robot between two specified points.
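The knot-insertion idea behind BDA can be illustrated with a toy sketch: the real algorithm compares the joint-space-interpolated trajectory against the Cartesian straight line, while here a simple nonlinear function stands in for the kinematics, so only the recursive bounded-deviation subdivision is faithful to the description above:

```python
import math

# Toy sketch of the Bounded Deviation Algorithm idea: keep inserting
# midpoint knots until linear interpolation between knots deviates from
# the true path by less than a tolerance. The "true path" here is a
# stand-in nonlinear mapping (sin), not real robot kinematics.

def true_path(t):
    return math.sin(t)  # stand-in for the mapping along the line

def insert_knots(t0, t1, tol=1e-3):
    """Return knot parameters so interpolation stays within tol."""
    tm = 0.5 * (t0 + t1)
    interpolated = 0.5 * (true_path(t0) + true_path(t1))
    if abs(interpolated - true_path(tm)) <= tol:
        return [t0, t1]
    left = insert_knots(t0, tm, tol)
    right = insert_knots(tm, t1, tol)
    return left + right[1:]  # drop the duplicated midpoint

knots = insert_knots(0.0, math.pi)
print(len(knots))  # more knots appear where curvature is higher
```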
On the-optimal-number-of-smart-dust-particlesCemal Ardil
This document discusses optimizing the number of smart dust particles used to generate weather maps. It addresses two main challenges: 1) how to match signals from smart dust particles to receivers given atmospheric constraints, and 2) what is the optimal number of particles needed to generate precise and cost-effective 3D maps. The document presents an algorithm to optimally match particles to receivers in O(n*m) time by framing it as a maximal bipartite graph matching problem. It also develops mathematics to prove a conjecture that the optimal number of particles is approximately 1/ε, where ε is the drift error.
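The particle-to-receiver assignment described above can be framed as bipartite matching; the standard augmenting-path (Kuhn's) algorithm is one way to compute it - a sketch with made-up visibility lists, not the paper's exact procedure:

```python
# Sketch of matching smart dust particles to receivers as maximal
# bipartite matching, solved with the standard augmenting-path (Kuhn's)
# algorithm. The visibility lists below are illustrative.

def max_bipartite_matching(adj, n_receivers):
    """adj[p] = receivers visible to particle p; returns matching size."""
    match_of = [-1] * n_receivers  # receiver -> matched particle, or -1

    def try_assign(p, seen):
        for r in adj[p]:
            if r not in seen:
                seen.add(r)
                # receiver free, or its particle can be re-routed
                if match_of[r] == -1 or try_assign(match_of[r], seen):
                    match_of[r] = p
                    return True
        return False

    return sum(try_assign(p, set()) for p in range(len(adj)))

# particle -> visible receivers (atmospheric constraints assumed)
visibility = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(visibility, 3))  # -> 3
```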
On the-approximate-solution-of-a-nonlinear-singular-integral-equationCemal Ardil
This document summarizes a study on finding approximate solutions to nonlinear singular integral equations. The study proves the existence and uniqueness of solutions to such equations defined on bounded regions of the complex plane. It then presents a method for finding approximate solutions using an iterative fixed-point principle approach. Nonlinear singular integral equations have many applications in fields like elasticity, fluid mechanics, and mathematical physics. The study contributes to improving methods for solving these important types of equations.
On problem-of-parameters-identification-of-dynamic-objectCemal Ardil
This document discusses methods for identifying parameters of dynamic objects described by systems of ordinary differential equations. Specifically, it addresses problems with multiple initial boundary conditions that are not shared across points.
The paper proposes a new "conditions shift" method to transfer the initial boundary conditions in a way that eliminates differential links and multipoint conditions. This reduces the parameter identification problem to solving either an algebraic system or a quadratic programming problem.
Two cases are considered: case A where the number of conditions equals the number of conditionally free parameters, resulting in a single parameter vector solution. Case B where additional conditions on the parameters are needed in the form of equalities or inequalities, resulting in an optimization problem to select optimal parameter values.
This document summarizes a research article about numerical modeling of gas turbine engines. The researchers developed mathematical models and numerical methods to calculate the stationary and quasi-stationary temperature fields of gas turbine blades with convective cooling. They combined the boundary integral equation method and finite difference method to solve this problem. The researchers proved the validity of these methods through theorems and estimates. They were able to visualize the temperature profiles using methods like least squares fitting with automatic interpolation, spline smoothing, and neural networks. The reliability of the numerical methods was confirmed through calculations and experimental tests of heat transfer characteristics on gas turbine nozzle blades.
New technologies-for-modeling-of-gas-turbine-cooled-bladesCemal Ardil
The document describes new technologies for modeling gas turbine cooled blades, including:
1) Developing mathematical models and numerical methods using the boundary integral equation method (BIEM) and finite difference method (FDM) to calculate the stationary and quasi-stationary temperature field of a blade profile with convective cooling.
2) Using splines, smooth interpolation, and neural networks for visualization of blade profiles.
3) Validating the designed methods through computational and experimental investigations of heat and hydraulic characteristics of a gas turbine nozzle blade.
This document discusses using neuro-fuzzy networks to identify parameters for mathematical models of geofields. It proposes a new technique using fuzzy neural networks that can be applied even when data is limited and uncertain in the early stages of modeling. A numerical example is provided to demonstrate the identification of parameters for a regression equation model of a geofield using a fuzzy neural network structure. The network is trained on experimental fuzzy statistical data to determine values for the regression coefficients that satisfy the data. The technique is concluded to have advantages over traditional statistical methods as it can be applied regardless of the parameter distribution and is well-suited for cases with insufficient data in early modeling stages.
This document presents a new multivariate fuzzy time series forecasting method to predict car road accidents. The method uses four secondary factors (number killed, mortally wounded, died 30 days after accident, severely wounded, and lightly casualties) along with the main factor of total annual car accidents in Belgium from 1974 to 2004. The new method establishes fuzzy logical relationships between the factors to generate forecasts. Experimental results show the proposed method performs better than existing fuzzy time series forecasting approaches at predicting car accidents. Actuaries can use this kind of multivariate fuzzy time series analysis to help define insurance premiums and underwriting.
The document describes a multistage system for monitoring the technical condition of aircraft gas turbine engines. It proposes using soft computing methods like fuzzy logic and neural networks at early stages when data is limited or uncertain. At later stages, it uses statistical analysis methods like analyzing changes in skewness, kurtosis, and correlation coefficients of engine parameters. Fuzzy regression models are developed using these techniques to estimate engine condition. The system provides staged estimation of engine technical conditions from initial to later stages of operation.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
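A toy sketch of the closed-addressing, bounded cache-line-chaining idea (in Python rather than the lock-free native code of a real implementation; the bucket size and all names are illustrative assumptions):

```python
# Illustrative sketch of closed addressing with bounded chaining in the
# spirit of DLHT: each bucket holds a fixed number of slots (one "cache
# line") and overflows into a chained bucket. Sizes are assumptions.

SLOTS_PER_BUCKET = 7  # e.g. 7 key/value slots per cache-line bucket

class BoundedChainTable:
    def __init__(self, n_buckets=64):
        self.buckets = [[] for _ in range(n_buckets)]  # chains of buckets

    def _chain(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        chain = self._chain(key)
        for bucket in chain:                # update in place if present
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)
                    return
        for bucket in chain:                # first bucket with a free slot
            if len(bucket) < SLOTS_PER_BUCKET:
                bucket.append((key, value))
                return
        chain.append([(key, value)])        # chain a fresh cache line

    def get(self, key):
        for bucket in self._chain(key):
            for k, v in bucket:
                if k == key:
                    return v
        return None

    def delete(self, key):
        """Frees the slot instantly, unlike tombstoned open addressing."""
        for bucket in self._chain(key):
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket.pop(i)
                    return True
        return False

t = BoundedChainTable(4)
for i in range(30):
    t.put(i, i * i)
print(t.get(17), t.delete(17), t.get(17))  # -> 289 True None
```

The point of the bounded chain is that lookups touch a predictable, small number of cache lines, and a delete reclaims its slot immediately - the two properties the deck contrasts with open addressing.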
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
World Academy of Science, Engineering and Technology
International Journal of Computer, Information Science and Engineering Vol:3 No:8, 2009

Energy Efficient Resource Allocation in Distributed Computing Systems

Samee Ullah Khan and Cemal Ardil
International Science Index 32, 2009 waset.org/publications/4301

Abstract—The problem of mapping tasks onto a computational grid with the aim to minimize the power consumption and the makespan, subject to the constraints of deadlines and architectural requirements, is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. The proposed game theoretical technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.

Keywords—Energy efficient algorithms, resource allocation, resource management, cooperative game theory.
I. INTRODUCTION

Power management in various computing systems is widely recognized to be an important research problem.
Power consumption of Internet and Web servers accounts for 8% of the US electricity consumption. By reducing the demand for energy, the amount of CO2 produced each year by electricity generators can also be mitigated. For example, generating electricity for the next generation of large-scale data centers would release about 25M tons of additional CO2 each year [19]. Power consumption is also a critical problem in large distributed computing systems, such as computational grids, because they consume massive amounts of power and have high cooling costs. These systems must be designed to meet functional and timing requirements while being energy-efficient. The quality of service delivered by such systems depends not only on the accuracy of computations, but also on their timeliness [36].
A computational grid (or a large distributed computing
system) is composed of a set of heterogeneous machines,
which may be geographically distributed heterogeneous
multiprocessors that exploit task-level parallelism in
applications. Resource allocation in computational grids is
already a challenging problem due to the need to address
deadline constraints and system heterogeneity. The problem
becomes more challenging when power management is an
additional design objective because power consumption of the
system must be carefully balanced against other performance measures.

(S. U. Khan is with the Department of Electrical and Computer Engineering, North Dakota State University, Fargo, ND 58102, USA; phone: 701-231-7615; fax: 701-231-8677; e-mail: samee.khan@ndsu.edu. C. Ardil is with the National Academy of Aviation, Baku, Azerbaijan; e-mail: cemalardil@gmail.com.)

Power management can be achieved by two methods. The Dynamic Power Management (DPM) [27]
approach brings a processor into a power-down mode, where
only certain parts of the computer system (e.g., clock
generation and time circuits) are kept running, while the
processor is in an idle state. The Dynamic Voltage Scaling (DVS) [34] approach exploits the convex relation between the CPU supply voltage and power consumption. The rationale behind the DVS technique is to stretch out task execution time through CPU frequency and voltage reduction.
Power-aware resource allocation using DVS can be
classified as static and dynamic techniques. Static techniques
are applied at design time by allocating and scheduling
resources using off-line approaches, while dynamic
techniques control the runtime behavior of the systems to
reduce power consumption.
The traditional resource allocation and scheduling theory
deals with fixed CPU speed, and hence cannot be directly
applied to this situation. In this paper, we study the problem of power-aware task allocation (PATA) for assigning a set of tasks onto a computational grid in which each machine is equipped with a DVS feature. The PATA problem is formulated as a multi-constrained, multi-objective extension of the Generalized Assignment Problem (GAP). PATA is then solved using a novel solution from cooperative game theory based on the celebrated Nash Bargaining Solution (NBS) [22]; we shall refer to this solution concept as NBS-PATA.
The rest of the paper is organized as follows: A brief
discussion of related work is presented in Section II. The
PATA problem formulation and background information are
discussed in Section III. In Section IV, we model a
cooperative game played among the machines for task
allocation with the objective to minimize power consumption
and makespan, simultaneously. Experimental results and
concluding remarks are provided in Sections V and VI,
respectively.
II. RELATED WORK
Most DPM techniques utilize power management features
supported by hardware. For example, in [4], the authors
extend the operating system's power manager by an adaptive
power manager (APM) that uses the processor's DVS
capabilities to reduce or increase the CPU frequency thereby
minimizing the overall energy consumption [6]. The DVS
technique at the processor-level together with a turn on/off
technique at the cluster-level to achieve high power savings
while maintaining the response time is proposed in [11]. In
[26] the authors introduce a scheme to concentrate the
workload on a limited number of servers in a cluster such that
the rest of the servers can remain switched-off for a longer
time. Other techniques use a utilization bound for scheduling
a-periodic tasks [1], [2] to maintain the timeliness of
processed jobs while conserving power.
The closest technique, which combines device power models to build a whole-system model, has been presented in [13]; our approach, in contrast, aims at building a general framework for autonomic power and performance management in which we bring together and exploit existing device power management techniques from a whole-system perspective. Furthermore, while most power management techniques are either heuristic-based approaches [15], [16], [19], [30] or stochastic optimization techniques [10], [29], [34], we use game theory to seek radically fast and efficient solutions compared with traditional approaches, e.g., heuristics, genetic algorithms, linear and dynamic programming, and branch-and-bound. With game theoretical techniques the solution may not be globally optimal in the traditional sense, but it is optimal under the given circumstances [17].
Another advantage of using game theory is that our overall management strategy allows us to use lower-level information to dynamically tune the high-level management policies, removing the need to execute complex algorithms [18]. The proposed generic management framework not only enables us to experiment with different types of power management techniques, including heuristic approaches, but also provides a mechanism to consolidate power management with other autonomic management objectives pertinent to computational grids, such as fault-tolerance and security.
III. PROBLEM FORMULATION
A. Background Information
The power consumption in CMOS circuits is captured by the following:

P = V^2 × f × C_EFF,    (1)

where V, f, and C_EFF are the supply voltage, clock frequency, and effective switched capacitance of the circuits.
The time to finish an operation is inversely proportional to the clock frequency. This relationship can be extended to gain insight into the energy consumption of the processor by recalling that energy is power times time. Therefore, the energy per operation, E_op, is proportional to V^2. This implies that lowering the supply voltage reduces the energy consumption of the system in a quadratic fashion. However, lowering the supply voltage also decreases the maximum achievable clock speed. More specifically, f is (approximately) linearly proportional to V [5]. Therefore, we have:

P ∝ f^3, and E_op ∝ f^2.    (2)
A computing device’s power consumption can be
significantly reduced by running the device’s CPU at a slower
frequency. This is the key idea behind the DVS technology. In
conventional system design with fixed supply voltage and
clock frequency, clock cycles, and hence energy, are wasted
when the CPU workload is light and the processor becomes
idle. Reducing the supply voltage in conjunction with the
clock frequency eliminates the idle cycles and saves the
energy significantly. We have:

P ∝ t^-3, and E_op ∝ t^-2,    (3)

since f ∝ t^-1, where t is the time to complete an operation. Thus, while the reduction of the supply voltage reduces the energy dissipation, it substantially slows down the time to complete an operation; a balance is needed.
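The scaling relations (2) and (3) can be illustrated numerically. The following is a minimal sketch with all constants normalized to 1 and the linear V-f relation taken at face value; the function names are illustrative and not part of the paper.

```python
def power(f, c_eff=1.0):
    """Dynamic power P = V^2 * f * C_EFF, with V taken linear in f (Eq. 1)."""
    v = f  # normalized linear V-f relation [5]
    return v ** 2 * f * c_eff

def energy_per_op(f, c_eff=1.0):
    """Energy per operation E_op = P * t with t = 1/f, so E_op scales as f^2."""
    return power(f, c_eff) / f

# Halving the clock frequency cuts power by 8x and energy per operation by 4x:
assert power(1.0) / power(0.5) == 8.0
assert energy_per_op(1.0) / energy_per_op(0.5) == 4.0
```

The 8x/4x ratios are exactly the cubic and quadratic laws of relations (2) and (3).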
B. The System Model
We consider the system as a collection of machines that
comprise the computational grid and the collection of tasks.
Machines: Consider a computational grid comprising a set of machines, M = {m1, m2, …, mm}. Assume that each machine is equipped with a DVS module. Each machine is characterized by:
1. The frequency of the CPU, fj, given in cycles per unit time. With the help of DVS, fj can vary from fjmin to fjmax, where 0 < fjmin < fjmax. From the frequency it is easy to obtain the speed of the CPU, Sj, which is simply the inverse of the frequency.
2. The specific machine architecture, A(mj). The architecture would include the type of CPU (Intel, AMD, RISC), bus types and speeds in GHz, I/O, and memory in Bytes.
Tasks: Consider a metatask, T = {t1, t2, …, tn}, where ti is a task. Each task is characterized by:
1. The computational cycles, ci, that it needs to complete. (The assumption here is that ci is known a priori.)
2. The specific machine architecture, A(ti), that it needs to complete its execution.
3. The deadline, di, before which it has to complete its execution.
It is obvious that the metatask, T, also has a deadline, D, which is met if and only if the deadlines of all its tasks are met.
C. Preliminaries
Suppose we are given a computational grid and a metatask,
T, and we are required to map T on the computational grid
such that all the characteristics of the tasks and the deadline
constraint of T are fulfilled. We term this fulfillment as a
feasible task to machine mapping. A feasible task to machine
mapping happens when:
1. Each task ti∈T can be mapped to at least one mj subject
to the fulfillment of all the constraints associated with
each task: Computational cycles, architecture, and
deadline.
2. The deadline constraint of T is also satisfied.
3. The number of computational cycles required by ti to
execute on mj is assumed to be a finite positive number,
denoted by cij. The execution time of ti under a constant speed Sij, given in cycles per second, is tij = cij/Sij.
A task, ti, when executed on machine mj, draws pij amount of power. Lowering the power will lower the CPU frequency and consequently decrease the speed of the CPU, possibly causing ti to miss its deadline. For simplicity, we assume that the overhead of switching the CPU frequency is minimal, and hence it is ignored.
The architectural requirements of each task are recorded as
a tuple with each element bearing a specific requirement. We
assume that the mapping of architectural requirements is a
boolean operation. That is, the architectural mapping is only
fulfilled when all of the architectural constraints are satisfied,
otherwise not.
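The feasibility conditions above can be sketched as a simple check. The data layout (dicts keyed by task and machine ids) and all names are illustrative assumptions, and the speeds are taken as fixed:

```python
def is_feasible(mapping, c, S, d, arch_task, arch_mach):
    """mapping[i] = j maps task ti to machine mj; c[i][j] is the cycle count
    cij, S[i][j] the constant speed Sij, and d[i] the deadline di."""
    for i, j in mapping.items():
        t_ij = c[i][j] / S[i][j]          # execution time tij = cij / Sij
        if arch_task[i] != arch_mach[j]:  # architectural mapping is boolean
            return False
        if t_ij > d[i]:                   # per-task deadline constraint
            return False
    return True                           # T's deadline holds iff all di hold

c = {1: {1: 6e9, 2: 6e9}}
S = {1: {1: 2e9, 2: 1e9}}                 # cycles per second
d = {1: 4.0}
arch_task, arch_mach = {1: "x86"}, {1: "x86", 2: "x86"}
assert is_feasible({1: 1}, c, S, d, arch_task, arch_mach)      # 3.0 s <= 4.0 s
assert not is_feasible({1: 2}, c, S, d, arch_task, arch_mach)  # 6.0 s > 4.0 s
```

Since the metatask deadline D is met exactly when every di is met, checking the per-task constraints suffices here.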
D. Problem Statement
Given is a computational grid and a metatask T. Find the task to machine mapping, where:
“The total power utilized by the computational grid is minimized such that the makespan of the metatask, T, is minimized.”
Mathematically, we can say:

min ∑_{i=1}^{n} ∑_{j=1}^{m} pij xij   such that   min max_{1≤j≤m} ∑_{i=1}^{n} tij xij,   subject to

xij ∈ {0,1}, i = 1, 2, …, n; j = 1, 2, …, m,    (4)

if ti → mj, ∀i, ∀j, such that A(ti) = A(mj), then xij = 1,    (5)

tij xij ≤ di, ∀i, ∀j, xij = 1,    (6)

(tij xij ≤ di) ∈ {0,1},    (7)

∏_{i=1}^{n} (tij xij ≤ di) = 1, ∀i, ∀j, xij = 1.    (8)

Constraint (4) is the mapping constraint: when xij = 1, a task, ti, is mapped to machine, mj. Constraint (5) elaborates on this mapping in conjunction with the architectural requirements; it states that a mapping can only exist if the architecture is matched. Constraint (6) relates to the fulfillment of the deadline of each task, and constraint (7) expresses the boolean relationship between the deadline and the actual execution time of the tasks. Constraint (8) relates to the deadline constraint of the metatask, which holds if and only if the deadlines of all the tasks, di, i = 1, 2, …, n, are satisfied.

This formulation is in the same form as that of a GAP except for constraints (6), (7), and (8). The major difference between PATA and GAP is that the capacity of resources in PATA, in terms of the utilization of power, is defined in groups, whereas in the case of GAP, it is defined individually.

IV. PROPOSED GAME THEORETICAL TECHNIQUE

We consider the system model described in Section III. The cooperative game presented here considers each machine in the computational grid as a player. The goal of the players is to execute tasks in a manner that reduces the overall makespan of the metatask, while keeping the aggregate power consumption to its minimum. If pij is the power consumed and tij is the time taken by machine j to execute task i, then the objective of the cooperative game is to minimize both pij and tij. Hypothetically, we can express this cooperative game (CG1) as:

min ∑_{i=1}^{n} ∑_{j=1}^{m} pij xij   such that   min max_{1≤j≤m} ∑_{i=1}^{n} tij xij,   subject to (4), (5), (6), (7), (8) and

∑_{i=1}^{n} ∑_{j=1}^{m} pij ≤ P,    (9)

pij ≥ 0, i = 1, 2, …, n; j = 1, 2, …, m.    (10)

The conservation condition (9) states that the total power allocated is bounded. Clearly, the power consumption has to be a positive number (10). These constraints make the PATA problem convex. Otherwise, the NBS set of points can grow exponentially, making the extraction of an agreement point an NP-hard problem.

In transforming the problem, the above cooperative game (CG1) is equivalent to the following cooperative game (CG2):

max ∑_{i=1}^{n} ∑_{j=1}^{m} (−pij xij)   such that   max min_{1≤j≤m} ∑_{i=1}^{n} (−tij xij),   subject to (4), (5), (9) and

−(tij xij) ≥ −di, ∀i, ∀j, xij = 1,    (11)

(−(tij xij) ≥ −di) ∈ {0,1},    (12)

∏_{i=1}^{n} (−(tij xij) ≥ −di) = 1, ∀i, ∀j, xij = 1,    (13)

−pij ≤ 0, i = 1, 2, …, n; j = 1, 2, …, m.    (14)

Based on the above, we can formulate a cooperative game as follows. The machines in the computational grid are the players, each having fj(x) = −pij as an objective function. In a cooperative game, all players need to optimize their objective functions simultaneously.

Assume that all players are able to achieve performance strictly superior to the initial performance, that is, the set J = M. The initial performance of player j is given by γj0. This corresponds to the peak power consumption of machine j. This will always be an agreeable point because it is the minimum acceptable performance, but another NBS with greater performance is desired by reducing the power as much
Input: Machines, each initialized to its maximum power, using the DVS levels DVS = {dvs1, dvs2, …, dvsl}.
Output: A task to machine mapping that consumes the minimum power and that has the minimum possible makespan.
1. For i = 1 to n
2. Sort the machines in decreasing order of their current power consumption: p1 ≥ p2 ≥ … ≥ pm.
3. pav = (∑ pj)/m
4. while (pav > pm) do
   a. m = m – 1
   b. pav = (pav – p(m+1)/(m+1)) × ((m+1)/m)
5. If the architectural constraint (5) is met, then goto Step 5a; else goto Step 5c.
   a. If the smallest dvsl that satisfies the deadline constraint (6) is found, then goto Step 5b; else goto Step 6.
   b. Assign task ti to machine mm with dvsl, update pm, and goto Step 2.
   c. If m > 1, then m = m – 1 and goto Step 4; else goto Step 6.
6. Initialize all machines to maximum power and goto Step 2.
Fig. 1 The pseudo-code for the proposed technique
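A simplified, illustrative rendering of the allocation loop of Fig. 1 follows. It keeps the core idea (assign each task to the machine sitting just above the average power draw, at the lowest DVS level that still meets the deadline) but simplifies the bookkeeping: the power update of Step 5b is a crude stand-in, and the fallback Steps 5c-6 are omitted, so all data layouts and names here are assumptions rather than the paper's exact procedure.

```python
def nbs_pata(tasks, machines, dvs_levels, cycles, deadline):
    """tasks: task ids; machines: dict id -> current power draw;
    dvs_levels: power fractions, e.g. [1.0, 0.75, 0.5, 0.25];
    cycles[i][j]: execution time of task i on machine j at full speed;
    deadline[i]: deadline of task i. Returns mapping i -> (machine, level)."""
    mapping = {}
    for i in tasks:
        # Step 2: machines in decreasing order of current power consumption.
        order = sorted(machines, key=machines.get, reverse=True)
        # Steps 3-4: find the machine sitting just above the average power.
        avg = sum(machines.values()) / len(machines)
        candidates = [j for j in order if machines[j] >= avg] or order[:1]
        j = candidates[-1]
        # Step 5a: lowest-power DVS level that still meets the deadline;
        # execution time is assumed to scale inversely with the level.
        for level in sorted(dvs_levels):
            if cycles[i][j] / level <= deadline[i]:
                mapping[i] = (j, level)
                machines[j] += level  # Step 5b: crude power-draw update
                break
    return mapping
```

Because the fallback steps are omitted, a task with no feasible DVS level is silently skipped in this sketch; the full algorithm instead retries on other machines and, failing that, resets all machines to maximum power.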
TABLE I
VARIOUS PARAMETRIC VALUES FOR THE COEFFICIENT OF VARIATION METHOD

Variable | Values
Vtask    | {0.10, 0.50}
μtask    | 2×10^12
αtask    | {2, 10}
βtask    | {2×10^11, 1×10^12}
Vmach    | {0.10, 0.50}
μmach    | [1.315×10^10, 6.5214×10^14]
αmach    | {2, 10}
βmach    | {[1.32×10^9, 6.58×10^9], [6.52×10^13, 3.26×10^14]}
as possible.

Theorem 1 (NBS-PATA): The NBS for the cooperative PATA game is determined by solving the following optimization problem:

max ∑_{i=1}^{n} ∑_{j=1}^{m} (γj0 xij − pij xij)   such that   max min_{1≤j≤m} ∑_{i=1}^{n} (−tij xij),   subject to (4), (5), (9), (10), (11), (12), (13).

Proof: Consider fj(γj) = −pij, which is concave and bounded above, and hence guarantees a solution, but with higher complexity. The set of solutions determined by the constraints is convex and compact, and hence a solution is not always guaranteed, but it has a lower complexity. Using the fact that the fj(γj) = −pij are 1-1 functions of pij, the result follows. ■

Now, assume that the machines in the computational grid are sorted in decreasing order of their current power consumption. Given such a list, we assign a task to the machine that is currently running on a power just above the weighted average power consumption. The assignment warrants adjusting the power consumption to the appropriate DVS level in DVS = {dvs1, dvs2, …, dvsl} at which the machine can guarantee the deadline associated with the task. This procedure is applied until a feasible solution is found. Based on Theorem 1, we derive an algorithm (called NBS-PATA) for obtaining the NBS for the cooperative PATA game. The pseudo-code for NBS-PATA is depicted in Fig. 1.

Theorem 2 (Correctness of NBS-PATA): The power adjustments, pij, computed by the NBS-PATA technique solve the optimization problem in Theorem 1.

Proof: The while loop in Step 4 finds the minimum index m for which

γm < ((∑_{i=1}^{n} ∑_{j=1}^{m} pij) − P) / m.    (15)

If |J| = |M|, then all the γj are positive; applying Theorem 1, the proof follows. If |J| < |M| (which may be the most probable case), then we get

γ|J| < ((∑_{i=1}^{n} ∑_{j=1}^{m} pij) − P) / |J|,    (16)

when the γj are sorted as γ1 ≥ γ2 ≥ … ≥ γ|J|. Now, we know that γ|J| ≠ 0. We also know that NBS-PATA will not allocate any tasks to γ|J| in the while loop. The only remaining concern is the integrity of the while loop. That can be addressed by showing that, at any given instance, the while loop always produces a task to machine assignment that is covered by Theorem 1. By definition, if the while loop is stopped at an instance where the first k machines have been dealt with, then those k machines correspond to the k fastest machines that bear the power consumption of γ1, γ2, …, γk. But we know from Theorem 1 that a machine j’s best strategy is:

γj = ((∑_{i=1}^{n} ∑_{k∈M, k≠j} pik) − P) / m,    (17)

and in terms of power it is given as:

pij = ((∑_{i=1}^{n} ∑_{k∈M, k≠j} pik) − P) / m.    (18)

This completes the proof. ■

The execution of the NBS-PATA technique is in O(n m log(m)). The complexity is determined by observing that Step 2 in Fig. 1 takes O(m log(m)), and Step 2 is enclosed in Step 1, which takes O(n). This polynomial time complexity is possible because the objective functions are convex; however, in general, determining an NBS is an NP-hard problem [25].
V. EXPERIMENTS AND DISCUSSIONS
In this section, we demonstrate the quality of solutions
produced by our NBS-PATA technique. The simulation study
assumed synthetic task sets (explained in the subsequent text).
We set forth two major goals for our experimental study:
1. To measure and compare the performance of NBS-PATA against 1) the optimal solution and 2) the min-min heuristic [8].
2. To measure the impact of system parameter variations, such as the tightness of the task related constraints.
Based on the size of the problems, the experiments were divided into two parts. For small size problems, we used an Integer Linear Programming tool called LINDO [28]. LINDO is useful for obtaining optimal solutions, provided the problem size is relatively small. Hence, for small problem sizes, the performance of NBS-PATA is compared against 1) the optimal solution using LINDO and 2) the min-min heuristic [8]. The LINDO implementation and the min-min heuristic do not consider power as an optimization constraint; however, they are very effective for the optimization of the makespan. Thus, the comparison provides us with a wide range of results: on one extreme we have the optimal algorithm, and on the other a technique that scales well with the corresponding increase in the problem size. For large size problems, it becomes impractical to compute the optimal solution with LINDO. Hence, we only consider comparisons against the min-min heuristic.
The system heterogeneity is captured by the distribution of the number of CPU cycles, cij, on different mj. Let C denote the matrix composed of the cij, where i = 1, 2, …, n and j = 1, 2, …, m. The C matrix was generated using the coefficient of variation method described in [3]. The method requires various inputs, which are reported in Table I. The deadline di of task ti was generated using the method described in [36]. Let wi be the largest value among the elements in the i-th row of C, and let wi's corresponding machine be denoted by m0. Let Z = n/m, where n is the number of tasks and m is the number of machines. di is calculated as K × (wi/Sm0) × Z, where K is a pre-specified positive value for adjusting the relative deadlines of tasks and Sm0 is the speed of machine m0 running at a DVS level of 100%. For this study, we keep the architectural affinity requirements confined to memory. (Adding other requirements, such as I/O, processor type, etc., will have no effect on our experimental setup or theoretical results.) Each machine is assigned a memory at random from within the range [500-5000] GB, while each task is assigned a corresponding memory requirement at random from within the range [20-50] MB.
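The deadline-generation rule di = K × (wi/Sm0) × Z can be sketched directly; the matrix values and speeds below are illustrative, not those of Table I.

```python
def deadlines(C, speeds, K):
    """C[i][j] = cycle count cij; speeds[j] = speed of machine j at the
    100% DVS level; K adjusts the relative tightness of the deadlines."""
    n, m = len(C), len(C[0])
    Z = n / m                        # Z = n/m, tasks per machine
    d = []
    for row in C:
        w = max(row)                 # wi: largest cycle count in row i
        m0 = row.index(w)            # machine m0 holding wi
        d.append(K * (w / speeds[m0]) * Z)
    return d
```

For example, with C = [[4e9, 2e9], [1e9, 3e9]], speeds = [2e9, 1e9], and K = 1.5, the rule yields deadlines of 3.0 and 4.5 seconds; larger K loosens the deadlines proportionally.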
For small size problems, the number of machines was fixed
at 5, while the number of tasks varied from 20 to 40. The
number of DVS levels per machine was set to 4. The
frequencies of the machines were randomly mapped from
within the range [200MHz-2000MHz]. We assumed that the
potential difference of 1mV across a CMOS circuit generates a
frequency of 1MHz. For large size problems, the number of
machines was fixed at 16, while the number of tasks varied
from 1000 to 5000. The number of DVS levels per mj was set
to 8. Other parameters were the same as those for small size
problems.
Min-min: The Min-min heuristic begins with the set U of
all unmapped tasks. Then, the set of minimum completion
times, CT = {cti | cti = minj tij, for each i ∈ U}, is computed.
Next, the task with the overall minimum completion time in
CT is selected and assigned to the corresponding machine.
Lastly, the newly mapped task is removed from U, and the
process repeats until all tasks are mapped.
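The Min-min procedure just described can be written directly. This sketch assumes a simple completion-time model in which task i's completion time on machine j is the machine's current ready time plus the execution-time entry tij of a given matrix, with ready times accumulating as tasks are assigned:

```python
def min_min(T):
    """Min-min mapping heuristic.

    T : n x m matrix, T[i][j] = execution time of task i on machine j.
    Returns (assignment, makespan), where assignment[i] is the machine
    index task i is mapped to.
    """
    n, m = len(T), len(T[0])
    ready = [0.0] * m                  # current finish time of each machine
    unmapped = set(range(n))           # the set U of unmapped tasks
    assignment = [None] * n
    while unmapped:
        # minimum completion time of every unmapped task over all machines,
        # then the overall minimum of those (the set CT in the text)
        ct, i, j = min(
            (ready[j] + T[i][j], i, j) for i in unmapped for j in range(m)
        )
        assignment[i] = j              # map the winning task
        ready[j] = ct                  # machine j is now busy until ct
        unmapped.remove(i)             # remove the mapped task from U
    return assignment, max(ready)

# toy instance: 4 tasks, 2 machines
mapping, makespan = min_min([[3, 5], [4, 2], [6, 6], [1, 9]])
print(mapping, makespan)               # → [0, 1, 1, 0] 8.0
```

Each pass scans all unmapped tasks against all machines, so the heuristic runs in O(n²m) time, which is why its absolute running times in Tables II and III stay small.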
Comparative results: The experimental results for small
size problems with K equal to 1.5 and 1.0 are reported in Figs.
2 and 3. These figures show the ratio of the makespan
obtained from each of the two techniques to that of the
optimal. The plots clearly show that the NBS-PATA technique
performs extremely well, achieving a makespan within
10%-15% of the optimal even when K was set to the very tight
bound of 1.0.
For large problem instances, first, we compare the
makespan identified by the min-min and the NBS-PATA
techniques. Since the min-min heuristic does not optimize
power consumption, we compared the min-min with a version
of NBS-PATA that ran at full power, and also with the
(original) version that optimized power. Figs. 4 and 5
show the relative performance of the techniques with various
values of K, Vtask, and Vmach. The results indicate that
NBS-PATA outperforms the min-min technique in identifying a
smaller makespan when power is not considered as an
optimization criterion. The performance of NBS-PATA is
notably superior to the min-min technique when the deadline
constraints are relatively loose. It can also be observed that
NBS-PATA, when considering power as an optimization
criterion, identifies a task-to-machine mapping that produces a
makespan within 5%-10% of the min-min technique. It
was noticed that the relative performance of the min-min
technique was much better for large size problems, compared
with small size problems, because with the increase in the
6. World Academy of Science, Engineering and Technology
International Journal of Computer, Information Science and Engineering Vol:3 No:8, 2009
International Science Index 32, 2009 waset.org/publications/4301
Fig. 2 Makespan ratio of min-min and NBS-PATA over optimal (K = 1.5)
Fig. 3 Makespan ratio of min-min and NBS-PATA over optimal (K = 1.0)
Fig. 4 Makespan
Fig. 5 Makespan
Fig. 6 Power consumption
Fig. 7 Power consumption
TABLE II
AVERAGE EXECUTION TIME (SEC.) OF THE THREE TECHNIQUES

Problem size    K = 1.5, m = 5                              K = 1.0, m = 5
No. of tasks    20      25      30      35      40          20      25      30      35      40
Optimal         0.2732  0.2934  0.2934  0.2934  0.2934      0.2934  0.3470  0.4183  0.4938  0.5290
Min-Min         0.0032  0.0031  0.0031  0.0031  0.0031      0.0031  0.0042  0.0045  0.0052  0.0057
NBS-PATA        0.0673  0.0793  0.0793  0.0793  0.0793      0.0793  0.0872  0.0971  0.1004  0.1159
TABLE III
AVERAGE EXECUTION TIME (SEC.) OF THE THREE TECHNIQUES

Problem size    K = 0.50, m = 16                            K = 0.25, m = 16
No. of tasks    1000    2000    3000    4000    5000        1000    2000    3000    4000    5000
Min-Min         0.1395  0.1470  0.1537  0.1682  0.1753      0.1534  0.2985  0.3408  0.4485  0.5230
NBS-PATA        5.8401  8.3245  4.8314  6.4984  8.7219      5.7817  6.4629  7.3901  8.0293  9.6038
size of the C matrix, the probability of obtaining larger values
of wi also increases. Moreover, the relative performance of
NBS-PATA was also much better for large size problems,
compared with small size problems, because the large problem
size has twice as many DVS levels as the small problem size.
Next, we compare the power consumption of the two
techniques. Figs. 6 and 7 reveal that, on average, the
NBS-PATA technique uses 60%-65% less power than the
min-min technique. That is a significant saving considering
that the makespan identified by NBS-PATA is within 5%-10%
of the makespan identified by the min-min technique.
Lastly, we analyze the running times for both small and large
problem sizes. For completeness, the running time of the
optimal for the small problem sizes is also presented for
comparison. It can be seen that as the number of tasks
increases, the ratio of the running time of NBS-PATA to that
of the min-min heuristic decreases: from 22 to 17 for the small
problem sizes, and from 56 to 18 for the large problem sizes.
The results are depicted in Tables II and III.
VI. CONCLUSIONS
This paper presented a power-aware resource allocation
strategy in computational grids for multiple tasks. The
problem was formulated as an extension of the Generalized
Assignment Problem. A solution from cooperative game
theory based on the concept of Nash Bargaining Solution
(NBS-PATA) was proposed for this problem. We proved
through rigorous mathematical proofs that the proposed
NBS-PATA can guarantee Pareto-optimal solutions in only
O(nm log(m)) time (where n is the number of tasks and m is the
number of machines). The solution quality of NBS-PATA
was experimentally compared against the optimal (for small
problem sizes) and the min-min heuristic (for large problem
sizes). The experimental results confirm the superior
performance of the proposed scheme: reduced power
consumption and makespan compared with the min-min
heuristic, and solution quality competitive with the optimal.
We plan to study power-aware resource allocation problems
that fulfill application-specific demands. We also plan to
investigate power-aware resource allocation techniques that
cater for dynamic environments and allocate resources at
run time. Moreover, modeling, measuring, and optimizing
the communication, the messages, and the energy costs offer
new challenges, especially in sensor networks, primarily
because the communication is carried out in an ad hoc
environment. We plan on extending our current study to
dynamic, real-time, and ad hoc environments.
REFERENCES
[1] T. Abdelzaher and V. Sharma, "A Synthetic Utilization Bound for Aperiodic Tasks with Resource Requirements," in Euromicro Conference on Real-Time Systems, 2003.
[2] T. Abdelzaher and C. Lu, "Schedulability Analysis and Utilization Bounds for Highly Scalable Real-time Services," in IEEE Real-Time Technology and Applications Symposium, 2001.
[3] S. Ali, H. J. Siegel, M. Maheswaran, D. Hensgen, and S. Ali, "Task Execution Time Modeling for Heterogeneous Computing Systems," in 9th Heterogeneous Computing Workshop, May 2000, pp. 185-199.
[4] Application Specific and Automatic Power Management based on Whole Program Analysis, available at: http://cslab.snu.ac.kr/~egger/apm/final-report.pdf, 2004.
[5] H. Aydin, R. Melhem, D. Mosse, and P. Mejya-Alvarez, "Dynamic and Aggressive Scheduling Techniques for Power-aware Real-time Systems," in IEEE Real-Time Systems Symposium, Dec. 2001, pp. 95-105.
[6] R. Bianchini and R. Rajamony, "Power and Energy Management for Server Systems," IEEE Computer, vol. 37, no. 11, pp. 68-74, 2004.
[7] S. J. Brams, Theory of Moves, Cambridge University Press, New York, USA, 1994.
[8] T. D. Braun, S. Ali, H. J. Siegel, and A. A. Maciejewski, "Using the Min-Min Heuristic to Map Tasks onto Heterogeneous High-performance Computing Systems," in 2nd Symposium of the Los Alamos Computer Science Institute, Oct. 2001.
[9] A. P. Chandrakasan, S. Sheng, and R. W. Brodersen, "Low Power CMOS Digital Design," IEEE Journal of Solid-State Circuits, vol. 27, no. 4, pp. 473-484, 1992.
[10] E. Chung, L. Benini, A. Bogliolo, and G. Micheli, "Dynamic Power Management for Non-stationary Service Requests," IEEE Transactions on Computers, vol. 51, no. 11, pp. 1345-1361, 2002.
[11] E. N. Elnozahy, M. Kistler, and R. Rajamony, "Energy-Efficient Server Clusters," in 2nd Workshop on Power-Aware Computing Systems, 2002.
[12] J. Greenberg, The Theory of Social Situations: An Alternative Game-Theoretic Approach, Cambridge University Press, Cambridge, UK, 1990.
[13] S. Gurumurthi, A. Sivasubramaniam, M. J. Irwin, N. Vijaykrishnan, M. Kandemir, T. Li, and L. K. John, "Using Complete Machine Simulation for Software Power Estimation: The SoftWatt Approach," in International Symposium on High Performance Computer Architecture, 2000, pp. 141-150.
[14] I. Hong, G. Qu, M. Potkonjak, and M. B. Srivastava, "Synthesis Techniques for Low-power Hard Real-time Systems on Variable Voltage Processors," in IEEE Real-Time Systems Symposium, 1998, pp. 178-187.
[15] C. Hsu and U. Kremer, "The Design, Implementation, and Evaluation of a Compiler Algorithm for CPU Energy Reduction," in International Conference on Programming Language Design and Implementation, 2003.
[16] C.-H. Hwang and A. Wu, "A Predictive System Shutdown Method for Energy Saving of Event-driven Computation," in International Conference on Computer-Aided Design, 1997, pp. 28-32.
[17] S. U. Khan and I. Ahmad, "A Powerful Direct Mechanism for Optimal WWW Content Replication," in 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2005.
[18] S. U. Khan and I. Ahmad, "Non-cooperative, Semi-cooperative, and Cooperative Games-based Grid Resource Allocation," in 20th IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2006.
[19] B. Khargharia, S. Hariri, F. Szidarovszky, M. Houri, H. El-Rewini, S. U. Khan, I. Ahmad, and M. S. Yousif, "Autonomic Power & Performance Management for Large-Scale Data Centers," NSF Next Generation Software Program Workshop, in 21st IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2007.
[20] D. G. Luenberger, Linear and Nonlinear Programming, Addison-Wesley, 1984.
[21] C. Montet and D. Serra, Game Theory and Economics, Palgrave, 2003.
[22] J. Nash, "The Bargaining Problem," Econometrica, vol. 18, pp. 155-162, 1950.
[23] J. Nash, "Non-Cooperative Games," Annals of Mathematics, vol. 54, pp. 286-295, 1951.
[24] M. J. Osborne and A. Rubinstein, A Course in Game Theory, MIT Press, Massachusetts, USA, 1994.
[25] C. H. Papadimitriou and M. Yannakakis, "On the Approximability of Trade-offs and Optimal Access of Web Sources," in 41st IEEE Symposium on Foundations of Computer Science, 2000, pp. 86-92.
[26] E. Pinheiro, R. Bianchini, E. V. Carrera, and T. Heath, "Load Balancing and Unbalancing for Power and Performance in Cluster-Based Systems," in Workshop on Compilers and Operating Systems for Low Power, 2001.
[27] Q. Qiu and M. Pedram, "Dynamic Power Management Based on Continuous-time Markov Decision Processes," in 36th Design Automation Conference, 1999, pp. 555-561.
[28] L. Schrage, Linear, Integer, and Quadratic Programming with LINDO, The Scientific Press, USA, 1986.
[29] T. Simunic, "Dynamic Management of Power Consumption," in Power Aware Computing, pp. 101-125, 2002.
[30] M. Srivastava, A. Chandrakasan, and R. Brodersen, "Predictive System Shutdown and Other Architectural Techniques for Energy Efficient Programmable Computation," IEEE Transactions on VLSI Systems, vol. 4, pp. 42-55, 1996.
[31] A. Stefanescu and M. W. Stefanescu, "The Arbitrated Solution for Multiobjective Convex Programming," Rev. Roum. Math. Pure Applicat., vol. 29, pp. 593-598, 1984.
[32] T. E. Truman, T. Pering, R. Doering, and R. W. Brodersen, "The InfoPad Multimedia Terminal: A Portable Device for Wireless Information Access," IEEE Transactions on Computers, vol. 47, no. 10, pp. 1073-1087, 1998.
[33] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, 3rd ed., Princeton University Press, 1953.
[34] M. Weiser, B. Welch, A. J. Demers, and S. Shenker, "Scheduling for Reduced CPU Energy," in Symposium on Operating Systems Design and Implementation, Nov. 1994, pp. 13-23.
[35] H. Yaiche, R. R. Mazumdar, and C. Rosenberg, "A Game Theoretic Framework for Bandwidth Allocation and Pricing in Broadband Networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667-678, 2000.
[36] Y. Yu and V. K. Prasanna, "Power-Aware Resource Allocation for Independent Tasks in Heterogeneous Real-Time Systems," in 9th IEEE International Conference on Parallel and Distributed Systems, 2002.