This document discusses green computing and proposes a simulator for evaluating green scheduling algorithms. It begins with background on green computing and why it is important. It then outlines the key components of the simulator, including: a computation model using DAGs, an energy consumption model based on CPU throttling levels, and an abstraction for energy-aware schedulers. The document describes classes for modeling cores, throttling levels, and the overall simulation framework, which is designed to be extensible to different scheduling algorithms, core types, and energy models. The goal is to simulate and evaluate scheduling heuristics to minimize energy consumption while meeting performance targets.
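The DAG-plus-throttling-level abstraction described above can be sketched in a few lines of Python; the class and attribute names below are illustrative, not taken from the simulator itself, and a single core running tasks in topological order stands in for the full scheduler abstraction:

```python
# Minimal sketch of the simulator's core abstractions: tasks in a DAG,
# a core with discrete throttling levels, and a run that reports both
# makespan and energy. Names and numbers are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ThrottlingLevel:
    frequency_ghz: float   # execution speed at this level
    power_watts: float     # power drawn while busy at this level

@dataclass
class Task:
    name: str
    cycles: float                          # work, in giga-cycles
    deps: list = field(default_factory=list)

def simulate(tasks, level):
    """Run a DAG of tasks on one core at a fixed throttling level.
    Returns (makespan_seconds, energy_joules)."""
    finished, t = {}, 0.0
    pending = list(tasks)
    while pending:
        # pick any task whose dependencies have all finished (topological order)
        ready = next(x for x in pending if all(d in finished for d in x.deps))
        t += ready.cycles / level.frequency_ghz
        finished[ready.name] = t
        pending.remove(ready)
    return t, t * level.power_watts

low  = ThrottlingLevel(frequency_ghz=1.0, power_watts=10.0)
high = ThrottlingLevel(frequency_ghz=2.0, power_watts=30.0)
dag = [Task("a", 2.0), Task("b", 4.0, deps=["a"])]
```

Running the same DAG at both levels exposes the trade-off an energy-aware scheduler navigates: the high level halves the makespan but costs 1.5x the energy.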
SPACEMATE® Presentation | KTH, 27 October 2011 | Brooks Patrick
The document discusses the concepts of density and urban form and their relationship to sustainability over time. It provides examples from Amsterdam between 1650 and 2000, showing how density and the urban footprint have changed historically. Key metrics for measuring density are introduced, including dwellings per hectare, floor space index, plot ratio, and open space ratio. Diagrams illustrate the relationships between these metrics and how varying one factor impacts the others. The question of optimal density is traced through historical figures such as Raymond Unwin and Jane Jacobs. Higher-density development is linked to lower energy consumption. Networks and connectivity are also covered in relation to density.
This document outlines the key components of an informed consent form, including:
1) The purpose, procedures, risks, and expected duration of the research study.
2) Issues around ownership of specimens, possible benefits, and financial considerations.
3) Available treatment alternatives, medical treatment for adverse events, and confidentiality procedures.
4) The rights of subjects to terminate participation or have significant new findings communicated to them.
5) Contact information for questions about the study or rights as a research subject.
AiC BIM Body of Knowledge (BOK) Delphi Study Status Report | Fresno State
We presented this status report at the 11th BIM Education Symposium, hosted at Autodesk's Boston headquarters office. Other BIM education research articles presented at this symposium are available at: https://coremng.dcp.ufl.edu/bimeducation2017/2017AiCProceedings.pdf
This document outlines the proposed methodology for installing an offshore pipeline including 5 key stages: 1) Onshore pipeline installation, 2) Shore approach construction, 3) Offshore pipelaying using a pipelay barge, 4) Riser installation at an offshore platform, and 5) Post-lay trenching of the subsea pipeline. The document provides details on the pipeline route, scope of work, project schedule, management team, pipelay barge specifications, and methodology for each construction stage.
Lecture of 17 December 2015 by Ing. Konstantinos Gkoumas for the Corso di Costruzioni Metalliche (Steel Structures course) of Prof. Ing. Franco Bontempi, Faculty of Civil and Industrial Engineering, Universita' degli Studi di Roma La Sapienza.
The document outlines key events in the history of nuclear power, including Ernest Rutherford splitting the atom in 1919, Enrico Fermi's neutron experiments of 1934, and Fermi's construction of the first nuclear reactor in 1942. It then discusses the first nuclear reactors to generate electricity in 1951 and 1954, and major accidents such as Three Mile Island in 1979 and Chernobyl in 1986, showing both technological progress and public concern over nuclear technology.
Plant layout of a warehouse for a large toy company | Pietro Galli
The project dealt with the definition of all the characteristics of a warehouse using the principles of Kaizen and Lean manufacturing. The main points of the analysis were the following:
• Definition of the optimal geographical area in which to build the warehouse, considering the connections among the cities to be served.
• Definition of the flow of goods.
• Definition of the industrial shelving needed, considering the dimensions and movement frequency of the goods.
• Definition of the plant layout of the warehouse, considering the principles of Lean manufacturing and safety norms.
• Definition of the forklift trucks needed.
• Definition of the personnel needed.
• Definition of the proper heating system.
• Definition of the lighting system.
The calculations were performed in Excel, and the plant layout was drawn in AutoCAD and SolidWorks.
This document discusses energy aware networking approaches. It outlines link level approaches like sleeping mode, Energy Efficient Ethernet, and rate adaptation. It also discusses proxying approaches, infrastructure level approaches like energy aware routing, and energy aware applications such as Green TCP/IP and Green BitTorrent. The document provides details on an on-off algorithm for link sleeping that puts interfaces to sleep when buffer occupancy is low and ensures enough time for remaining packets with high probability.
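The buffer-occupancy rule behind the on-off link-sleeping algorithm can be sketched as follows; the threshold and drain rate are illustrative assumptions, not values from the document, and the probabilistic wake-timing guarantee is reduced here to a deterministic drain-time estimate:

```python
# Hedged sketch of the on-off link-sleeping idea: put the interface to
# sleep when buffer occupancy is low; otherwise stay awake at least long
# enough to drain the remaining packets at the link's service rate.
def next_state(buffer_pkts, sleep_threshold=2, drain_rate_pps=1000):
    """Return ("sleep", 0.0) when the buffer is nearly empty, else
    ("awake", seconds) with the minimum time to drain the buffer."""
    if buffer_pkts < sleep_threshold:
        return "sleep", 0.0
    return "awake", buffer_pkts / drain_rate_pps
```

A controller would re-evaluate this decision on each packet arrival or timer tick, trading a small queueing delay for idle-time energy savings.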
Smart Computing: Cloud + Mobile + Social | Romin Irani
Smart computing is defined as the integration of hardware, software, and people, enabled by cloud, mobile, and social technologies that are disrupting existing business models. Opportunities exist for developers in these areas, but challenges include privacy, security, interoperability, and developing a skilled workforce for an increasingly mobile and data-driven business environment. The retail sector is given as an example domain that can leverage location data, offers, analytics, and social/mobile integration to enhance the customer experience.
Virtualization allows multiple operating systems and applications to run on a single computer using a hypervisor. It is considered green computing because it decreases energy usage and toxic waste by reducing the number of physical devices needed. There are several types of virtualization including server, application, network, storage, and desktop virtualization. Server virtualization specifically allows many virtual servers to run on a single physical server, decreasing energy usage and saving floor space. Overall, virtualization improves hardware utilization and flexibility while lowering costs and environmental impact through reduced resource consumption.
The document discusses ubiquitous computing and how mobile devices are enabling this vision. It defines ubiquitous computing as computers embedded everywhere and interacting seamlessly with users and the digital environment. Mobile phones have evolved beyond basic calling to incorporate additional functions and context awareness. As technologies like wireless communication, location services, and embedded sensors advance, mobile devices are playing a key role in ubiquitous computing by blending into the background yet augmenting human abilities. The document explores ideas like hardware becoming more invisible through displays in eyewear or phones in clothing, and applications adapting based on context like time of day. It encourages thinking of new ways for devices and applications to leverage context and interact intuitively in ubiquitous computing environments.
This document discusses power management techniques in green computing. It begins with an introduction to the Advanced Configuration and Power Interface (ACPI) standard, which allows an operating system to control hardware power savings features. It then discusses power supply efficiency and opportunities to optimize power usage in I/O devices, storage, processors, and operating systems. Specific examples are given around monitor power consumption based on brightness, contrast and display type. Testing showed processor power consumption differences between idle and peak loads were smaller than for graphics cards. The document concludes that power management has significant scope through optimized usage of processors and displays via the operating system.
In simple terms, Li-Fi can be thought of as light-based Wi-Fi: it uses light instead of radio waves to transmit information. Instead of Wi-Fi modems, Li-Fi uses transceiver-fitted LED lamps that can light a room as well as transmit and receive information. Since ordinary light bulbs are used, there can technically be any number of access points.
This technology uses a part of the electromagnetic spectrum that is still largely unutilized: the visible spectrum. Light has been part of our lives for millions of years and has no major ill effects. Moreover, this spectrum offers 10,000 times more capacity than radio, and counting only the bulbs already in use, it also provides roughly 10,000 times more infrastructure availability globally.
This document provides an overview of Li-Fi technology through a presentation on the topic. It discusses the history of Li-Fi, how it works by transmitting data through LED light, its advantages over Wi-Fi such as higher bandwidth and more secure communication through visible light. Example applications are given such as using traffic lights and street lamps to transmit data. Challenges for Li-Fi are also noted, such as the need for line of sight transmission and potential interference from other light sources.
This document discusses Li-Fi technology, which uses LED light bulbs to transmit data by varying the intensity of light faster than what the human eye can detect. Li-Fi was pioneered in the 1990s and demonstrated by Harald Haas in 2011. It provides several advantages over Wi-Fi such as higher speed potential and no interference with radio frequencies. Li-Fi works by encoding binary data in the on-off states of an LED and can achieve speeds of over 100 Mbps. Potential applications include use in planes, hospitals, and as public internet hotspots through street lamps. However, challenges include light not passing through solid objects and interference from other light sources.
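The idea of encoding binary data in the on-off states of an LED can be illustrated with a toy encoder/decoder; the function names are ours, and real Li-Fi modulation (and its clock rates) is considerably more elaborate than this bit-per-state mapping:

```python
# Toy on-off keying illustration: each bit of a message maps to an LED
# state (1 = on, 0 = off), flipped far faster than the eye can follow.
def to_led_states(message: bytes):
    """Return the LED on/off sequence (MSB first) encoding the message."""
    states = []
    for byte in message:
        for i in range(7, -1, -1):
            states.append((byte >> i) & 1)
    return states

def from_led_states(states):
    """Recover the original bytes from a received on/off sequence."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for bit in states[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

A photodiode sampling the light at the symbol rate would recover the state sequence, after which decoding is a pure bit-packing exercise.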
This document describes a new approach for developing a high-level synthesis tool for low power VLSI designs called Gaut_w. Gaut_w is composed of low power modules that are used before an architectural synthesis tool to optimize designs at the behavioral and architectural levels for power savings. The key modules of Gaut_w are high level power estimation, module selection to choose optimal operators and supply voltages, optimization criteria to minimize area and power, and operator assignment to decrease switching activity. Experimental results on discrete wavelet transform algorithms show power savings from using Gaut_w.
A Study on Task Scheduling in Cloud Data Centers for Energy Efficiency | Ehsan Sharifi
Abstract: The increasing energy consumption of Physical Machines (PMs) in cloud data centers is a major problem: it harms the environment while also increasing the operational costs of data centers. This fosters the development of more energy-efficient scheduling approaches. In this study, we examine the gaps in knowledge about energy efficiency in cloud data centers.
Users define a deadline for cluster computing tasks. The proposed solution uses stream processing and MapReduce to dynamically expand or contract the cluster size to meet the deadline while minimizing costs. The authors implemented deadline queries on Amazon EC2 and experiments showed the approach was feasible and effective in meeting deadlines even when introducing node perturbations.
byteLAKE's expertise across NVIDIA architectures and configurations | byteLAKE
AI Solutions for Industries | Quality Inspection | Data Insights | AI-accelerated CFD | Self-Checkout | byteLAKE.com
byteLAKE: Empowering Industries with AI Solutions. Embrace cutting-edge technology for advanced quality inspection, data insights, and more. Harness the potential of our CFD Suite, accelerating Computational Fluid Dynamics for heightened productivity. Unlock new possibilities with Cognitive Services: image analytics for precise visual inspection for Manufacturing, sound analytics enabling proactive maintenance for Automotive, and wet line analytics for the Paper Industry. Seamlessly convert data into actionable insights using Data Insights' AI module, enabling advanced predictive maintenance and risk detection. Simplify Restaurant and Retail operations with our efficient self-checkout solution, recognizing meals and groceries and elevating customer satisfaction. Custom AI Development services available for tailored solutions. Discover more at www.byteLAKE.com.
Green cloud computing aims to make cloud computing more environmentally sustainable by reducing energy consumption and carbon emissions. The document discusses how cloud data centers use significant amounts of energy. It then introduces green cloud computing and the Green Cloud Simulator tool, which can model a data center's energy usage. The document provides steps to build a new virtual data center in the simulator and view statistics on device energy consumption and graphs of the results. The summary highlights the goal of reducing cloud computing's environmental impact.
DYNAMIC VOLTAGE SCALING FOR POWER CONSUMPTION REDUCTION IN REAL-TIME MIXED TA... | cscpconf
Reducing energy consumption without missing any deadline is one of the main challenges in real-time embedded systems. Dynamic voltage scaling (DVS) reduces processor power consumption by exploiting the operating points a DVS processor provides; each operating point is a voltage-frequency pair, and the point can be selected based on the system load at a given time. In this work, DVS is applied to both periodic and sporadic tasks, reducing energy by 40% on average. Processor energy consumption is reduced by a further 2-10% by reducing the number of pre-emptions and frequency switches.
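Load-based selection of an operating point can be sketched as follows; the voltage/frequency table is an illustrative assumption, not the paper's. Since dynamic power scales roughly with V²f, running at the slowest point that still meets the load saves energy:

```python
# Sketch of DVS operating-point selection: pick the slowest (and hence
# lowest-voltage) point whose frequency meets the required load, falling
# back to the fastest point under overload. Table values are made up.
OPERATING_POINTS = [  # (voltage_v, frequency_mhz), sorted by frequency
    (0.9, 300), (1.0, 600), (1.1, 800), (1.2, 1000),
]

def select_operating_point(required_mhz):
    """Return the lowest-power (voltage, frequency) pair meeting the load."""
    for volt, freq in OPERATING_POINTS:
        if freq >= required_mhz:
            return volt, freq
    return OPERATING_POINTS[-1]
```

A real DVS scheduler would also account for switching overhead, which is exactly why the paper's reduction in frequency switches yields the extra 2-10% savings.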
EDD CLUSTERING ALGORITHM FOR WIRELESS SENSOR NETWORKS | cscpconf
Power consumption is an important metric in the context of wireless sensor networks (WSNs). In this paper, we describe a new Energy-Degree (EDD) clustering algorithm for WSNs. A node with higher residual energy and higher degree is more likely to be elected as a cluster head (CH). Intercluster and intracluster communications are realized in one hop. The principal goal of our algorithm is to balance the energy load among all nodes. Comparing the EDD clustering algorithm with the LEACH algorithm, simulation results show its effectiveness in saving energy.
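The energy-degree election rule can be sketched as below; the product weighting is a stand-in assumption, as the abstract does not give the paper's exact scoring formula:

```python
# Sketch of the cluster-head election rule: a node's chance of becoming
# CH grows with its residual energy and its degree (number of one-hop
# neighbours). The product score is an illustrative stand-in weight.
def elect_cluster_head(nodes):
    """nodes: list of (node_id, residual_energy, degree) tuples.
    Returns the id of the node with the best energy-degree score."""
    def score(n):
        _, energy, degree = n
        return energy * degree
    return max(nodes, key=score)[0]
```

Favouring high-energy, high-degree nodes spreads the CH role's extra transmission burden to the nodes best placed to absorb it, which is how the load-balancing goal is met.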
A vm scheduling algorithm for reducing power consumption of a virtual machine... | eSAT Journals
Abstract: This paper concentrates on methods that provide efficient processing time and CPU utilization time for a virtual machine. As the number of users increases, performance may be significantly reduced if tasks are not scheduled in a proper order. The performance of two existing algorithms, the DSP (Dependency Structural Prioritization) algorithm and the credit scheduling algorithm, is analyzed and compared, measuring a single virtual machine's processing time and CPU utilization time. Satisfactory results are achieved in the comparison, and the study concludes that the DSP algorithm performs more efficiently than the credit scheduling algorithm. Keywords: Virtual Machine, DSP algorithm, credit scheduling algorithm
A vm scheduling algorithm for reducing power consumption of a virtual machine... | eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology, bringing together scientists, academicians, field engineers, scholars, and students of related fields.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD... | ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It begins with background on cloud computing and queuing theory. It then models a cloud data center as an [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. Key performance factors analyzed include mean number of tasks in the system. Analytical results are obtained by solving the model to estimate response time distribution and other metrics. The modeling approach allows determining the relationship between performance and number of servers/buffer size.
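The "mean number of tasks in the system" metric for an M/G/1 queue follows the standard Pollaczek-Khinchine formula; the helper below evaluates it from the arrival rate and the mean and variance of the (general, not necessarily exponential) service time, making the summary's key performance factor concrete:

```python
# Mean number of tasks in an M/G/1 system via the Pollaczek-Khinchine
# formula: L = rho + lambda^2 * E[S^2] / (2 * (1 - rho)), where
# rho = lambda * E[S] is the server utilization and E[S^2] the second
# moment of the service time. This is a textbook result, not a formula
# quoted from the document itself.
def mg1_mean_in_system(arrival_rate, mean_service, service_variance):
    rho = arrival_rate * mean_service           # utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    es2 = service_variance + mean_service ** 2  # second moment E[S^2]
    lq = (arrival_rate ** 2) * es2 / (2 * (1 - rho))  # mean queue length
    return rho + lq                             # in service + in queue
```

As a sanity check, setting the service variance equal to the squared mean recovers exponential service, where the M/M/1 result L = rho / (1 - rho) applies.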
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD... | ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It presents an analytical model of a cloud data center as a [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. The model is solved to obtain important performance metrics like mean number of tasks in the system. Prior work on modeling cloud systems and queuing theory concepts are also reviewed. Key assumptions of the proposed model include tasks following a Poisson arrival process and service times having a general probability distribution.
This document proposes algorithms for dynamic task scheduling on a DVS system to minimize total system energy consumption. It presents two algorithms:
1) duSYS, which uses optimal speed setting and limited preemption to reduce device standby energy.
2) duSYS_PC, which further reduces preemptions compared to duSYS to achieve additional energy savings.
The algorithms achieve up to 43% energy savings compared to prior work and up to 30% savings compared to a CPU-energy efficient algorithm, showing their effectiveness in minimizing total system energy.
Genetic Algorithm for task scheduling in Cloud Computing Environment | Swapnil Shahade
This document proposes a modified genetic algorithm to schedule tasks in cloud computing environments. It begins with an introduction and background on cloud computing and task scheduling. It then describes the standard genetic algorithm approach and introduces the modified genetic algorithm which uses Longest Cloudlet to Fastest Processor and Smallest Cloudlet to Fastest Processor scheduling algorithms to generate the initial population. The implementation and results show that the modified genetic algorithm reduces makespan and cost compared to the standard genetic algorithm.
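The LCFP heuristic named above (Longest Cloudlet to Fastest Processor) can be sketched as a seeding routine for the GA's initial population; the round-robin pairing used below is our assumption about how the matching proceeds once the sorted lists are formed:

```python
# Sketch of LCFP seeding: rank cloudlets by length (descending) and
# processors by speed (descending), then pair them off round-robin so
# the heaviest work lands on the fastest machines. Illustrative only.
def lcfp_assignment(cloudlet_lengths, processor_speeds):
    """Return a list mapping each cloudlet (by original index) to a
    processor index, longest cloudlets going to fastest processors."""
    order = sorted(range(len(cloudlet_lengths)),
                   key=lambda i: cloudlet_lengths[i], reverse=True)
    fast = sorted(range(len(processor_speeds)),
                  key=lambda p: processor_speeds[p], reverse=True)
    assignment = [None] * len(cloudlet_lengths)
    for rank, i in enumerate(order):
        assignment[i] = fast[rank % len(fast)]
    return assignment
```

Seeding the population with such greedy schedules (and their SCFP mirror) gives the GA a better-than-random starting point, which is what drives the reported makespan and cost improvements over the standard GA.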
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It begins with background on cloud computing and queuing theory. It then models a cloud data center as an [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. Key performance factors analyzed include mean number of tasks in the system. Analytical results are obtained by solving the model to estimate response time distribution and other metrics. The modeling approach allows determining the relationship between performance and number of servers/buffer size.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It presents an analytical model of a cloud data center as a [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. The model is solved to obtain important performance metrics like mean number of tasks in the system. Prior work on modeling cloud systems and queuing theory concepts are also reviewed. Key assumptions of the proposed model include tasks following a Poisson arrival process and service times having a general probability distribution.
This document proposes algorithms for dynamic task scheduling on a DVS system to minimize total system energy consumption. It presents two algorithms:
1) duSYS, which uses optimal speed setting and limited preemption to reduce device standby energy.
2) duSYS_PC, which further reduces preemptions compared to duSYS to achieve additional energy savings.
The algorithms achieve up to 43% energy savings compared to prior work and up to 30% savings compared to a CPU-energy efficient algorithm, showing their effectiveness in minimizing total system energy.
Genetic Algorithm for task scheduling in Cloud Computing EnvironmentSwapnil Shahade
This document proposes a modified genetic algorithm to schedule tasks in cloud computing environments. It begins with an introduction and background on cloud computing and task scheduling. It then describes the standard genetic algorithm approach and introduces the modified genetic algorithm which uses Longest Cloudlet to Fastest Processor and Smallest Cloudlet to Fastest Processor scheduling algorithms to generate the initial population. The implementation and results show that the modified genetic algorithm reduces makespan and cost compared to the standard genetic algorithm.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...ijgca
The ever-increasing status of the cloud computing h
ypothesis and the budding concept of federated clou
d
computing have enthused research efforts towards in
tellectual cloud service selection aimed at develop
ing
techniques for enabling the cloud users to gain max
imum benefit from cloud computing by selecting
services which provide optimal performance at lowes
t possible cost. Cloud computing is a novel paradig
m
for the provision of computing infrastructure, whic
h aims to shift the location of the computing
infrastructure to the network in order to reduce th
e maintenance costs of hardware and software resour
ces.
Cloud computing systems vitally provide access to l
arge pools of resources. Resources provided by clou
d
computing systems hide a great deal of services fro
m the user through virtualization. In this paper, t
he
cloud data center is modelled as
queuing system with a single task arrivals
and a task request buffer of infinite capacity.
Power aware compilation is a software approach to reducing energy consumption by optimizing code during compilation. It works by [1] analyzing assembly code produced during compilation to calculate clock cycles needed for execution, [2] developing optimization models to replace instructions with equivalents needing fewer cycles, and [3] generating optimized assembly code requiring less energy to run. This technique aims to make developers aware of programs' energy efficiency and influence more energy-conscious software development.
LCU14-410: How to build an Energy Model for your SoCLinaro
LCU14-410: How to build an Energy Model for your SoC
---------------------------------------------------
Speaker: Morten Rasmussen
Date: September 18, 2014
---------------------------------------------------
★ Session Summary ★
- ARM to provide a quick overview of the current energy model
- Introduce the methodology/recipe used to build the energy model
- Discuss ways in which the model is used today and intended next steps
- Key outcomes:
- Describe the
- Identify gaps and limitations
Summary of EAS workshop (Amit)
-Summary of hacking sessions - plan to integrate Qualcomm-ARM-Linaro work to send upstream
-Key outcomes:
-List of features and responsibilities
-Dependencies between upstreaming of features, if any
---------------------------------------------------
★ Resources ★
Zerista: http://lcu14.zerista.com/event/member/137778
Google Event: https://plus.google.com/u/0/events/ck3ti7eurknnsq0a4e9ks5a1sbs
Video: https://www.youtube.com/watch?v=JfZt8W3NVgk&list=UUIVqQKxCyQLJS6xvSmfndLA
Etherpad: http://pad.linaro.org/p/lcu14-410
---------------------------------------------------
★ Event Details ★
Linaro Connect USA - #LCU14
September 15-19th, 2014
Hyatt Regency San Francisco Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
Interplay of Communication and Computation Energy Consumption for Low Power S...ijasuc
The sensor network design approach normally considers the communication energy consumption for
evaluating a communication protocol. This is true for the low power devices such as MICAz/MICA2
which do not consume a lot of energy for the data treatment. However, recently developed sensor devices
for multimedia applications such as iMote2 do consume considerable amount of energy for data
processing. In this article, we consider various scenarios for routing the data in wireless multimedia
sensor networks by considering the local design parameters of devices such as PXA27x and beagleboard.
The proposed routing solution considers node level optimizations such as data compression, dynamic
voltage and frequency scaling (DVFS) for making a routing decision. The proposed approaches have
been simulated to prove the effectiveness of the approach.
This document summarizes a project report on optimizing fracking simulations for GPU acceleration. The simulations model hydraulic fracturing and consist of three phases. The focus was on the second phase, which calculates interaction factors and stresses between grid cells and takes 80% of the CPU execution time. This phase was implemented on a GPU using techniques like finding parallelism at the cell and grid level, optimizing data transfers, memory access, and using streams to execute cells concurrently. These optimizations led to speedups of up to 56x compared to the CPU implementation.
This document summarizes a presentation on CloudSim, a toolkit for modeling and simulating cloud computing environments. CloudSim allows modeling resources and services in cloud data centers and testing application services. It features discrete event-driven simulation of large cloud environments and supports modeling virtualized resources, data centers, and network connections. CloudSim has advantages for testing policies in a repeatable and controllable environment and tuning systems before real deployment. The presentation outlines CloudSim's architecture, modeling capabilities, simulation steps, and concludes with discussions of conclusions and future work, as well as green cloud computing.
Secondo seminario per il corso di calcolo delle probabilita` e statistica matematica del professor fedullo (conoscere Latex all\'epoca mi avrebbe fatto comodo)
Una presentazione fatta per il linux day 2010 organizzato dall'hcsslug all'università di Salerno. Si parla in particolare di
Logo
Kturtle
DrRacket
BlueJ
CoFFEE
Openstudy
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
2. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
3. What is green computing? “The study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems such as monitors, printers, storage devices, and networking and communications systems efficiently and effectively with minimal or no impact on the environment.”[1] — Professor San Murugesan, Faculty of Management, Multimedia University, Cyberjaya, Malaysia
4. Why does green computing matter? Some numbers: two Google searches ≈ 14 g of CO2 (about as much as boiling a kettle!) (Alex Wissner-Gross, Harvard University physicist) [2][3]. Windows 7 + Microsoft Office 2007 requires 70 times more RAM than Windows 98 + Office 2000 to write exactly the same text or send the same email [4]. In 2010, servers were responsible for 2.5% of the total energy consumption of the USA; a further 2.5% was used for their cooling [5]. It was estimated that by 2020, servers would use more of the world's energy than air travel if current trends continued [5]
5. Further references Green500 (www.green500.com) GreenIT (www.greenit.fr) CO2Stats (www.co2stats.com)
6. Why green scheduling? A green scheduler could provide: energy-oriented task assignment; setting the correct power level for the current workload; improved use of power management; learning the power usage profile of job types. It could be a part of the operating system's power management
7. What do we want from a green scheduler? Efficiency Simplicity Time is money!
8. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
9. Computation model Tasks usually depend on each other. DAGs: Directed Acyclic Graphs. If there is a dependency between task u and task v, we put an arc from node u to node v
10. Computation model SP-DAGs: series-parallel DAGs. A DAG with two terminals (source and target) and an arc between them is an SP-DAG. SP-DAGs are made by parallel and series composition of other SP-DAGs
11. Why SP-DAGs? They describe several significant classes of computation (for instance, divide and conquer algorithms). They are the natural abstraction for several parallel programming languages (such as CILK) [10]. We can recognize whether a DAG is an SP-DAG in linear time. We can easily transform an arbitrary DAG into an SP-DAG in linear time, using SP-ization
12. LEGO® DAGs Assessing the computational benefits of AREA-Oriented DAG-Scheduling (Gennaro Cordasco, Rosario De Chiara, Arnold L. Rosenberg), 2009. SP-DAGs made by a repertoire of Connected Bipartite Building Blocks: DAGs representing the various subcomputations
13. Further definitions on DAGs and SP-DAGs A node v in the DAG can be: uneligible (v has at least one non-executed parent); eligible (all of v's parents have been executed); assigned/executed (v has been scheduled for execution or already executed). Schedule: a topological sort of the DAG, obtained by a rule for selecting which eligible node to execute at each step of the computation
14. Critical path The longest path from the source to the sink. Why is it so important? It is easy to see that we cannot finish our computation before executing each node on the critical path. So the time the critical path takes to execute is a trivial lower bound on the makespan.
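The critical-path lower bound can be computed in a single topological pass over the DAG. A minimal sketch; the class name and the adjacency-list representation are illustrative, not taken from the simulator:

```java
import java.util.*;

// Weighted critical-path length of a DAG: the longest source-to-sink path,
// which is a trivial lower bound on any schedule's makespan.
public class CriticalPath {

    // edges.get(u) lists the children of node u; weight[u] is u's task length
    public static double length(List<List<Integer>> edges, double[] weight) {
        int n = weight.length;
        int[] indegree = new int[n];
        for (List<Integer> children : edges)
            for (int v : children) indegree[v]++;

        // dist[u] = longest weighted path ending at u (u's own weight included)
        double[] dist = new double[n];
        Deque<Integer> ready = new ArrayDeque<>();
        for (int u = 0; u < n; u++)
            if (indegree[u] == 0) { ready.add(u); dist[u] = weight[u]; }

        double longest = 0.0;
        while (!ready.isEmpty()) {
            int u = ready.poll();
            longest = Math.max(longest, dist[u]);
            for (int v : edges.get(u)) {
                dist[v] = Math.max(dist[v], dist[u] + weight[v]);
                if (--indegree[v] == 0) ready.add(v);
            }
        }
        return longest;
    }
}
```

For the diamond DAG 0→{1,2}→3 with weights {1, 2, 5, 1}, the method returns 7 (the path 0-2-3).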
15. Further definitions on DAGs and SP-DAGs Yield of a node: the number of nodes that become eligible when the given node completes its execution. E_Σ(i): the set of eligible nodes at step i in schedule Σ. AREA(Σ) ≜ Σ_{i=0..n} |E_Σ(i)|
16. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
17. Energy consumption model We need a realistic model for energy consumption. We should consider circuit dissipation and throttling models
18. Energy consumption model CMOS circuit dissipation: P = C·V²·f + I_sc·V + V·I_leakage (we won't consider short-circuit power and leakage). We assume a linear relationship between voltage and frequency: f = kV
19. Energy consumption model Our model: E = C × T × f³, where T = clock cycles / f, f = clock cycles per second, and C encloses several constants like capacitance, k, and the clock multiplier. Substituting T, the energy of a task with a fixed number of clock cycles grows as C × cycles × f²
20. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
21. CPU throttling models Which is the common throttling model used by modern processors? ACPI: Advanced Configuration and Power-management Interface [6]. A fully platform-independent standard that provides: monitoring, configuring, hardware discovery, power management. It defines power states for every device
22. Performance vs power states Power states: C0: operational power state; C1: halt state; C2: stop-clock; C3: sleep. Performance states: P0: highest state; P1: less than P0, frequency/voltage scaled; Pn: less than Pn-1, frequency/voltage scaled. In our model, we implement only the C0 power state and the P0, P1, P2 performance states.
23. Our throttling model We use a DFS (Dynamic Frequency Scaling) model, assuming that scaling doesn't add energy overhead. P0: 1.0 ∗ f; P1: 0.7 ∗ f; P2: 0.5 ∗ f
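Putting the energy model and the DFS multipliers together, per-task time and energy at each performance state can be sketched as follows. The class name and the lumped constant C = 1.0 are illustrative assumptions, not the simulator's API:

```java
// Sketch of the deck's model: E = C * T * f^3 with T = cycles / f,
// i.e. E = C * cycles * f^2, evaluated at the P0/P1/P2 frequency multipliers.
public class EnergyModel {
    static final double C = 1.0;                      // lumped hardware constant (assumed)
    static final double[] P_STATE = {1.0, 0.7, 0.5};  // P0, P1, P2 multipliers of f

    // execution time of a task of `cycles` clock cycles at state p (base frequency fMax)
    public static double time(double cycles, double fMax, int p) {
        return cycles / (P_STATE[p] * fMax);
    }

    // energy E = C * T * f^3 = C * cycles * f^2 at state p
    public static double energy(double cycles, double fMax, int p) {
        double f = P_STATE[p] * fMax;
        return C * cycles * f * f;
    }
}
```

At P2 a task takes twice as long as at P0 but, with f halved, costs a quarter of the energy; this is the trade-off the green heuristics exploit.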
24. Further considerations In our model, an idle core consumes nothing. We do not track the energy of the scheduling algorithm's own execution. We do not track energy dissipated by memory usage. Energy is unbounded. We assume that each core's throttling level can be set individually
25. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
26. The simulator We implemented this model in a DAG-scheduling simulator, providing classes and methods to calculate energy consumption, implementing the energy model we discussed earlier, and paying attention to extensibility
27. A typical simulation Loads a DAG. Computes the graph's critical path. Initializes the schedulers that need to be tested. Executes the schedulers on the given graphs for a given number of trials (usually 100, due to randomness influencing the schedulers). At the end of the iterations, it collects statistics about the executions, specifically: makespan (min, max, average) and average energy consumption. Repeats on each DAG
28. How we implemented the model Our focus: extensibility. We wanted our simulator to support multiple kinds of models, providing: a core abstraction; a throttling level abstraction; an energy-aware scheduler abstraction, totally decoupled from core and throttling level. This makes it easier to add different scheduling algorithms, different core types, and different energy models
29. Core abstraction A core can: execute tasks; set its own throttling level; track its power consumption. Problem: different cores could implement different throttling strategies. Solution: every core has its own array of throttling levels, and the throttling level is a nested class in the core implementation
30. Throttling level abstraction A throttling level contains: information about frequency and consumption; methods to calculate the due date of a task at a given level (the lower the level, the slower the task execution) and the power consumption at a given level
31. Energy package Core interface: we assume that every core can execute tasks and set its own throttling. Abstract class ThrottlingLevel: implements a throttling level, with energy consumption info and frequency. Class DummyCore: core base implementation. Class DefaultThrottlingLevel: DummyCore nested class, implements our performance states
32. Core interface

/**
 * Execute a task on this core
 * @param node the node that models the task
 * @param length task length if executed at max power
 * @return the real task length (this could differ from the input
 *         if the core is set to a different throttling level)
 */
public double executeTask(ICONode node, double length);

/**
 * Sets the core's power consumption to its current throttling
 * level's idle consumption
 */
public void setIdle();

/**
 * Sets the core to a greater power level
 */
public void increaseThrottlingLevel();

/**
 * Sets the core to a lesser power level
 */
public void decreaseThrottlingLevel();
33. ThrottlingLevel

/**
 * This method calculates the power consumption for a
 * given task length, according to the power consumption unit
 * and other parameters, as chosen by the programmer
 * who implements it.
 *
 * @param length the task length
 * @return power consumption for this task
 */
abstract double getPowerConsumptionPerTask(double length);

/**
 * This method calculates how the task length is modified
 * for the given throttling level
 *
 * @param length ideal length of the task
 * @return the real task length for the given throttling level
 */
abstract double getRealLength(double length);
34. Throttling level initialization

public void initializeThrottlingLevels(double hardwareConstant, double maxFreq,
        double maxVoltage, int throttlingLevels) {
    this.levels = new ThrottlingLevel[throttlingLevels];
    for (int i = 0; i < throttlingLevels - 1; i++) {
        double numerator = i + 1.0;
        double denominator = i + 2.0;
        double fraction = numerator / denominator;
        levels[i] = new DefaultThrottlingLevel("LEVEL" + i, hardwareConstant,
                fraction * maxFreq, fraction * maxVoltage);
    }
    this.levels[throttlingLevels - 1] = new DefaultThrottlingLevel(
            "LEVEL" + (throttlingLevels - 1), hardwareConstant, maxFreq, maxVoltage);
    // necessary for correct use of increase and decrease
    Arrays.sort(levels);
    // by default we set the maximum power level
    this.currentThrottlingLevel = levels[2];
    this.throttlingLevelIndex = 2;
    this.dissipatedPower = 0.0;
}
35. Energy-aware scheduler abstraction An energy-aware scheduler has to: work with different types of cores; track the makespan and the energy consumption; implement logic for core selection, eligible node selection, and choosing the right throttling level
36. Energy-aware scheduler package CoreSelector: implements the free-core selection strategy (in these tests we use the DefaultCoreSelector class). EnergyAwareScheduler: base for each scheduler tracking energy consumption
37. Inspecting the EnergyAwareScheduler class

/**
 * Instantiates a new EnergyAwareScheduler
 * @param numCores number of cores
 * @param coreClass class that models the desired core type
 * @throws InstantiationException
 * @throws IllegalAccessException
 * @throws IllegalArgumentException if numCores <= 0
 */
public EnergyAwareScheduler(int numCores, Class<? extends Core> coreClass)
        throws InstantiationException, IllegalAccessException, IllegalArgumentException

/**
 * Calculates the task length on a given core
 * @param coreIndex index of the core in the corePool
 * @param eventLength ideal length of the task
 * @param node node to be executed
 * @return the task length if executed on the coreIndex core
 */
protected double getTimeOffsetForCore(int coreIndex, double eventLength, ICONode node)
38. Inspecting the EnergyAwareScheduler class

/**
 * Sets throttling for cores that are going to execute a task in this step
 * @param coreIndex the core id
 */
protected void setBusyThrottling(int coreIndex)

/**
 * Sets the throttling state for cores that will remain idle
 */
protected void setIdleThrottling()

public double getTotalPowerConsumption()

private void calculateIdleConsumptions()
39. What about scheduling? Schedule steps are implemented using the TimeLine object, a priority queue containing two types of TimeEvent: processorsArrives and clientFinishes. Each scheduling step removes the first event from the TimeLine. The scheduling logic is implemented in the runBatchedMakespan method; further initialization is done in the initBatchedMakespan method
40. The runBatchedMakespan method (pseudocode)

while (executedNode != target)
    event := timeline.pollNextEvent()
    setOverallThrottlingLevel()
    switch (event)
        case processorsArrives:
            n_e := min(availableCores, elegibleNodesNum)
            for i := 0 to n_e
                nextNode := getNextElegibleNode()
                coreIndex := coreSelector.getCoreIndex()
                corePool[coreIndex].setBusy()
                setBusyThrottling(coreIndex)
                timeOffset := getTimeOffsetForCore(coreIndex, eventLength, nextNode)
                timeline.add(new TimeEvent(event.getTime + timeOffset, clientFinishes, nextNode))
42. Default strategies getNextElegibleCore() is abstract (every core has to implement it). setBusyThrottling(coreIndex) by default sets the maximum throttling level, as does setOverallThrottlingLevel(). Further initializations are made in the initBatchedMakespan method
43. What about core selection? Core selection is implemented as a separate class implementing the CoreSelector interface. CoreSelector provides the getCoreIndex method. In our simulation we use only the DefaultCoreSelector, which simply takes the highest-frequency free core
44. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
45. Green heuristics CPScheduler AOSPDScheduler TFIHeuristicScheduler MarathonHeuristic Every heuristic has been implemented as an EnergyAwareScheduler subclass
46. CRITICAL PATH based scheduling Computes the graph's critical path. Selects the free core with the highest frequency. Sets the core to maximum power. Selects the node with maximum distance from the sink. To implement this scheduler, only the getNextElegibleCore() method has been overridden
47. AOSPD SCHEDULING On scheduling DAGs to maximize AREA (Gennaro Cordasco, Arnold L. Rosenberg). An idea from the Internet computing scenario: it is quite impossible to determine when new processors become available for task execution. So, what can we do? Solutions: maximize the AREA at each execution step (great! but not always possible [7]), or maximize the average AREA over the execution steps (good, and always possible!)
48. More on AOSPD scheduling At step 1, we have to choose B or C for execution. To maximize the AREA at step 1, we choose C. What happens in step 2? Choosing among the eligible nodes in step 2, we can't maximize the AREA: to maximize the AREA in step 2 we should have chosen B, which was not AREA-maximizing for step 1
50. TFI HEURISTIC The idea: if we have to wait for a task that requires much more time than the others, we can slow down the faster ones to save energy. TFI: the maximum due date for critical path value i
51. TFI HEURISTIC (pseudocode)

compute the graph's critical path
select the free core with the highest frequency
sort the eligible nodes by their critical path value and yield
find the maximum due date
TFINode := node with maximum critical path value and due date
TFI := maximum task length
n_e := min(coreAvailable, elegibleNodesNum)
for i := 1 to n_e
    node := elegibleNodes[i]
    if node == TFINode
        execute node at max power
    else if elegibleNodes.size() < numCores
        execute node at the minimum throttling level that keeps its length below TFI
    else
        execute node at the default throttling level
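The "minimum throttling level that keeps the task's length below TFI" step can be sketched as a small helper. This is a hypothetical illustration; the class name and the P-state table are assumptions, not the simulator's code:

```java
// Picks the slowest P-state whose stretched task length still fits in the TFI budget:
// we have to wait for the TFI task anyway, so running other tasks faster wastes energy.
public class TfiChoice {
    static final double[] P_STATE = {1.0, 0.7, 0.5};  // P0, P1, P2 frequency multipliers

    // length: ideal task length at full speed; tfi: the longest due date we wait for anyway
    public static int slowestFitting(double length, double tfi) {
        // try the slowest (most energy-saving) state first
        for (int p = P_STATE.length - 1; p >= 0; p--)
            if (length / P_STATE[p] <= tfi) return p;
        return 0; // nothing fits: fall back to full speed
    }
}
```

For example, a task of length 5 with TFI = 10 can run at P2 (stretched to length 10) without delaying the schedule, while a task of length 8 must stay at P0.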
52. Marathon heuristic The idea: our problem resembles a marathon. We have to come first... and possibly alive (with enough energy to come back home). Being lazier, we'll save more energy. How should we run a marathon? According to my uncle: it's better to keep an average pace than to squander energy running faster for a short stretch; and when you can't overtake (the road is too narrow, or you're too tired), it's better to slow down a little while waiting for better conditions
53. Marathon heuristic (pseudocode)

compute the graph's critical path
select the free core with the highest frequency
sort the eligible nodes by their critical path value and yield
n_e := min(availableCores, elegibleNodesNum)
front := sum of the yields of the first n_e nodes
for i := 1 to n_e
    node := elegibleNodes[i]
    if front + n <= numCores - (numCores / DELTA)
        execute node at minimum power
    else
        execute node at average power
54. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
55. Assessing results Remember "time is money"? Solution: ET². Remember area-time complexity in VLSI design? [8][9] We use energy-time complexity to plot our schedulers' performance: the lower the ET² score, the better the scheduler.
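The ET² score is straightforward to compute from the collected statistics; a minimal sketch (the class name is illustrative):

```java
// ET^2 cost metric, in the spirit of AT^2 area-time complexity from VLSI design:
// schedulers are ranked by energy * makespan^2, and lower is better.
public class ET2 {
    public static double score(double energy, double makespan) {
        return energy * makespan * makespan;
    }
}
```

Squaring the makespan encodes "time is money": relative to a baseline of (1, 1), a scheduler that saves 20% energy at the cost of a 10% longer makespan still wins (0.8 · 1.1² ≈ 0.97 < 1), but a 20% slowdown would lose (0.8 · 1.2² ≈ 1.15 > 1).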
56. Tests Test parameters: number of cores: 4, 8, 16; standard deviation: 1, 2, 4, 8. The standard deviation influences the task due dates, which are generated by a Gaussian distribution with mean 1.0 and a standard deviation from the given set
68. Conclusions We can't obtain a makespan better than critical-path scheduling. AREA and yield considerations don't seem to add much in terms of energy savings, at least in a multicore scenario; probably we should focus only on the critical path. Task due dates don't seem to influence the makespan much
69. Future works Tracking scheduler efficiency. Adding a model for idle cores' consumption. Considering a "finite energy" model. Extending it to a volunteer computing scenario. We could consider a scenario with many cores on different dies, adding an extra cost to switch them on, and adding thermal parameters
70. Outline Introduction Theoretical Model Computation model Energy consumption model Throttling model Simulator Green Heuristics Results and future works References
71. References [1] Harnessing Green IT: Principles and Practices (San Murugesan, 2009). [2] "Research reveals environmental impact of Google searches." Fox News, 2009-01-12. http://www.foxnews.com/story/0,2933,479127,00.html. Retrieved 2009-01-15. [3] "Powering a Google search." Official Google Blog, Google. http://googleblog.blogspot.com/2009/01/powering-google-search.html. Retrieved 2009-10-01. [4] "Office suites require 70 times more memory than 10 years ago." GreenIT.fr, 2010-05-24. http://www.greenit.fr/article/logiciels/logiciel-la-cle-de-l-obsolescence-programmee-du-materiel-informatique-2748. Retrieved 2010-05-24.
72. References [5] "ARM chief calls for low-drain wireless." The Inquirer, 29 June 2010. http://www.theinquirer.net/inquirer/news/1719749/arm-chief-calls-low-drain-wireless. Retrieved 30 June 2010. [6] Advanced Configuration and Power Interface Specification, 2010 (www.acpi.info). [7] Toward a theory for scheduling dags in internet-based computing (G. Malewicz, A. L. Rosenberg, M. Yurkewych, 2006). [8] Lower bounds for VLSI (Richard J. Lipton, Robert Sedgewick, 1981).
73. References [9] Area-time complexity for VLSI (C. D. Thompson, 1979). [10] Cilk: an efficient multithreaded runtime system (R. D. Blumofe, C. F. Joerg, B. C. Kuszmaul, C. E. Leiserson, K. H. Randall, Y. Zhou), 5th ACM SIGPLAN Symp. on Principles and Practice of Parallel Programming (PPoPP '95).