This document discusses energy efficiency in data centers. It begins by outlining the large and growing energy consumption of data centers, noting they account for 1.3% of worldwide energy production and will exceed 400 GWh/year by 2015. It then discusses how data centers are used for traditional applications like web services as well as emerging applications in areas like smart cities that generate huge amounts of data. The document outlines various strategies for optimizing energy efficiency at different levels, from workload scheduling at the chip/server level to thermal-aware resource management and floorplanning. It stresses the need for holistic optimization across all levels from chips to data centers to minimize total energy consumption.
GMC: Greening MapReduce Clusters Considering both Computation Energy and Cooling Energy – Tarik Reza Toha
Increased processing power of MapReduce clusters generally enhances performance and availability at the cost of substantial energy consumption, which often incurs higher operational costs (e.g., electricity bills) and negative environmental impacts (e.g., carbon dioxide emissions). The few greening methods for computing clusters in the literature focus mainly on computational energy consumption, leaving out cooling energy, which accounts for a significant portion of the total energy consumed by the clusters. To this end, in this paper, we propose a machine learning based approach named Green MapReduce Cluster (GMC) that reduces the total energy consumption of a MapReduce cluster by considering both computational energy and cooling energy. GMC predicts the number of machines that results in minimum total energy consumption. We perform the prediction by applying different machine learning techniques to year-long data collected from a real setup. We evaluate the performance of GMC on a real testbed. Our evaluation reveals that GMC reduces total energy consumption by up to 47% compared to other alternatives, while experiencing marginal throughput degradation in a few cases.
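The core selection step described above, predict total energy for each candidate cluster size and pick the minimum, can be sketched with a toy regressor. The data points, the nearest-neighbour model, and the energy values below are hypothetical stand-ins for the paper's year-long traces and ML pipeline:

```python
# Sketch of the GMC idea: given measurements of total energy
# (computational + cooling) at different cluster sizes, fit a simple
# regressor and pick the machine count with the lowest predicted energy.

def knn_predict(train, x, k=3):
    """Average the energy of the k training points closest in machine count."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(e for _, e in nearest) / len(nearest)

def best_machine_count(train, candidates):
    """Return the candidate count with the minimum predicted total energy."""
    return min(candidates, key=lambda n: knn_predict(train, n))

# (machines, measured total energy in kWh) -- hypothetical observations:
# too few machines run longer (more energy); too many add idle and cooling cost.
observations = [(2, 9.0), (4, 6.5), (6, 5.2), (8, 5.6), (10, 6.8), (12, 8.1)]
print(best_machine_count(observations, range(2, 13)))
```

GMC's actual models and feature sets are richer; the sketch only shows the shape of the predict-then-argmin decision.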
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Networks – IJECEIAES
Energy consumption in cloud computing occurs largely because of the unreasonable way in which tasks are scheduled, so energy-aware task scheduling is a major concern: wasted energy reduces profit margins and causes high carbon emissions, which is not environmentally sustainable. Hence, energy-efficient task scheduling solutions are required to attain variable resource management, live migration, minimal virtual machine design, overall system efficiency, reduced operating costs, increased system reliability, and environmental protection, all with minimal performance overhead. This paper provides a comprehensive overview of energy-efficient techniques and approaches and proposes an energy-aware resource utilization framework to control traffic and overloads in cloud networks.
A SURVEY: TO HARNESS AN EFFICIENT ENERGY IN CLOUD COMPUTING – ijujournal
Cloud computing affords huge potential for dynamism, flexibility and cost-effective IT operations. Cloud computing requires many tasks to be executed by the provided resources to achieve good performance, short response times and high utilization of resources. To meet these challenges there is a need to develop a new energy-aware scheduling algorithm that produces an appropriate task-allocation map to optimize energy consumption. This study surveys the existing techniques, which mainly focus on reducing energy consumption.
Presentations on Data Center Sustainability from:
• Dale Sartor, Staff Engineer, Building and Industrial Applications, Lawrence Berkeley National Laboratory
• Orlando Figueredo, Vice President, Consulting and Intelligence, Hewlett Packard Enterprise
• Barbara Humpton, President and CEO, Siemens Government Technologies, Inc.
A survey on dynamic energy management at virtualization level in cloud data centers – csandit
Data centers have become indispensable infrastructure for data storage and for facilitating the development of the diversified network services and applications offered by the cloud. Rapid development of these applications and services imposes various resource demands that result in increased energy consumption. This necessitates efficient energy management techniques in the data center, not only to cut operational cost but also to reduce the amount of heat released by storage devices. Virtualization is a powerful tool for energy management that achieves efficient utilization of data center resources. Although energy management at data centers can be static or dynamic, virtualization-level energy management techniques contribute more energy conservation than hardware-level ones. This paper surveys various issues related to dynamic energy management at the virtualization level in cloud data centers.
Saving energy in data centers through workload consolidation – Eco4Cloud
This whitepaper, recently co-authored with four top European research centers (the Institute for High Performance Computing and Networking of the Italian National Research Council, the Department of Electronics and Telecommunications at Politecnico di Torino, eERG – Energy Department at Politecnico di Milano, and PrimeEnergyIT/EfficientDataCenters), frames the workload consolidation topic and provides an overview of state-of-the-art approaches, including Eco4Cloud's.
Green computing, also known as Green IT, is the environmentally responsible and eco-friendly use of computers and their resources. In broader terms, it is also defined as the study of designing, manufacturing/engineering, using and disposing of computing devices in ways that reduce their environmental impact.
Energy harvesting earliest deadline first scheduling algorithm for increasing lifetime of real time systems – IJECEIAES
In this paper, a new approach to energy minimization in energy-harvesting real-time systems is investigated. The lifetime of a real-time system depends on its battery life, so energy is a parameter through which the lifetime of the system can be extended; to keep working continuously, energy harvesting is used as a regular source of energy. EDF (Earliest Deadline First) is a traditional real-time task scheduling algorithm, and DVS (Dynamic Voltage Scaling) is used for reducing energy consumption. We propose an Energy Harvesting Earliest Deadline First (EH-EDF) scheduling algorithm for increasing the lifetime of real-time systems, using DVS to reduce energy consumption, EDF for task scheduling, and energy harvesting as the regular energy supply. Our experimental results show that the proposed approach performs better at reducing energy consumption and increases system lifetime compared with existing approaches.
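As background, the EDF policy the abstract builds on can be sketched in a few lines. This is a minimal uniprocessor illustration with all tasks released at time zero; it omits the DVS and energy-harvesting parts of EH-EDF, and the task names and values are hypothetical:

```python
# Minimal EDF illustration (not the paper's EH-EDF algorithm): run the
# ready tasks in order of earliest absolute deadline, checking feasibility.

def edf_schedule(tasks):
    """tasks: list of (name, exec_time, deadline), all released at t=0.
    Returns the run order, or raises if a deadline would be missed."""
    order, t = [], 0
    for name, exec_time, deadline in sorted(tasks, key=lambda task: task[2]):
        t += exec_time          # task runs to completion
        if t > deadline:        # EDF is optimal on one CPU: if this fails,
            raise RuntimeError(f"{name} misses its deadline")  # no order works
        order.append(name)
    return order

print(edf_schedule([("T1", 2, 9), ("T2", 1, 3), ("T3", 3, 7)]))
```

In the full EH-EDF setting, slack between completion time and deadline is what DVS would exploit to lower the voltage/frequency and save energy.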
Green businesses are the businesses of the future, and Green IT can be one of the best of them: it helps companies improve energy efficiency, supports CSR, and protects the environment.
Achieving Energy Proportionality In Server Clusters – CSCJournals
Energy-efficient server clusters have attracted a great amount of interest in the past few years. Energy proportionality is the principle that energy consumption should be proportional to the system workload; energy-proportional design can effectively improve the energy efficiency of computing systems. In this paper, an energy-proportional model based on queuing theory and service differentiation in server clusters is proposed, which provides controllable and predictable quantitative control over power consumption with theoretically guaranteed service performance. The transition overhead is further studied, and a corresponding strategy is proposed to compensate for the performance degradation it causes. The model is evaluated via extensive simulations and validated against a real workload data trace. The results show that the model achieves satisfactory service performance while preserving energy efficiency in the system.
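To make the energy-proportionality principle concrete, the commonly used linear server power model can be sketched as follows. The wattages and the utilization trace are hypothetical, not taken from the paper:

```python
# A server draws P(u) = P_idle + (P_peak - P_idle) * u at utilization u in [0, 1].
# The closer P_idle is to zero, the more energy-proportional the server.

def power(u, p_idle=100.0, p_peak=250.0):
    """Linear power model: idle floor plus utilization-scaled dynamic power."""
    return p_idle + (p_peak - p_idle) * u

# Average power over a lightly loaded trace vs. an ideally proportional
# server (zero idle power): the idle floor dominates at low utilization.
trace = [0.1, 0.2, 0.1, 0.3]
avg = sum(power(u) for u in trace) / len(trace)
ideal = sum(power(u, p_idle=0.0) for u in trace) / len(trace)
print(round(avg, 2), round(ideal, 2))  # 126.25 43.75
```

The gap between the two numbers is exactly the waste that energy-proportional design, and cluster-level techniques like consolidation, tries to eliminate.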
Energy Efficient Data Center
source: http://hightech.lbl.gov/presentations/6-23-05_PGE_Workshop.ppt
Green Computing: A Methodology of Saving Energy by Resource Virtualization – IJCERT
In the past couple of years, computing has moved to remote data centers, with software and hardware services available on a pay-per-use basis. This is called cloud computing, in which the client pays for the services used. The cloud provides Software as a Service, Platform as a Service, and Infrastructure as a Service. These services are delivered through remote data centers (since the data is scattered/distributed over the web), so as software applications and other services migrate to remote data centers, managing those data centers becomes critical. Data center operation faces the problem of power consumption: current cloud-based infrastructure wastes a great deal of power and produces CO2, since many servers lack good-quality cooling systems. Green computing can enable more energy-efficient use of computing power. This paper outlines the need for green computing and surveys techniques for saving energy through different approaches.
Green computing is the practice of designing, developing, using, and disposing of computer hardware, software, and systems in an environmentally friendly way. This involves reducing the environmental impact of computing by minimizing energy consumption, reducing electronic waste, and using sustainable materials. Green computing is becoming increasingly important as the use of technology continues to grow and the environmental impact of technology becomes more apparent.
GreenDisc: A HW/SW energy optimization framework in globally distributed computation – GreenLSI Team, LSI, UPM
Marina Zapater attends UCAmI 2012 as a speaker.
The main goal of this conference is to provide a discussion forum where researchers and practitioners on Ubiquitous Computing and Ambient Intelligence can meet, disseminate and exchange ideas and problems, identify some of the key issues related to these topics, and explore together possible solutions and future works.
The Ubiquitous Computing (UC) idea envisioned by Weiser in 1991 has recently evolved into a more general paradigm known as Ambient Intelligence (AmI). Ambient Intelligence represents a new generation of user-centred computing environments aiming to find new ways to achieve better integration of information technology into everyday devices and activities.
Marina has presented our first results within the GreenDISC project, proposing several research lines that target the power optimization in computing systems. In particular, we deal with two novel and highly differentiated computer paradigms that, however, coexist and interact in the current application scenarios: the Wireless Sensor Networks (WSN) and the high-performance computing in Data Centers (DC).
For further information, please, refer to the paper:
M. Zapater, J. L. Ayala, and J. M. Moya, “GreenDisc: a HW/SW energy optimization framework in globally distributed computation,” J. Bravo, D. López-de Ipiña, and F. Moya, Eds., Springer Berlin Heidelberg, 2012, pp. 1-8. doi:10.1007/978-3-642-35377-2_1
ScottMadden has developed an approach for analyzing data center requirements and driving improvements in existing data center retrofits. Our approach takes into account the technological requirements, the physical attributes of a data center, and the requirements for a rigorous measurement and verification program needed to ensure improvements actually capture the energy efficiency gains and the resultant greenhouse gas reductions.
Our approach addresses the latest trends in data center management, such as virtualization and cloud computing, and provides a framework for developing the metrics needed to drive changes in data center performance.
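A measurement and verification program of this kind rests on concrete metrics. The most widely cited, PUE, divides total facility energy by IT equipment energy, so 1.0 is ideal; the meter breakdown and values below are hypothetical:

```python
# Minimal sketch of the PUE calculation: everything the facility draws
# (IT load plus cooling, power distribution losses, lighting, etc.)
# divided by the energy that actually reaches the IT equipment.

def pue(it_energy_kwh, cooling_kwh, power_dist_kwh, other_kwh=0.0):
    """Power Usage Effectiveness over a metering interval (>= 1.0)."""
    total = it_energy_kwh + cooling_kwh + power_dist_kwh + other_kwh
    return total / it_energy_kwh

print(round(pue(1000.0, 600.0, 150.0), 2))  # 1.75
```

Tracking this ratio before and after a retrofit is one simple way to verify that an efficiency measure actually reduced overhead energy rather than just shifting load.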
BIS Research conducted a webinar on Carbon Neutral Data Center Practices – BIS Research Inc.
Agenda:
To evaluate various emerging trends in the data center industry.
To analyze the initiatives taken and regulations implemented to increase sustainable practices.
To analyze the various types of technologies currently deployed.
To evaluate the major players in the ecosystem.
Speaker Profiles:
Name: JD Enright, Sr.
Designation: Chief Operating Officer
Company: TMGcore Inc.
Experienced and dedicated business professional with 29 years working with DOD, Multinational Private and Public sector organizations. Primary focus on developing and executing Strategic Step-Up Growth initiatives in Emerging Markets, and Technologies. Specialized leadership in developing improved business operations to include Financial and Operational efficiencies. Implementation of next generation HPC Platforms, Global Blockchain Development Strategies, Biotechnology, Viral Cell and Gene Therapies and Market Intelligence Assimilation for public applications.
BGPC: Energy-Efficient Parallel Computing Considering Both Computational and Cooling Power – Tarik Reza Toha
Parallel computing has become popular nowadays due to its computing efficiency and cost effectiveness. However, parallel computing systems demand a set of machines instead of a single machine, so they consume significantly more power than single-machine computing systems. Moreover, a noticeable amount of power is needed to maintain the optimum temperature in the working environment of the parallel system; this is generally known as the cooling power required by the system.
Although several power-saving parallel computing schemes have been proposed in the literature to minimize the computational power consumption of a parallel system, a scheme that considers both computational and cooling power consumption with low-cost resources is yet to be investigated. Therefore, in this thesis, we propose a low-cost power-saving scheme that simultaneously considers both computational and cooling power consumption. We design a machine learning framework, BGPC, which finds the number of machines to activate that is optimal, or at least near-optimal, in terms of total energy consumption, with minimal overhead.
To predict total energy, we need to predict response time, computational power, and cooling power. We fit different machine learning algorithms for these predictions using year-long collected training data. K-nearest neighbors, Support Vector Machine for regression, and Additive Regression using Random Forest show the highest accuracy for these predictions, respectively. We implement the BGPC framework in our testbed alongside two green methods and a static method. Our framework outperforms the green methods with a small degradation of QoS compared to the best QoS provider, that is, the static method.
Optimización energética de centros de datos aprovechando el conocimiento de l... – GreenLSI Team, LSI, UPM
Talk at the “Advances in Electronic Systems Engineering” seminar, within the M.Sc. in Electronic Systems Engineering (MISE), presenting the session on Energy Optimization in Data Centers.
Speech title: Energy efficiency beyond PUE: exploiting knowledge about application and resources
Abstract: The current techniques for data center energy optimization, based on efficiency metrics like PUE, pPUE, ERE, DCcE, etc., do not take into account the static and dynamic characteristics of the applications and resources (computing and cooling). However, the knowledge about the current state of the data center, the past history, the resource characteristics, and the characteristics of the jobs to be executed can be used very effectively to guide decision-making at all levels in the datacenter in order to minimize energy needs. For example, the allocation of jobs on the available machines, if done taking into account the most appropriate architecture for each job from the energetic point of view, and taking into account the type of jobs that will come later, can reduce energy needs by 30%.
Moreover, to achieve significant reductions in the energy consumption of state-of-the-art data centers (low PUE), a comprehensive, multi-level approach is becoming increasingly important, i.e., acting on different abstraction levels (scheduling and resource allocation, application, operating system, compilers and virtual machines, architecture, and technology) and at different scopes (chip, server, rack, room, and multi-room).
Date and Time: Tuesday, October 15, 2013, 16:00, room B-221
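The job-allocation example in the abstract above (placing each job on the architecture that suits it best energetically, rather than blindly) can be sketched as a toy placement policy. The architectures, job classes, and per-job energy estimates below are all hypothetical:

```python
# Toy energy-aware placement: pick, for each job class, the architecture
# with the lowest estimated energy, instead of a round-robin assignment.

# Estimated energy (kWh) per job class on each architecture (hypothetical).
energy = {
    "cpu-bound":  {"xeon": 1.2, "atom": 2.0, "gpu-node": 1.8},
    "data-bound": {"xeon": 1.5, "atom": 1.1, "gpu-node": 1.9},
    "simd-heavy": {"xeon": 1.6, "atom": 2.4, "gpu-node": 0.7},
}

def place(job_class):
    """Return the architecture with minimum estimated energy for this job."""
    table = energy[job_class]
    return min(table, key=table.get)

assignments = {jc: place(jc) for jc in energy}
print(assignments)
```

The talk's claimed savings come from richer versions of this idea, which also account for the jobs expected to arrive later; this sketch shows only the per-job greedy step.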
Energy-efficient data centers: Exploiting knowledge about application and resources – GreenLSI Team, LSI, UPM
Presentation by Jose M. Moya at the IEEE Region 8 SB & GOLD Congress (25 – 29 July, 2012).
The current techniques for data center energy optimization, based on efficiency metrics like PUE, pPUE, ERE, DCcE, etc., do not take into account the static and dynamic characteristics of the applications and resources (computing and cooling). However, knowledge about the current state of the data center, the past history, the resource characteristics, and the characteristics of the jobs to be executed can be used very effectively to guide decision-making at all levels in the datacenter in order to minimize energy needs. For example, the allocation of jobs on the available machines, if done taking into account the most appropriate architecture for each job from the energetic point of view, and taking into account the type of jobs that will come later, can reduce energy needs by 30%.
Moreover, to achieve significant reductions in the energy consumption of state-of-the-art data centers (low PUE), a comprehensive, multi-level approach is becoming increasingly important, i.e., acting on different abstraction levels (scheduling and resource allocation, application, operating system, compilers and virtual machines, architecture, and technology) and at different scopes (chip, server, rack, room, and multi-room).
Eficiencia Energética Más Allá Del PUE: Explotando el Conocimiento de la Apli... – GreenLSI Team, LSI, UPM
Invited talk by Jose M. Moya at Datacenter Dynamics Converged Madrid 2012.
Current data center energy-optimization techniques, based on efficiency metrics such as PUE, pPUE, ERE, DCcE, etc., do not take into account the static and dynamic characteristics of the applications and resources (computing and cooling). However, knowledge of the current state of the data center, its past history, the thermal characteristics of the resources, and the energy-demand characteristics of the jobs to be executed can be used very effectively to guide decision-making at all levels of the data center in order to minimize energy needs. For example, distributing jobs across the available machines, if done taking into account the most suitable architecture for each job from the energy point of view, and the type of jobs that will arrive later, can reduce energy needs by up to 30%.
Moreover, to achieve a significant reduction in the energy consumption of already-efficient data centers (low PUE), a global, multi-level approach is increasingly important, i.e., acting on the data center's different abstraction levels (scheduling and resource allocation, application, operating system, compilers and virtual machines, architecture, and technology) and in its different scopes (chip, server, rack, room, and multi-room).
Proactive and reactive thermal optimization techniques to improve energy efficiency – GreenLSI Team, LSI, UPM
Marina Zapater presents her work at the PICATA Workshop. This workshop is intended to introduce the diverse groups of researchers recently incorporated thanks to the PICATA programme of the Moncloa Campus, who are researching within and assessing its clusters.
The Program for International Talent Recruitment (PICATA) has focused on bringing in students and researchers from all over the world, in a determined effort towards internationalization and talent recruitment through different actions. The PICATA Programme offers scholarships for the development of PhD theses supervised by at least two practising doctors from the two associated universities, the UCM and the UPM, with the possibility of participation by doctors from the other associated institutions within the context of the Campus Moncloa, in these areas: Global Change and New Energies, Materials for the Future, Agri-food and Health, Innovative Medicine, and Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deployment Firewall and DBOM – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
3. Green
Marina Zapater | Going Green 3
Outline
• Why Data Centers (DC) in
this Workshop?
• The DC in next-generation
applications
• Energy consumption at the
Data Center
• Insight on optimization
strategies
• Conclusions
5. Green
US EPA 2007 Report to Congress on Server and Data Center Energy Efficiency
Why DC in this Workshop?
Motivation
6. Green
Motivation
• Energy consumption of data centers:
  – 1.3% of worldwide energy production in 2010
  – USA: 80 million MWh/year in 2011, about 1.5× the consumption of New York City
  – One data center can consume as much as 25,000 houses
• More than 43 million tons of CO2 emissions per year (2% of the worldwide total)
• More water consumption than many industries (paper, automotive, petrol, wood, or plastic)
Jonathan Koomey. 2011. Growth in Data center electricity use 2005 to 2010
7. Green
Motivation
José M. Moya | Madrid (Spain), July 27, 2012
• Total data center electricity use is expected to exceed 400 GWh/year by 2015.
• The energy required for cooling will continue to be at least as important as the energy required for computation.
• Energy optimization of future data centers will require a global and multi-disciplinary approach.
[Chart: world server installed base (thousands), 2000–2010, split into volume, mid-range, and high-end servers]
[Chart: electricity use (billion kWh/year), 2000–2010, split into infrastructure, communications, storage, and volume/mid-range/high-end servers]
• 5.75 million new servers per year
• 10% of servers sit unused (CO2 emissions similar to 6.5 million cars)
8. Green
What about urban DC?
• 50% of urban DCs have already reached, or will soon reach, the maximum capacity of the power grid
10. Green
Outline
• Why Data Centers (DC) in
this Workshop?
• The DC in next-generation
applications
• Energy consumption at the
Data Center
• Insight on optimization
strategies
• Our vision and future trends
11. Green
The DC in next-generation applications
• Traditional uses of Data Centers:
  – Webmail, web search, databases, social networking, distributed storage, high-performance computing (HPC), cloud computing
• Next-generation applications:
  – Population monitoring applications: e-Health, Ambient Assisted Living
  – Smart cities
• Next-generation applications generate huge amounts of data
• We need to store and analyze these data, and generate knowledge from them
12. Green
Global energy optimization
• Solution: GoingGreen!
• How: Global energy optimization strategies
– Proposal of a holistic energy optimization framework
– Minimizing overall power consumption
– Multi-level optimization: WBSN, Personal Servers and Data Centers
13. Green
Global energy optimization
• Executing part of the workload on the Personal Servers:
  – Classifying tasks depending on their demand
  – Resource management techniques based on fast runtime allocation algorithms executed on the Personal Servers
  – Executing some tasks on Personal Servers instead of forwarding the load to the DC
  – Up to 10% energy savings and 15% execution-time savings
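The offloading decision behind these savings can be sketched as a per-task energy comparison. This is a minimal sketch, not the runtime allocation algorithms of this work; the function name and all energy figures are hypothetical.

```python
# Hypothetical energy estimates (joules): `local_j` is the cost of running a
# task on the Personal Server; forwarding costs transmission plus DC execution.
def cheapest_placement(local_j, transmit_j, dc_exec_j):
    forward_j = transmit_j + dc_exec_j
    if local_j <= forward_j:
        return ("personal-server", local_j)
    return ("data-center", forward_j)

# A light signal-processing task is cheaper to keep local...
print(cheapest_placement(local_j=5.0, transmit_j=2.0, dc_exec_j=4.0))
# ...while a heavy analytics task is worth forwarding despite transmission.
print(cheapest_placement(local_j=50.0, transmit_j=2.0, dc_exec_j=20.0))
```

In practice the estimates would come from the task classification step above, but the comparison itself stays this simple.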
14. Green
Outline
• Why Data Centers (DC) in this
Workshop?
• The DC in next-generation
applications
• Energy consumption at the
Data Center
• Insight on optimization
strategies
• Conclusions
15. Green
Energy Consumption at the DC
What is a Data Center, really?
http://cesvima.upm.es
WORKLOAD → Scheduler → Resource Manager → Execution
16. Green
Energy Consumption at the DC
How does cooling work?
• Typical raised-floor air-cooled Data Center:
17. Green
Energy Consumption at the DC
Power consumption breakdown
• The major contributors to electricity costs are:
– Cooling (around 50%)
– Servers (around 30-40%)
• The most common metric to measure efficiency in
Data Centers is PUE (Power Usage Effectiveness)
18. Green
Power Usage Effectiveness (PUE)
• Average PUE ≈ 2
• State of the art: PUE ≈ 1.2
  – The important part is the IT energy consumption
  – Current work on energy-efficient data centers focuses on decreasing PUE
  – Decreasing the IT power (P_IT) does not decrease PUE, but it does have an impact on the electricity bill
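The PUE arithmetic behind these bullets can be illustrated in a few lines (all power figures are illustrative):

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

print(pue(2000.0, 1000.0))   # 2.0 -- the 'average' data center

# Halving IT power with unchanged cooling/infrastructure overhead lowers
# the electricity bill but *raises* PUE, so PUE alone is a poor proxy
# for total energy consumption.
overhead_kw = 1000.0
print(pue(overhead_kw + 500.0, 500.0))   # 3.0
```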
19. Green
“Traditional” approaches
What would Google do?
PUE = 1.2
20. Green
Research trends
Abstraction level
• Higher levels of abstraction bring more benefits
• Some areas have brought more benefits than others
Solutions proposed by the State of the Art
21. Green
Outline
• Why Data Centers (DC) in this
Workshop?
• The DC in next-generation
applications
• Energy consumption at the
Data Center
• Insight on optimization
strategies
• Conclusions
22. Green
Our approach
• A global strategy that allows multiple information sources to be used to coordinate decisions, in order to reduce the total energy consumption
• Use of knowledge about the energy-demand characteristics of the applications, and about the characteristics of the computing and cooling resources, to implement proactive optimization techniques
23. Green
Energy Optimization:
Holistic Approach
                Chip   Server   Rack   Room   Multi-room
Sched & alloc                           2      1
Application
OS/middleware
Compiler/VM      3       3
Architecture     4       4
Technology       5
24. Green
Resource Management at
the Room level
                Chip   Server   Rack   Room   Multi-room
Sched & alloc                           2      1
Application
OS/middleware
Compiler/VM      3       3
Architecture     4       4
Technology       5
25. Green
Resource Management at the Room level
Leveraging heterogeneity – IT perspective
• Use heterogeneity to minimize energy consumption from a static/dynamic point of view:
  – Static: finding the best data center set-up, given a number of heterogeneous machines
  – Dynamic: optimizing task allocation in the Resource Manager
• We show that the best solution implies a heterogeneous data center:
  – Most data centers are heterogeneous (several generations of computers)
  – 5% to 22% energy savings for the static solution
  – 24% to 47% energy savings for the dynamic solution
M. Zapater, J.M. Moya, J.L. Ayala. Leveraging Heterogeneity for Energy Minimization in Data Centers, CCGrid 2012
26. Green
Resource Management at the Room level
Leveraging heterogeneity – IT perspective
• Energy profiling of tasks from the SPEC CPU 2006 benchmark
• Use of MILP algorithms to schedule tasks on the servers where they consume the least energy
• Implemented in a real resource manager (SLURM)
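The allocation idea can be sketched in a few lines. This is only a stand-in for the actual MILP formulation and SLURM integration: with per-task, per-server energy profiles and no capacity or timing constraints, the energy-minimal placement reduces to picking the cheapest server class for each task. All task names, server names, and energy figures are invented.

```python
# energy[task][server]: invented per-task energy (J) on two server classes;
# with heterogeneity, no single server class is best for every task.
energy = {
    "bzip2": {"xeon": 120, "atom": 90},
    "mcf":   {"xeon": 200, "atom": 310},
    "gcc":   {"xeon": 150, "atom": 160},
}

def best_allocation(energy):
    """Without capacity constraints, placing each task on its cheapest
    server is globally optimal (the real MILP adds such constraints)."""
    alloc = {task: min(cost, key=cost.get) for task, cost in energy.items()}
    return alloc, sum(energy[t][s] for t, s in alloc.items())

alloc, total = best_allocation(energy)
print(alloc)   # {'bzip2': 'atom', 'mcf': 'xeon', 'gcc': 'xeon'}
print(total)   # 440
```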
27. Green
Resource Management at the Room level
IT + Cooling perspective
• Generating a thermal model for the data room:
  – Data center environmental monitoring to gather temperature, humidity, and differential pressure
  – Predicting server temperature and room temperature
• Optimal resource management that accounts for cooling and IT power:
  – Real environment with heterogeneous servers
  – SLURM resource manager
28. Green
Resource Management at
the Server level
                Chip   Server   Rack   Room   Multi-room
Sched & alloc            2              2      1
Application
OS/middleware
Compiler/VM      3       3
Architecture     4       4
Technology       5
29. Green
Resource Management at the Server level
Leakage-temperature tradeoffs - Cooling
• Exploring the leakage-temperature tradeoffs at the server level:
  – At higher temperatures, the CPU consumes more power due to leakage
  – Lowering the CPU temperature means raising the fan speed, which increases the server's cooling consumption
M. Zapater, J.L. Ayala, J.M. Moya, K. Vaidyanathan, K. Gross, and A. K. Coskun, "Leakage and temperature aware server control for improving energy efficiency in data centers", DATE 2013
30. Green
Resource Management at the Server level
Leakage-temperature tradeoffs - Cooling
• Implemented fan speed controllers that reduce server power consumption by 10%.
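A toy model shows why an optimal fan speed exists (all constants are invented for illustration; the cited controllers are model-based and more sophisticated): fan power grows with the cube of speed, leakage grows with temperature, and more airflow lowers temperature, so total server power has an interior minimum.

```python
def server_power(fan_rpm, dyn_w=80.0, ambient_c=25.0):
    fan_w = 2.0 * (fan_rpm / 1000.0) ** 3           # cubic fan law
    temp_c = ambient_c + 40.0 / (fan_rpm / 1000.0)  # more airflow -> cooler CPU
    leak_w = 5.0 * 1.05 ** (temp_c - 25.0)          # leakage rises with temperature
    return dyn_w + fan_w + leak_w

# Sweep fan speeds: too slow and leakage dominates, too fast and the fan does.
best_rpm = min(range(1000, 6001, 100), key=server_power)
print(best_rpm, round(server_power(best_rpm), 1))
```

A real controller would track this minimum at runtime from sensor data instead of sweeping a closed-form model.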
31. Green
Resource Management at
the Chip level
                Chip   Server   Rack   Room   Multi-room
Sched & alloc            2              2      1
Application
OS/middleware
Compiler/VM      3       3
Architecture     4       4
Technology       5
32. Green
Scheduling and resource allocation policies
in MPSoCs
A. Coskun, T. Rosing, K. Whisnant and K. Gross, "Static and dynamic temperature-aware scheduling for multiprocessor SoCs", IEEE Trans. Very Large Scale Integr. Syst., vol. 16, no. 9, pp. 1127-1140, 2008
[Excerpt from the cited paper]
Fig. 3. Distribution of thermal hot spots, with DPM (ILP). Fig. 4. Distribution of spatial gradients, with DPM (ILP).
Their static approach (Min-Th&Sp): while Min-Th reduces the high spatial differentials above 15 °C, a substantial increase in the spatial gradients above 10 °C is observed; in contrast, Min-Th&Sp achieves a lower and more balanced temperature distribution in the die.
UCSD – System Energy Efficiency Lab
33. Green
Scheduling and resource allocation policies in MPSoCs
• Energy characterization of applications makes it possible to define proactive scheduling and resource allocation policies that minimize hotspots
• Hotspot reduction allows raising the cooling temperature
• +1 °C means around 7% cooling energy savings
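The rule of thumb above can be turned into a back-of-the-envelope estimate. The 7%-per-degree figure is the slide's; the compounding form and the baseline figures are our assumption, for illustration only.

```python
def cooling_energy(base_kwh, setpoint_increase_c, saving_per_deg=0.07):
    """Apply a ~7% cooling-energy saving per +1 degree C, compounded."""
    return base_kwh * (1.0 - saving_per_deg) ** setpoint_increase_c

print(round(cooling_energy(1000.0, 1), 1))   # 930.0 kWh after +1 degree
print(round(cooling_energy(1000.0, 3), 1))   # 804.4 kWh after +3 degrees
```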
34. Green
Energy Optimization:
Holistic Approach
                Chip   Server   Rack   Room   Multi-room
Sched & alloc            2              2      1
Application
OS/middleware
Compiler/VM      3       3
Architecture     4       4
Technology       5
35. Green
JIT Compilation in Virtual Machines
• Virtual machines compile the applications into native code (JIT compilation) for performance reasons
• The optimizer is general-purpose and focused on performance optimization
36. Green
JIT compilation for
energy minimization
• Application-aware compiler:
  – Energy characterization of applications and transformations
  – Application-dependent optimizer
  – Global view of the data center workload
• Energy optimizer:
  – Currently, compilers for high-end processors are oriented to performance optimization
Front-end → Optimizer → Code generator → Back-end
37. Green
Energy saving potential for the
compiler (MPSoCs)
T. Simunic, G. de Micheli, L. Benini, and M. Hans. "Source code optimization and profiling of energy consumption in embedded systems," International Symposium on System Synthesis, pages 193-199, Sept. 2000
– 77% energy reduction in an MP3 decoder
Fei, Y., Ravi, S., Raghunathan, A., and Jha, N. K. 2004. Energy-optimizing source code transformations for OS-driven embedded software. In Proceedings of the International Conference on VLSI Design, 261-266.
– Up to 37.9% (mean 23.8%) energy savings in multiprocess applications running on Linux
38. Green
Global Management of
Low Power Modes
Chip Server Rack Room Multi-
room
Sched & alloc 2 2 1
Application
OS/middleware
Compiler/VM 3 3
architecture 4 4
technology 5
39. Green
Global Management of
Low-power modes (DVFS)
• DVFS (Dynamic Voltage and Frequency Scaling) is based on the following:
  – As supply voltage decreases, power decreases quadratically
  – But delay increases (performance decreases) only linearly
  – The maximum frequency also decreases linearly
• Currently, low-power modes, if used at all, are activated by inactivity of the server operating system
• To minimize energy consumption, changes between modes should be minimized
• On the other hand, workload knowledge allows low-power modes to be scheduled globally without any impact on performance
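The scaling bullets can be checked with an idealized model (constants and operating points are illustrative): dynamic power is P = C·V²·f, and the sustainable frequency scales roughly linearly with supply voltage.

```python
def dynamic_power(v, f_hz, c=1e-9):
    """Idealized dynamic power: P = C * V^2 * f."""
    return c * v * v * f_hz

def energy_per_task(v, f_hz, cycles=1e9):
    """Energy = power * runtime = C * V^2 * cycles (independent of f)."""
    return dynamic_power(v, f_hz) * (cycles / f_hz)

# Halving V (and, linearly, f) doubles runtime but cuts power ~8x,
# so the energy spent per task drops quadratically with voltage:
e_fast = energy_per_task(v=1.2, f_hz=2.0e9)
e_slow = energy_per_task(v=0.6, f_hz=1.0e9)
print(round(e_fast / e_slow, 6))   # 4.0
```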
40. Green
Global Management of
Low-power modes (DVFS)
• By using a thermal model, we can predict the behaviour of a workload under each power mode
• We can use resource management algorithms to change DVFS at runtime, adapting to our workload
41. Green
Temperature-aware
floorplanning of MPSoCs
                Chip   Server   Rack   Room   Multi-room
Sched & alloc            2              2      1
Application
OS/middleware
Compiler/VM      3       3
Architecture     4       4
Technology       5
43. Green
Potential energy savings
with floorplanning
– Up to 21 °C reduction in maximum temperature
– Mean: 12 °C reduction in maximum temperature
– Better results in the most critical examples
Y. Han, I. Koren, and C. A. Moritz. Temperature Aware Floorplanning. In Proc. of the Second Workshop on Temperature-Aware Computer Systems, June 2005
44. Green
Temperature-aware
floorplanning in 3D chips
• 3D chips are gaining interest due to:
  – Scalability: reduced area compared with the 2D equivalent
  – Performance: shorter wire length
  – Reliability: less wiring
• Drawback:
  – A huge increase in hotspots compared with equivalent 2D designs
45. Green
Temperature-aware
floorplanning in 3D chips
• Up to 30 °C reduction per layer in a 3D chip with 4 layers and 48 cores
46. Green
Outline
• Why Data Centers (DC) in this
Workshop?
• The DC in next-generation
applications
• Energy consumption at the
Data Center
• Insight on optimization
strategies
• Conclusions
47. Green
There is still much more
to be done
• Smart Grids
  – Consume energy when everybody else does not
  – Decrease energy consumption when everybody else is consuming
• Reducing the electricity bill
  – Variable electricity rates
  – Reactive power coefficient
  – Peak energy demand
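These billing levers can be made concrete with a toy tariff: time-of-use energy rates plus a charge on peak demand. The tariff structure and all prices are invented; the point is only that the same kWh costs less when the load is shifted off-peak and the peak is flattened.

```python
def bill_eur(hourly_kw, peak_rate=0.20, offpeak_rate=0.10,
             peak_hours=range(8, 20), demand_charge_per_kw=12.0):
    """Energy billed at time-of-use rates, plus a charge on the peak kW."""
    energy = sum(kw * (peak_rate if h in peak_hours else offpeak_rate)
                 for h, kw in enumerate(hourly_kw))
    return energy + demand_charge_per_kw * max(hourly_kw)

flat  = [100.0] * 24                              # 2400 kWh, steady load
peaky = [50.0] * 8 + [150.0] * 12 + [50.0] * 4    # same 2400 kWh, daytime peak
print(bill_eur(flat))    # 1560.0
print(bill_eur(peaky))   # 2220.0 -- same energy, pricier shape
```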
48. Green
Conclusions
• Reducing PUE is not the same as reducing energy consumption
  – IT energy consumption dominates in state-of-the-art data centers
• Knowledge of the applications and resources can be used effectively to define proactive policies that reduce the total energy consumption
  – At different levels
  – In different scopes
  – Taking cooling and computation into account at the same time
• Proper management of knowledge about the data center's thermal behavior can reduce reliability issues
• Reducing energy consumption is not the same as reducing the electricity bill
49. Green
Thank you for your attention
Marina Zapater
marina@die.upm.es
http://greenlsi.die.upm.es
(+34) 91 549 57 00 x-4227
ETSI de Telecomunicación, B105
Avenida Complutense, 30
Madrid 28040, Spain
Thanks to our collaborators: