(1) Datacenters are facing increasing demands that many current facilities cannot meet, requiring transformation through consolidation, virtualization, and improved energy efficiency and availability.
(2) Datacenter designs are evolving from small, isolated IT islands to larger, standardized facilities with improved reliability through redundant critical systems and failover capabilities.
(3) Next generation datacenter designs focus on high power density, energy efficiency through technologies like containerization, and rapid deployment in multiple locations for business flexibility.
The document discusses the utility and limitations of PUE (Power Usage Effectiveness) as a metric for datacenter efficiency. While PUE is a widely used high-level metric, it does not provide enough information on its own to optimize efficiency. To enable effective efficiency actions, more detailed energy monitoring data is needed, including power consumption at the individual IT device level trended over time. Gathering additional operational data beyond just PUE can provide insights to reduce energy waste throughout the entire datacenter system.
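The point about device-level trending can be made concrete with a small sketch. All readings and the overhead figure below are hypothetical illustrations, not data from the document:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Device-level power readings (kW) over two monitoring intervals:
readings = {
    "server-01":  [0.45, 0.47],
    "server-02":  [0.50, 0.21],  # load dropped: a consolidation candidate
    "storage-01": [0.80, 0.79],
}

it_load = sum(samples[-1] for samples in readings.values())  # latest interval
facility_total = it_load + 1.2  # assume 1.2 kW of cooling/distribution overhead
print(round(pue(facility_total, it_load), 2))  # 1.82
```

The facility-level PUE is a single ratio; the per-device trend is what identifies where energy is actually being wasted, which is the document's argument for monitoring beyond PUE.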
How green standards are changing data center design and operations – Schneider Electric
An effort is underway to harmonize certain energy-efficiency standards. Could global standardization ultimately diminish the technical effectiveness of such standards? Which will emerge as the de facto standards? This session will explore these questions, as well as current developments in data center efficiency and sustainability guidelines.
The wide range of processes within the successful business, from planning to strategic implementation, requires accurate and ready information throughout. The cast of personnel involved across the business operation requires widely varying types of information to perform their assignments. In all, the successful business requires a powerful Business Intelligence technology.
Discussion covers the constitution and requirements of the effective Corporate Information Factory (CIF) Architecture. The Data Warehouse component of the CIF Architecture must be a flexible and reliable store of company information that allows a high degree of differentiation in data selection, modeling and analysis.
Next, the ETL processes — extract, transform and load — are responsible for accurately populating the Data Warehouse with information and enabling the use of this data. Again, differentiating methodologies, along with validating performance testing, must be accommodated.
Third, Business Intelligence tools for multi-dimensional analysis, budgeting and forecasting, efficient reporting, and data mining for enhanced insight assure the proper information is accessed for each specific business process. Developing and implementing the CIF Architecture involves definition of short-, medium-, and long-term objectives for the system as well as definition of the elements involved.
When a company implements a Business Intelligence technology, it is important that risk factors be identified and evaluated, including the scope and degree of difficulty of information integration, speed and adaptability, utility and practicality for the employee, and long-term effectiveness.
Schneider Electric Business Intelligence services are based on the company’s vast experience in helping organizations define their BI policies and develop their BI Architecture. It offers a productive competence center for consulting support, a proven product portfolio that allows efficient and effective development of specific BI solutions, and highly reliable technical assistance for specific needs or longer term. Several successful Business Intelligence technology solutions implemented by Schneider Electric are described.
This document discusses the need for green data centers and provides strategies for making data centers more energy efficient. It notes that while many organizations say they are green, few have specific targets or programs to reduce their carbon footprint. As data center electricity consumption and costs rise, running out of power capacity, cooling capacity, and physical space are major concerns. The document then provides questions to assess a data center's energy efficiency in terms of facilities, IT equipment, and utilization rates. It recommends strategies like optimizing infrastructure utilization and choosing more efficient hardware and cooling options. The goal is to improve the data center infrastructure efficiency metric and lower costs by reducing redundant, underutilized resources.
Virtualization and Cloud Computing: Optimized Power, Cooling, and Management ... – Schneider Electric
IT virtualization, the engine behind cloud computing, can have significant consequences on the data center physical infrastructure (DCPI). Higher power densities that often result can challenge the cooling capabilities of an existing system. Reduced overall energy consumption that typically results from physical server consolidation may actually worsen the data center’s power usage effectiveness (PUE). Dynamic loads that vary in time and location may heighten the risk of downtime if rack-level power and cooling health are not understood and considered. Finally, the fault-tolerant nature of a highly virtualized environment could raise questions about the level of redundancy required in the physical infrastructure. These particular effects of virtualization are discussed and possible solutions or methods for dealing with them are offered.
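The abstract's claim that consolidation can worsen PUE follows from simple arithmetic: if facility overhead (cooling, distribution losses) stays roughly fixed while the IT load shrinks, the ratio rises even though total energy falls. A minimal sketch with assumed numbers:

```python
OVERHEAD_KW = 50.0  # cooling and distribution losses, assumed roughly fixed

def pue(it_kw: float) -> float:
    # PUE = (IT load + fixed overhead) / IT load
    return (it_kw + OVERHEAD_KW) / it_kw

it_before = 100.0  # IT load before server consolidation
it_after = 60.0    # IT load after virtualizing onto fewer hosts

print(pue(it_before))           # 1.5
print(round(pue(it_after), 2))  # 1.83 -- PUE got worse...
print(it_before + OVERHEAD_KW, it_after + OVERHEAD_KW)  # ...yet total kW fell: 150.0 110.0
```

This is why the paper warns that a lower PUE is not the same thing as lower energy use: consolidation saves energy overall while making the headline metric look worse.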
The document discusses real-time monitoring tools for data centers and their value over traditional point-in-time measurements. It highlights results from surveys showing energy efficiency and monitoring are top concerns. Real-time tools provide continuous monitoring of metrics like temperature, humidity, power usage and IT load to ensure optimal performance, efficiency and availability. The presentation concludes with a live demo and emphasizes that the future of data center monitoring involves non-invasive, rack-level tools for ongoing assessment and improvement.
The document discusses how green a data center is from different perspectives such as being environmentally conscious, reducing costs through efficiency, using renewable energy sources, and lowering carbon footprint, and provides examples of data on power consumption, cooling waste, and challenges faced by data centers. It also includes charts showing common problems in data centers related to power, heat, and space as well as inventory of typical IT equipment in a data center rack.
The document discusses data center efficiency and focuses on Google's approach. It covers how Google builds its own custom data centers rather than relying on standard industry equipment and practices. It also describes how Google recommends five methods for reducing power consumption, which include measuring PUE, managing airflow, adjusting thermostats, using free cooling, and optimizing power distribution. The document notes that around 2% of global greenhouse gas emissions result from computing activities, with data centers accounting for 15% of that and large internet data centers making up 5%.
Power Strategies for Data Center Efficiency – Identifying Cost Reduction Opportunities
In a survey conducted by the Uptime Institute, enterprise data center managers responded that 42% of them expected to run out of power capacity within 12-24 months and another 23% claimed that they would run out of power capacity in 24-60 months. Greater attention to energy efficiency and consumption is critical.
To view the recorded webinar presentation, please visit http://www.42u.com/power-strategies-webinar.htm
It is recognized within the industry that most data centers are not energy efficient. Traditional data center designs do not fully address optimizing the data center. While data center managers struggle with uptime and reliability, business executives are looking for ways to reduce capital and operational expenses to improve the bottom line. Green initiatives are also in place to not only save money but to be environmentally responsible. New green data center designs (based on hot and cold air containment) have started to become more popular. Containment strategies and air flow optimization are recognized as a way to achieve both technical and business objectives. By separating hot and cold air within the data center, capital and operational expenses can be reduced for the business and a more stable and predictable environment can be achieved for the IT organization.
Electricity usage costs have become an increasing fraction of the total cost of ownership (TCO) for data centers. It is possible to dramatically reduce the electrical consumption of typical data centers through appropriate design of the data center physical infrastructure and through the design of the IT architecture. This paper explains how to quantify the electricity savings and provides examples of methods that can greatly reduce electrical power consumption.
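A back-of-envelope way to quantify such savings (the load, PUE values, and tariff below are illustrative assumptions, not figures from the paper):

```python
HOURS_PER_YEAR = 8760
TARIFF_USD_PER_KWH = 0.10  # assumed flat electricity rate

def annual_cost_usd(it_load_kw: float, pue: float) -> float:
    """Yearly electricity cost: IT load scaled up by PUE, priced per kWh."""
    return it_load_kw * pue * HOURS_PER_YEAR * TARIFF_USD_PER_KWH

baseline = annual_cost_usd(500, 2.0)  # legacy facility at PUE 2.0
improved = annual_cost_usd(500, 1.4)  # after infrastructure redesign
print(f"savings: ${baseline - improved:,.0f}/year")  # savings: $262,800/year
```

Even a modest PUE improvement compounds over a year of continuous operation, which is the paper's core argument for treating electricity as a first-class TCO line item.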
Energy solutions for federal facilities: How to harness sustainable savings ... – Schneider Electric
Looming mandates. Energy insecurity. Shrinking budgets. Discover solutions available today to help you tackle your energy dilemma. Take a 30,000-foot tour of solutions to increase energy efficiency and reliability, maximize energy ROI, and enhance mission assurance. Get tips for navigating the event to make the most of your Xperience.
The document discusses optimizing facility efficiency in federal mission-critical environments. It recommends taking a long-term approach to planning by understanding organizational goals and bridging IT and facilities. Key steps include assessing existing facilities, selecting efficient equipment, right-sizing capacity, and establishing monitoring, maintenance, and benchmarking programs to ensure optimization over time. Regular maintenance is emphasized as critical for sustained efficiency gains and reliability.
StruxureWare is Schneider Electric's DCIM software suite that integrates various data center management applications. It provides visibility and control of infrastructure assets from the building level down to the server. The software suite monitors and manages key metrics like power, cooling capacity, and IT asset usage. It helps optimize data center performance and efficiency through features like real-time monitoring, capacity planning, and energy analytics. Schneider Electric is a leading DCIM provider due to its comprehensive product portfolio, expertise, and ability to deliver an end-to-end solution for data center management.
Green IT in the Boardroom, Jose Iglesias, Symantec – IT Executive
Greening the Enterprise conference,
IT Executive, 25 November 2009
Green IT in the Boardroom
Speaker: Jose Iglesias (VP of Global Solutions, Symantec)
At the same time that data centers are running short on space and power, IT organizations are also finding themselves dealing with skyrocketing amounts of information. But such challenges often have a way of presenting new opportunities. Today, Green IT is re-shaping the data center and bringing IT to the forefront in the boardroom. Energy efficiency is not just a set of quick fixes like virtualizing everything or focusing on new hardware – but rather a fundamental shift in how to approach the problem from the start by leveraging an existing investment in software and planning for how to save “green” while “going green” year over year. It covers the entire IT organization including the endpoints, servers, storage and communications. Jose will cover the practical issues of implementing green IT technologies into businesses and what the consequences are locally and across the globe.
Metering Energy Consumption in Data Centres - Michael Rudgyard, GoodCampus
- Concurrent Thinking is a UK startup spun out of an established systems integrator that developed technology for managing high performance computing resources. Their product allows for comprehensive monitoring and management of data center infrastructure and IT equipment.
- Their system monitors environmental conditions, server health, operating systems, virtual machines, and power usage. It aims to optimize data center efficiency through active power management and identifying underutilized IT equipment.
- By improving PUE, monitoring IT usage, rightsizing servers, virtualization, and replacing old equipment, their software helps customers realize energy savings from multiple sources and reduce costs by optimizing combined facilities and IT efficiency.
This document discusses data center air flow management solutions from Wright Line. It outlines industry trends showing rising energy consumption and costs from data centers. Common problems in data centers include outdated designs and lack of airflow management. Wright Line strategies and products aim to contain hot and cold air streams to improve separation and efficiency. These include aisle containment solutions to reduce wasted cooling and capture higher return air temperatures for increased cooling capacity and chiller efficiency.
Retrofit, build, or go cloud/colo? Choosing your best direction – Schneider Electric
When faced with the decision of upgrading an existing data center, building a new data center or leasing space in a third party colocation data center, there are both quantitative and qualitative differences to consider. This session reviews several key factors to help make a sound decision including a business’ sensitivity to cash flow, deployment timeframe, data center life expectancy, regulatory requirements, and other strategic factors.
The document discusses information technology (IT) and its role in the smart grid. It describes how IT systems process large amounts of data and enable real-time communication and sharing of information. The development of the internet and ubiquitous connectivity has led to new conveniences but also creates challenges around storing vast amounts of "Big Data." The document outlines strategies that data centers and IT professionals can use to improve energy efficiency, such as storage consolidation, virtualization, optimized hardware, and energy-efficient software development practices. Implementing these strategies can significantly reduce data center power consumption and energy costs.
This document provides an overview of Google's data center architecture. It discusses Google's distributed computing approach which uses thousands of commodity servers clustered together. It describes Google's layered architecture with abstraction between layers. The computing infrastructure uses modular shipping containers to house servers connected by Ethernet switches. The software infrastructure includes tools like Google File System for storage, MapReduce for distributed computing, and BigTable for a high-performance database. The document presents an overview of how Google designs its data centers and builds custom software platforms to manage the large computing infrastructure.
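The MapReduce model named in that summary can be sketched in miniature. This single-process toy (not Google's implementation) shows the map, shuffle, and reduce phases for a word count:

```python
from collections import defaultdict

def map_phase(doc):
    # Emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Group intermediate values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Combine each key's values into a final count.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick fox", "the lazy dog"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'the': 2, 'quick': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In the real system each phase runs in parallel across thousands of the commodity servers described above, with the shuffle moving data between machines; the programming model, however, is exactly this simple.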
This presentation shows that Data Center Infrastructure Management (DCIM) software is to a data center manager what ERP software is to a VP of Manufacturing. It is the second presentation in a three-part series from GreenField Software on the subject: DCIM for High Availability.
DCIM software charts out relationship maps for assets by identifying the dependencies among them. Threshold-based alerts on critical parameters, combined with impact analysis of move-add-change activity, mitigate the risk of data center failures.
GreenField Software's mission is to help data centers control capital expenditures, reduce operating expenses, and mitigate the risks of data center failures. Besides DCIM software, GFS offers data center advisory services in the areas of best practices, capacity planning, energy efficiency, and business continuity.
Electricity use and efficiency of servers and data centers were reviewed. Recent data show that in 2005, servers accounted for 1.2% of total US electricity use, and data centers as a whole, including servers, networking, and cooling, accounted for 1.5%. Total electricity use of servers and data centers was expected to increase by 40-76% by 2010 based on growth forecasts at the time. Opportunities for improving efficiency include whole-system redesign, aligning incentives, virtualization, consolidation, and new, more efficient server designs such as Intel's Eco-Rack, which can provide 16-18% savings over standard racks.
The document discusses APC by Schneider Electric solutions for data centers and IT environments. It introduces their latest SMB solution called the Netshelter CX, which is a soundproofed "server room in a box" available in three sizes. It also discusses how cloud computing impacts data center power and infrastructure, and how APC can help through services like efficiency assessments and claims of efficiency entitlement. The document promotes APC's software solutions for data center management and optimization through virtual machine migration and communication between physical and virtual infrastructure systems.
The document discusses data center infrastructure management (DCIM) solutions. It defines DCIM as systems that collect and manage data about a data center's assets, resource use, and operational status throughout the lifecycle to help optimize performance and meet business goals. The document outlines challenges in data center management like availability, efficiency, costs, and changing needs. It then describes Schneider Electric's DCIM solutions and tools that provide integrated management of physical infrastructure, IT systems, and business processes to address these challenges.
The Productization of the Data Center: with the rapid evolution of the data center service provider segment, the concept of efficiency has expanded to embrace not only energy but a multitude of elements, including capital, operations, and the useful life of the facility. In this presentation, Chris Crosby, CEO of Compass Datacenters, will demonstrate how the historical development of related industries dictates that productization is the required methodology to deliver these expanded efficiency requirements to an increasingly sophisticated customer base.
Data center power and cooling infrastructure worldwide wastes more than 60,000,000 megawatt-hours per year of electricity that does no useful work powering IT equipment. This represents an enormous financial burden on industry and is a significant public policy environmental issue. This paper describes the principles of a new, commercially available data center architecture that can be implemented today to dramatically improve the electrical efficiency of data centers.
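To put the 60,000,000 MWh figure in financial terms (the electricity rate below is an assumption for illustration, not a figure from the paper):

```python
WASTED_MWH_PER_YEAR = 60_000_000   # figure cited in the abstract above
TARIFF_USD_PER_KWH = 0.10          # assumed average rate

wasted_kwh = WASTED_MWH_PER_YEAR * 1_000  # 1 MWh = 1,000 kWh
annual_waste_usd = wasted_kwh * TARIFF_USD_PER_KWH
print(f"~${annual_waste_usd / 1e9:.0f} billion per year")  # ~$6 billion per year
```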
The document discusses HP's StorageWorks solutions for bridging the gap between data explosion and storage infrastructure. Some key points:
1. Data has become critical for businesses and is growing exponentially, posing challenges for storage.
2. HP StorageWorks provides integrated storage solutions including blades, extreme capacity systems, virtualized storage, and data protection/archiving to optimize storage infrastructure.
3. The solutions aim to make infrastructure change-ready, lower costs through features like thin provisioning and data reduction, and provide a trusted partner to businesses.
Case Study - HP's Own Data Centre Transformation – HPDutchWorld
HP underwent a large-scale data center transformation project to consolidate over 85 global data centers into six new next-generation data centers located in three zones across the US. This consolidation aimed to standardize HP's technology environment, retire legacy applications, build state-of-the-art infrastructure, automate monitoring and control, improve business continuity, and significantly reduce IT costs. The new data centers employ technologies like Dynamic Smart Cooling and are designed for high availability, disaster recovery, and rapid service delivery.
Power Strategies for Data Center Efficiency – Identifying Cost Reduction Opportunities
In a survey conducted by the Uptime Institute, enterprise data center managers responded that 42% of them expected to run out of power capacity within 12-24 months and another 23% claimed that they would run out of power capacity in 24-60 months. Greater attention to energy efficiency and consumption is critical.
To view the recorded webinar presentation, please visit http://www.42u.com/power-strategies-webinar.htm
It is recognized within the industry that most data centers are not energy efficient. Traditional data center designs do not fully address optimizing the data center. While data center managers struggle with uptime and reliability, business executives are looking for ways to reduce capital and operational expenses to improve the bottom line. Green initiatives are also in place to not only save money but to be environmentally responsible. New green data center designs (based on hot and cold air containment) have started to become more popular. Containment strategies and air flow optimization are recognized as a way to achieve both technical and business objectives. By separating hot and cold air within the data center, capital and operational expenses can be reduced for the business and a more stable and predictable environment can be achieved for the IT organization.
Electricity usage costs have become an increasing fraction of the total cost of ownership (TCO) for data centers. It is possible to dramatically reduce the electrical consumption of typical data centers through appropriate design of the data center physical infrastructure and through the design of the IT architecture. This paper explains how to quantify the electricity savings and provides examples of methods that can greatly reduce electrical power consumption.
Energy solutions for federal facilities : How to harness sustainable savings ...Schneider Electric
Looming Mandates. Energy insecurity. Shrinking budgets. Discover solutions available today to help you tackle your energy dilemma. Take a 30K foot tour of solutions to increase energy efficiency and reliability, maximize energy ROI and enhance mission assurance. Get tips for navigating the event to make the most of your Xperience.
The document discusses optimizing facility efficiency in federal mission-critical environments. It recommends taking a long-term approach to planning by understanding organizational goals and bridging IT and facilities. Key steps include assessing existing facilities, selecting efficient equipment, right-sizing capacity, and establishing monitoring, maintenance, and benchmarking programs to ensure optimization over time. Regular maintenance is emphasized as critical for sustained efficiency gains and reliability.
StruxureWare is Schneider Electric's DCIM software suite that integrates various data center management applications. It provides visibility and control of infrastructure assets from the building level down to the server. The software suite monitors and manages key metrics like power, cooling capacity, and IT asset usage. It helps optimize data center performance and efficiency through features like real-time monitoring, capacity planning, and energy analytics. Schneider Electric is a leading DCIM provider due to its comprehensive product portfolio, expertise, and ability to deliver an end-to-end solution for data center management.
Green IT in the boardroom, Jose Iglesias SymantecIT Executive
Conferentie Greening the Enterprise,
IT Executive, 25 november 2009
Green IT in the Boardroom
Spreker: Jose Iglesias (VP of Global Solutions, Symantec)
At the same time that data centers are running short on space and power, IT organizations are also finding themselves dealing with skyrocketing amounts of information. But such challenges often have a way of presenting new opportunities. Today, Green IT is re-shaping the data center and bringing IT to the forefront in the boardroom. Energy efficiency is not just a set of quick fixes like virtualizing everything or focusing on new hardware – but rather a fundamental shift in how to approach the problem from the start by leveraging an existing investment in software and planning for how to save “green” while “going green” year over year. It covers the entire IT organization including the endpoints, servers, storage and communications. Jose will cover the practical issues of implementing green IT technologies into businesses and what the consequences are locally and across the globe.
Metering Energy Consumption in Data Centres - Michael RudgyardGoodCampus
- Concurrent Thinking is a UK startup spun out of an established systems integrator that developed technology for managing high performance computing resources. Their product allows for comprehensive monitoring and management of data center infrastructure and IT equipment.
- Their system monitors environmental conditions, server health, operating systems, virtual machines, and power usage. It aims to optimize data center efficiency through active power management and identifying underutilized IT equipment.
- By improving PUE, monitoring IT usage, rightsizing servers, virtualization, and replacing old equipment, their software helps customers realize energy savings from multiple sources and reduce costs by optimizing combined facilities and IT efficiency.
This document discusses data center air flow management solutions from Wright Line. It outlines industry trends showing rising energy consumption and costs from data centers. Common problems in data centers include outdated designs and lack of airflow management. Wright Line strategies and products aim to contain hot and cold air streams to improve separation and efficiency. These include aisle containment solutions to reduce wasted cooling and capture higher return air temperatures for increased cooling capacity and chiller efficiency.
Retrofit, build, or go cloud/colo? Choosing your best direction (Schneider Electric)
When faced with the decision of upgrading an existing data center, building a new data center or leasing space in a third party colocation data center, there are both quantitative and qualitative differences to consider. This session reviews several key factors to help make a sound decision including a business’ sensitivity to cash flow, deployment timeframe, data center life expectancy, regulatory requirements, and other strategic factors.
The document discusses information technology (IT) and its role in the smart grid. It describes how IT systems process large amounts of data and enable real-time communication and sharing of information. The development of the internet and ubiquitous connectivity has led to new conveniences but also creates challenges around storing vast amounts of "Big Data." The document outlines strategies that data centers and IT professionals can use to improve energy efficiency, such as storage consolidation, virtualization, optimized hardware, and energy-efficient software development practices. Implementing these strategies can significantly reduce data center power consumption and energy costs.
This document provides an overview of Google's data center architecture. It discusses Google's distributed computing approach which uses thousands of commodity servers clustered together. It describes Google's layered architecture with abstraction between layers. The computing infrastructure uses modular shipping containers to house servers connected by Ethernet switches. The software infrastructure includes tools like Google File System for storage, MapReduce for distributed computing, and BigTable for a high-performance database. The document presents an overview of how Google designs its data centers and builds custom software platforms to manage the large computing infrastructure.
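The MapReduce model mentioned above can be illustrated with a minimal word-count sketch — a toy in-process version, not Google's distributed implementation:

```python
# Minimal word count in the MapReduce style: map emits (word, 1) pairs,
# a shuffle groups pairs by key, and reduce sums the counts per key.
from collections import defaultdict

def map_phase(document: str):
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["the data center", "the cluster runs the data center"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 3
```

In the real system the map and reduce workers run on thousands of commodity machines, with the framework handling partitioning, scheduling, and failure recovery.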
This presentation shows that Data Center Infrastructure Management (DCIM) software is to a data center manager what ERP software is to a VP of Manufacturing. This is the second presentation in a three-part series from GreenField Software on the subject: DCIM for High Availability.
DCIM Software charts out the relationship maps for assets by identifying various dependencies among them. Threshold-based alerts on critical parameters, combined with impact analysis of Move-Add-Change, mitigates risks of DC failures.
GreenField Software’s mission is to help data centers control capital expenditures, reduce operating expenses, and mitigate the risks of data center failures. Besides DCIM Software, GFS offers Data Center Advisory Services in the areas of best practices, capacity planning, energy efficiency and business continuity of data centers.
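The threshold-based alerting that DCIM tools of this kind provide can be sketched as follows; the metric names and limits here are illustrative assumptions, not GreenField's actual parameters:

```python
# Sketch of threshold-based alerting: compare each sensor reading against
# configured limits and emit an alert on any breach. Limits are illustrative.

THRESHOLDS = {
    "inlet_temp_c": (18.0, 27.0),   # roughly the ASHRAE recommended envelope
    "humidity_pct": (20.0, 80.0),
    "rack_power_kw": (0.0, 8.0),
}

def check(readings: dict) -> list:
    alerts = []
    for metric, value in readings.items():
        low, high = THRESHOLDS[metric]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {metric}={value} outside [{low}, {high}]")
    return alerts

print(check({"inlet_temp_c": 29.5, "humidity_pct": 45.0, "rack_power_kw": 6.2}))
```

A production DCIM system layers impact analysis on top of this — mapping which assets depend on the breached component — which is what the relationship maps above enable.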
Electricity use and efficiency of servers and data centers was reviewed. Recent data shows that in 2005, servers accounted for 1.2% of total US electricity use and data centers including servers, networking and cooling accounted for 1.5% of US electricity use. Total electricity use of servers and data centers is expected to increase by 40-76% by 2010 based on current growth forecasts. Opportunities for improving efficiency include whole system redesign, aligning incentives, virtualization, consolidation, and new more efficient server designs like Intel's Eco-Rack which can provide 16-18% savings over standard racks.
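The report's growth figures can be turned into a quick back-of-the-envelope projection; holding total US consumption fixed is a simplifying assumption of this sketch, not a claim from the report:

```python
# If servers and data centers used 1.5% of total US electricity in 2005
# and demand grows 40-76% by 2010, the projected 2010 share (with total
# US consumption held fixed) spans:

share_2005 = 1.5  # percent of total US electricity
low, high = share_2005 * 1.40, share_2005 * 1.76
print(f"Projected 2010 share: {low:.1f}%-{high:.1f}%")  # 2.1%-2.6%
```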
The document discusses APC by Schneider Electric solutions for data centers and IT environments. It introduces their latest SMB solution called the Netshelter CX, which is a soundproofed "server room in a box" available in three sizes. It also discusses how cloud computing impacts data center power and infrastructure, and how APC can help through services like efficiency assessments and claims of efficiency entitlement. The document promotes APC's software solutions for data center management and optimization through virtual machine migration and communication between physical and virtual infrastructure systems.
The document discusses data center infrastructure management (DCIM) solutions. It defines DCIM as systems that collect and manage data about a data center's assets, resource use, and operational status throughout the lifecycle to help optimize performance and meet business goals. The document outlines challenges in data center management like availability, efficiency, costs, and changing needs. It then describes Schneider Electric's DCIM solutions and tools that provide integrated management of physical infrastructure, IT systems, and business processes to address these challenges.
The Productization of the Data Center-- With the rapid evolution of the data center service provider segment, the concept of efficiency has expanded to embrace not only energy, but a multitude of elements including capital, operations, and the useful life of the facility as well. In this presentation, Chris Crosby, CEO of Compass Datacenters will demonstrate how the historical development of related industries dictates that productization is the required methodology to deliver these expanded efficiency requirements to an increasingly sophisticated customer base.
Data center power and cooling infrastructure worldwide wastes more than 60,000,000 megawatt-hours per year of electricity that does no useful work powering IT equipment. This represents an enormous financial burden on industry, and is a significant public policy environmental issue. This paper describes the principles of a new, commercially available data center architecture that can be implemented today to dramatically improve the electrical efficiency of data centers.
The document discusses HP's StorageWorks solutions for bridging the gap between data explosion and storage infrastructure. Some key points:
1. Data has become critical for businesses and is growing exponentially, posing challenges for storage.
2. HP StorageWorks provides integrated storage solutions including blades, extreme capacity systems, virtualized storage, and data protection/archiving to optimize storage infrastructure.
3. The solutions aim to make infrastructure change-ready, lower costs through features like thin provisioning and data reduction, and provide a trusted partner to businesses.
Case Study - HP's Own Data Centre Transformation (HPDutchWorld)
HP underwent a large-scale data center transformation project to consolidate over 85 global data centers into six new next-generation data centers located in three zones across the US. This consolidation aimed to standardize HP's technology environment, retire legacy applications, build state-of-the-art infrastructure, automate monitoring and control, improve business continuity, and significantly reduce IT costs. The new data centers employ technologies like Dynamic Smart Cooling and are designed for high availability, disaster recovery, and rapid service delivery.
Oracle - Next Generation Datacenter - Alan Hartwell (HPDutchWorld)
The document discusses next generation data center solutions from Oracle and HP. It highlights the need for businesses to have agile infrastructure that can quickly adapt to changing needs. Oracle and HP are introducing new products like the Exadata Storage Server and HP Oracle Database Machine that promise unprecedented performance, scalability, and availability for data warehousing. These solutions are optimized to handle the exponential growth of data and claim to be at least 10 times faster than conventional data warehouse deployments.
The document discusses the importance of physical infrastructure management in data centers. It notes that data center management involves monitoring various interconnected systems, including building management, IT systems, security systems, and power infrastructure. Effective data center management requires collecting and integrating data from monitoring devices across these different domains to minimize downtime, maximize power efficiency, enable fast decision making, and predict and prevent problems.
The document describes a personal cloud computing service that offers security through continuous backup and versioning of computer files, control through a shared workspace with integrated web services, and mobility through access to files even when offline. It is a white label service designed for easy use and sharing across any mobile device. The document also discusses trends in mobile devices and cloud computing, noting growing adoption of both and importance for productivity and responsiveness. It presents the company providing the cloud computing service as having been founded in 2005 and having received funding from investors including Intel, Cisco, and others.
Archstone Consulting recommends targeting a company's IT service delivery model to reduce IT costs more effectively than solely focusing on technology assets. A robust IT service delivery model has four key components: governance, organization, operational processes, and performance management. Archstone's rapid assessment identifies improvement opportunities within 5-7 days through workshops and a maturity model analysis to understand gaps and savings potential. The assessment delivers a comparative spend analysis and recommendations.
Understanding the Value of the Cloud - Centare Lunch & Learn - June 2, 2011 (Eric D. Boyd)
The document discusses the value of cloud computing by explaining how it allows companies to pay for computing resources on demand rather than owning their own IT infrastructure, highlighting how cloud computing can help companies reduce costs, improve agility, and focus on their core business activities rather than IT management. Several examples are provided showing how different types of applications could be deployed on the Windows Azure cloud platform and the potential cost savings compared to traditional on-premises infrastructure.
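The on-demand cost argument can be sketched with hypothetical numbers; none of these prices come from the presentation:

```python
# Sketch: a workload that peaks a few hours a day can be cheaper on hourly
# cloud pricing than on owned hardware sized for the peak. All prices are
# illustrative assumptions.

server_capex = 8000.0          # owned server, amortized over 3 years
years = 3
on_prem_monthly = server_capex / (years * 12) + 150.0  # + power/space/admin

cloud_rate = 0.50              # $/instance-hour (assumed)
hours_used_per_month = 8 * 22  # 8 peak hours/day, 22 business days
cloud_monthly = cloud_rate * hours_used_per_month

print(f"on-prem ${on_prem_monthly:.2f}/mo vs cloud ${cloud_monthly:.2f}/mo")
```

The comparison flips for steady 24/7 workloads, which is why the session frames this as a per-application deployment decision rather than a blanket rule.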
We will review a few customers’ decisions to move to the Cloud, why they made the decisions they made, how their move to the cloud went, and key learnings. Lastly, drawing on the experience gained from several migrations, we will explain the Silver Lining Planning Services and how Avtex can help you with a turbulence-free evaluation and migration to the Cloud.
The document discusses the emergence of cloud computing and HP's role in pioneering cloud computing technologies and services. It provides an overview of cloud computing concepts, HP's flexible computing services, and the open cloud computing research testbed being developed by HP, Intel, and Yahoo to advance cloud computing research. The testbed will provide a large-scale, global platform for researchers to experiment with data center management and cloud services technologies.
This document discusses cloud computing trends through 2012. It notes that cloud computing has become a major IT trend and that 69% of online users already utilize public cloud services. However, not all enterprises fully subscribe to the cloud due to constraints versus benefits. The document explores what cloud computing is, how it has evolved from previous technologies, and the value it provides to businesses through flexibility, economies of scale, and pay-per-use models.
Virtualization in the NGDC - Marc Janssen (HPDutchWorld)
HP DUTCHWORLD 2008 introduces HP Insight Dynamics - VSE, which allows organizations to treat physical and virtual servers in the same way by using "Logical Servers". Logical Servers are server profiles that contain resource requirements and can be instantiated on physical blades or as virtual machines. HP Insight Dynamics - VSE also provides capacity planning and workload optimization capabilities to reduce costs and energy usage.
Deerns Data Center Chameleon 20110913 V1.1 (euroamerican)
The Chameleon Data Center was designed to dynamically adapt to meet changing business needs in terms of IT space, cooling, power and reliability tiers, while maintaining energy efficiency. It utilizes a unique combination of centralized and decentralized systems to deliver flexible IT power and cooling across a range of power densities, reliability tiers and capacities from the same infrastructure. This reduces upfront investment costs and allows customers to decide how to configure the data center space until equipment installation. The design achieves flexibility without additional costs through a modular infrastructure that can be easily adapted.
The document discusses top storage trends that will reshape datacenters in 2012 according to IDC predictions. It finds that data is exploding due to more connected devices and digital content creation. Survey results show organizations prioritizing IT security and cost reduction. IDC predicts that in 2012, storage virtualization will go mainstream, SSDs will be integrated into ROI strategies, unified storage will be standard, and cloud storage services will provide more sophisticated features to help organizations manage big data.
Five best practices for ensuring uptime with Data Center Infrastructure Manag... (CA Nimsoft)
This document introduces Nimsoft DCIM, a data center infrastructure management solution that provides a unified view of power, cooling, and infrastructure performance. It extends Nimsoft IT management capabilities to the data center. Key features include monitoring and alerts, energy optimization insights, chargeback/showback reporting, and easy deployment. Best practices for uptime include understanding energy costs, maintaining optimal temperature and humidity, reducing waste, leveraging alerts and dashboards, and power usage reports. Success stories demonstrate savings in cooling costs and energy management improvements.
Leveraging Virtualization from an IT Project to a Business Strategy (David Resnic)
This document discusses leveraging virtualization as a business strategy. It argues that virtualization has the potential to provide an on-demand, flexible IT infrastructure and help fulfill the promise of cloud computing. However, management challenges like visibility, skills, and complexity can stall its evolution. The document recommends pairing virtualization with management and automation tools, applying best practices from physical and virtual domains, integrating management across physical and virtual infrastructures, and connecting virtualization, service management, and automation to overcome these challenges and maximize the business value of virtualization investments.
The document discusses the importance of optimizing physical infrastructure in data centers to improve efficiency and reduce costs. It notes that data centers currently operate at only 30% efficiency, with 70% of electricity consumed by non-IT infrastructure components. The document promotes the services of Carousel to provide consultative physical infrastructure solutions and optimize areas like power usage, cooling, and network monitoring to improve overall data center efficiency.
Intergen Twilight Seminar: Constructive Disruption with Cloud Technologies (Intergen)
What is cloud computing and what does it mean for your business today?
Microsoft New Zealand will share insights into cloud computing including:
• Beyond the hype - what really is cloud computing?
• The business case for cloud
• Showcases of what cloud computing is doing for New Zealand companies
• Economics of cloud computing and cost considerations
• Implementation tips and recommendations to get started
• Demonstration of Microsoft’s leading cloud productivity suite – Office365
Learn about Microsoft Office365 - a set of cloud-enabled tools that let you access your email, documents, contacts, and calendars from virtually anywhere, on almost any device. Office 365 brings together our best communication and collaboration tools including Microsoft Office, Microsoft SharePoint, Microsoft Exchange and Microsoft Lync in an always-up-to-date cloud service, for a low flexible monthly subscription. And we’ll show you how this works, how to assess whether or not cloud computing makes sense for your organisation, and what it takes to get there.
Similar to Datacenter Transformation - Energy And Availability - Dion van der Arend
Datacenter transformation - Dion van der Arend (HPDutchWorld)
(1) Datacenters are facing increasing demands that many current facilities cannot meet, requiring transformation through consolidation, virtualization, and improved energy efficiency and availability.
(2) Datacenter designs are evolving from small, isolated IT islands to larger, standardized facilities with improved reliability, energy conservation, and reduced costs. Next-generation designs feature modular pods that can be deployed rapidly and offer high power densities up to 20kW/m2.
(3) As datacenter economics have changed, managing costs such as power and cooling have become priorities, driving the need for more energy-efficient computing and facility solutions.
Polyserve DB Consolidation Platform - Clemens Esser (HPDutchWorld)
HP's PolyServe platform allows for consolidating multiple SQL Server instances onto a single physical server or across multiple servers for higher utilization and fault tolerance compared to virtualization. Key benefits include: (1) Increasing SQL Server utilization from 5% to over 75% (2) Guaranteeing high availability for all instances (3) Reducing ongoing administration costs through features like one-click updates. PolyServe offers more efficient consolidation and management of SQL Server workloads than VMware by utilizing shared storage and enabling rapid instance failover between physical servers.
The document discusses Business Technology Optimization (BTO) software from HP that aims to align IT with business goals while reducing costs. BTO integrates solutions across IT strategy, applications, and operations to automate and standardize processes. This helps deliver measurable business outcomes, improve predictability and accountability of IT, and demonstrate IT's value. HP claims market leadership across the IT value chain with best-in-class products in categories like project management, application security, and asset management.
This document is an agenda for the HP Dutchworld 2008 event. The agenda outlines several presentations that will be given on networking topics such as next generation datacenter networking trends, wireless 802.11n solutions, and demonstrations of datacenter and wireless networking technologies and solutions. The event will also include sessions on unifying wired and wireless networks and HP's roadmap and technology management software demonstration.
Data Center Automation - Erwin Van Kruining (HPDutchWorld)
1) Data center infrastructure is growing exponentially but management costs are spiraling out of control due to the complexity and shortage of qualified talent.
2) HP Business Service Automation provides a comprehensive and integrated suite for automating the entire data center across networks, servers, storage, applications and business services.
3) It enables organizations to optimize operations, improve efficiency, ensure compliance and reduce costs through automated discovery, provisioning, patching, configuration and more.
The document summarizes security risks related to web applications and discusses how applications have become the main target of attacks. It notes that over 85% of scanned sites show vulnerabilities that can expose sensitive data and that costs of data breaches to enterprises can range from $90 to $305 per compromised record. The document advocates that application security needs to be addressed at the development stage rather than trying to bolt on security after applications are built.
1) The document discusses HP's Trilogy approach which provides the most comprehensive warranty, best management optimization for an IT environment, most flexible IT platform, and most energy efficient complete IT platform.
2) It then discusses how HP solutions like BladeSystems, dynamic smart cooling, power management tools, and virtualization can help reduce IT power usage and costs from the server chip to the data center level.
3) The document also explains HP Virtual Connect which reduces cable clutter by separating server connectivity from LAN and SAN administration through the use of modules.
Trends In Telepresence - Andrew Campbell (HPDutchWorld)
This document discusses trends in telepresence and videoconferencing solutions. It outlines the challenges that distributed organizations face with distance, efficiency, and accessing expertise. Telepresence solutions can help by reducing travel costs, improving productivity, and increasing collaboration. The key is providing a solution with enough quality to be useful and encourage adoption. This includes life-like video and audio quality, natural interactions, and flexible collaboration tools. HP offers several telepresence solutions like Halo studios and meeting rooms that connect to private video exchange networks for high-quality experiences across locations. The document emphasizes starting with high quality and adding flexibility tailored to user needs and business goals.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
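As a language-neutral illustration of enriching plain text with XML markup — independent of any particular AI tool, and with the term-to-tag mapping as a stand-in for model output — a naive Python sketch might wrap known terms and verify well-formedness:

```python
# Sketch: wrap known terms in XML tags, then confirm the result is
# well-formed using the standard library parser. The terms dict stands in
# for markup suggestions that an AI model might produce.
import xml.etree.ElementTree as ET

def enrich(text: str, terms: dict) -> str:
    for term, tag in terms.items():
        text = text.replace(term, f"<{tag}>{term}</{tag}>")
    return f"<para>{text}</para>"

markup = enrich("XSLT transforms XML documents.",
                {"XSLT": "tech", "XML": "tech"})
ET.fromstring(markup)  # raises ParseError if the markup is not well-formed
print(markup)
```

A real workflow would validate the generated markup against a schema (XSD or Schematron) rather than only checking well-formedness, which is exactly where the AI-assisted schema development discussed next comes in.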
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training efforts. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (the origin of her nickname, deneb_alpha).
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Taking AI to the Next Level in Manufacturing.pdf (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Digital Marketing Trends in 2024 | Guide for Staying Ahead (Wask)
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features that provide convenience and capability sacrifice security. This best practices guide outlines steps users can take to better protect personal devices and information.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
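As a minimal stand-in for the anomaly detection models the tutorial deploys — the tutorial's actual model is not reproduced here — a z-score detector over a rolling baseline can be sketched as follows:

```python
# Sketch: flag readings more than z standard deviations away from the
# mean of a rolling window of recent readings. A toy stand-in for the
# edge-deployed models described in the tutorial.
from statistics import mean, stdev

def anomalies(readings, window=10, z=3.0):
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged

data = [20.1, 20.3, 19.9, 20.2, 20.0, 20.1, 19.8, 20.2, 20.0, 20.1, 35.0]
print(anomalies(data))  # index 10 is flagged
```

In the pipeline above, a detector like this would consume readings from Kafka on the edge device and expose counts of flagged events as Prometheus metrics.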
Generating privacy-protected synthetic data using Secludy and Milvus (Zilliz)
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
2. Agenda
• Datacenter Trends
• Design Evolution
• Energy Efficiency
• High Availability
• Summary
2 11 December 2008
3. Datacenter Transformation Solutions
• More than one-third of CEOs and CIOs believe that in two to five years their data centers will be incapable of dealing with the rapidly growing demand for services and applications.
• Nearly half of CIOs plan to reduce the number of data centers over the next five years through transformation: improving technology, increasing productivity, and lowering overhead and management costs.
[Diagram: evolution from the current state of high-cost IT islands, through low-cost pooled IT assets, to the future-state next generation data center.]
Enabling capabilities:
• IT Systems & Services: scalability based on standards; IT services and support
• Power & Cooling: energy-efficient computing
• Management: unified infrastructure management; integrated IT and business services management
• Security: proactive, built-in infrastructure and data protection; compliance validation
• Virtualization: pooling and sharing of IT resources
• Automation: dynamic control of IT service delivery
4. HP’s Datacenter Transformation
Domains and objectives:
• Management & Operations: manage, automate and protect DC operations
• Application & Information: rationalize, modernize and migrate your applications
• IT Infrastructure: consolidate and virtualize IT
• Facilities: consolidate, design new and/or modernize aging facilities
6. Datacenters need to be transformed
Pressures impacting Datacenter performance
Business:
• Growth
• Compliance
• Security
• Globalization
• Environmental
• Innovation
IT Infrastructure:
• Increasing number of assets
• Complexity
• Need to serve the extended enterprise
• Growing information
• Network complexity
• More integration
• Security and continuity
Facilities:
• Aging facilities
• Insufficient floor space
• Excessive heat
• Insufficient power and cooling
• Energy efficiency crisis
• Suboptimal location
• Regulations
Operations:
• 24x7 expectation
• No room for error
• Management complexity
• IT service commitments
• Globalization
• Seasonal spikes
• External sourcing
• Dynamic business and IT
7. Datacenters need to be transformed (i)
Facilities are ageing rapidly:
• 85% of facilities built before 2001 are obsolete.
• By 2010, more than half of all Datacenters will have to relocate to new facilities or outsource some applications.
• Over 50% of large enterprises will face a Datacenter floor space shortage by 2013.
• 50% of current Datacenters will have insufficient power and cooling capacity to meet the demands of high-density equipment.
• 1 out of every 4 Datacenters will experience a business disruption due to power failures.
• By 2015, the talent pool of qualified senior-level technical and management Datacenter professionals will shrink by 45%.
Sources: Gartner, 2007; Datacenter Institute, 2006
8. Datacenters need to be transformed (ii)
Datacenter economics have changed:
• The cost of physical space was once the primary consideration in data center design.
• The cost of power and cooling has since risen to prominence.
• Data center managers now must prioritize investment in efficient power and cooling systems to lower the total cost of operating (TCO) their facilities.
Belady, C., "In the Data Center, Power and Cooling Costs More than IT Equipment it Supports", Electronics Cooling Magazine (Feb 2007)
10. Data Center Trends
• Reliability can be achieved in several ways:
− Better critical support infrastructure topology
− Instantaneous and reliable fail-over
− Mirroring
• Drive towards larger data centres:
− Cost containment
− Hardware growth
− Server rationalization
− Reliability improvement
− Energy conservation
− Security
11. Data Center Design Trends
The Next Generation Data Center Model
Today the power density is designed to a minimum of 1 kW/m2, scalable to over 2 kW/m2 (vs 500 to 800 W/m2 two years ago).
12. POD complements Brick-and-Mortar
Container (POD) characteristics:
• PUE of <1.25
• 20 kW/m2 power density
• 600 kW+ capacity
• Six weeks to ship
Attributes compared between brick-and-mortar and container designs: power density, geographic flexibility, maximum security, IT flexibility, maximum redundancy, energy efficiency and speed of deployment.
13. Container: interior view
[Figure callouts:]
• Serviceable high-efficiency, variable-speed blowers from HP MCS
• Serviceable high-efficiency heat exchangers (HEX) from HP MCS
• Separate utility module segregates IT/UPS security access and environmentals
• Standard 50U racks
• Facilities management on the exterior of the cold aisle
• Hot aisle with rear access through doors in the container
• 92cm cold aisle can run at >32°C
14. Next Generation Data Center
Lights out
• Flexible, nimble, adaptable, modular, highly configurable: essential in today's dynamic technology environment
• Ability to deploy quickly and reconfigure
• Highly automated and virtualized
• Power and cooling respond dynamically to processing load
• Continuous and comprehensive monitoring
• New integrated approach: hardware, software, applications, network and facility
• "Data Center is the Computer"
16. Why does Energy Efficiency have to Improve?
• Rising energy costs
• Environmental pressures
• Legislation
• Competitive advantage
17. Benchmark for Datacenter Efficiency
Proposed by Green Grid
Benchmark        DCiE        PUE
Platinum         > 0.8       < 1.25
Gold             0.7 – 0.8   1.25 – 1.43
Silver           0.6 – 0.7   1.43 – 1.67
Bronze           0.5 – 0.6   1.67 – 2
Recognised       0.4 – 0.5   2 – 2.5
Not recognised   < 0.4       > 2.5
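The two columns are reciprocals of one another (DCiE = IT energy / total facility energy = 1/PUE), so a measured PUE determines the benchmark tier directly. A minimal sketch of that lookup; the tier names and boundaries come from the table above, while the handling of exact boundary values is my own assumption:

```python
# Sketch of the Green Grid benchmark lookup from the table above.
# Boundary handling (strict ">") is an assumption, not from the slide.

def dcie(pue: float) -> float:
    """DCiE is the reciprocal of PUE (IT energy / total facility energy)."""
    return 1.0 / pue

def benchmark(pue: float) -> str:
    """Map a measured PUE to the benchmark tier in the slide's table."""
    d = dcie(pue)
    if d > 0.8:
        return "Platinum"
    if d > 0.7:
        return "Gold"
    if d > 0.6:
        return "Silver"
    if d > 0.5:
        return "Bronze"
    if d > 0.4:
        return "Recognised"
    return "Not recognised"
```

For example, a site at PUE 1.2 rates Platinum, while the PUE 3+ sites seen later in the assessment data fall below the recognised threshold.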
19. Data Centre Energy
[Bar chart: data centre average power consumption, 0 to 600 kW, in three bars. Mech: cooling systems, CRAC fans, humidification/dehumidification, fuel and misc mechanical. IT: the IT load. Elec: UPS losses, generator pre-heater, lights and misc electrical.]
20. Energy Efficiency Analysis
[Diagram: energy flows from the utility through the mechanical plant (cooling, CRAC fans, humidification, misc), the electrical plant (UPS, generator, fuel, misc) and house loads (lighting, lift, HVAC, WC) to the IT load.]
21. PUE From Energy Assessments
PUE = Data Centre Energy / IT Energy
[Bar chart: PUE measured at 15 assessed data centres, ranging from 1.94 to 3.57: 3.57, 3.53, 3.31, 2.68, 2.53, 2.27, 2.30, 2.24, 2.17, 2.06, 2.05, 1.96, 1.95, 1.94, 1.98.]
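A small sketch summarising the 15 assessment values charted above. The savings estimate is my own illustration, assuming facility energy scales linearly with PUE at a fixed IT load and using the POD-class PUE of 1.25 from slide 12 as the target:

```python
from statistics import mean

# PUE values from the 15 energy assessments charted above.
pue_values = [3.57, 3.53, 3.31, 2.68, 2.53, 2.27, 2.30, 2.24,
              2.17, 2.06, 2.05, 1.96, 1.95, 1.94, 1.98]

avg = mean(pue_values)                          # average PUE across the sites
best, worst = min(pue_values), max(pue_values)  # 1.94 and 3.57

# For a fixed IT load, total facility energy is proportional to PUE, so a
# site at the average PUE reaching 1.25 would cut total energy by this fraction:
savings_vs_pod = 1 - 1.25 / avg
```

The average across these assessments is roughly 2.44, so an average site reaching PUE 1.25 would use nearly half the facility energy for the same IT work, which is why the deck treats PUE reduction as a headline opportunity.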
22. Early planning yields best results
EYP’s Consulting & Design approach
[Chart: ability to influence energy use (high to low) against the decision-making timeline (proactive, typical, reactive). Influence is greatest during IT strategy and data centre strategy, declines through cooling, power and commissioning, equipment and testing, and implementation, and is lowest during ongoing operations.]
23. Datacenter Energy Strategy
Air management (objective: increase air & chilled-water set points):
- Minimise negative flow and bypass
- Minimise recirculation
IT server:
- Greater efficiency
- Greater utilisation
- Wider temperature (& RH) range
Mechanical:
- Free cooling (air or chilled water)
- Plant / system optimisation: chillers, CRACs, humidifiers, etc.
Renewable power (mains / on-site):
- Mains (wind, hydro)
- Site (bio-fuel cogeneration): not sustainable
Electric performance:
- UPS
- Generator heaters
- Lights
24. Datacenter Design Trends
The LEED Approach
• Meeting with clients whose mission requires a high level of operational continuity and performance from their facilities.
• Providing LEED® consulting, energy modeling, design and Cx (commissioning) services on LEED Platinum, Gold, Silver and Certified facilities.
Together with Lawrence Berkeley National Laboratory, focusing on 4 identified areas to reduce energy consumption:
• Optimize existing data centers through proper airflow distribution and supply air temperatures.
• In new facilities, use different cooling/ventilation alternatives to reduce energy consumption of the HVAC systems by up to 30%.
• Use DC power distribution to increase reliability, reduce energy consumption of the UPS/electrical distribution by approximately 25%, and improve both energy performance and space efficiency.
• In new or existing facilities, use on-site power generation to decrease capital expenditures, increase reliability, and significantly reduce utility costs.
LEED®: Leadership in Energy and Environmental Design
25. Fannie Mae: 1st LEED certified datacenter
Scope:
• MEP roles included a detailed analysis and selection of greenfield sites, programming, design development, construction documentation, construction administration & commissioning
• LEED certification
Total cost:
• $130M construction cost
Schedule:
• 6-month design development
• 22-month construction program
Follow-on activity:
• Additional design, testing and operational consulting projects
Data Center / Office Building / Operations Center (250,000 sf)
26. Optimizing Energy & Space efficiency
From chip to chiller (HP unique)
Thermal Assessments:
• Thermal quick assessment
• Thermal intermediate assessment
• Thermal comprehensive assessment w/ thermal zone mapping
Data Center Assessments:
• HP EYP Energy Efficiency analysis
• Analysis of infrastructure with detailed report
• Explanation of risks, deficiencies and recommendations
Data Center Site Planning:
• Comprehensive site-preparation audit to integrate new equipment
• In-depth reporting of deficiencies, including floor plan drawings locating equipment, receptacles, airflow panels etc.
HP EYP Energy Efficiency analysis can bring down costs by up to 30% in Data Centers.
28. Why does Availability have to Improve?
• 7x24 operational demands
• No margin for error or downtime
• Consolidation requires higher DC reliability
• 99.999% availability means:
− One hour and 46 minutes of downtime every 20 years
− 5.3 minutes of downtime each year
− Six one-second interruptions per week
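The "five nines" arithmetic above can be reproduced directly; a minimal sketch, using a 365.25-day year (small differences from the slide's figures come down to the year-length convention chosen):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60   # 525,960 minutes in an average year

def downtime_minutes_per_year(availability: float) -> float:
    """Expected downtime per year at a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

per_year = downtime_minutes_per_year(0.99999)   # about 5.3 minutes per year
per_20_years = per_year * 20                    # about 105 minutes over 20 years
```

The same function makes the tier gap on the next slide concrete: four nines (0.9999) allows roughly ten times the downtime of five nines.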
29. Tier Classification
                               Tier I      Tier II     Tier III              Tier IV
Paths                          Only 1 (no redundancy)  Only 1      1 Active + 1 Passive  2 Active
Redundancy                     N           N+1         N+1                   S+S or 2(N+1)
Compartmentalisation           No          No          No                    Yes
Concurrently maintainable      No          No          Yes                   Yes
Fault tolerant to worst event  None        None        None                  Yes
Downtime of IT                 > 1 Day     < 1 Day     > 1 hour              < 1 hour
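For comparing candidate designs programmatically, the table above can be encoded as a simple lookup. A sketch under the table's own values; the field names are my own, not part of the tier standard:

```python
# The tier classification table encoded as a lookup (field names are mine).
TIERS = {
    "I":   {"paths": "only 1", "redundancy": "N",
            "concurrently_maintainable": False, "fault_tolerant": False,
            "it_downtime": "> 1 day"},
    "II":  {"paths": "only 1", "redundancy": "N+1",
            "concurrently_maintainable": False, "fault_tolerant": False,
            "it_downtime": "< 1 day"},
    "III": {"paths": "1 active + 1 passive", "redundancy": "N+1",
            "concurrently_maintainable": True, "fault_tolerant": False,
            "it_downtime": "> 1 hour"},
    "IV":  {"paths": "2 active", "redundancy": "S+S or 2(N+1)",
            "concurrently_maintainable": True, "fault_tolerant": True,
            "it_downtime": "< 1 hour"},
}

def concurrently_maintainable_tiers():
    """Tiers whose infrastructure can be maintained without taking IT down."""
    return [tier for tier, props in TIERS.items()
            if props["concurrently_maintainable"]]
```

Only Tiers III and IV are concurrently maintainable, which is why consolidation onto fewer, larger datacenters pushes designs up the tier scale.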
30. Performance Evaluation - Reliability
• MTBF
• Availability
• Probability of Failure
• Reliability
• Availability is a function of MTBF and MTTR
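The relationship in the last bullet is the standard steady-state formula A = MTBF / (MTBF + MTTR). A minimal sketch; the example figures (50,000 h MTBF, 8 h MTTR) are illustrative assumptions, not published component data:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures only: 50,000 h mean time between failures,
# 8 h mean time to repair.
a = availability(50_000, 8)   # just under four nines
```

The formula makes the lever explicit: halving MTTR improves availability as effectively as doubling MTBF, which is why fast, reliable fail-over features so prominently in the design trends earlier in the deck.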
31. Reliability Modelling
• Reliability modelling is used to compare system designs and to assist in evaluating risk versus the cost of mitigating it.
• Published failure rates and repair times for the various components in the electrical distribution system are used for the modelling.
• Monte Carlo simulation is used to evaluate system behaviour and to quantify risk.
31 11 December 2008
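A toy sketch of the Monte Carlo approach described above: estimate the probability that a system with two redundant feeds survives a 5-year mission. The failure rate is an illustrative assumption (real studies would use published component failure and repair data, and model repairs), not a figure from the deck:

```python
import math
import random

random.seed(42)                 # deterministic for this sketch
FAILURE_RATE = 0.2              # assumed failures per feed per year (illustrative)
MISSION_YEARS = 5.0
TRIALS = 100_000

survivals = 0
for _ in range(TRIALS):
    # Draw an exponential time-to-failure for each of two redundant feeds
    # (no repair modelled); the system survives the mission if at least
    # one feed outlasts it.
    t1 = random.expovariate(FAILURE_RATE)
    t2 = random.expovariate(FAILURE_RATE)
    if max(t1, t2) >= MISSION_YEARS:
        survivals += 1

estimate = survivals / TRIALS
# Closed-form check for this simple case: P = 1 - (1 - e^(-rate*t))^2
analytic = 1 - (1 - math.exp(-FAILURE_RATE * MISSION_YEARS)) ** 2
```

The value of the simulation approach is that it still works when the closed form does not exist, e.g. with repair times, maintenance windows and dependent failures in a full electrical distribution topology.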
32. DC Performance Benchmarking
[Benchmark dashboard for an example site:]
• Tier classification: 1 (scale 0 to 4)
• Theoretical probability of failure (5 yrs), electrical: 24.00%
• Area of raised floor used: 2060 /m2
• Infrastructure (installation: design, construction, commissioning): 64%, possible improved score = 99
• Management (operational control: maintenance & operations): 35%
33. Cost, Availability and Design Topology
• Certain things can be overdone
35. Datacenter transformation: Business outcomes
Facilities: consolidate, design new and/or modernize aging facilities
Reduce cost:
• Up to 30% savings from IT consolidation and apps rationalization
• Up to 45% energy savings from modern facilities
• Up to 25% real estate and location savings
• Lower IT maintenance costs and staff requirements
Mitigate risk:
• Centralize and standardize IT and datacenter processes
• Establish compliance with industry best practices
• Protect company revenue, brand & reputation from outage or disaster
Grow business:
• Increase datacenter capacity
• Provide global reach to datacenters
• Serve the extended enterprise
• Support new business initiatives faster
36. Datacenter Transformation Solution mapping
[Chart: solutions mapped from investment for quick wins to investment for bigger savings: Thermal Quick (best practices consultation), Thermal Intermediate, Thermal Comprehensive with Thermal Zone Mapping (design, analysis and consultation), Data Center Design / Consulting, and Dynamic Monitoring (real-time rebalancing).]
37. Introducing EYP MCF (acquired February 2008)
• EYP facts:
− Projects throughout the world
− Majority of customers are multi-national corporations
− Designed the majority of the Tier 4 "greenfield" data/operations centres in the last 3 years
− Best-in-class high performance computing leadership and experience
− Designed 3+ million m2 and commissioned 2+ million m2 of raised-floor environments
• HP and EYP:
− EYP MCF consults to HP IT on their Data Centre Consolidation Initiative, starting in 2005
− EYP MCF supports HP's Dynamic Smart Cooling development
− EYP MCF's services enhance HP's Data Centre Transformation Solution
42. Critical Facilities Services
Business outcomes:
• Reduce costs to the business with a cost-efficient facility, providing high ROI on infrastructure investment
• Assured infrastructure capacity to support future business growth
• Integrated facility and IT infrastructure matched to business goals
• Green: support corporate environmental goals
HP Difference:
• Consult: determine critical facility strategy (where, how many, topology, cost magnitude)
• Design: specification to achieve enterprise goals (model, design, engineer, evaluate)
• Assure: operational continuity & performance (commissioning, testing, training, maintenance)
Services:
• Critical Facilities Consulting
• Critical Facilities Design
• Critical Facilities Assurance
43. HP Energy Efficiency Services
Business outcomes:
• Reduce energy-related operating costs to the business
• Reduce or eliminate capacity-based growth constraints to the business
• Meet corporate commitments and compliance standards
HP Difference:
• Assess: unrivalled expertise in identifying efficiency gains in mission-critical facilities
• Design & build: more than 50 million sq ft and 50 green-field sites
• Deliver: environmentally certified, peer-comparison and availability-assured facilities
Services:
• Facility & technology assessment services
• Energy Efficiency Design
• Assessment service for Blades environments
Data center energy efficiency leadership
44. Deployment, resource and migration services
Business outcomes:
• Keep fixed costs down by adopting a more variable resource model for IT expertise
• On-time and to-cost project delivery to facilitate growth and reduce risk
• Uninterrupted service levels achieved during technology relocation, providing business continuity
HP Difference:
• Leverage HP resources as an extension of your in-house staff
• Variable skills and project expertise rather than fixed staff skills & expertise
• Technology experts for projects and skills when and where you need them
Services:
• Deployment services
• Relocation & project services
• Technical assistance and resource
• Data migration services
45. Education services
Business outcomes:
• Reducing human error minimizes IT/business risk with decreased downtime & data loss
• Accelerate growth through improved staff productivity
HP Difference:
• Delivery excellence: our "sandbox" allows you to learn and experiment with real technology, in our dedicated non-production environment
• Content you need, where you need it: cutting-edge courses, unmatched content, delivered every day in 100+ countries around the globe
• Learning Innovation: innovative web-enabled learning technologies combined with innovative learning methodologies
Services:
• Technical Training
• Service Management
• Learning Solutions
46. HP Mission Critical Services portfolio
Business outcomes:
• Keep business services running 24x7: get back up fast when downtime occurs and proactively attack all possible sources of future downtime
• Improve operational efficiency and reduce costs by leveraging HP support automation technology and IP to restructure your in-house staff and process
HP Difference:
• Seamless blending of reactive and proactive support for the SLA you need
• A spectrum from reactive to proactive, keeping the right services running: Proactive Select, Proactive 24, Critical Service, and Mission Critical Partnership (bridging the business and IT gap, where technology drives business growth)
Services:
• Mission Critical Partnership
• Critical Service
• Proactive 24
• Proactive Select