Data centers today lack a formal system for classifying infrastructure management tools. As a result, confusion exists regarding which management systems are necessary and which are optional for secure and
efficient data center operation. This paper divides the realm of data center management tools into four distinct subsets and compares the primary and secondary functions of key subsystems within these subsets. With a classification system in place, data center professionals can begin to determine which physical infrastructure management tools they need – and don’t need – to operate their data centers.
Data center systems or subsystems that are pre-assembled in a factory are often described with terms like prefabricated, containerized, modular, skid-based, pod-based, mobile, portable, self-contained, all-in-one, and more. There are, however, important distinctions between the various types of factory-built building blocks on the market. This paper proposes standard terminology for categorizing the types of prefabricated modular data centers, defines and compares their key attributes, and provides a framework for choosing the best approach(es) based on business requirements.
Multi-Tiered Hybrid Data Center Design – Mehmet Cetin
This document discusses a multi-tiered hybrid data center design that allows for modular and flexible infrastructure. It proposes designing the data center with separate tiered sections (Tier II, III, IV) that can each be scaled independently as needed. This approach provides a more cost-effective and energy-efficient solution than a single-tiered design, allows the data center to meet varying operational needs simultaneously, and facilitates future-proofing and scalability as demands change over time.
A data center infrastructure management (DCIM) system collects and manages information about a data center's assets, resource use, and operational status. This information is analyzed and distributed to help optimize the data center's performance and meet business goals. Implementing DCIM solutions such as instrumentation, monitoring, and analytics can improve efficiency, reduce costs, and enable proactive management of the physical infrastructure and IT systems. Emerson Network Power provides a comprehensive portfolio of DCIM hardware and software products to help organizations gain visibility and control over their data center resources.
According to the Uptime Institute’s analysis of its “abnormal incident” reporting (AIR) database, 70% of data center outages are directly attributable to human error. This figure highlights the critical importance of having an effective operations and maintenance (O&M) program. This paper describes unique management principles and provides a comprehensive, high-level overview of the necessary program elements for operating a mission critical facility efficiently and reliably throughout its life cycle. Practical management tips and advice are also given.
The document discusses how data center infrastructure management (DCIM) software can help with operations, planning, and analytics for data centers. It provides examples of common issues that can occur without DCIM tools, such as accidentally overloading circuits or racks, and lists questions that DCIM tools can answer, such as identifying hot spots or excess capacity. DCIM software allows monitoring of equipment power usage, generating audit trails, and calculating power usage effectiveness (PUE). It enables more efficient provisioning, load balancing, and capacity planning to optimize data center resources.
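One metric mentioned above, power usage effectiveness (PUE), has a simple definition: total facility power divided by IT equipment power. As a minimal sketch (the sample readings below are made up for illustration, not taken from the document):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    A PUE of 1.0 would mean every watt entering the facility reaches IT gear."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 500 kW at the utility meter, 320 kW at the IT load.
print(round(pue(500.0, 320.0), 2))  # 1.56
```

The gap between the two readings (cooling, power distribution losses, lighting) is exactly what efficiency-focused DCIM monitoring tries to shrink.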
StruxureWare is Schneider Electric's DCIM software suite that integrates various data center management applications. It provides visibility and control of infrastructure assets from the building level down to the server. The software suite monitors and manages key metrics like power, cooling capacity, and IT asset usage. It helps optimize data center performance and efficiency through features like real-time monitoring, capacity planning, and energy analytics. Schneider Electric is a leading DCIM provider due to its comprehensive product portfolio, expertise, and ability to deliver an end-to-end solution for data center management.
ScottMadden has developed an approach for analyzing data center requirements and driving improvements in existing data center retrofits. Our approach takes into account the technological requirements, the physical attributes of a data center, and the need for a rigorous measurement and verification program to ensure improvements actually capture the energy efficiency gains and the resultant greenhouse gas reductions.
Our approach addresses the latest trends in data center management, such as virtualization and cloud computing, and provides a framework for developing the metrics needed to drive changes in data center performance.
Effect on Substation Engineering Costs of IEC 61850 & System Configuration Tools – Schneider Electric
Change management, software configuration training, and human error all affect the cost of substation automation engineering. Object-oriented engineering approaches as defined in the IEC 61850 standard represent significant cost savings compared to traditional methods using hardwired connections and Distributed Network Protocol (DNP3). New multivendor system configuration tools are described that further reduce substation automation engineering costs.
Determining your data center strategy is critical in this expanding world of big data, cloud and mobility. Should you build your own data center, consider a wholesale arrangement, colocate with another carrier or transfer your critical information to the cloud? Or, does some combination of these options best suit your needs? Where do you even begin when planning these large enterprise decisions?
Join Randy Ortiz, VP of Data Center Design and Engineering, from Internap as he breaks down the steps you need to take to achieve a successful outcome for your data center initiatives.
Key topics include:
*Important decision-making considerations
*Why flexibility matters
*Top trends to watch today
DCIM: An Integral Part of the Software Defined Data Centre – Concurrent Thinking
Do you know what DCIM is? Discover with Concurrent Thinking how to improve your data centre efficiency, how to overcome challenges in data centres and what the future of DCIM is.
Find out more here: http://www.concurrent-thinking.com/
NERC Critical Infrastructure Protection (CIP) and Security for Field Devices – Schneider Electric
The document discusses NERC CIP guidelines for securing critical infrastructure devices in the electric grid. It provides an overview of the six main CIP guidelines, covering personnel authorization, training, security of the electronic perimeter, physical security, operations security, and incident reporting. The document emphasizes that compliance requires both compliant technologies and security-focused procedures, and it outlines key security principles such as least privilege and role-based access control.
This presentation shows that Data Center Infrastructure Management (DCIM) software is to a data center manager what ERP software is to a VP of Manufacturing. It is the second presentation in a three-part series from GreenField Software on the subject of DCIM for high availability.
DCIM software charts relationship maps for assets by identifying the dependencies among them. Threshold-based alerts on critical parameters, combined with impact analysis of move/add/change activity, mitigate the risk of data center failures.
GreenField Software’s mission is to help data centers control capital expenditures, reduce operating expenses, and mitigate the risks of data center failures. Besides DCIM software, GFS offers data center advisory services in the areas of best practices, capacity planning, energy efficiency, and business continuity.
DCIM tools help data center managers improve planning, lower costs, and speed up information delivery by providing visibility into data center infrastructure and automating manual processes. They solve problems around maintaining availability and responding quickly to business needs while lowering costs during consolidation and cloud computing initiatives. DCIM tools provide concise summaries of capacity, assets, and performance to help answer questions around planning, operations, and analytics. Their use can reduce operational expenses by an estimated 10–30% annually while improving productivity.
NER & Emerson Infrastructure Optimization Capabilities Storyboard – Greg Stover
NER Data Corporation is the premier national value-added master distributor of Emerson, Liebert, and Avocent technology solutions.
No other distributor knows Emerson cooling, power, monitoring, and DCIM solutions better, or has better programs, better pricing, and better technical consulting capability than NER.
NER will show you how to optimize your existing Emerson, Liebert, and Avocent investments while ensuring you are properly positioned for the future!
As steel operations rely heavily on low-voltage motors, the introduction of new technologies that target motor performance has a direct impact on energy, commissioning, and maintenance costs. Networking allows easy monitoring of critical data for each motor or load connected to the intelligent motor control center (iMCC), enabling precise process control. The iMCC concept, however, is not a new technology: networked protection relays and variable speed drives are mature technologies with consolidated acceptance. This paper explores new trends for iMCCs, including new Ethernet technologies, web, wireless, and biometric devices, and new technologies for metering and motor branch circuit protection. Copyright AIST. Reprinted with permission.
[Webinar Presentation] Best Practices for IT/OT Convergence – Schneider Electric
All over the world, utilities are facing up to the task of integrating information technology (IT) operations with those of operational technology (OT). What's driving it? How can utilities prepare? What should they expect?
The webinar recording is also available on-demand. To view it, please click here: http://goo.gl/b3kxm5
Here are the key points of a collapsed multitier design:
- All server farms are directly connected without physical separation between Layer 2 switches. This reduces hardware costs.
- Services like load balancing, firewalling, etc. are concentrated at the aggregation layer rather than being distributed between tiers.
- Less hardware is required compared to an expanded design as there is no need for separate switches and devices at each tier.
- However, it provides less control and scalability compared to an expanded design as tiers are not physically isolated. For example, if one tier needs to be scaled out, it affects the other tiers.
- Security may also be weaker as there is no firewall segmentation between tiers.
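The hardware-saving trade-off described above can be sketched with a few hypothetical numbers (the tier names and switch counts below are illustrative examples, not figures from the source):

```python
# Illustrative device-count comparison: expanded vs. collapsed multitier design.
# Tier names and per-tier switch counts are hypothetical.

def switch_count(tiers, switches_per_layer, collapsed=False):
    """Total switches needed for the server-farm layers.

    Expanded design: each tier gets its own physically separate switch layer.
    Collapsed design: all tiers share one aggregation layer, where services
    (load balancing, firewalling) are concentrated."""
    if collapsed:
        return switches_per_layer
    return len(tiers) * switches_per_layer

tiers = ["web", "app", "database"]
expanded = switch_count(tiers, switches_per_layer=2)
collapsed = switch_count(tiers, switches_per_layer=2, collapsed=True)
print(expanded, collapsed)  # 6 vs. 2 switches for three tiers
```

The same arithmetic also shows the scalability cost: in the collapsed case, growing any one tier means resizing the single shared layer that all tiers depend on.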
What Does It Cost to Build a Data Center? (SlideShare) – SP Home Run Inc.
http://DataCenterLeadGen.com
What Does It Cost to Build a Data Center? (SlideShare).
The “build a data center” decision is not to be taken lightly. Consider these different cost factors to see if a build or lease is better.
Copyright (C) SP Home Run Inc. All worldwide rights reserved.
The breadth of information provided by DCIM enables data center managers to deliver intelligence across departments and directly to the desks of executives. Learn how a DCIM solution can be leveraged for optimal understanding of the data center's current state and projected needs in terms of cost. This webinar provides a high-level view of DCIM's value and is intended for anyone involved in making decisions in the data center.
Best Practices for Creating Your Smart Grid Network Model – Schneider Electric
A real-time model of their distribution network enables utilities to implement Smart Grid strategies such as managing demand and integrating renewable energy sources. They build this model in an Advanced Distribution Management System (ADMS) based on accurate and up-to-date information about the distribution network infrastructure. Yet a recent survey shows that less than 5% of utilities are confident about the quality of their network data. This paper discusses best practices for ensuring complete, correct, and current data for a Smart Grid network model.
Power Protection for Digital Medical Imaging and Diagnostic Equipment – Schneider Electric
Medical imaging and diagnostic equipment (MIDE) is increasingly being networked to Picture Archiving and Communication Systems (PACS), Radiology Information Systems (RIS), and Hospital Information Systems (HIS), and connected to the hospital intranet as well as the Internet. Failing to implement the necessary physical infrastructure can result in unexpected downtime and in safety and compliance issues, which translate into lost revenue and exposure to expensive litigation, negatively affecting the bottom line. This paper explains how to plan for physical infrastructure when deploying medical imaging and diagnostic equipment, with emphasis on power and cooling.
Schneider Electric provides a comprehensive approach to cyber security for critical infrastructure. They recognize cyber attacks have expanded from disrupting IT systems to endangering physical assets and human life. The document outlines Schneider's investments in security technologies and services to protect customers across industries. It describes their defense-in-depth strategy including secure product design, testing, compliance with standards, and security services to monitor, detect, and respond to threats. The goal is to help customers comply with regulations and mitigate risks through an integrated portfolio.
The document provides a five-step process for planning a new data center: 1) Determine design parameters like capacity, budget, growth plan, etc. 2) Develop a system concept by selecting a reference design. 3) Determine user requirements like preferences and constraints. 4) Generate a specification. 5) Generate a construction design. It emphasizes involving the right stakeholders, communicating at the right level of abstraction, and avoiding common mistakes like poor budgeting or an IT-focused rather than business-focused design. Following the standardized process can help complete projects on time and on budget by eliminating potential pitfalls.
This document provides an overview of data center design and infrastructure. It discusses the history and evolution of data centers from large computer rooms in early computing to modern facilities. Key aspects covered include facilities layout, mechanical and electrical systems for power, cooling, fire protection and more. Modern data center design principles emphasize modularity, scalability, efficiency and resiliency. The document also examines data center infrastructure management tools and the use of modular or containerized data center solutions.
Asset Management - What Are Some of Your Top Priorities? – Schneider Electric
The document discusses Schneider Electric's Foxboro Evo asset management software. The software aims to (1) improve operational uptime by enabling remote device monitoring and diagnostics to reduce unnecessary field trips, (2) streamline engineering workflows through template-based device configuration and commissioning wizards, and (3) reduce costs and risks through features like a maintenance response center for alert triaging and work order management integration.
CERTIFIED Data Center Professional - CDCP – APEXMarCom
The document provides information about a 2-day Certified Data Center Professional (CDCP) course on managing mission critical data center facilities. The course aims to teach participants best practices for designing, maintaining, and operating data centers with high availability and efficiency. It will cover key components of data centers like power, cooling, security, and cabling and how to set them up and improve them. The course also addresses operations and maintenance aspects. The target audience includes IT professionals, facilities professionals, and operations professionals. The benefits of the course include learning how to identify the best site for a data center, understand components for high availability, and apply industry standards to improve efficiency, security, and maintenance.
Every business has a data center, regardless of its size; even the smallest business has one. It is an ever-growing part of business in the modern world and a key business parameter, since the data center influences how the enterprise functions. Imagine what happens to business operations when the data center is interrupted: any interruption can lead to a serious breakdown. That is why an efficient backup strategy is essential.
The document discusses data center infrastructure and operations. It explains that data centers must transform from traditional environments to ones that are efficient, automated, and service-oriented to reduce costs and complexity while enabling growth. A typical data center securely houses an organization's IT systems and provides power, cooling, and redundancy to ensure maximum availability and resilience. It also discusses considerations for data center design like power usage efficiency and virtualization strategy.
WP107: How Data Center Management Software Improves Planning and Cuts OPEX – SE_NAM_Training
Modern data center infrastructure management software tools can help simplify operations, cut costs, and speed up information delivery in three key ways:
1. Planning tools simulate the impact of infrastructure changes to help with capacity planning and ensure redundancy.
2. Operations tools provide rapid impact analysis when issues arise and can proactively prevent downtime.
3. Analytics tools leverage historical data to identify strengths and weaknesses to improve future performance.
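As an illustrative sketch of the planning case above (the rack names, power limits, and headroom policy below are hypothetical examples, not an actual DCIM API), a capacity check before placing new equipment might look like:

```python
# Hypothetical sketch of the capacity check a DCIM planning tool performs
# before new equipment is placed in a rack. All values are examples.

racks = {
    "R01": {"limit_kw": 8.0, "load_kw": 6.5},
    "R02": {"limit_kw": 8.0, "load_kw": 3.2},
}

def can_place(rack_id: str, new_load_kw: float, headroom: float = 0.8) -> bool:
    """Allow placement only if the rack stays under `headroom` (here 80%)
    of its power limit, preserving margin for redundancy and failover."""
    rack = racks[rack_id]
    return rack["load_kw"] + new_load_kw <= headroom * rack["limit_kw"]

print(can_place("R01", 1.0))  # False: 7.5 kW would exceed the 6.4 kW headroom
print(can_place("R02", 1.0))  # True: 4.2 kW is within headroom
```

Simulating the change before it happens, rather than discovering an overloaded circuit afterward, is the core of the "planning tools simulate the impact of infrastructure changes" point above.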
Determining your data center strategy is critical in this expanding world of big data, cloud and mobility. Should you build your own data center, consider a wholesale arrangement, colocate with another carrier or transfer your critical information to the cloud? Or, does some combination of these options best suit your needs? Where do you even begin when planning these large enterprise decisions?
Join Randy Ortiz, VP of Data Center Design and Engineering, from Internap as he breaks down the steps you need to take to achieve a successful outcome for your data center initiatives.
Key topics include:
*Important decision-making considerations
*Why flexibility matters
*Top trends to watch today
DCIM: An Integral Part of the Software Defined Data CentreConcurrentThinking
Do you know what DCIM is? Discover with Concurrent Thinking how to improve your data centre efficiency, how to overcome challenges in data centres and what the future of DCIM is.
Find out more here: http://www.concurrent-thinking.com/
NERC Critical Infrastructure Protection (CIP) and Security for Field DevicesSchneider Electric
The document discusses NERC CIP guidelines for securing critical infrastructure devices in the electric grid. It provides an overview of the six main CIP guidelines regarding personnel authorization, training, security of the electronic perimeter, physical security, operations security, and incident reporting. The document emphasizes that compliance requires both compliant technologies and security-focused procedures. It also outlines key security principles like least privilege and role-based access controls. Overall, the summary provides a high-level view of the document's coverage of NERC CIP compliance objectives and guidelines.
This presentation shows that Data Center Infrastructure Management (DCIM) Software to a Data Center Manager is what ERP software is to a VP - Manufacturing. This is the 2nd presentation from a series of 3-part series from GreenField Software on the subject: DCIM for High Availability.
DCIM Software charts out the relationship maps for assets by identifying various dependencies among them. Threshold-based alerts on critical parameters, combined with impact analysis of Move-Add-Change, mitigates risks of DC failures.
GreenField Software’s Mission is to help Data Centers control capital expenditures reduce operating expenses and mitigate the risks of Data Center failures. Besides DCIM Software, GFS offers Data Center Advisory Services in the areas of best practices, capacity planning, energy efficiency and business continuity of data centers.
DCIM tools help data center managers improve planning, lower costs, and speed up information delivery by providing visibility into data center infrastructure and automation of manual processes. They solve problems around maintaining availability and responding quickly to business needs while lowering costs during consolidation and cloud computing initiatives. DCIM tools provide concise summaries of capacity, assets, and performance to help answer questions around planning, operations, and analytics. Their use is important for reducing operational expenses by 10-30% annually while improving productivity.
NER & Emerson Infrastructure Optimization Capabilties StoryboardGreg Stover
NER Data Corporation is the Premier National Value Added Master Distributuor of Emerson, Liebert and Avocent Technology Solutions.
No other Distributor knows Emerson Cooling, Power, Monitoring and DCIM Solutions better, has better programs, has better pricing and/or has better Technical Consultiing capability then NER.
NER will show you how to optimize you existing Emerson, Liebert and Avocent investments while ensuring you are properly positioned for the future!
As steel operations rely heavily on low-voltage motors, the introduction of new technologies which target motor performance have a direct impact on energy, commissioning and maintenance costs. Networking allows for easy monitoring of critical data of
each motor or load connected to the intelligent motor control center (iMCC), enabling precise process control. However, the iMCC concept isn’t a new technology. Networked protection relays and speed drivers are mature technologies with consolidated acceptance. Explore new trends for iMCCs including new Ethernet technologies, Web, wireless, biometric devices, and new technologies for metering and motor branch circuit protection. Copyright AIST Reprinted with Permission
[Webinar Presentation] Best Practices for IT/OT ConvergenceSchneider Electric
All over the world, utilities are facing up to the task of integrating information technology (IT) operations with those of operational technology (OT). What's driving it? How can utilities prepare? What should they expect?
The webinar recording is also available on-demand. To view it, please click here: http://goo.gl/b3kxm5
Here are the key points of a collapsed multitier design:
- All server farms are directly connected without physical separation between Layer 2 switches. This reduces hardware costs.
- Services like load balancing, firewalling, etc. are concentrated at the aggregation layer rather than being distributed between tiers.
- Less hardware is required compared to an expanded design as there is no need for separate switches and devices at each tier.
- However, it provides less control and scalability compared to an expanded design as tiers are not physically isolated. For example, if one tier needs to be scaled out, it affects the other tiers.
- Security may also be weaker as there is no firewall segmentation between tiers.
What Does It Cost to Build a Data Center? (SlideShare)SP Home Run Inc.
http://DataCenterLeadGen.com
What Does It Cost to Build a Data Center? (SlideShare).
The “build a data center” decision is not to be taken lightly. Consider these different cost factors to see if a build or lease is better.
Copyright (C) SP Home Run Inc. All worldwide rights reserved.
The breadth of information provided by DCIM enables Data Center Managers to impart intelligence cross-departmentally and directly to the desk of executives. Learn how a DCIM solution can be leveraged for optimal understanding of the data centers current state and projected needs in terms of cost. This webinar will provide a high-level view of DCIM's value and is intended for any person involved in making decisions in the data center.
Best Practices for Creating Your Smart Grid Network ModelSchneider Electric
A real-time model of their distribution network enables
utilities to implement Smart Grid strategies such as
managing demand and integrating renewable energy
sources. They build this model in an Advanced
Distribution Management System (ADMS) based on
accurate and up-to-date information of the distribution
network infrastructure. Yet, a recent survey shows that
less than 5% of utilities are confident about the quality
of the network data. This paper discusses best practices
for ensuring complete, correct, and current data for a
Smart Grid network model.
Power Protection for Digital Medical Imaging and Diagnostic EquipmentSchneider Electric
Medical imaging and diagnostic equipment (MIDE) is
increasingly being networked to Picture Archiving and
Communications Systems (PACS), Radiology Information
Systems (RIS), Hospital Information Systems
(HIS), and getting connected to the hospital intranet as
well as the Internet. Failing to implement the necessary
physical infrastructure can result in unexpected
downtime, and safety and compliance issues, which
translates into lost revenue and exposure to expensive
litigations, negatively affecting the bottom line. This
paper explains how to plan for physical infrastructure
when deploying medical imaging and diagnostic
equipment, with emphasis on power and cooling.
Schneider Electric provides a comprehensive approach to cyber security for critical infrastructure. They recognize cyber attacks have expanded from disrupting IT systems to endangering physical assets and human life. The document outlines Schneider's investments in security technologies and services to protect customers across industries. It describes their defense-in-depth strategy including secure product design, testing, compliance with standards, and security services to monitor, detect, and respond to threats. The goal is to help customers comply with regulations and mitigate risks through an integrated portfolio.
The document provides a five-step process for planning a new data center: 1) Determine design parameters like capacity, budget, growth plan, etc. 2) Develop a system concept by selecting a reference design. 3) Determine user requirements like preferences and constraints. 4) Generate a specification. 5) Generate a construction design. It emphasizes involving the right stakeholders, communicating at the right level of abstraction, and avoiding common mistakes like poor budgeting or an IT-focused rather than business-focused design. Following the standardized process can help complete projects on time and on budget by eliminating potential pitfalls.
This document provides an overview of data center design and infrastructure. It discusses the history and evolution of data centers from large computer rooms in early computing to modern facilities. Key aspects covered include facilities layout, mechanical and electrical systems for power, cooling, fire protection and more. Modern data center design principles emphasize modularity, scalability, efficiency and resiliency. The document also examines data center infrastructure management tools and the use of modular or containerized data center solutions.
Asset Management - what are some of your top priorities? (Schneider Electric)
The document discusses Schneider Electric's Foxboro Evo asset management software. The software aims to [1] improve operational uptime by enabling remote device monitoring and diagnostics to reduce unnecessary field trips, [2] streamline engineering workflows through template-based device configuration and commissioning wizards, and [3] reduce costs and risks through features like a maintenance response center for alert triaging and work order management integration.
CERTIFIED Data Center Professional - CDCP (APEXMarCom)
The document provides information about a 2-day Certified Data Center Professional (CDCP) course on managing mission critical data center facilities. The course aims to teach participants best practices for designing, maintaining, and operating data centers with high availability and efficiency. It will cover key components of data centers like power, cooling, security, and cabling and how to set them up and improve them. The course also addresses operations and maintenance aspects. The target audience includes IT professionals, facilities professionals, and operations professionals. The benefits of the course include learning how to identify the best site for a data center, understand components for high availability, and apply industry standards to improve efficiency, security, and maintenance.
Every business, no matter how small, has a data center. It is an ever-growing part of business in the modern world and a key business parameter, since the data center influences the functioning of the entire enterprise. Imagine what happens to business operations when the data center is interrupted: any interruption can lead to a serious breakdown. That is why an efficient backup strategy is essential.
The document discusses data center infrastructure and operations. It explains that data centers must transform from traditional environments to ones that are efficient, automated, and service-oriented to reduce costs and complexity while enabling growth. A typical data center securely houses an organization's IT systems and provides power, cooling, and redundancy to ensure maximum availability and resilience. It also discusses considerations for data center design like power usage efficiency and virtualization strategy.
WP107 How Data Center Management Software Improves Planning and Cuts OPEX (SE_NAM_Training)
Modern data center infrastructure management software tools can help simplify operations, cut costs, and speed up information delivery in three key ways:
1. Planning tools simulate the impact of infrastructure changes to help with capacity planning and ensure redundancy.
2. Operations tools provide rapid impact analysis when issues arise and can proactively prevent downtime.
3. Analytics tools leverage historical data to identify strengths and weaknesses to improve future performance.
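To make point 1 concrete, here is a toy version of the what-if check a planning tool might run before a change is committed. This is not a real DCIM API; it is an illustrative sketch of simulating whether an added IT load still preserves N+1 UPS redundancy:

```python
# Illustrative sketch (not a real DCIM API): check whether adding an IT load
# still preserves N+1 UPS redundancy, the kind of what-if a planning tool runs.
def supports_n_plus_1(ups_capacities_kw, current_load_kw, new_load_kw):
    """True if the combined load still fits with the single largest UPS failed."""
    total = sum(ups_capacities_kw)
    surviving = total - max(ups_capacities_kw)  # worst-case single failure
    return current_load_kw + new_load_kw <= surviving

print(supports_n_plus_1([100, 100, 100], 150, 40))  # True: 190 kW <= 200 kW
print(supports_n_plus_1([100, 100, 100], 150, 60))  # False: 210 kW > 200 kW
```

A real planning tool would pull the capacities and loads from live instrumentation rather than hard-coded values, but the underlying impact analysis is this kind of arithmetic.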
The document discusses key aspects of data centers including:
- Defining what a data center is and its main components: white space, support infrastructure, IT equipment, and operations staff.
- How data centers are managed through coordinated efforts between IT and facilities to maintain systems and infrastructure.
- What a green data center is and how the federal government is involved in improving energy efficiency.
- Common concerns of key stakeholders like IT, facilities, and finance when managing a data center.
- Options for addressing lack of power, space or cooling through optimization, moving locations, or outsourcing.
- Important measurements and benchmarks for data center efficiency like PUE and where to find standards from groups
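The PUE benchmark mentioned in the last bullet is simple arithmetic: total facility energy divided by the energy delivered to IT equipment. A minimal sketch (the kWh figures below are made-up illustrations, not from the document):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: a PUE of 1.0 would mean every watt reaches IT gear."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative year: 1.8 GWh drawn by the facility, 1.0 GWh consumed by IT
print(pue(1_800_000, 1_000_000))  # 1.8
```

The closer the ratio is to 1.0, the less energy is spent on cooling, power conversion, and other overhead relative to useful IT work.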
The document discusses data center infrastructure and operations. It explains that data centers must transform from traditional environments to ones that are efficient, automated, and service-oriented to reduce costs and complexity while enabling growth. A typical data center securely houses an organization's IT systems and provides power, cooling, and redundancy to ensure maximum availability. It also discusses business benefits of data centers like availability, continuity, lower total cost of ownership, and agility. The document provides considerations for data center design like power usage efficiency and virtualization strategy. It includes a glossary of terms.
This document provides an introduction to IT infrastructure, defining its key components and concepts. It discusses how infrastructures have become more complex with new applications and specialized hardware. The infrastructure is comprised of building blocks including processes/information, applications, application platforms, and underlying infrastructure components like servers, storage, networking, and datacenters. Non-functional attributes like availability, performance, and security are also essential considerations in infrastructure architecture.
Research Paper Find a peer reviewed article in the following d.docx (eleanorg1)
Research Paper:
Find a peer reviewed article in the following databases provided by the UC Library and write a 500-word paper reviewing the literature concerning Data Center Technology. Choose one of the technologies discussed in Chapter 5, Section 5.2 (Erl, 2014). The paper should have an Abstract, an Introduction, and a Conclusion.
1- Virtualization -- provide some flow chart also.
2- Standardization and Modularity
3- Automation
4- Remote Operation and Management
5- High Availability
6- Security-Aware Design, Operation, and Management
7- Facilities
(Note: you can take any one from 1 to 7.)
You must use the following databases for your research:
· ACM Digital Library
· IEEE/IET Electronic Library
· SAGE Premier
You may choose any scholarly peer reviewed articles and papers.
FYI – PDF book excerpt:
Section 5.2
5.2. DATA CENTER TECHNOLOGY
(Source: Chapter 5, Cloud-Enabling Technology – Cloud Computing: Concepts, Technology & Architecture, https://www.safaribooksonline.com/library/view/cloud-computing-concepts/9780133387568/ch05.html)

Grouping IT resources in close proximity with one another, rather than having them geographically dispersed, allows for power sharing, higher efficiency in shared IT resource usage, and improved accessibility for IT personnel. These are the advantages that naturally popularized the data center concept. Modern data centers exist as specialized IT infrastructure used to house centralized IT resources, such as servers, databases, networking and telecommunication devices, and software systems.

Data centers are typically comprised of the following technologies and components:

Virtualization
Data centers consist of both physical and virtualized IT resources. The physical IT resource layer refers to the facility infrastructure that houses computing/networking systems and equipment, together with hardware systems and their operating systems (Figure 5.7). The resource abstraction and control of the virtualization layer is comprised of operational and management tools that are often based on virtualization platforms that abstract the physical computing and networking IT resources as virtualized components that are easier to allocate, operate, release, monitor, and control.

Figure 5.7. The common components of a data center working together to provide virtualized IT resources supported by physical IT resources.

Virtualization components are discussed separately in the upcoming Virtualization Technology section.

Standardization and Modularity
Data centers are built upon standardized commodity hardware and designed with modular architectures, aggregating multiple identical building blocks of facility infrastructure and equipment to support scalability, growth, and speedy hardware replacements.
This document describes a method for qualifying IT infrastructure in a way that can scale to organizations of different sizes. It defines what constitutes IT infrastructure, including servers, networks, desktops, and management applications. The method aims to minimize validation effort through a risk-based and layered approach while still meeting regulatory requirements. IT infrastructure is expected to be fault-free, continuously available, and compliant with processes and procedures like critical utilities. Regulations surrounding IT infrastructure are discussed, noting the need to demonstrate control over infrastructure through a planned qualification process and ongoing compliance procedures.
This document discusses IT infrastructure, including its components, types, and management. IT infrastructure consists of hardware, software, networking components, operating systems, and data storage used to deliver IT services. There are traditional infrastructures owned by organizations and cloud infrastructures using public, private or hybrid cloud models. IT infrastructure management coordinates resources, systems, platforms, and environments for tasks like operations, automation, orchestration, and risk management. Benefits of effective IT infrastructure include high performance storage, low latency networks, security, virtualization, and zero downtime.
The document summarizes key aspects of architectural design for software systems. It defines software architecture as the structure of system components and relationships between them. Architecture is important for analyzing design effectiveness, considering alternatives, and managing risks. Key architectural styles described include data-centered, data flow, call and return, object-oriented, and layered. The document also discusses defining architectural context diagrams, archetypes, and components to design system architecture.
The document defines cloud computing as a model enabling ubiquitous and convenient on-demand access to a shared pool of configurable computing resources that can be rapidly provisioned with minimal management effort. It identifies five essential characteristics, three service models (Software as a Service, Platform as a Service, and Infrastructure as a Service), and four deployment models (Private cloud, Community cloud, Public cloud, and Hybrid cloud). The purpose is to serve as a means for broad comparisons of cloud services and deployment strategies.
The document defines cloud computing according to the National Institute of Standards and Technology (NIST). It identifies five essential characteristics of cloud computing (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service). It also outlines three service models (Software as a Service, Platform as a Service, and Infrastructure as a Service) and four deployment models (private cloud, community cloud, public cloud, and hybrid cloud). The purpose is to provide a baseline definition and taxonomy to facilitate comparisons of cloud services and deployment strategies.
RES Software dynamic desktop solutions simplify desktops into centrally managed, secure, personalized, context-aware and less complex desktops for your users.
Faster logins and access to applications make a difference in EHR adoption.
Research Paper Find a peer reviewed article in the following dat.docx (audeleypearl)
Research Paper: Find a peer reviewed article in the following databases provided by the UC Library and write a 250-word paper reviewing the literature concerning Data Center Technology. Choose one of the technologies discussed in Chapter 5, Section 5.2 (Erl, 2014).
The document discusses developing a System Security Plan (SSP) for the Federal Risk and Authorization Management Program (FedRAMP). The SSP is a detailed document that describes how security controls have been implemented based on NIST SP 800-53. It provides an overview of the system, identifies responsible personnel, and delineates control responsibilities. Developing a thorough SSP can streamline the FedRAMP assessment process. The SSP template is lengthy at 352 pages to fully document the system and control implementation.
This document provides an introduction to IT infrastructure architecture, defining key concepts and building blocks. It discusses how infrastructures have become more complex with new applications and the need for agility. The definition of infrastructure is examined, noting it depends on perspective. Infrastructure comprises processes/information, applications, application platforms, and underlying hardware/network blocks. Non-functional requirements like availability, performance and security are crucial to infrastructure and often conflicting to balance. Architecture is needed to manage infrastructure design, use and changes.
The United States National Institute of Standards and Technology (NIST) has p... (Michael Hudak)
The document defines cloud computing based on recommendations from the National Institute of Standards and Technology (NIST). It identifies five essential characteristics of cloud computing (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service). It also outlines three service models (Software as a Service, Platform as a Service, and Infrastructure as a Service) and four deployment models (private cloud, community cloud, public cloud, and hybrid cloud). The purpose is to provide an informal definition to inform public debate on cloud computing.
"Definition of the term 'cloud computing'" (from the National Institute of Standard... (Victor Gridnev)
The document defines cloud computing based on recommendations from the National Institute of Standards and Technology (NIST). It identifies five essential characteristics of cloud computing (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service). It also outlines three service models (Software as a Service, Platform as a Service, and Infrastructure as a Service) and four deployment models (private cloud, community cloud, public cloud, and hybrid cloud). The purpose is to provide an informal definition to inform public debate on cloud computing.
Grid computing allows for the sharing and aggregation of distributed computing resources like computers, networks, databases and instruments. It provides a large virtual computing system for end users and applications. Key characteristics include facilitating solutions to large, complex problems across locations and organizations through integrated and collaborative use of heterogeneous resources. Popular applications include medical research, astronomy, climate modeling and more. Examples of operational grids discussed are TeraGrid, Pauá Grid Project and academic research projects like SETI@home.
Similar to Classification of data center operations technology management tools (20)
This document presents a 5-step approach for cities to become more efficient and sustainable through smart systems. It argues that critical systems like energy, transportation, and buildings need to be improved and integrated using both bottom-up and top-down approaches. The document outlines challenges of rapid urbanization, noting that 70% of the world's population will live in cities by 2050, necessitating expansion. It advocates making cities more efficient, livable, and sustainable to attract residents and businesses through technologies available today and an approach focused on systems.
This document presents a 5-step approach for cities to become more efficient and sustainable through smart systems. It argues that critical systems like energy, transportation, and buildings must be improved and integrated using both bottom-up and top-down approaches. The document outlines challenges of rapid urbanization, noting that 70% of the world's population will live in cities by 2050, necessitating expansion. It advocates making cities more efficient, livable, and sustainable to attract residents and businesses through technologies available today to monitor systems and manage resources.
Enforcing vehicle speed limits is vital in lowering road accident rates and improving road safety. LIDAR (Light Detection and Ranging) cinemometer technology has been shown to be more accurate than radar-based Doppler systems because it can measure at farther distances, keeping the vehicle in the beam detection area longer and making more readings possible. This paper summarizes LIDAR cinemometer methodology and describes the primary advantages of these systems compared to those applying conventional Doppler Effect-based technologies.
In today’s commercial buildings, installing an effective WAGES (water, air, gas, electricity, steam) metering system can be a source of substantial energy and cost savings. This white paper examines WAGES metering as the essential first step toward a comprehensive energy management strategy. Best practices for selecting meters and identifying metering points are described. In addition, metrics for measuring gains in energy efficiency are explained.
The document discusses Schneider Electric's smart city solutions, which include smart energy, mobility, water, public services, buildings/homes, and integration solutions. The solutions aim to increase efficiency, improve quality of life, and drive sustainability in cities by addressing issues like energy usage, water usage, reliability of resources, traffic congestion, safety, digitized services, sustainability planning, and holistic infrastructure management.
The document discusses Schneider Electric's approach to making cities smarter and more efficient through collaboration between various stakeholders. Key points include:
1) Cities face challenges like congestion, pollution and high costs that smart technologies can help address through solutions like smart energy grids, mobility systems, buildings and water infrastructure.
2) A smart city approach focuses on increasing efficiency through information sharing and integration rather than just expanding infrastructure. It also takes a long-term, collaborative approach.
3) Schneider Electric provides hardware, software and process expertise across various smart city domains and has over 200 project references worldwide delivering benefits like energy savings, reduced losses and emissions, and economic and social gains.
In less than 40 years, 70% of the world’s population will reside in our cities. This rapid migration will push both current and future urban centres to their seams and expand industrial and residential infrastructures beyond their breaking points.

This eye-opening fact raises important questions that must be considered by cities around the world. Can this growth be achieved in a sustainable way? Will cities be able to reduce their environmental impact and carbon emissions? Will we be able to meet the sustainability challenges brought on by regulation and the impact of this massive growth? And will we expand in ways that ensure communities are enjoyable places to live and promote social equality?

We can answer affirmatively to these concerns and re-design our cities with these thoughts in mind. With the movement towards smart cities, the urban centres we live in can become more efficient, livable, and sustainable in both the short and long term, thanks to involvement from cities, citizens, and businesses.
1. Schneider Electric is a global energy management company with over 175 years of history and presence in over 100 countries.
2. The presentation discusses Schneider Electric's offerings across various smart grid domains including generation and transmission, distribution, renewable energy, buildings, industry, and IT.
3. It defines smart grid as combining electricity infrastructure with information technology and communication infrastructure to efficiently balance demand and supply over an increasingly complex network with integrated users and new roles like prosumers and aggregators.
A high performance green building is designed for economic and environmental performance over its entire life cycle, considering unique local climate and cultural needs and providing for the health, safety and productivity of its occupants. With continuous care over its life cycle, it minimises energy use, CO2 emissions, and total environmental impacts, and provides ongoing measurable value to building owners, occupants and society.
1. The document discusses making buildings smarter and more intelligent to address the projected 56% increase in global energy demand by 2040. 40% of total global energy is currently consumed by buildings.
2. Key aspects of smart buildings discussed include building management systems, lighting control automation, smart meters, connectivity, and building analytics services to improve energy efficiency, safety, security, and sustainability.
3. The document promotes Schneider Electric's SmartStruxure solutions for building management that integrate systems like HVAC, lighting, and metering to provide monitoring, control, and energy savings across small to large buildings.
This document discusses improving urban efficiency through smart city initiatives. It describes how integrating operational technology and information technology can make infrastructure like transportation systems more efficient. This involves collecting data from across systems and departments to give city managers a holistic view for better decision making. The document also emphasizes that smart cities should put citizens at the center and involve both public and private stakeholders. It provides an example of an integrated management platform being used in cities to coordinate different transportation modes for shorter travel times and less pollution.
This document discusses smart energy systems and the future of energy in India. It addresses the increasing energy demand, shortage of sources, and issues of pollution and climate change. Smart energy solutions are presented as being available now to help manage these challenges through greater energy efficiency, distributed generation, smart grids, and demand response. The role of various players and new technologies in creating a more decentralized and interactive energy system is outlined.
The Schneider Electric ‘Innovate Something Wonderful” contest helps you innovate something new by solving jigsaw puzzles.
You just have to solve 6 puzzles over a period of 20 days and make sure you solve them smartly and quickly.
Participate in the Wall contests.
Participate in contests on other Social Media channels of Schneider Electric.
What do you win?
Schneider Electric Pen Drives
A pair of Bose Headphones
A Samsung Galaxy Tab
Contest Duration – March 12th to Mar 31st, 2013
https://www.facebook.com/SchneiderElectricIndia
Schneider Electric provides a comprehensive range of energy management services to help businesses optimize their energy usage and costs. Their services cover energy demand, supply, and certification and can help improve efficiency, reduce environmental impact, and achieve certification standards. Schneider Electric has expertise to support customers throughout the entire energy management lifecycle from strategy to optimization.
Make an impact on your environmental balance sheet:
1. Adopt a clear plan that is simple to measure and communicate to stakeholders.
2. Reduce your carbon footprint and environmental impact.
3. Improve public, market, and leadership perceptions of your company's efficacy.
4. Instill pride in employees, who know that their company is taking real steps forward to conserve energy.
Electricity usage costs have become an increasing fraction of the total cost of ownership (TCO) for data centers. It is possible to dramatically reduce the electrical consumption of typical data centers through appropriate design of the data center physical infrastructure and through the design of the IT architecture. This paper explains how to quantify the electricity savings and provides examples of methods that can greatly reduce electrical power consumption.
Many newer UPS systems have an energy-saving operating mode known as “eco-mode” or by some other descriptor. Nevertheless, surveys show that virtually no data centers actually use this mode, because of the known or anticipated side-effects. Unfortunately, the marketing materials for these operating modes do not adequately explain the cost / benefit tradeoffs. This paper shows that eco-mode provides a reduction of approximately 2% in data center energy consumption and explains the various limitations and concerns that arise from eco-mode use. Situations where these operating modes are recommended and contraindicated are also described.
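The paper's headline figure, roughly a 2% reduction in total data center energy use from eco-mode, can be turned into a quick what-if calculation. A minimal sketch (the 1 MW load, PUE of 1.8, and $0.10/kWh tariff below are illustrative assumptions, not figures from the paper):

```python
def eco_mode_savings_kwh(annual_dc_kwh, savings_fraction=0.02):
    """Annual kWh saved if eco-mode trims ~2% of total consumption (paper's estimate)."""
    return annual_dc_kwh * savings_fraction

# Assumed scenario: 1 MW of IT load running year-round at an assumed PUE of 1.8
annual_kwh = 1_000 * 24 * 365 * 1.8           # kW x hours/year x PUE
saved_kwh = eco_mode_savings_kwh(annual_kwh)  # 2% of total consumption
saved_usd = saved_kwh * 0.10                  # at an assumed $0.10/kWh tariff
print(round(saved_kwh), round(saved_usd))     # 315360 31536
```

Whether those savings justify the side-effects the paper describes is exactly the cost/benefit tradeoff the marketing materials tend to gloss over.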
Data center power and cooling infrastructure worldwide wastes more than 60,000,000 megawatt-hours per year of electricity that does no useful work powering IT equipment. This represents an enormous financial burden on industry, and is a significant public policy environmental issue. This paper describes the principles of a new, commercially available data center architecture that can be implemented today to dramatically improve the electrical efficiency of data centers.
An improved architecture for high efficiency, high-density data centers
Classification of data center operations technology management tools
1. Classification of Data Center Infrastructure Management (DCIM) Tools
White Paper 104, Revision 3
by Kevin Brown and Dennis Bouley

Executive summary
Data centers today lack a formal system for classifying infrastructure management tools. As a result, confusion exists regarding which management systems are necessary and which are optional for secure and efficient data center operation. This paper divides the realm of data center management tools into four distinct subsets and compares the primary and secondary functions of key subsystems within these subsets. With a classification system in place, data center professionals can begin to determine which physical infrastructure management tools they need – and don’t need – to operate their data centers.

Contents
Introduction 2
Classification system context 3
Monitoring & Automation 4
Planning & Implementation 8
Data collection 11
Dashboard 11
Conclusion 13
Resources 14
Appendix 15

White Papers are now part of the Schneider Electric white paper library produced by Schneider Electric’s Data Center Science Center, DCSC@Schneider-Electric.com
2. Classification of Data Center Infrastructure Management (DCIM) Tools
Introduction

The total data center universe that most data center professionals are familiar with principally consists of two realms. The first realm, information technology (IT), refers to all systems that address the information processing aspects of the data center (e.g., servers, storage arrays, and network switches). The second realm revolves around the physical infrastructure and controls that allow the IT realm to function. This second realm includes the physical infrastructure systems that support both the IT (“white space”) realm of the data center as well as the larger data center facility itself. This would include facility power, cooling, and security systems. The management classification system described in this paper is limited in scope to the physical infrastructure of the data center facility and IT areas.

Both realms are interrelated, but the subsystems within each are procured, managed, and maintained by separate users. Typically, facilities and engineering departments “own” and operate facility and IT infrastructure systems. IT department personnel “own” the IT equipment. In some larger data centers, both IT and infrastructure devices share a common communications backbone. As the total data center evolves, these departments will become more intertwined, as will the management systems that support them. Table 1 provides definitions of terms utilized in this paper to describe and contrast the data center infrastructure management classification system.
Table 1 – Terminology definitions and examples

Term: Data Center Facility & IT infrastructure
Definition: The totality of the material systems and foundational physical equipment necessary to facilitate operations of a reliable, controlled, and secured IT environment.
Examples: Power systems; Cooling systems; Security systems

Term: Information Technology (IT)
Definition: The entire spectrum of technologies for information processing, including software, hardware, communications technologies, and related services.
Examples: Servers; Storage systems; Network systems

Term: Environment
Definition: The total physical surroundings within a building or facility that house the various pockets of data center related hardware and software.
Examples: IT room; Electrical room; Mechanical room

Term: Subset
Definition: A logical grouping of physical subsystems with similar primary functions (four of these).
Examples: Monitoring & Automation; Planning & Implementation; Dashboard; Data Collection

Term: Subsystem
Definition: A purpose-built software package that addresses a specific need (potentially hundreds of these).
Examples: Facility power device monitoring subsystem; IT room security monitoring subsystem

Term: Primary function
Definition: A software function that is first in order of development and first in rank or importance when compared to other software functions available within that particular subsystem.
Example: The PowerLogic ION Enterprise software package’s electrical room power analytics function

Term: Secondary function
Definition: A software function that is second in rank of importance or later in order of development, coming after the primary function.
Example: The PowerLogic ION Enterprise software package’s facility HVAC cooling device monitoring function
Schneider Electric – Data Center Science Center White Paper 104 Rev 3 2
Classification of Data Center Infrastructure Management (DCIM) Tools
> A note regarding energy management
The data center infrastructure management context map presented in Figure 1 does not specifically call out energy management in any of its subsets. In fact, energy management is involved throughout all layers of the management software construct and is not concentrated in any one subset or subsystem.

In an ideal world, data center managers would be able to run one management software package that addresses all of their basic needs. However, the concept of "one system" does not exist in any practical sense. While numerous vendors promise a vision of the ultimate "unified" management system, this has been an elusive dream that will be difficult to realize. The following points illustrate why "one system" is an unlikely goal in the foreseeable future:

• A need for simple tools that fulfill specific requirements – IT and facilities employees have different priorities, and no one package will meet all of their needs. These employees prefer simple tools that focus on addressing their specific need.

• Investments in pre-existing systems – Most data center professionals already have software in place that performs part of the management function. In many cases, it is neither feasible nor cost effective to replace existing software.

• Open protocols enable integration of disparate software – Facility and IT infrastructure management software is highly specialized. However, when these tools are based on standardized, open protocols, it becomes quite easy to add new software tools, as needed, to an existing tool set and have them communicate and work together effectively. This capability diminishes the need for a single, unified system that covers everything.
Classification system context

Figure 1 illustrates a context map of the four subsets within the facility and IT infrastructure portion of the data center. Depending upon the size of a given data center, the total data center (i.e., both realms described above) could consist of hundreds of management software subsystems. The first step when classifying these subsystems is to group them into general subsets. Although the focus of this paper is facility and IT infrastructure management software, the subsets can also be used to classify IT management software.

Figure 1 – This data center facility and IT infrastructure software context map demonstrates how the various subsets interact. The Dashboard subset (cross-subsystem GUIs) sits at the top; below it, the Monitoring & Automation and Planning & Implementation subsets (subsystem-specific GUIs) exchange data; at the base, the Data Collection subset (subsystem-specific HMIs) feeds them all.
Note that the subsets in Figure 1 have either graphical user interfaces (GUI) or human
machine interfaces (HMI) associated with them. Also note that the Dashboard subset is the
primary area within the context map that allows for the visualization of cross subset infor-
mation.
The first step for data center operators who are evaluating their management software is to
examine key data center infrastructure systems such as the power distribution system,
building mechanical and cooling facilities, IT room, and security. This will help to determine
which subsystem management tools are already in place and, looking forward, which
subsystem tools are actually needed. A colocation data center, for example, may not require
a subsystem that manages at the IT room level. However, HVAC control and power man-
agement subsystems may be essential for that same colocation facility. A small or medium-
sized data center with an IT room housing 100 racks might forgo a facility control and power
management subsystem, leaving that to the facilities staff. However, the IT staff may wish to
directly monitor performance data by investing in an IT room management subsystem.
Monitoring & Automation

Subsystems grouped within the Monitoring & Automation subset ensure that 1) the data center functions as designed, and 2) activities are automated to maintain and maximize the availability and efficiency of the data center. Monitoring & Automation software acts upon user-set thresholds by alarming, logging, or even controlling physical devices. The Monitoring & Automation subset includes facility power, facility environmental control, facility security, and IT room management subsystems (see Figure 2). Table 2 helps to differentiate the mainstream Monitoring & Automation subsystems in terms of their primary and secondary functions (see sidebar "Not all monitoring software solutions are created equal").
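The threshold-driven pattern described above (watch a value, compare it to a user-set limit, then alarm, log, or control) can be sketched in a few lines. This is an illustration only; the sensor names, limits, and actions below are hypothetical and not drawn from any Schneider Electric product.

```python
# Hypothetical sketch of threshold-driven monitoring: each reading is
# checked against a user-set limit and the subsystem alarms, logs, or acts.
from dataclasses import dataclass, field

@dataclass
class Threshold:
    name: str      # e.g. "inlet_temp_C" (hypothetical sensor name)
    limit: float   # user-set threshold
    action: str    # "alarm", "log", or "control"

@dataclass
class Monitor:
    thresholds: list
    events: list = field(default_factory=list)

    def ingest(self, name, value):
        """Evaluate one reading against its configured threshold."""
        for t in self.thresholds:
            if t.name == name and value > t.limit:
                # A real subsystem would notify staff or command a device;
                # here we just record the event.
                self.events.append((t.action, name, value))

monitor = Monitor([Threshold("inlet_temp_C", 27.0, "alarm"),
                   Threshold("rack_load_kW", 8.0, "log")])
monitor.ingest("inlet_temp_C", 29.5)   # exceeds limit -> alarm event
monitor.ingest("rack_load_kW", 5.2)    # within limit -> no event
print(monitor.events)                  # [('alarm', 'inlet_temp_C', 29.5)]
```

The same loop generalizes to logging and control actions; only the handler attached to each event type changes.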
Figure 2 – The Monitoring & Automation subset contains several subsystems (facility power, facility environmental control, facility security, IT room), each of which provides a number of functions (alarming & notification, status, control, configuration, visualization, reporting & analytics)

Four subsystem groupings exist within the Monitoring & Automation subset:
Facility power
The facility power management subsystem provides detailed insight into the status and
operation of the entire electrical distribution network (from utility feeds, to transformers, to
PDUs, to racks) within a building, often including the data center. Electrical engineering staff
and consultants utilize this subsystem to manage the electrical distribution network. The key
functions provided by this type of subsystem include power monitoring of current conditions
(critical and non-critical load), power alarming, and “power analytics”. These functions
support critical activities such as notification of and response to electrical network problems,
maintenance (planned and unplanned), capacity planning, facility expansion / retro-fit
projects, energy efficiency projects, power quality analysis, and power reliability analysis.
Figure 3 – Monitoring of facility power utilizing Schneider Electric's StruxureWare Power Monitoring Expert GUI

> Not all monitoring software solutions are created equal
Monitoring subsystems are built with a primary function in mind. Schneider Electric's StruxureWare Central IT room monitoring system, for example, has as its primary function the ability to monitor power and cooling in the IT room. However, many monitoring systems expand their capabilities over time. These secondary functions are typically less robust than those found in a purpose-built system. StruxureWare Central, for example, has a secondary ability to monitor Modbus devices outside the IT room. While not its primary function, that ability may be enough for data center operators with simple Modbus device monitoring requirements. Table 2 shows examples of the primary and secondary functions of physical infrastructure monitoring systems.

The facility power management subsystem offers a clear and complete view of facility power distribution. It also provides actionable information based on detailed electrical data such as power, energy, power factor, amperage, voltage, frequency, harmonics, and waveforms. The subsystem's output includes 3-D graphical views of the facility, electrical one-lines, and equipment detail. The facility power management subsystem also provides visual alarm indicators and alarm notification, data analysis tools, and the ability to schedule and distribute reports.

Facility power management subsystems can either provide a fairly simple, primary electrical monitoring function for smaller data centers, or extremely high speed, high performance feedback for large sites. Schneider Electric's StruxureWare Power Monitoring Expert is an example of a facility power monitoring subsystem (see Figure 3).

Facility environmental control

Facility environmental control subsystems traditionally support the requirements of corporate facilities departments. In addition to facility heating, ventilation, and air conditioning (HVAC) control, facility environmental subsystems can also encompass fire, water, steam, and gas systems. The preferred communication protocols for facility environmental control systems include BACnet, LonWorks, and Modbus.
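To illustrate how lightweight these open protocols are at the wire level, the sketch below assembles a Modbus/TCP "read holding registers" request frame by hand. The register address, count, and unit ID are arbitrary example values; a real deployment would use a maintained Modbus library rather than raw frames.

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'read holding registers' (function 0x03) request.

    MBAP header: transaction id, protocol id (always 0), length of the
    bytes that follow (unit id + PDU = 6), unit id.  PDU: function code,
    starting register, register count.  All fields are big-endian.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, 1 + len(pdu), unit_id)
    return mbap + pdu

# Illustration only: poll 2 registers starting at address 100 on unit 1.
frame = read_holding_registers_request(transaction_id=1, unit_id=1,
                                       start_addr=100, count=2)
print(frame.hex())  # '000100000006010300640002'
```

Because the framing is this simple and fully published, tools from different vendors can poll the same meters and controllers without proprietary gateways — the practical basis of the "open protocols" argument above.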
Everyday functions of facility environmental control systems include the opening and closing of valves and dampers, the spinning up of fans, the initialization of pumps, and the controlled cooling and heating of targeted spaces within the facility.
Schneider Electric’s StruxureWare Building Expert is an example of a mainstream facility
environmental control subsystem. Facility environmental control subsystems are also
differentiated from facility power management subsystems in that facility environmental
control handles the coordination, control and reporting for all energies, not just electrical
power.
> Note regarding Tables 2 and 3
Many physical infrastructure software products from multiple manufacturers exist in the marketplace today, and most offer a wide variety of functions. Tables 2 and 3 compare the functions of only a partial sampling of the Schneider Electric products that fit within the Operations Technology (OT) universe. They are not meant to be a comprehensive representation of what is available in the marketplace; it is not Schneider Electric's role to represent other manufacturers' products, whose functions are often in a state of flux and could easily be misrepresented. Therefore these tables are restricted to a portion of the current Schneider Electric suite of products. Generic tables in the Appendix allow data center operators to enter their own suite of management software products for comparison purposes.

Table 2 – Comparison of Monitoring & Automation primary and secondary functions, using Schneider Electric's StruxureWare for Data Centers suite as an example. Products compared (columns): Power Monitoring Expert**, Building Expert (BMS), Data Center Expert*, Pelco Digital Sentry. For each function below, the table marks each product as providing a primary function, a secondary function, or no function.

Functions compared (rows):
Facility Power: power device monitoring; power analytics; PUE monitoring
Facility Environmental Control: automation and control; cooling device monitoring
Facility Security: surveillance; access control
IT Room: power device monitoring; cooling device monitoring; environmental monitoring; security monitoring; partial PUE monitoring

Facility power devices include: breakers, trip units, medium voltage and low voltage metering (i.e., transformers, switches), programmable logic controllers (PLCs), remote terminal units (RTUs), automatic transfer switches (ATS), generator controls, and UPS controls.

IT power devices include: UPS controls, power distribution units (PDUs) and branch circuit metering, and rack power strip metering.

* Includes security add-ons such as NetBotz, and PUE monitoring tools such as StruxureWare Data Center Operation: Energy Efficiency
** Works in coordination with installed meters for data collection
Facility security
As new technologies such as advanced optical video management systems, biometric
identification, and remote management systems become more widely available, traditional
card-and-guard security is being supplanted by facility security subsystems that can provide
positive identification and tracking of human activity in and around the data center. Identifica-
tion technology is changing as fast as the facilities, information, and communication it
protects. Schneider Electric Pelco is an example of a facility security subsystem capable of
providing both indoor and outdoor video security support.
Figure 4 – Typical data center IT room security interface

> A note regarding subsystem users
Subsystems throughout the data center are managed by individuals with differing job responsibilities. On the IT side, operators tend to focus on a series of individual subsystem GUIs, whereas management focuses on the consolidated information reported on the dashboard. On the facilities side, a similar situation occurs: engineers monitor individual building HVAC systems, for example, while facilities management tends to interact with the dashboards that display cross-facility information.

IT room

IT room management subsystems monitor the power and cooling systems on the IT room floor so that uptime of servers, communication equipment, and storage equipment can be maintained. Data center IT room management subsystems are developed around the needs and requirements of the computer room operators (a need for faster speed and real-time information). The IT environment is characterized by frequent changes, intelligent devices, and a management philosophy based on exception. These subsystems can also integrate with security cameras within rows of racks, such as Schneider Electric's NetBotz cameras.

IT room management subsystems are designed to accommodate simultaneous firmware upgrades to multiple systems, and to monitor battery health by identifying exceptions that indicate behavioral characteristics beyond pre-programmed thresholds. IT room management subsystems are built around the expectation that power and cooling monitoring operates in a manner similar to other IT applications. That is, the software can be self-installed, and the software performs auto discovery of linked components. In essence, everything just "works" out of the box, with the ability to change the configuration. These subsystems generally utilize an IP network communication protocol. Schneider Electric's StruxureWare Data Center Expert is an example of an IT room management subsystem.
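The auto-discovery expectation described above can be approximated, in spirit, by sweeping a subnet for devices that answer on a known port. This is a deliberately crude sketch with hypothetical subnet and port values; commercial IT room management tools typically rely on SNMP or vendor discovery protocols instead of a bare TCP probe.

```python
import ipaddress
import socket

def discover(subnet, port, timeout=0.2):
    """Probe every host on a subnet for an open TCP port and return
    the addresses that answered (a crude auto-discovery sweep)."""
    found = []
    for host in ipaddress.ip_network(subnet).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((str(host), port)) == 0:  # 0 means connected
                found.append(str(host))
    return found

# Illustration only (hypothetical subnet): look for devices exposing
# a web UI on port 80.
# discover("192.168.1.0/29", 80)
```

A real product would follow the sweep with an identification step (reading a device's model and firmware over its management protocol) so that discovered components can be configured rather than merely listed.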
Planning & Implementation

Planning & Implementation, the second subset of subsystems (see Figure 1), ensures 1) efficient deployment of new equipment, 2) execution of planning in order to facilitate changes in the data center, 3) tracking of assets within the data center, and 4) simulation of potential changes in order to analyze the future impact on the data center. Functions within Planning & Implementation involve prediction and modeling ("What happens if I do this?"), change tracking ("At what point does my system become obsolete?"), inventory tracking ("How do I track the history and movements of this piece of equipment?"), and dependency analysis ("If I change the contents of this rack, how will it impact my cooling?").
Five subsystem groupings exist within the Planning & Implementation subset:

Facility asset management – This subsystem allows for management of asset deployment, generation of facility-related parts specifications, and calibration, costing, and tracking of building equipment assets.

Facility capacity management – This subsystem aids facilities staff in planning both moves and changes within the mechanical and electrical rooms by providing real-time measurements of energy consumption and water flows, in addition to the projected impact of changes to the power and cooling infrastructure.

Figure 5 – The Planning & Implementation subset contains several subsystems (facility asset management, facility capacity management, IT room workflow management, IT room capacity management, IT room asset & lifecycle management), each of which provides a number of functions (change tracking, inventory tracking, dependency analysis, visualization, prediction & modeling)

IT room workflow management – This subsystem facilitates the execution of equipment additions, moves, and changes by presenting a hierarchical overview of data center locations, including global and local views, from groups down to single assets.
> Link to resource
White Paper 150, Power and Cooling Capacity Management for Data Centers

IT room capacity management – From a power consumption efficiency perspective, this subsystem identifies the optimal physical location for power, cooling, and rack-based IT equipment. User-defined requirements such as redundancy, network use, and line of business groupings are also factored in. Live data is utilized to create simulations which analyze the impact of changes before they occur. This level of planning allows for reductions in stranded cooling and power capacity. For more information on the subject of stranded capacity, see White Paper 150, Power and Cooling Capacity Management for Data Centers.
IT room asset & lifecycle management – This subsystem allows for the management of IT room inventory. Visual models of the data center layout enable tracking of IT assets and available space. The rendering of the data center physical layout also allows for visualization of power consumption per rack, as well as identification and location of power failures.
Figure 6 – Planning & Implementation for the IT room environment utilizing the Schneider Electric StruxureWare Data Center Operation GUI

Table 3 helps to differentiate some of the mainstream Planning & Implementation subsystems in terms of their primary and secondary functions.
Table 3 – Comparison of Planning & Implementation primary and secondary functions, using Schneider Electric's StruxureWare for Data Centers suite as an example. Products compared (columns): Power Monitoring Expert, Building Expert (BMS), Data Center Operation, Data Center Operation - Capacity, Data Center Operation - Change. For each function below, the table marks each product as providing a primary function, a secondary function, or no function.

Functions compared (rows):
Facility Asset Management: inventory tracking; maintenance tracking
Facility Capacity Management: impact and dependency analysis - power; impact and dependency analysis - cooling
IT Room Workflow Management: prediction and modeling; workflow tracking
IT Room Capacity Management: impact and dependency analysis - power; impact and dependency analysis - cooling; impact and dependency analysis - network
IT Room Asset & Lifecycle Management: inventory tracking; change tracking
Data collection

The Data Collection subset represents devices such as meters, power protection devices, embedded cards, programmable logic controllers (PLCs), sensors, and other such devices. These devices perform the fundamental function of gathering data and forwarding it to management software for processing.

Figure 7 – Human machine interface (HMI) provides configuration and operation information for an individual UPS device
Dashboard

Data center managers all require some means of consolidating critical information about the performance of their data center. Not only does the critical information need to be aggregated, but the user needs to visualize the data in a manner that is meaningful and actionable. In fact, this visualization of the data via a dashboard is a key function that allows a view across the four main subsystem subsets.

Operational dashboard data may include the following: average temperature and humidity, high temperature and humidity for a determined period, IT load, total data center load, and a summary of the last 10 critical alerts. From a security perspective, the dashboard could also highlight the last 10 physical entries into the data center and the times when these entries took place. Some operators, who are responsible for controlling their own energy costs, may also require PUE data on their dashboard.
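Since PUE is simply the ratio of total facility load to IT load, a dashboard can derive it from two figures already in the list above. A minimal sketch with made-up load values:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility load divided by IT load.
    A PUE of 1.0 would mean every watt delivered goes to IT equipment."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical dashboard readings: 1600 kW facility load, 1000 kW IT load.
print(round(pue(1600, 1000), 2))  # 1.6
```

In practice the two inputs come from different subsets — total facility load from facility power monitoring, IT load from IT room monitoring — which is exactly why PUE naturally lives on a cross-subset dashboard.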
Some data center operators may choose to access data in its raw form without the benefit of a dashboard. For example, queries from SQL tables may be generated and transferred to an Excel file so that a report can be produced that meets the immediate requirement for performance information. Various monitoring subsystems can also highlight urgent issues. But as data centers become more complex, the information required needs to be easily formatted and presented in a formal dashboard. A dashboard represents a fourth subset which captures data from the three other subsets and then pushes updates to a management package, providing KPIs and data summaries over the existing network.
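The raw-data route just described — querying SQL tables and handing the result to a spreadsheet — can be sketched with the Python standard library alone. The table name and columns are hypothetical, and CSV stands in for the Excel file since spreadsheets open it directly:

```python
import csv
import sqlite3

# Hypothetical readings table standing in for a monitoring database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?)",
                [("it_load_kw", 950.0), ("it_load_kw", 1010.0),
                 ("room_temp_c", 24.5)])

# Query: one summary row per sensor, as a dashboard-less report might need.
rows = con.execute("SELECT sensor, AVG(value) FROM readings "
                   "GROUP BY sensor ORDER BY sensor").fetchall()

# Write the result where a spreadsheet can pick it up.
with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sensor", "avg_value"])
    writer.writerows(rows)

print(rows)  # [('it_load_kw', 980.0), ('room_temp_c', 24.5)]
```

This works for one-off reports, but every new question means another hand-written query — which is the scaling problem the formal dashboard subset addresses.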
Some dashboards are more focused on the performance of the IT equipment, while others provide summary views into the physical infrastructure (cooling, power, security). Dashboards draw their information from the monitoring & automation, planning & implementation, and data collection subsets. Some dashboards are custom built or are purchased from third parties (see Figure 8 for a sample dashboard).
Visualization software
Although the dashboard is the key centerpiece for aggregation of actionable data, various
levels of human machine interface (HMI) and graphical user interface (GUI) exist and enable
meaningful data to be visualized by specific users via the various subsystems across the data
center (see GUI and HMI in Figure 1). Although the HMI used by the facilities engineer may
not resemble the GUI utilized by the IT operator, both extract information from the system
based upon the individual user’s preferences and priorities.
Figure 8 – Sample dashboard collects data across OT subsets and centralizes information in one or more user interfaces
Conclusion

By sharing key data points, alarm notifications, historical data, and asset tracking information, data center facility and IT infrastructure management software allows users to make informed decisions based upon real-time power and cooling capacity and redundancy data.

The classification system presented in this paper takes the first step in laying the groundwork for a logical approach, which can be summarized as follows:

A Whole Data Center, from which is selected the
Facility and IT infrastructure portion, which is divided into
Subsets, each of which consists of multiple
Subsystems, which are compared and contrasted by illustrating
Primary and secondary functions, which enable
Efficient investment in management software
…with key steps supported by visualization software

Today, multiple management applications across the principal domains of IT room management, building control, security, and power address various parts of the enterprise suite, but no one application does it all. This segmented approach will continue for the foreseeable future. However, innovative dashboards are being developed that consolidate information from these sources, facilitating prudent, informed operational decisions that enhance uptime and reduce energy costs.
About the authors

Kevin Brown is the Vice President of Data Center Global Solution Offer & Strategy at Schneider Electric. Kevin holds a BS in mechanical engineering from Cornell University. Prior to this position at Schneider Electric, Kevin served as Director of Market Development at Airxchange, a manufacturer of energy recovery ventilation products and components in the HVAC industry. Before joining Airxchange, Kevin held numerous senior management roles at Schneider Electric, including Director, Software Development Group.

Dennis Bouley is a Senior Research Analyst at Schneider Electric's Data Center Science Center. He holds bachelor's degrees in journalism and French from the University of Rhode Island and holds the Certificat Annuel from the Sorbonne in Paris, France. He has published multiple articles in global journals focused on data center IT and physical infrastructure environments and has authored several white papers for The Green Grid.
Resources

White Paper 150
Power and Cooling Capacity Management for Data Centers

Browse all white papers
whitepapers.apc.com

Browse all TradeOff Tools™
tools.apc.com

Contact us

For feedback and comments about the content of this white paper:
Data Center Science Center
DCSC@Schneider-Electric.com

If you are a customer and have questions specific to your data center project, contact your Schneider Electric representative at www.apc.com/support/contact/index.cfm
Appendix

Table A1 – Monitoring & Automation product comparison worksheet. Pre-filled columns: Power Monitoring Expert**, Building Expert (BMS), Data Center Expert*, Pelco Digital Sentry; five blank "Name of product" columns are provided for the reader's own products. For each function below, mark each product as providing a primary function, a secondary function, or no function.

Functions compared (rows):
Facility Power: power device monitoring; power analytics; efficiency monitoring
Facility Environmental Control: cooling device monitoring; automation and control
Facility Security: surveillance; access control
IT Room: power device monitoring; cooling device monitoring; environmental monitoring; security monitoring; partial PUE monitoring

* Includes security add-ons such as NetBotz, and PUE monitoring tools such as InfraStruxure Energy Efficiency
** Works in coordination with installed meters for data collection
Table A2 – Planning & Implementation product comparison worksheet. Pre-filled columns: Power Monitoring Expert, Building Expert (BMS), Data Center Operation, Data Center Operation - Capacity, Data Center Operation - Change; four blank "Name of product" columns are provided for the reader's own products. For each function below, mark each product as providing a primary function, a secondary function, or no function.

Functions compared (rows):
Facility Asset Management: inventory tracking; maintenance tracking
Facility Capacity Management: impact and dependency analysis - power; impact and dependency analysis - cooling
IT Room Workflow Management: prediction and modeling; workflow tracking
IT Room Capacity Management: impact and dependency analysis - power; impact and dependency analysis - cooling; impact and dependency analysis - network ports
IT Room Asset & Lifecycle Management: inventory tracking; change tracking