International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol. 8, No. 2/3/4, August 2018
DOI: 10.5121/ijcsea.2018.8404
DATACENTRE TOTAL COST OF OWNERSHIP (TCO)
MODELS: A SURVEY
Doaa Bliedy, Sherif Mazen and Ehab Ezzat
Department of Information Systems, Cairo University, Cairo
ABSTRACT
Datacenter total cost of ownership (TCO) tools and spreadsheets can be used to estimate the capital and
operational costs required for running datacenters. These tools help business owners evaluate and improve
the costs and the underlying efficiency of such facilities, or evaluate the costs of alternatives such as
off-site computing. A good understanding of the cost drivers in TCO models gives business owners more
opportunities to control costs. In addition, these models introduce an analytical structure in which
anecdotal information can be cross-checked for consistency against other well-known parameters driving
data center costs. This work compares a number of publicly available tools and spreadsheets for
calculating datacenter TCO. The comparison is based on several aspects, such as which parameters are
included or excluded in each tool and whether the tool is documented. Such an approach provides a solid
foundation for designing more and better tools and spreadsheets in the future.
KEYWORDS
Datacenters, TCO model, TCO parameters, IT software license cost, network, server
1. INTRODUCTION
Data centers can be defined as any space whose main function is to house servers [1] or computing devices that are in use, i.e., powered on and performing functions. Although a small computing room inside a multipurpose building can be considered a data center, the term is traditionally used for buildings whose major function is to house these servers. In this conventional sense, human occupancy is limited to small Information Technology (IT) support groups who may have office space within the building—these office spaces are small relative to the total size of the building. These facilities differ greatly from most buildings from a construction perspective. For example, mechanical and electrical systems account for 70% of construction costs in data centers, in contrast to only 15% of costs in commercial buildings [2].
Nowadays, network-based and internet activities are the major reasons for using datacenters. Datacenters house servers so that they can manipulate large amounts of data, communicate with each other or with other computer networks, and process user interactions with server-based software tools or web portals.
The other main purpose of data centers today is the need to manage critical data and operations, such as an organization's databases and email. As a result, reliability—i.e., the ability of the servers to work properly and not lose data—is a critical concern for many data centers. This reliability depends on many factors, such as the characteristics of the servers used and the data center "infrastructure", which comprises the power delivery system and cooling resources. Servers and other computing devices need a continuous power supply and are less susceptible to hardware breakdown when operating below a certain temperature. It is worth noting that servers can produce large amounts of heat and, as a result, data centers often have large cooling loads [3]. Facility owners often use redundant computing setups,
redundant power delivery systems, Uninterruptable Power Supply (UPS) devices, and cooling
resources to protect server machines from failure.
The level of redundancy in the power generation system and cooling resources is usually used to evaluate the level of reliability in a data center. This is known as the data center 'Tier level' [4], where higher Tier levels represent greater redundancy. Using redundant cooling resources, power delivery devices and UPS units increases the electrical energy consumed by the facility. On the other hand, some buildings don't need such redundancies and equipment. As internet-based services have become increasingly used, datacenter establishment has increased. From 2000 to 2010, the yearly establishment of data centers (in terms of money spent) increased over 300%, from approximately $15 billion USD to $50 billion USD [5]. Some of this increased spending is linked to more facilities being constructed annually, and some to the increased redundancy in the newer facilities. That is, a 'Tier 4' data center can cost $22 million USD/MW, compared to $10 million USD/MW for a 'Tier 1' facility [5].
The datacenter market can be split roughly into two sub-sectors: colocation and enterprise. Colocation, or "colo", operators provide what is known colloquially as "position, power and ping", which basically means that they provide the infrastructure – security, constant electricity supply, broadband connectivity and an environment in which temperature and humidity are controlled to suit servers. They then sell or lease space within those specialized facilities to companies who install and manage their own IT equipment. The term colocation comes from the fact that these customers share or "co-locate" their IT operations in one purpose-built facility. Enterprise operators are those who use their datacenter provision for their own purposes, i.e. for their own corporate IT functions (as in the case of banks, supermarkets and government departments). Enterprise operators also include those who provide IT services to third parties (HP, Fujitsu, IBM, BT, Atos, CapGemini, etc.). Enterprise datacenter operators may build their own datacenter facilities or locate their datacenters within colo facilities – many companies do both. Some companies operate enterprise datacenters but also sell some colo space to other companies, a logical option for enterprise operators who find themselves with spare capacity.
As mentioned above, enterprise operators may build their own datacenters, but for most it makes commercial sense to use a third-party provider. Once an organization's data requirements reach a certain size or become mission critical (where disruption of service has significant adverse consequences and generates liabilities), the data will need to be housed in an environment with guaranteed levels of security, continuity of power supply and connectivity. Companies have a number of options here: they could build their own facility; they could take space from a wholesale or colo operator and still manage their IT themselves; they could outsource the whole IT function to an IT services provider (who in turn may have their own datacenter or have taken space within a colo); or they could even buy a site and then contract a third party to manage their IT within it. In this paper the authors focus on the situation where the company chooses to build its own in-house datacenter from scratch.
The rest of the paper is organized as follows. Section 2 presents background about the proposed datacenter TCO models and spreadsheet tools, while the details of each model are discussed in Section 3. The comparison between the models is given in Section 4. Finally, Section 5 concludes the paper.
2. BACKGROUND
In this part, the authors overview the proposed datacenter TCO models and spreadsheet tools. The calculations and estimations of the TCO of data centers don't have any recognized standards or
rules; they depend on the size, location and design of the datacenter [6]. Many TCO models have been proposed for guiding datacenter design [7], [8], [9], [10], [11]; they mainly depend on two types of datacenter costs: the capital costs and the operating costs of running the datacenter. Only a few tools are publicly available to calculate TCO. APC [12], [13] provides online estimator tools, while [10], [14] provide spreadsheets to estimate the TCO. These tools can be used to assess the benefits and drawbacks of datacenter design choices on the TCO and the environmental impact, but they do not allow easy exploration of fine-grained design choices.
Based on these TCO models and tools, the authors divided the datacenter environment into IT equipment and the operating devices required to run that equipment. The authors collected the different devices and parameters in the datacenter and show them in Table 1.
Table 1. The different devices in a datacenter.
IT equipment:
- Server machines
- Storage equipment
- Networking machines
Operating devices and systems for IT equipment:
- Power delivery and generation systems, including UPS systems and power distribution units (PDUs)
- Cooling resources
- Fire protection systems
- Raised floor / dropped ceiling (may or may not be needed)
- Enclosures and containment (racks)
- Lighting system
The TCO models and spreadsheet tools also helped the authors identify the different kinds of expenses in datacenters. The TCO of a datacenter includes capital expenses and operational expenses. The authors of this survey designed Diagram 1 to show the different costs in datacenters.
CAPITAL COSTS
- Hardware price
  - Hardware price of IT devices
  - Hardware price of operating devices and systems for IT equipment
- Basic installation and design/engineering costs
  - Basic installation and design/engineering costs for IT devices
  - Basic installation and design/engineering costs for operating devices and systems for IT equipment
- Project management / facility engineering cost
OPERATION COSTS
- Power consumption cost
  - The cost of electricity for IT equipment (servers, networking equipment and storage systems)
  - The cost of electricity for cooling resources
  - The cost of electricity for the power delivery and generation system
  - The cost of electricity for the lighting system
- Personnel cost (personnel salaries)
- Maintenance and repair costs
- Depreciation costs
  - Depreciation cost of IT equipment
  - Depreciation cost of all other operating devices and systems for IT equipment (power delivery and cooling systems, racks, ...)
- IT software license cost
Diagram 1. The different costs in a datacenter
Another important cost component that must be included in a datacenter TCO model is the space cost. As mentioned before, some companies build their datacenter building from scratch, some rent space to house their datacenter devices, and others buy an already constructed building and use it for their own needs and purposes. If a company builds its own datacenter building from scratch, the cost of the land and the construction costs of the building must be included; these are capital costs, and the established building will also incur a depreciation cost (an operation cost), while the land does not depreciate. If a company rents space, the cost of renting should be included in the datacenter TCO model as an operating cost, because it is paid continually during the datacenter life cycle. If a company buys an already constructed building, the price of the building should be included as a capital cost, and it will also have a depreciation cost (an operation cost). The authors of this survey designed Diagram 2 to show the different options a customer can choose between when planning to establish a datacenter.
Diagram 2. The different options for the customers
In this survey, the authors compare a number of models and publicly available tools used to calculate TCO. The comparison guides tool users about the parameters that are not included in each tool and must be taken into consideration when calculating the total datacenter cost, and shows whether each tool is documented.
3. DATACENTER TCO MODELS AND SPREADSHEET TOOLS
3.1 COST MODEL FOR PLANNING, DEVELOPMENT AND OPERATION OF A DATA CENTER
The cost model proposed by HP Laboratories takes into account all the costs involved in operating an organization's data center: the cost of power delivery, the cost of cooling, the cost of space and the cost of operation. The total cost of ownership of a datacenter is summarized in equation [1]:
Cost_total = Cost_space + Cost_power + Cost_cooling + Cost_operation [1]
where Cost_power is the cost of powering the hardware. Each of these costs expresses the "amount" of the corresponding resource consumed by the data center in a specific period of time. The methods for calculating them are detailed in [7]:
- The cost of space typically includes the cost of the real estate necessary for the data center, the power generation systems and other auxiliary subsystems; it is assumed that operating income is realized only in the portion of the data center space that is populated with computing equipment.
- The cost of power delivery includes conditioning, battery back-up, on-site power generation, and redundancy in both delivery and generation.
- The cost of cooling is the cost of the power consumed by the cooling resources.
- The cost of operation includes personnel costs, depreciation of IT equipment, and software and licensing costs.
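To make the additive structure of equation [1] concrete, here is a minimal Python sketch that simply totals the four components; the figures are hypothetical placeholders, not values taken from [7]:

```python
# Minimal sketch of the HP Laboratories cost model (equation [1]).
# All monetary figures below are hypothetical placeholders.

def total_cost(space, power, cooling, operation):
    """Cost_total = Cost_space + Cost_power + Cost_cooling + Cost_operation."""
    return space + power + cooling + operation

# Example: monthly costs in USD (illustrative only).
cost_space = 20_000      # real estate for IT, power generation and auxiliary subsystems
cost_power = 55_000      # power delivered to the hardware (conditioning, back-up, redundancy)
cost_cooling = 30_000    # power consumed by the cooling resources
cost_operation = 45_000  # personnel, IT depreciation, software and licensing

print(f"Cost_total = {total_cost(cost_space, cost_power, cost_cooling, cost_operation):,} USD/month")
```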
This model offers a method for calculating the total costs of a data center, but the authors of that study do not go any further in estimating costs related to the fire protection systems, the enclosures and containment (racks), or the lighting system. Another significant observation about this model is that the cost of electricity for IT equipment and the cost of electricity for cooling resources are included, but the cost of electricity for the power delivery and generation system and the cost of electricity for the lighting system are not taken into consideration. Basic installation and design/engineering costs and project management / facility engineering costs are also not included in the cost model.
The authors of that study discussed their datacenter TCO calculations in [7] but did not present a spreadsheet tool.
3.2 OVERALL DATACENTER COSTS
The author of this study provided a spreadsheet tool to estimate the datacenter TCO. The tool is publicly available for download at [15]. Many researchers have claimed that power is the dominant cost in a large-scale data center, with servers dominating and mechanical systems and power distribution close behind. The author showed in this model that power is incredibly important, but it is not the utility kWh charge that makes it so; it is the cost of the power distribution equipment required to consume power and the cost of the mechanical systems that take the heat away once the power is consumed (the cooling infrastructure). The author referred to this as fully burdened power. Measured this way, power alone is not the most significant cost; it is the second most important.
The main challenge is how to compare the capital costs of datacenter devices with the monthly bill for power in order to identify the dominant cost in the datacenter, since the amortization periods are completely different. Data center amortization periods run from 10 to 15 years, while server amortizations are typically in the three-year range, so servers are purchased 3 to 5 times during the life of a datacenter.
The author in this model solved that challenge by normalizing all costs to a monthly bill: consumables like power are billed monthly by consumption, while capital expenses like servers, networking and datacenter infrastructure (power distribution and cooling) are amortized over their useful lifetimes using a 5% cost of money and, again, billed monthly. This approach helps in comparing otherwise non-comparable costs, such as data center infrastructure, servers and networking gear, each with different lifetimes.
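As a rough illustration of this normalization (not the actual spreadsheet in [15]), the sketch below converts capital expenses to monthly payments with the standard annuity formula at a 5% annual cost of money; the prices and lifetimes are hypothetical placeholders:

```python
# Sketch of normalizing capital expenses to a monthly bill (annuity amortization).
# Prices and lifetimes below are hypothetical placeholders.

def monthly_payment(capital_cost, years, annual_rate=0.05):
    """Monthly amortization of a capital expense at the given cost of money."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return capital_cost * r / (1 - (1 + r) ** -n)

# Datacenter infrastructure amortizes over 10-15 years, servers over ~3 years.
facility = monthly_payment(80_000_000, years=12)  # power distribution + cooling (hypothetical)
servers = monthly_payment(40_000_000, years=3)    # server fleet, repurchased 3-5x per facility life

print(f"facility: {facility:,.0f} USD/month, servers: {servers:,.0f} USD/month")
```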
The users enter inputs about their datacenter environment in the spreadsheet tool, and based on these inputs the tool automatically calculates the monthly depreciation costs and the monthly power consumption cost.
The tool includes all costs "below the operating system", but software licensing costs are not included because the author believed that open source is dominant in high-scale datacenters and licensing costs vary too widely. Administrative costs are not included for the same reason: hardware administration, security, and other infrastructure-related people costs shrink to the single digits at scale, with the very best services down in the 3% range, and because administrative costs vary so greatly, the author did not include them in the model.
In the spreadsheet tool, the facility server count is set to be large, in the 45,000 to 50,000 server range. The author modeled only techniques that are well understood and reasonably broadly accepted as good data center design practice, but most of the big operators will be operating at efficiency levels far beyond those used in this tool. For example, the tool uses a Power Usage Effectiveness (PUE) of 1.45, whereas Google, for example, reports a fleet-wide PUE of less than 1.2. The PUE value in the tool can be changed by users as appropriate.
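Since PUE is defined as total facility power divided by IT power, a tiny sketch with a made-up IT load shows how the PUE input scales the facility's total draw:

```python
# PUE = total facility power / IT power, so facility power = IT load * PUE.
it_load_kw = 10_000           # hypothetical IT load
for pue in (1.45, 1.2):       # the tool's default vs. a Google-class fleet
    print(f"PUE {pue}: facility draw = {it_load_kw * pue:,.0f} kW")
```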
The model isn't documented, but the spreadsheet is available at [15] and the article about the tool can be read at [14].
3.3 A SIMPLE MODEL FOR DETERMINING TRUE TOTAL COST OF OWNERSHIP FOR DATA CENTERS
The author introduced a spreadsheet tool for calculating the true total cost of ownership (True TCO) that can help in defining all the components of data center costs, including both operating and capital costs (OpEx/CapEx). The spreadsheet tool can be downloaded at [16]. Companies can use the tool and edit the inputs to reflect their particular circumstances.
In this study, the TCO model was split into IT hardware and software such as servers, disks, tape storage, and networking. The model takes energy usage and cost into consideration, and relates current technology data to the financials of other high-performance computing programs. Furthermore, this TCO model recommends that data centers evaluate their "architectural and engineering fees, land, inert gas fire suppression costs, IT build-out costs for racks, cabling, internal routers and switches, electricity costs, security costs, and operations and maintenance costs for both IT and facilities." This allows a data center to better assess its energy usage as well as its IT expenditure. A detailed layout of the spreadsheet used to hold this information is in [10].
The authors of this model collected the model assumptions and calculations in a table. They split the IT hardware into server, disk storage, tape storage, and networking categories. They estimated the power used by the IT hardware (IT loads); the electricity consumed by the cooling devices and by the auxiliary systems was estimated and added to the IT load to get the total electricity use of the datacenter, from which they calculated the total annual power cost. Cooling devices include fans, chillers, CRAC units and pumps, and auxiliary systems include lights, UPS/PDU losses, and other losses. The total electricity of the datacenter is shown in equation [2]:
Total electricity of the datacenter = IT loads + cooling loads + auxiliary systems load [2]
The authors of this study calculated the total electricity costs and added them to the network fees and other operating expenses per year to get the total annual datacenter operating expenses. Other operating expenses include IT site management staff, facilities site management staff, maintenance, janitorial and landscaping, and security costs:
Total annual datacenter operating expenses = total electricity costs + network fees + other operating expenses [3]
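A minimal sketch of equations [2] and [3] follows; all input values are hypothetical placeholders rather than the model's published assumptions:

```python
# Equations [2] and [3] from the true-TCO model, with placeholder numbers.

it_loads_kwh = 8_760_000    # annual electricity of IT hardware (servers, disk, tape, network)
cooling_kwh = 3_500_000     # fans, chillers, CRAC units and pumps
auxiliaries_kwh = 900_000   # lights, UPS/PDU losses, and other losses

total_kwh = it_loads_kwh + cooling_kwh + auxiliaries_kwh  # equation [2]

electricity_price = 0.10                                  # USD per kWh (hypothetical)
total_electricity_cost = total_kwh * electricity_price

network_fees = 250_000      # USD per year (hypothetical)
other_opex = 1_200_000      # IT/facilities staff, maintenance, janitorial, security

annual_opex = total_electricity_cost + network_fees + other_opex  # equation [3]
print(f"Total annual operating expenses: {annual_opex:,.0f} USD")
```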
The authors also take into consideration the capital costs of all the devices mentioned in Table 1, including the IT equipment and the site infrastructure (cooling and auxiliary devices). The model also calculates rack costs, land costs, architectural and engineering fees, interest during construction, inert gas fire suppression, and other facility costs.
The model doesn't include application software and operating system license costs, which are an important factor that must be included to get the overall cost of running a datacenter.
3.4 DATACENTER CAPITAL COST CALCULATOR TOOL
The Datacenter capital cost calculator [12] is a Web-based application presented by Schneider Electric with easy-to-use interfaces, designed for use in the early stages of datacenter concept and design development.
The aim of the tool is to define the datacenter physical infrastructure parameters and calculate estimated costs based on the various parameter inputs. The tool calculates the required number of racks, the required floor space, and the whole capital cost based on the user's inputs for the key power and cooling infrastructure, the level of redundancy, and the IT load. It presents rough order-of-magnitude values. The results help users understand how changes in IT load, datacenter location, and cooling and power infrastructure affect the whole capital cost.
The data center capital cost calculator can be an invaluable tool to help ensure that a particular design will be in line with the project's budget, whether for a new data center or a retrofit. Used early in the planning stages, it can quickly set users' expectations of what their data center project might cost. It gives users a rough order-of-magnitude cost, +/- 20% (it is not meant to be a quoting tool), to help companies or users understand what different choices cost, and it can help avoid the time and cost of cycling through iterations of change.
The tool is useful for walking through different design scenarios to make decisions about competing design requirements. For instance, higher redundancy (i.e. Tier 3 or 4) is often named a top priority, but the budget has a direct impact on just how much redundancy is feasible. This tool helps companies see the numbers so they can make appropriate decisions regarding capacity, criticality, density, budget, and more. No single business requirement trumps the others; it depends on the business needs.
3.5 BACKGROUND ON THE METHODOLOGY OF THE TOOL
As mentioned in [17], the designers of the calculator tool first researched what the various data center subsystems cost. They obtained costs from actual configurations of their own systems, quotes from third-party suppliers, and partners.
Since they wanted a calculator that could compute cost estimates for a wide range of data center styles and sizes, they relied on data trending. All of the subsystem costs are in dollars per watt, dollars per square foot, or dollars per rack, and the baseline costs are adjusted with various multipliers:
- Capacity multiplier – they collected cost data for two or more capacities for each subsystem, and then trend-lined the data to estimate any size of system.
- Density multiplier – they did the same for densities: for each subsystem, they determined the cost at 1 kW/rack vs. 15 kW/rack, and then trend-lined the densities in between.
- Redundancy multiplier – redundancy impacts material cost, design cost, and installation cost differently; multipliers were established based on system costs and third-party design engineering and installation quotes.
Some key assumptions and data sources in the tool are:
- Data center size – white space is computed using 27 sq ft per rack. This is used to estimate certain costs, such as raised floor cost and lighting cost, which are priced in $/sq ft.
- Labor rates – labor rate data comes from a US Dept of Labor, Bureau of Labor Statistics, January 2008 report.
- Currency – currency conversions are based on data from Oanda [18].
- Scalability of subsystems – the tool designers assume certain subsystems and costs, such as engineering design cost, raised floor cost, security and lighting, are not scalable for the purpose of estimating costs.
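The published description does not include the actual trend lines, but the shape of the calculation can be sketched as a baseline dollars-per-watt cost adjusted by the three multipliers, together with the 27 sq ft per rack sizing rule; every numeric value below is a made-up placeholder, not data from the tool:

```python
# Sketch of the calculator's multiplier approach; all values are hypothetical.

def subsystem_cost(it_load_w, baseline_per_watt, capacity_mult, density_mult, redundancy_mult):
    """Baseline $/W cost adjusted by capacity, density and redundancy multipliers."""
    return it_load_w * baseline_per_watt * capacity_mult * density_mult * redundancy_mult

it_load_w = 1_000_000                    # 1 MW IT load
ups = subsystem_cost(it_load_w, baseline_per_watt=2.00,
                     capacity_mult=0.9,  # larger systems cost less per watt
                     density_mult=1.1,   # higher kW/rack raises distribution cost
                     redundancy_mult=1.8)  # e.g. 2N vs. N

# White space sizing: 27 sq ft per rack, rack count driven by density (kW/rack).
racks = it_load_w / 1000 / 10            # assuming 10 kW per rack (hypothetical density)
white_space_sqft = racks * 27

print(f"UPS subsystem: {ups:,.0f} USD; white space: {white_space_sqft:,.0f} sq ft")
```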
Based on the user's inputs for location, capacity, density, power and cooling architecture, labor, and redundancy, the tool shows the estimated total data center cost and also breaks the costs down by system (e.g. power, cooling), subsystem (e.g. UPS, generator, switchgear), and type (e.g. material, installation, engineering). The user can make changes on the fly to see how the results are affected.
The tool offers many options the user can select between. The level of redundancy of the operating equipment increases the capital cost; some companies have a small capital budget while others can spend more on the datacenter design, so the level of redundancy depends on the available budget. Many datacenters may have the same number and type of equipment but be located in different countries, so the equipment prices differ, which affects the capital cost. Each user has different requirements and plans for the datacenter environment design. The following figure shows the tool interface and design.
Figure 1. The tool interface and design
STEPS FOR USING THE TOOL
To use the tool, the user should follow these steps:
1. Select the continent, then the country. Datacenter capital costs are affected by the location due to differences in equipment costs and labor rates. Results are shown in the country's local currency, but can be converted to other currencies.
2. Enter the datacenter IT load power consumption in kW, selecting a value between 40 and 5000 kW.
3. Select a value for the average power density in the datacenter, between 1 and 20 kW per rack. The average power density influences the space, the number of required racks, and the whole datacenter capital cost.
4. Choose the labor rate in the local currency.
5. Select the cooling architecture; the user can choose from thirteen cooling architectures. The architecture of the cooling system has a large impact on the capital costs. Perimeter cooling means that cooling devices are housed outside the rows of racks, while row-based cooling brings cooling closer to the loads and includes ceiling-mounted and rack-mounted cooling units. Air distribution is set to ducted hot-air containment for the indirect air economizer architecture. Scalable static uninterruptible power supply systems allow scaling of capacity within a single UPS frame, which increases the flexibility of redundancy and capacity.
6. Choose from four power distribution architectures within the IT space: overhead busway, remote power panels (RPPs), PDUs, or wall-mount panelboards.
7. Enter the core and shell cost per square meter (or square foot), which covers the construction of the base building structure and envelope. The total core and shell cost is calculated from the facility's total space, including the IT, mechanical and electrical spaces.
The total capital cost is calculated according to the user's inputs. Selections for the level of redundancy of the power and cooling architecture affect the system design of other infrastructure systems, such as fire suppression/detection and security: the higher the redundancy (reliability) level selected for the power and cooling architecture, the more reliability is assumed for the other systems. For example, selecting 2N for the cooling systems and 2N for the power systems would lead the fire suppression system to be VESDA with a clean agent for suppression.
The total capital cost can also involve other systems. Fire suppression/detection includes a preaction sprinkler system using schedule 40 black steel piping, and detection using heat/smoke detectors with a communication and alarm system.
Depending on the containment method, a raised floor and/or dropped ceiling may be needed. The tool introduces rough order-of-magnitude estimates of the datacenter cost, according to the user's specification of the datacenter environment.
The input selections automatically drive the calculation of the number of racks and the floor space required. The datacenter size refers only to the "IT room" and is used to compute raised floor and dropped ceiling costs. Core and shell, lighting, fire protection and security are calculated based on the total datacenter size, including ancillary rooms housing physical infrastructure equipment. The total datacenter system cost includes the materials, installation, design/engineering, project management / facility engineering, and core & shell.
This tool calculates the capital cost of the datacenter physical infrastructure, but it does not take into consideration the software license cost or operational costs such as the power consumption cost.
3.6 VIRTUALIZATION ENERGY COST CALCULATOR TOOL
The Virtualization Energy Cost Calculator is an online tool that users can employ to aid their data center planning decisions. The tool can be accessed at [13].
The purpose of this tool is to illustrate the energy savings resulting from the virtualization of servers within a datacenter. The tool allows users to input data regarding datacenter IT capacity, load, number of servers, energy cost, and other datacenter elements. Both the "as is" and "to be" environmental characteristics can be entered as inputs. The tool then compares both environments (pre- and post-virtualization) in terms of percentage savings and also calculates the impact on the annual electricity bill. The power usage effectiveness (PUE) of both environments is compared, as are the savings allocation and the space utilization.
Here is how it works. The total power consumed by servers and the average server power draw are estimated from three inputs: the total IT load, the percentage of the load that is servers, and the number of servers. Then, based on post-consolidation inputs, including the percentage of servers that can be virtualized and the consolidation ratio, the tool estimates the new server power consumption (after virtualization). The following figure shows the interface and design of the Virtualization Energy Cost Calculator tool.
Figure 2. The interface of the Virtualization Energy Cost Calculator tool
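The server power estimate described above can be sketched in a few lines; the input values are arbitrary examples, not defaults from the Schneider Electric tool, and the tool's exact internal formula may differ:

```python
# Sketch of the tool's server power estimate before and after virtualization.
# Input values are arbitrary examples.

total_it_load_kw = 800
pct_load_servers = 0.55   # typical datacenters are in the 50-60% range
server_count = 1000

server_load_kw = total_it_load_kw * pct_load_servers
avg_server_draw_kw = server_load_kw / server_count

pct_virtualizable = 0.70  # share of servers that can be virtualized
consolidation_ratio = 10  # 10:1 is a typical value

virtualized = server_count * pct_virtualizable
remaining = server_count - virtualized + virtualized / consolidation_ratio
new_server_load_kw = remaining * avg_server_draw_kw

print(f"server load: {server_load_kw:.0f} kW -> {new_server_load_kw:.0f} kW after virtualization")
```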
Server consolidation is often where the energy discussion ends for virtualization projects, but this tool includes another critical aspect – the impact of the physical infrastructure on overall performance after a virtualization project. If datacenter owners leave the power and cooling systems alone and decrease their IT load through consolidation, the overall energy bill goes down (a good thing), but from the perspective of PUE or physical infrastructure efficiency, the data center just got worse. This tool helps users see the impact of this, and allows them to see how power and cooling improvements, like right-sizing the CRAC/CRAH or UPS, replacing outdated equipment with high-efficiency systems, and improving airflow with targeted cooling and blanking panels, can add significantly to the overall energy savings.
For the power and cooling energy, the calculator allows users to choose from six typical data center power and cooling architectures. These architectures vary in redundancy (i.e. N or 2N) and cooling approach (i.e. DX or chilled water). Users pick the one that most closely matches their environment and then run "what-ifs" with changes to their power and cooling to see how the energy losses are affected.
The space analysis within the tool is straightforward. It is assumed that each rack is 42U tall and requires 27 sq ft (2.5 sq m). Pre-virtualization servers are assumed to be 2U in height, while post-virtualization servers are assumed to be 2U for consolidation ratios less than 4 and 3U for consolidation ratios greater than 4.
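Using those stated assumptions (42U racks, 27 sq ft each, 2U servers pre-virtualization, 2U or 3U hosts depending on the consolidation ratio), the space analysis can be sketched as below; the 80% U-space utilization and the server counts are hypothetical, reusing the numbers from the previous sketch:

```python
import math

# Space analysis sketch using the assumptions stated above.
RACK_U, SQFT_PER_RACK = 42, 27

def racks_and_space(server_count, server_height_u, u_utilization=0.8):
    """Racks needed at a given vertical utilization, and the resulting floor space."""
    usable_u_per_rack = RACK_U * u_utilization  # 0.8 utilization is a hypothetical input
    racks = math.ceil(server_count * server_height_u / usable_u_per_rack)
    return racks, racks * SQFT_PER_RACK

pre = racks_and_space(1000, server_height_u=2)       # pre-virtualization: 2U servers
consolidation_ratio = 10
host_u = 3 if consolidation_ratio > 4 else 2         # 3U hosts above a 4:1 ratio
post = racks_and_space(370, server_height_u=host_u)  # surviving physical servers

print(f"pre: {pre[0]} racks / {pre[1]} sq ft, post: {post[0]} racks / {post[1]} sq ft")
```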
More details about virtualization and how optimized power and cooling can maximize the energy benefits are given in [19]. That work also discusses the power and cooling challenges associated with virtualization, how to improve the PUE, the impact of virtualization on redundancy requirements, and how dynamic loads can increase the risk of downtime if rack-level power and cooling health are not considered.
To use the tool, the user should follow these steps:
1. Enter the datacenter IT capacity – the maximum amount of power the datacenter can provide to the critical IT load.
2. Enter the IT load – the amount of power consumed by the servers, storage and networking machines.
3. Enter the percentage of the IT load that is servers – the percentage of the total IT load consumed by servers. Typical datacenters are in the 50-60% range.
4. Enter the total number of servers – the number of actual physical servers (not virtual servers).
5. Enter the IT rack U-space utilization – the average percentage of vertical height consumed by the IT equipment (assuming 42U racks).
6. Enter the electricity cost per kWh – first choose the currency type and then enter the cost. The electricity rate represents an average that takes peak and off-peak rates into account.
7. Choose the datacenter description. All choices include a generator, raised floor, hot/cold aisles and perimeter coolers:
- N power, N cooling (CW): 1N UPS, PDU, chiller, cooling tower, pumps and CRAHs.
- N power, N cooling (DX): same as the previous, except cooling is air-cooled DX.
- 2N power, N cooling (CW): same as the first, except 2N UPS and PDU with STS.
- 2N power, N cooling (VFD CW): same as the previous, except pumps and chillers include VFDs.
- 2N power, 2N cooling (CW): same as the previous, except the cooling systems are 2N and don't use VFDs.
- 2N power, 2N cooling (VFD CW): same as the previous, except all pumps and chillers include VFDs.
All of these inputs describe the pre-virtualization datacenter environment. The remaining inputs are entered as follows:
8. Enter the percentage of servers that can be virtualized – more and more IT applications are becoming candidates for virtualization; the percentage of servers that can be virtualized depends on the business requirements and the specific applications.
9. Enter the server consolidation ratio – a typical value is 10:1, but it can vary substantially based on server specifications and business applications.
The users of the tool also have the option to select a number of improvements to the datacenter environment, such as installing blanking panels or adopting row-based cooling. The benefits of each choice are:
- Right-sizing – oversizing is the traditional practice of purchasing excess capacity up front to accommodate potential long-term growth; right-sizing implies scalable solutions that grow or shrink capacity only when required, with minimal upfront investment.
- High-efficiency UPS – modern UPSs attain higher efficiencies than their traditional counterparts; a change in efficiency of just a few percentage points can significantly reduce energy consumption.
- Row-based cooling – brings the source of cool air closer to the IT load; this gain in cool-air distribution efficiency helps reduce energy bills.
- Blanking panels – eliminating gaps and spaces in racks helps manage the efficient removal of hot air from the IT loads, thereby increasing the efficiency of the cooling system.
Based on the user's inputs, the tool automatically calculates the number of servers and racks, the power consumption of the IT equipment and the physical infrastructure, and the annual electricity bill.
The tool only calculates the annual power consumption cost of the datacenter before and after virtualization. All other costs, such as maintenance costs or software license costs, are not calculated by the tool.
4. THE COMPARISON BETWEEN DATACENTER TCO MODELS AND SPREADSHEET TOOLS
In this section, the authors of this survey compare the datacenter TCO models and spreadsheet tools, using the different cost parameters shown in Diagram 1. The comparison is based on several aspects, such as which parameters are and are not included in each tool and whether the tool is documented. The following table shows the comparison.
Table 2. The comparison between the datacenter TCO models.

Cost model for planning, development and operation of a data center
- Parameters included: datacenter operating costs; space cost; power consumption cost (the cost of electricity for IT equipment; the cost of electricity for cooling resources); software and licensing costs; personnel costs; depreciation costs.
- Parameters not included: all costs related to the fire protection system, the enclosures and containment (racks) and the lighting system; the capital costs of basic installation, design/engineering and project management / facility engineering; the operation (power consumption) costs of electricity for the power delivery and generation system and for the lighting system.
- Documentation: the model is documented and available at [7].

A simple model for determining true total cost of ownership for data centers
- Parameters included: all capital costs (hardware price; land cost; building cost; basic installation and design/engineering costs; project management / facility engineering cost); operation costs (power consumption cost; personnel cost (salaries); maintenance and repair costs; depreciation costs).
- Parameters not included: operating system licenses and application software.
- Documentation: the model is documented and available at [10]; the spreadsheet is available at [16].

Overall data center costs model
- Parameters included: operation costs (power consumption cost: the cost of electricity for IT equipment; depreciation costs).
- Parameters not included: hardware administration, security, and other infrastructure-related people costs (personnel cost); the cost of electricity for cooling resources, for the power delivery and generation system and for the lighting system; maintenance and repair costs; IT software license cost; basic installation and design/engineering costs; project management / facility engineering cost.
- Documentation: the model isn't documented, but the spreadsheet is available at [15] and the article about the spreadsheet is available at [14].

Virtualization Energy Cost Calculator tool
- Parameters included: power consumption costs.
- Parameters not included: all capital costs and the other operation costs are not calculated by the tool.
- Documentation: the model isn't documented, but the online estimator tool is available at [13].

APC [12] Datacenter capital cost calculator
- Parameters included: hardware price of operating devices and systems for IT equipment; core and shell cost; basic installation and design/engineering costs for operating devices and systems for IT equipment; project management / facility engineering cost.
- Parameters not included: all other costs.
- Documentation: the model isn't documented, but the online estimator tool is available at [12].
5. CONCLUSION
In this work, the authors have reviewed the key concepts and attributes of datacenters and provided a thorough background on the importance of datacenters to businesses. The authors have presented inclusive assessments and comparisons of several datacenter TCO spreadsheets and tools. The comparison is based on the cost parameters included and not included in each cost model and on whether the model is documented. Such an approach provides a solid ground for designing better models in the future.
REFERENCES
[1] J. G. Koomey, "Worldwide electricity used in data centers," Environmental Research Letters, vol. 3, p. 034008, 2008.
[2] W. P. Turner IV and K. G. Brill, "Cost model: Dollars per kW plus dollars per square foot of computer floor," Uptime Institute, 2008.
[3] ASHRAE, "Datacom equipment power trends and cooling applications," American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, 2005.
[4] nixCraft, "Explain: Tier 1 / Tier 2 / Tier 3 / Tier 4 Data Center," nixCraft website, 29 January 2011.
[5] C. Belady, "Projecting annual new datacenter construction market size," Microsoft, Global Foundation Services Report, 2011.
[6] N. Rasmussen, "Determining total cost of ownership for data center and network room infrastructure," technical report, Schneider Electric, Paris, vol. 8, 2011.
[7] C. D. Patel and A. J. Shah, "Cost model for planning, development and operation of a data center," 2005.
[8] J. Karidis, J. E. Moreira, and J. Moreno, "True value: assessing and optimizing the cost of computing at the data center level," in Proceedings of the 6th ACM Conference on Computing Frontiers, 2009, pp. 185-192.
[9] J. D. Moore, J. S. Chase, P. Ranganathan, and R. K. Sharma, "Making scheduling 'cool': Temperature-aware workload placement in data centers," in USENIX Annual Technical Conference, General Track, 2005, pp. 61-75.
[10] J. Koomey, K. Brill, P. Turner, J. Stanley, and B. Taylor, "A simple model for determining true total cost of ownership for data centers," Uptime Institute White Paper, version 2, 2007.
[11] K. V. Vishwanath, A. Greenberg, and D. A. Reed, "Modular data centers: how to design them?," in Proceedings of the 1st ACM Workshop on Large-Scale System and Application Performance, 2009, pp. 3-10.
[12] Datacenter capital cost calculator, https://www.schneider-electric.com/en/work/solutions/system/s1/data-center-and-network-systems/trade-off-tools/data-center-capital-cost-calculator/tool.html
[13] Virtualization Energy Cost Calculator, https://www.schneider-electric.com/en/work/solutions/system/s1/data-center-and-network-systems/trade-off-tools/data-center-virtualization-energy-savings-calculator/tool.html
[14] J. Hamilton, "Overall data center costs," Perspectives, http://perspectives.mvdirona.com/2010/09/18/overalldatacentercosts.aspx, 2010.
[15] Overall data center costs spreadsheet, http://mvdirona.com/jrh/TalksAndPapers/PerspectivesDataCenterCostAndPower.xls
[16] A simple model for determining true total cost of ownership for data centers, spreadsheet, https://virtualizationandstorage.files.wordpress.com/2014/05/truetco_model_2-1.xls
[17] W. Torell, "Data Center Capital Cost Calculator – A Tool to Help Align Your Data Center Business Requirements with Your Project Budget," https://blog.schneider-electric.com/datacenter/2012/06/28/data-center-capital-cost-calculator-a-tool-to-help-align-your-data-center-business-requirements-with-your-project-budget/
[18] Oanda currency converter, https://www.oanda.com/currency/converter/
[19] P. Niles and P. Donovan, "Virtualization and Cloud Computing: Optimized Power, Cooling, and Management Maximizes Benefits," White Paper 118, Revision 5, Schneider Electric, 2015.