Models and Architecture - Connected Services and Cloud ComputingEueung Mulyana
Lecture #2 - ET-3010
Models and Architecture
Connected Services and Cloud Computing
School of Electrical Engineering and Informatics SEEI / STEI
Institut Teknologi Bandung ITB
Update January 2017
Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid.
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user.
The Grid is the infrastructure for the advanced Web: for computing, collaboration, and communication.
The goal is to create the illusion of a simple yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources.
Grid computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation.
The Grid concept is presented in analogy with the electrical power grid, along with the Grid vision.
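The "virtual computer" illusion above can be sketched as simple resource aggregation. This is a toy illustration, not from the lecture: all node names and numbers below are made up.

```python
# Toy sketch: the Grid presents many heterogeneous connected systems
# as one large pool of resources ("virtual computer").
nodes = [
    {"name": "cluster-a", "cpus": 64,  "ram_gb": 256,  "storage_tb": 10},
    {"name": "lab-pc",    "cpus": 8,   "ram_gb": 32,   "storage_tb": 1},
    {"name": "hpc-site",  "cpus": 512, "ram_gb": 2048, "storage_tb": 100},
]

def virtual_computer(nodes):
    """Aggregate heterogeneous systems into one logical resource pool."""
    return {
        "cpus": sum(n["cpus"] for n in nodes),
        "ram_gb": sum(n["ram_gb"] for n in nodes),
        "storage_tb": sum(n["storage_tb"] for n in nodes),
    }

print(virtual_computer(nodes))
```

In a real Grid, middleware handles scheduling, authentication, and data movement; the aggregation itself is only the surface of the illusion.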
The concept of edge computing is to leverage new-generation technologies, processes, services, and applications built to take advantage of new infrastructure.
Processing is placed closer to the edge of the network: data is pre-processed there before being sent to the cloud.
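The pre-process-then-send idea can be sketched as follows. This is an illustrative assumption, not from the lecture: an edge node filters faulty sensor readings locally and forwards only a compact summary to the cloud.

```python
# Toy sketch: edge node pre-processes raw readings so the cloud
# receives a small summary instead of every raw sample.
def edge_preprocess(readings, threshold=50.0):
    """Drop implausible spikes locally, then aggregate."""
    valid = [r for r in readings if r <= threshold]
    return {
        "count": len(valid),
        "avg": sum(valid) / len(valid) if valid else None,
        "dropped": len(readings) - len(valid),
    }

raw = [21.5, 22.0, 999.0, 23.5]   # 999.0 is a faulty spike
summary = edge_preprocess(raw)     # computed at the network edge
# only `summary` (a few bytes) travels to the cloud, not all raw samples
print(summary)
```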
6. Convention: Cloud vs. Network
Historically, before the rise of cloud computing (CC), the term "Cloud" was commonly used to refer to networks or interconnected systems.
Thus, in this lecture, "Cloud" may refer to either, depending on context.
Notes
In the CC concept, the network is part of CC, but the network is not used only for CC.
9. Gmail Service
The Story of Send
Steps
1. Sender
2. Sender's Provider (ISP)
3. Backbone
4. Data Center - Front Server
5. Data Center - Gmail Backend (Cloud)
6. Backbone
7. Recipient's Provider
8. Recipient
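The eight hops above can be sketched as a simple traversal. This is a toy model of the path only; the hop names mirror the list above and stand in for the real protocols (DNS, SMTP, TLS, routing) involved at each step.

```python
# Toy sketch of the "Story of Send" path for a Gmail message.
HOPS = [
    "Sender",
    "Sender's Provider (ISP)",
    "Backbone",
    "Data Center - Front Server",
    "Data Center - Gmail Backend (Cloud)",
    "Backbone",
    "Recipient's Provider",
    "Recipient",
]

def trace(message):
    """Walk a message through each hop in order, returning the path."""
    path = []
    for hop in HOPS:
        # in reality each hop involves lookup, transport, and storage steps
        path.append(hop)
    return path

path = trace("hello")
print(" -> ".join(path))
```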
39. Some Defs
Literal: a facility to store and process data.
A large group of networked computer servers typically used by organizations for the remote storage, processing, or distribution of large amounts of data (Google Def).
A datacenter is a (centralized) facility, either physical or virtual, for the storage, management, and dissemination of data and information organized around a particular body of knowledge or pertaining to a particular business (techtarget.com).
A data center is a facility that centralizes an organization's IT operations and equipment, and where it stores, manages, and disseminates its data. Data centers house a network's most critical systems and are vital to the continuity of daily operations (paloaltonetworks.com).
Major Functional Components
Compute (Servers)
Storage
Network
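The three functional components can be modeled as a minimal structure. This is an illustrative sketch only; the capacity figures are invented.

```python
# Toy sketch: a data center as its three major functional components.
from dataclasses import dataclass

@dataclass
class Compute:
    servers: int            # processing capacity

@dataclass
class Storage:
    capacity_tb: float      # data storage capacity

@dataclass
class Network:
    bandwidth_gbps: float   # interconnect and uplink capacity

@dataclass
class DataCenter:
    compute: Compute
    storage: Storage
    network: Network

dc = DataCenter(
    compute=Compute(servers=1000),
    storage=Storage(capacity_tb=5000.0),
    network=Network(bandwidth_gbps=400.0),
)
print(dc)
```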