1. Cloud Infrastructure
What is cloud infrastructure?
Cloud infrastructure refers to the
hardware and software components,
such as servers, storage, networking,
virtualization software, services and
management tools, that support the
computing requirements of a cloud
computing model.
Cloud infrastructure also includes an abstraction layer that virtualizes
and logically presents resources and services to users
through application programming interfaces and API-enabled
command-line or graphical interfaces.
UNIT 2
Prof. Vivek R. Shelke
2. Historical Perspective of Data Centres
There is a huge need and a huge opportunity to get everyone in the world
connected, to give everyone a voice and to help transform society for the
future. The scale of the technology and infrastructure that must be built is
unprecedented, and we believe this is the most important problem we can
focus on.
Mark Zuckerberg, 2012
3. Historical Perspective of Data Centres
Data centers trace their roots back to the 1940s, when the world's first
programmable, general-purpose electronic computer, the Electronic
Numerical Integrator and Computer (ENIAC), was the pinnacle of
computational technology.
It was built for the U.S. Army to calculate artillery firing tables during the
Second World War. After the war, mathematicians and scientists from the
Manhattan Project used it for early hydrogen-bomb calculations; the first
atomic bombs had been dropped on Hiroshima and Nagasaki in 1945.
4. Historical Perspective of Data Centres
This new technology held great potential, especially in the defense sector.
U.S. President Harry S. Truman had ENIAC-style computing centers built at
military installations, and in 1947 he assigned engineers and researchers
from the newly founded Central Intelligence Agency (CIA) to assist in the
development of the technology at the dawn of the Cold War.
Early data centers were incredibly complex. Because they served primarily
intelligence and military functions, secrecy was paramount: most data centers
had a single secure door and no windows. Huge (and expensive) vents and fans
were needed for cooling, and hundreds of feet of wiring and racks of vacuum
tubes connected all of the components. These facilities were still primitive;
wires often overheated and failed, and overheating could even lead to fires.
5. Historical Perspective of Data Centres
Through the 1960s and 1970s, computational technology skyrocketed. In the mid-1950s,
Bell Labs had built TRADIC, the first fully transistorized computer, and transistorized
systems from manufacturers such as International Business Machines (IBM) soon followed.
This new generation of machines helped data centers branch out from the military sphere
to the commercial one, eliminating the need for fragile vacuum-tube systems. Transistors
also dramatically increased computational capability and made computer systems small
enough to fit into multipurpose spaces like office buildings. Such systems enabled NASA
to put astronauts on the Moon.
Rapid innovation and development through the 1960s and 1970s, coupled with a
massive proliferation of technology and computer companies such as IBM, Intel,
Xerox and Sun Microsystems, heralded the arrival of personal computing in the 1980s.
Information technology became a huge economic contributor around the world and
boomed at an unprecedented scale.
UNIX set the standard for multi-user computing and marked the dawn of data-driven
technology. It supported a "client-server" model in which many computers, some
personal, some commercial, connected over the newly developed Internet to host
servers. Through the 1980s and early 1990s, users began to interface with servers
located in data centers around the world, laying the groundwork for the modern
data center.
6. Historical Perspective of Data Centres
The dot-com era was a wholesale
reshuffling of world economies
brought on by the rise of the modern
internet. Consequently, it also marked
the worldwide boom of the data
center, which became critical for
national security, internet
infrastructure and economic output.
Even after the dot-com bubble burst
and left massive economic strife in its
wake, the data center boom has
persisted.
Companies needed fast internet connectivity and nonstop operation at an unprecedented
scale. Building such expensive data centers and Internet infrastructure was not viable for
many small companies, so multinational corporations like Amazon and Google began to
develop vast data centers that provide companies with an array of services and
technological solutions. Massive proliferation followed through the 2000s and 2010s, as
engineers around the world devoted themselves to perfecting data center design.
7. Historical Perspective of Data Centres
Today, data centers are enigmas to most people. Locked away in secure locations around the
globe, they ensure the operation of the Internet and the data of billions of people. While
large companies like Facebook and Google have begun holding "public tours" of their
facilities, many of the inner workings of these highly advanced technological centers remain a
mystery. Like the secretive beginnings of data centers after the Second World War, data
centers remain largely shrouded and hidden away from the public to this day. With current
controversies surrounding big tech monopolies, cyberwarfare, illegal mass surveillance and
data mining, are data centers and the Internet becoming Generation Z's battlefield?
8. Data centre Components
IT Equipment and Facilities
Data centers are complex
facilities that house and manage
a variety of IT equipment and
facilities to ensure the secure
and efficient operation of
computer systems and data
storage. These components can
be broadly categorized into two
main groups: IT equipment and
facilities infrastructure.
9. Data centre Components
IT Equipment
IT equipment refers to the hardware
and software components that are
essential for processing, storing, and
managing data and applications within
a data center. Some common IT
equipment includes:
a. Servers: These are powerful computers that run applications, store data, and
handle network requests. They come in various forms, including rack-mounted
servers, blade servers, and tower servers.
b. Storage Devices: Data centers use various storage devices, such as hard disk
drives (HDDs), solid-state drives (SSDs), and network-attached storage (NAS)
devices, to store and manage data.
c. Networking Equipment: This includes switches, routers, firewalls, load
balancers, and other network devices that enable data communication within the
data center and with external networks.
10. Data centre Components
IT Equipment
d. Virtualization Infrastructure: Virtualization technologies allow multiple
virtual machines (VMs) to run on a single physical server, optimizing resource
utilization.
e. Power Distribution Units (PDUs): PDUs distribute electrical power to the IT
equipment in the data center racks.
f. Uninterruptible Power Supplies (UPS): UPS systems provide backup power
to critical IT equipment in case of electrical grid failures.
g. Cooling Systems: Data centers use cooling equipment such as air
conditioning units and precision cooling systems to maintain a suitable
temperature for IT equipment.
h. Security Systems: Access control systems, surveillance cameras, and
biometric authentication are used to protect IT equipment from physical
security threats.
i. Management and Monitoring Tools: Software and hardware solutions are
employed to manage and monitor the performance, health, and security of IT
equipment.
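The resource-utilization claim behind virtualization (item d above) can be made concrete with a back-of-the-envelope consolidation estimate. The host capacity, VM profile, and overcommit ratios below are illustrative assumptions, not vendor figures:

```python
# Rough server-consolidation estimate: how many VMs fit on one physical host.
# All capacity figures are illustrative assumptions.
host = {"vcpus": 64, "ram_gb": 512}          # one physical server
vm = {"vcpus": 4, "ram_gb": 16}              # a typical small VM profile
overcommit = {"vcpus": 4.0, "ram_gb": 1.0}   # CPU is commonly overcommitted, RAM rarely

# Each dimension caps the VM count independently; the smaller wins.
vms_by_cpu = int(host["vcpus"] * overcommit["vcpus"] / vm["vcpus"])
vms_by_ram = int(host["ram_gb"] * overcommit["ram_gb"] / vm["ram_gb"])
print(min(vms_by_cpu, vms_by_ram))  # RAM is the binding constraint here: 32
```

With these assumptions, one host replaces dozens of dedicated machines, which is exactly why virtualization reshaped data center footprints.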
11. Data centre Components
Facilities Infrastructure:
Facilities infrastructure includes the physical components and systems
required to support the IT equipment and ensure the overall reliability
and availability of the data center. Key elements of facilities infrastructure
include:
a. Power Infrastructure: This includes electrical substations, generators,
transformers, and power distribution systems to provide a reliable source
of electricity to the data center.
b. HVAC (Heating, Ventilation, and Air Conditioning): HVAC systems
maintain a controlled environment by regulating temperature, humidity,
and airflow to prevent overheating of IT equipment.
c. Data Center Layout and Design: The layout of racks, aisles, and
equipment placement is carefully designed to optimize space,
airflow, and accessibility.
12. Data centre Components
Facilities Infrastructure:
d. Fire Suppression Systems: Fire detection and suppression systems, such
as fire alarms and gas-based suppression systems, are implemented to
protect against fire hazards.
e. Physical Security: Perimeter fencing, access control, security personnel,
and surveillance systems help safeguard the data center against
unauthorized access.
f. Redundancy and Backup Systems: Data centers often incorporate
redundancy in power and cooling systems, as well as backup systems to
ensure uninterrupted operations in case of failures.
g. Cable Management: Proper cable management is crucial for organized
and efficient connectivity between IT equipment and networking
infrastructure.
h. Environmental Monitoring: Sensors and monitoring systems
continuously track conditions within the data center, such as temperature,
humidity, and air quality.
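The environmental monitoring described in item h reduces, at its simplest, to comparing sensor readings against an operating envelope. The metric names and limits below are illustrative assumptions, loosely based on ASHRAE-style recommended ranges, not a specific product's API:

```python
# Minimal sketch of threshold-based environmental monitoring.
# The envelope values are illustrative, ASHRAE-style recommendations.
RECOMMENDED_ENVELOPE = {
    "temperature_c": (18.0, 27.0),          # recommended inlet temperature range
    "relative_humidity_pct": (20.0, 80.0),  # broad humidity operating range
}

def check_reading(metric: str, value: float) -> str:
    """Return an OK or ALERT message for one sensor reading."""
    low, high = RECOMMENDED_ENVELOPE[metric]
    if value < low:
        return f"ALERT: {metric}={value} below {low}"
    if value > high:
        return f"ALERT: {metric}={value} above {high}"
    return f"OK: {metric}={value}"

print(check_reading("temperature_c", 31.5))  # a hot-aisle excursion trips an alert
```

In a real facility these checks run continuously against hundreds of sensors, and alerts feed the DCIM tooling discussed later in this unit.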
13. Data centre Components
Facilities Infrastructure:
The effective integration
and management of both IT
equipment and facilities
infrastructure are critical
for ensuring the reliability,
security, and scalability of a
data center.
Modern data centers are
often designed with high
efficiency and sustainability
in mind, utilizing advanced
technologies to reduce
energy consumption and
environmental impact.
15. Design Considerations
PUE and Challenges in Cloud
What is power usage effectiveness (PUE)?
Power usage effectiveness (PUE) is a metric used to determine the energy
efficiency of a data center.
PUE is determined by dividing the total amount of power entering a data
center by the power used to run the IT equipment within it. PUE is
expressed as a ratio, with overall efficiency improving as the quotient
decreases toward 1.0.
Data center infrastructure and the processing power within it require a lot
of energy, and data centers that do not operate efficiently will use more
energy. Monitoring a metric like PUE is useful for benchmarking data
center efficiency. Organizations and data center managers can use this
metric once to measure their data center efficiency and then again to
measure the effect of any changes made to the data center.
This helps reduce power consumption and energy costs.
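The definition above reduces to a one-line ratio. The facility figures in the usage example are hypothetical:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to IT equipment).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 1,500 kW in total, 1,000 kW of it for IT gear:
print(round(pue(1500, 1000), 2))  # 1.5
```

Tracking this number before and after a change (say, new aisle containment) is exactly the benchmarking use described above.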
17. Design Considerations: PUE and Challenges in Cloud
Components of PUE
Power usage effectiveness is the ratio between the total energy amount a facility
consumes and the energy specifically used by the IT equipment. However, to
understand what that shows, we need to take a look at its components.
IT Equipment Power
This component of PUE focuses on the power consumed by the core IT equipment
within the data center, including servers, switches, storage devices, and networking
infrastructure. It encompasses the energy required for data processing,
computation, and transmission.
Cooling Infrastructure Power
Data centers generate substantial heat due to the operational intensity of IT
equipment. To maintain optimal temperatures and prevent equipment from
overheating, cooling systems such as computer room air conditioning (CRAC) units,
chillers, fans, and pumps are employed. The power consumed by these cooling
mechanisms plays a crucial role in the overall PUE assessment.
18. Design Considerations: PUE and Challenges in Cloud
Lighting and Miscellaneous Power
While individual lighting fixtures and miscellaneous electrical loads may seem
insignificant, their cumulative energy consumption can impact the overall PUE
significantly. This component encompasses the power used by lighting systems, security
equipment, and other miscellaneous electrical devices present in the data center.
Uninterruptible Power Supply (UPS) Losses
UPS systems provide backup power during utility outages to ensure uninterrupted
operations. However, the UPS units themselves introduce inefficiencies during power
conversion and conditioning processes, resulting in power losses. These losses occur
during charging, discharging, and maintaining batteries and are factored into the PUE
calculation.
Power Distribution Losses
This last component of PUE refers to the power distribution infrastructure, including
transformers, switchgear, power distribution units (PDUs), and cabling. Each of these
components incurs electrical resistance and associated inefficiencies, leading to power
losses during the transmission of electricity from the utility source to the IT equipment.
These losses are taken into account when calculating PUE.
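Putting the five components together, a hypothetical facility's PUE is the sum of all loads divided by the IT load alone. All numbers below are illustrative:

```python
# Hypothetical per-component power draw in kW (illustrative numbers only),
# matching the five PUE components described above.
loads_kw = {
    "it_equipment": 1000.0,        # servers, storage, networking
    "cooling": 350.0,              # CRAC units, chillers, fans, pumps
    "lighting_misc": 50.0,         # lighting, security, misc. loads
    "ups_losses": 60.0,            # conversion/conditioning losses
    "distribution_losses": 40.0,   # transformers, PDUs, cabling
}

total_facility_kw = sum(loads_kw.values())          # everything the utility supplies
pue = total_facility_kw / loads_kw["it_equipment"]  # overhead shows up as PUE > 1.0
print(f"PUE = {pue:.2f}")  # PUE = 1.50
```

Note how cooling dominates the non-IT load in this sketch, which is why the efficiency measures on the next slide focus so heavily on it.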
19. Design Considerations: PUE and Challenges in Cloud
Benefits:
• Cost savings: reducing energy consumption translates directly into cost savings.
• Environmental sustainability: a reduced PUE minimizes the carbon footprint of the data center.
• Enhanced reliability: the more efficient the data center, the more stable the critical infrastructure.
• Capacity optimization: an improved PUE enables organizations to make the most of their existing infrastructure.
Limitations:
• Focus: PUE fails to capture the complete picture of energy consumption within the data center.
• Inaccuracy: values for various data center components are often estimated, which can lead to inaccuracies.
• Complexity: obtaining precise measurements requires robust monitoring systems and specialized tools that can be costly to implement.
• Cost: implementing energy-efficient technologies requires a significant investment.
20. Design Considerations: PUE and Challenges in Cloud
1. Implement efficient cooling techniques such as hot and cold
aisle containment, precision air conditioning, and airflow
management.
2. Optimize IT equipment using techniques like server
virtualization and consolidation.
3. Enhance power distribution and energy consumption using
such techniques as load balancing and virtualization.
4. Adjust the temperature settings of your cooling systems to
the recommended levels.
5. Locate your data centers in the Arctic or under the sea to
reduce your reliance on energy-intensive cooling systems.
21. Data Centers, Cloud Management
Data centers are very complex and contain many
moving parts that have to work in unison to
support the business and its customers.
There are many different technical and non-technical
disciplines needed for the data center to perform at its
best.
Data center management software is only a small
portion of what is required to manage a data center.
Today, third parties provide data center managed
services to help organizations with end-to-end
management of all the disciplines needed for successful
outcomes.
Today, many new organizations rely on third parties for
most of the management of the data center to focus on
their core competencies and be more customer-centric
with their businesses.
This perspective has led to the rise of cloud-based data
center solutions globally. In the past, it may have taken
many months to start a company and establish a data
center, but today it can be done in days or weeks.
22. Data Centers, Cloud Management
Data center management includes the following elements:
• The management of applications for
service or product delivery.
• The information technology
infrastructure to support applications.
• Technology platforms to enable
service capabilities.
• Specialized people that support the
applications, infrastructure, and
platforms.
• The practices that are used either
manually or in an automated fashion
to support business and customer
outcomes.
23. Data Centers, Cloud Management
What is the role of a data center manager?
Either physically onsite or remotely, a data center manager performs
general maintenance—such as software and hardware upgrades,
general cleaning or deciding the physical arrangement of servers—
and takes proactive or reactive measures against any threat or event
that harms data center performance, security and compliance.
The typical responsibilities of a data center manager include the
following:
• Performing lifecycle tasks like installing and decommissioning
equipment
• Maintaining service level agreements (SLAs)
• Ensuring licensing and contractual obligations are met
• Identifying and resolving IT problems like connection issues
between edge computing devices and the data center.
24. Data Centers, Cloud Management
What is the role of a data center manager?
• Securing data center networks and ensuring backup systems and
processes are in place for disaster recovery.
• Monitoring the data center environment’s energy efficiency (e.g.,
lighting, cooling, etc.)
• Managing and allocating resources to maximize budgetary
spending and performance
• Determining optimal server arrangement and cabling organization
• Planning emergency contingencies in case of natural disaster or
other unplanned downtime
• Making necessary updates and repairs to systems while
minimizing downtime and impact to IT operations and business
functions (also known as change management)
25. Data Centers, Cloud Management
Common challenges of data center management
Navigating complexity:
By nature, asset management within an enterprise data center is complex. A data
center often comprises hardware and software from multiple vendors, including
numerous applications and tools.
Meeting SLAs
Because of a data center’s complex multi-vendor environment, it can be difficult
for data center managers to ensure all SLAs are being upheld. These SLAs can
span:
• Application availability
• Data retention
• Recovery speed
• Network uptime and availability
Tracking warranties
Data center managers can struggle in a complex environment to track which
warranties have expired or what each warranty covers. Without visibility over
warranty information, money may be needlessly spent on components that may
have otherwise been covered.
26. Data Centers, Cloud Management
Common challenges of data center management
Costs
For private data centers, IT staff, energy and cooling costs can consume much of the
limited budget allocated to what’s typically deemed a non-value-added cost to the
organization.
Monitoring
Data center managers may be forced to use insufficient or outdated equipment to
monitor their complex data center operations. This can result in gaps in
performance visibility and inefficient workload distribution. Capacity planning is also
negatively impacted, since data center managers reliant on disparate or outdated
equipment may not have the accurate metrics needed to assess how well a data
center is meeting current demands.
Limited resources
Data center managers often work with limited staff, power and space due to
budgetary constraints. In many cases, they also lack the proper tools needed to
manage these limited resources effectively. Limited resources can hinder service
management, resulting in the delivery of delayed or inadequate IT resources to
business users and other stakeholders across an organization.
27. Data Centers, Cloud Management
How to overcome the challenges of data center management
DCIM software
Data center managers can use a data
center infrastructure management (DCIM)
solution to simplify their management
tasks and achieve IT performance
optimization. DCIM software provides a
centralized platform where data center
managers can monitor, measure, manage
and control all elements of a data center in
real-time—from on-premises IT
components to data center facilities like
heating, cooling and lighting.
With a DCIM solution, data center
managers gain a single, streamlined view
over their data center and can better
understand what’s happening throughout
the IT infrastructure.
28. Data Centers, Cloud Management
How to overcome the challenges of data center management
A DCIM solution provides visibility into the following:
• Power and cooling status
• Which IT equipment and software components are ready for upgrading
• Licensing/contractual terms and SLAs for all components
• Device health and security status
• Energy consumption and power management
• Network bandwidth and server capacity
• Use of floor space
• Location of all physical data center assets
A DCIM solution can also help data center managers adopt virtualization
to combine and better manage their data center’s IT resources. More
advanced DCIM solutions can even automate tasks and eliminate manual
steps, freeing up the data center manager’s time and reducing costs.
29. Data Centers, Cloud Management
Data Center Management certification
• Certified Data Center Management
Professional (CDCMP ®)
• Data Center Operations Manager
(DCOM ®)
• VMWARE Certified Professional
• Cisco Certified Network Associate
(CCNA ®)
• Information Technology Infrastructure
Library (ITIL ®)
• Data Center Virtualization
• Data Center Design Consultant (DCDC)
• Certified Data Center Programs (CDCP ®)
• Certified Data Center Facilities Operations
Manager (CDFOM®)