A New Status Quo for Data Centers -- Seamless Communication From Core to Cloud to Edge
A discussion with two leading IT and critical infrastructure executives on how the state of data
centers in 2020 demands better speed, agility, and efficiency from IT resources wherever they
reside.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast
series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and
moderator for this ongoing discussion on the latest insights into data center strategies.
As 2020 ushers in a new decade, the forces shaping data center decisions are
extending compute resources to new places. With the challenging goals of speed, agility,
and efficiency, enterprises and service providers alike will be seeking a new balance
between the need for low latency and optimal utilization of workload placement.
Hybrid models will therefore include more distributed, confined, and modular data
centers at or near the edge.
These are but a few of the top-line predictions on the future state of modern data
center design. Stay with us as we examine, with two leading IT and critical infrastructure
executives, how these data center variations nonetheless must also interoperate
seamlessly from core to cloud to edge.
Here to help us learn more about the state of data
centers in 2020 is Peter Panfil, Vice President of Global
Power at Vertiv. Welcome, Peter.
Peter Panfil: How are you, Dana?
Gardner: I’m doing great. We’re also here with Steve
Madara, Vice President of Global Thermal at Vertiv.
Welcome, Steve.
Steve Madara: Thank you, Dana.
Gardner: The world is rapidly changing in 2020.
Organizations are moving past the debate around hybrid
deployments, from on-premises to public clouds. Why do we need to also think about IT
architectures and hybrid computing differently, Peter?
Moving to the edge, with momentum
Panfil: We noticed a trend at Vertiv in our customer base. That trend is toward a new
generation of data centers. We have been living with distributed IT, client-server data
centers moving to cloud, either a public cloud or a private cloud.
But what we are seeing is the evolution of an edge-to-core, near-real-time data center
generation. And it’s being driven by devices everywhere, the “connected-all-the-time”
model that all of us seem to be going to.
And so, when you are in a near-real-time
world, you have to have infrastructure
that supports your near-real-time
applications. And that is what the
technology folks are facing. I refer to it as
a pack of dogs chasing them -- the
amount of data that’s being generated,
the applications running remotely, and
the demand for availability, low latency,
and driving cost down as much as you possibly can. This is what’s changing how they
approach their critical infrastructure space.
Gardner: And so, a new equilibrium is emerging. How is this different from the past?
Madara: If we go back 20 years, everything was
centralized at enterprise data centers. Then we decided
to move to decentralized, and then back to centralized.
We saw a move to colocation as people decided that’s
where they could get lower cost to run their apps. And
then things went to the cloud, as Peter said earlier.
And now, we have a huge number of devices connected
locally. Cisco says that by late 2020 there will be 23
billion connected devices, and over half of those are
going to be machine-to-machine communications, where,
as Peter mentioned earlier, latency is going to be
very, very critical.
An interesting read is Michael Lewis’s book Flash Boys about the arbitrage that’s taking
place with the low latency that you have in stock market trading. I think we are going to
see more of that moving to the edge. The edge is more like a smart rack or smart row
deployment in an existing facility. It’s going to be multi-tenant, because it’s going to be
deployed throughout large cities. There could be 20 or 30 of these edge data center
sites hosting different applications for customers.
This move to the edge is also going to provide IT resources in a lot of underserved
markets that don’t yet have pervasive compute, especially in emerging countries.
Gardner: Why is speed so important? We have been talking about this now for years,
but it seems like the need for speed to market and speed to value continues to ramp up.
What’s driving that?
Panfil: There is more than one kind of
speed. There is speed of response of the
application, that’s something that all of us
demand -- speed of response of the
applications. I have to have low latency in
the transactions I am performing with my
data or with my applications. So there is the
speed of the actual data being transmitted.
There is also speed of deployment. When Steve talked earlier about centralized cloud
deployments in these core data centers, your data might be going over a significant
distance, hopping along the way. Well, if you can’t live with that latency that gets
inserted, then you have to take the IT application and put it closer to the source and
consumer of the data. So there is a speed of deployment, from core to edge, that
happens.
And the third type of speed is you have to have low-first-cost, high-asset-utilization, and
rapid-scalability. So that’s a speed of infrastructure adaptation to what the demands for
the IT applications are.
So when I say speed, I often say it’s speed, speed, and speed. First, it’s the data IT
speed. Once I have data IT speed, how did I achieve that? I did it by deploying fast, at the
scale needed for the applications, and lastly at a cost and reliability that makes it tolerable for
the businesses.
Gardner: So I guess it’s speed-cubed, right?
Panfil: At least, speed-cubed. Steve, if we had a nickel for every time one of our
customers said “speed,” we wouldn’t have to work anymore. They are consumed with
the different speeds that they have to deal with -- and it’s really the demands of their
customers.
Gardner: Vertiv for years has been looking at the data center of the future and making
some predictions around what to expect. You have been rather prescient. To continue,
you have now identified several areas for 2020, too. Let’s go through those trends.
Steve, Vertiv predicts that “hybrid architectures will go mainstream.” Why did you identify
that, and what do you mean?
The future is hybrid
Madara: If we look at the history of going from centralized to decentralized, and going
to colocation and cloud applications, it shows the ongoing evolution of Internet of Things
(IoT) sensors, 5G networks, smart cities, autonomous cars, and how more and more of
that data is generated and will need to be processed locally. A lot of that is from
machine-to-machine applications.
So when we now talk about hybrid, we have to get very, very close to the source, as far
as the processing is concerned. That’s going to be a large-scale evolution that’s going to
drive the need for hybrid applications. There is going to be processing at the edge as
well as centralized applications -- whether it’s in a cloud or hosted in colocation-based
applications.
Panfil: Steve, you and I both came up through the ranks. I remember when the data
closet down the hall was basically a communications matrix. Its intent was to get
communications from wherever we were to wherever our core data center was.
Well, the cloud is not going away. Number two, enterprise IT is not going away. What the
enterprise is saying is, “Okay, I am going to take my secret sauce and I am going to put
it in an edge data center. I am going to put the compute power as close to my consumer
of that data and that application as I possibly can. And then I am going to figure out
where the rest of it’s going to go.”
“If I can live with the latency I get out of a core data center, I am going to stay in the
cloud. If I can’t, I might even break up my enterprise data center into small or micro data
centers that give me even better responses.”
Dana, it’s interesting, there was a recent wholesale market summary published that said
the difference between the smaller and the larger wholesale deals widened. So what that
says is the large wholesale deals are getting bigger, the small wholesale deals are
getting smaller, and that the enterprise-based demand, in deployments under 600
kilowatts, is focused on low-latency and multi-cloud access.
That tells us that our customers, the
users of that critical space, are trying
to place their IT appliances as close
as they can to their customers,
eliminating the latency, responding
with speed, and then figuring out how
to mesh that edge deployment with
their core strategy.
Gardner: Our second trend gets back to the speed-cubed notion. I have heard people
describe this as a new arms race, because while it might be difficult to differentiate
yourself when everyone is using the same public cloud services, you can really
differentiate yourself on how well you can conduct yourself at speed.
What kinds of capabilities across your technologies will make differentiation around
speed work to an advantage as a company?
The need for speed
Panfil: Well, I was with an analyst recently, and I said the new reality is not that the big
will eat the small -- it’s that the fast will eat the slow. And any advantage that you can get
in speed of applications, speed of deployment, deploying those IT assets -- or morphing
the data center infrastructure or critical space infrastructure – helps improve capital
efficiency. What many customers tell us is that they have to shorten the period of time
between deciding to spend money on IT assets and the time that those assets start
creating revenue.
They want help being creative in lowering their first-cost, in increasing asset utilization,
and in maintaining reliability. If, holy cow, my application goes down, I am out of
business. And then they want to figure out how to manage things like supply chains and
forecasting, which is difficult to do in this market, and to help them be as responsive as
they can to their customers.
Madara: Forecasting and understanding the new applications -- whether it’s artificial
intelligence (AI) or 5G -- the CIOs need to decide where they need to put those
applications, whether they should be in the cloud or at the edge. Technology is changing
so fast that nobody can predict far out into the future as far as to where I will need that
capacity and what type of capacity I will need.
So, it comes down to being able to put that
capacity in the place where I need it, right when I
need it, and not too far in advance. Again, I don’t
want to spend the capital, because I may put it in
the wrong place. So it’s got to be about tying the
demand with the supply, and that’s what’s key as
far as the infrastructure.
And the other element I see is technology is changing fast, even on the infrastructure
side. For our equipment, we are constantly making improvements every day, making it
more efficient, lower cost, and with more capability. And if you put capacity in today that
you don’t need for a year or two down the road, you are not taking advantage of the
latest, greatest technology. So really it’s coupling the demand to the actual supply of the
infrastructure -- and that’s what’s key.
Another consideration is that many of these large companies, especially in the
colocation market, have their financial structure as a real estate investment trust (REIT).
As a result, they need to tie revenue with expenses tighter and tighter, along with capital
spending.
Panfil: That’s a good point, Steve. We redesigned our entire large power portfolio at
Vertiv specifically to be able to address this demand.
In previous generations, for example, the uninterruptible power supply (UPS) was built
as a complete UPS. The new generation is built as a power converter, plus an I/O
section, plus an interface section that can be rapidly configured to the customer, or, in
some cases, put into a vendor-managed inventory program. This approach allows us to
respond to the market and customers quicker.
We were forced to change our business model in such a way that we can respond in real
time to these kinds of capacity-demand changes.
Madara: And to add to that, we have to put
together more and more modules and
solutions where we are bundling the
equipment to deliver it faster, so that you don’t
have to do testing on site or assembly on site.
Again, we are putting together solutions that
help the end-user address the speed of the
construction of the infrastructure.
I also think that this ties into the relationship that the person who owns the infrastructure
has with their supplier base. Those relationships have to build in, as Peter mentioned
earlier, the ability to do stocking of inventory, of having parts available on-site to go fast.
Gardner: In summary so far, we have this need for speed across multiple dimensions.
We are looking at more hybrid architectures, up and down the scale -- from edge to core,
on-premises to the cloud. And we are also looking at crunching more data and making
real-time analytics part of that speed advantage. That means being able to have
intelligence brought to bear on our business decisions and making that as fast as
possible.
So what’s going on now with the analytics efficiency trend? Even if average rack density
remains static due to a lack of space, how will such IT developments as high
performance computing (HPC) help make this analysis equation work to the business
outcome’s advantage?
High performance computing in high density pods
Madara: The development of AI applications, machine learning (ML), and what could
be called deep learning are evolving. Many applications are requiring these HPC
systems. We see this in the areas of defense, gaming, the banking industry, and people
doing advanced analytics and tying it to a lot of the sensor data we talked about for
manufacturing.
It’s not yet widespread, it’s not across the whole enterprise or the entire data center, and
these are often unique applications. What I hear in large data centers, especially from
the banks, is that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW
racks -- but they only have three or four of these racks in the whole data center.
The end-user will need to decide how to tune or adjust facilities to accommodate these
small but growing pods of high-density compute. And if they are in their own facility, if it’s
an enterprise that has its own data center, they will need to decide how they are going to
facilitize for that type of equipment.
A lot of the colocation hosting facilities have customers saying, “Hey, I am going to be
bringing in a couple of racks in the future that are very high density.” A lot of these multi-
tenant data centers are saying, “Oh, how do I provision for these, because my data
center was laid out for an average of maybe 8 kW per rack? How do I manage that,
especially in data centers that didn’t previously have chilled water to provide liquid to
the rack?”
We are now seeing a need to provide chilled water cooling that would go to a rear door
heat exchanger on the back of the rack. It could be chilled water that would go to a rack
for chip cooling applications. And again, it’s not the whole data center; it’s a small
segment of the data center. But it raises questions of how I do that without overkill on the
infrastructure needed.
Gardner: Steve, do you expect those small pods of HPC in the data center to make their
way out to the edge when people do more data crunching for the low-latency
requirements, where you can’t move the data to a data center? Do you expect to have
this trend grow more distributed?
Madara: Yes, I expect this will be for more
than the enterprise data center and cloud
data centers. I think you are going to see
analytics applications developed that are
going to be out at the edge because of the
requirements for latency.
When you think about the autonomous car, none of us knows what's going to be required
there for that high-performance processing, but I would expect there is going to be a
need for that down at the edge.
Gardner: Peter, looking at the power side of things when we look at the batteries that
help UPS and systems remain mission-critical regardless of external factors, what’s
going on with battery technology? How will we be using batteries differently in the
modern data center?
Battery-powered savings
Panfil: That’s a great question. Battery technology has been evolving at an incredibly
fast rate. It’s being driven by the electric vehicles. That growth is bringing to the market
batteries that have a size and weight advantage. You can’t put a big, heavy pack of
batteries in a car and hope to have it perform well.
It also gives a long life expectancy. So data centers used to have to decide between
long-life, high-maintenance wet cells and shorter-life, lower-maintenance, valve-
regulated lead-acid (VRLA) batteries. With the arrival of lithium-ion batteries (LIBs) and thin
plate pure lead (TPPL) batteries, the total cost of ownership (TCO)
has started to become very advantageous for these newer batteries.
Our sales leadership sent me the most recent TCO comparison of TPPL and LIBs
versus traditional VRLA batteries, and the TCO is a winner for the LIBs and the TPPL
batteries. In some cases, over a 10-year period, the TCO is a factor of two lower for LIB
and TPPL.
Where the cloud generation of data
centers was all about lowest first cost, in
this edge-to-core generation of data
centers, it’s about TCO. There are other
levers that they can start to play with, too.
So, for example, they have life cycle and
operating temperature variables. That used to be a real limitation. Nobody in the data
center wanted their systems to go on batteries. They tried everything they could to not
have their systems go on the battery because of the potential for shortening the life of
their batteries or causing an outage.
Today we are developing IT systems infrastructure that takes advantage of not only
LIBs, but also pure lead batteries that can increase the number of [discharge/recharge]
cycles. Once you increase the number of cycles, you can think about deploying smart
power configurations. That means using batteries not only in the critical infrastructure for
a very short period of time when the power grid utility fails, but to use that in critical
infrastructure to help offset cost.
If I can reduce utility use at peak demand periods, for example, or I can reduce stress on
the grid at specified times, then batteries are not only a reliability play – they are also a
revenue-offset play. And so, we’re seeing more folks talking to us about how they can
apply these new energy storage technologies to change the way they think about using
their critical space.
Also, folks used to think that the longer the battery time, the better off they were because
it gave more time to react to issues. Now, folks know what they are doing, they are going
with runtimes that are tuned to their operations team’s capabilities. So, if my operations
team can do a hot swap over an IT application -- either to a backup critical space
application or to a redundant data center -- then all of a sudden, I don’t need 5 to 12
minutes of runtime, I just need the bridge time. I might only need 60 to 120 seconds.
Now, if I can have these battery times tuned to the operations’ capabilities -- and I can
use the batteries more often or in higher temperature applications -- then I can really
start to impact my TCO and make it very, very cost-effective.
Gardner: It’s interesting; there is almost a power analog to hybrid computing. We can
either go to the cloud or the grid, or we can go to on-premises or the battery. Then we
can start to mix and match intelligently. That’s really exciting. How does lessening
dependence on the grid impact issues such as sustainability and conserving energy?
Sustainability surges forward
Panfil: We are having such conversations with our key accounts virtually every day.
What they are saying is, “I am eventually not going to make smoke and steam. I want to
limit the number of times my system goes on a generator. So, I might put in more
batteries, more LIBs or TPPL batteries, in certain applications because if my TCO is half
the amount of the old way, I could potentially put in twice as much, and have the same
cost basis and get that economic benefit.”
And so from a sustainability perspective, they are saying, “Okay, I might need at some
point in the useful life of that critical space to not draw what I think I need to draw from
my utility. I can limit the amount of power I draw from that utility.”
This is not a criticism, I love all of you out there in data center design, but most data centers
are designed for peak usage. So what these changes allow them to do is to design more
for the norm of the requirements. That means they can put in less infrastructure, the
potential to put in less battery. They have the potential to right-size their generators;
same thing on the cooling side, to right-size the cooling to what they need and not for the
extremes of what that data center is going to see.
From a sustainability perspective, we used to talk
about the glass as half-full or half-empty. Now,
we say there is too much of a glass. Let’s right-
size the glass itself, and then all of the other
things you have to do in support of that
infrastructure are reduced.
Madara: As we look at the edge applications, many will not have backup generators. We
will have alternate energy sources, and we will probably be taking more hits to the
batteries. Is the LIB the better solution for that?
Panfil: Yes, Steve, it sure is. We will see customers with an expectation of sustainability,
a path to an energy source that is not fossil fuel-based. That could be a renewable
energy source. We might not be able to deploy that today, but they can now deploy what
I call foundational technologies that allow them to take advantage of it. If I can have a
LIB, for example, that stores excess energy and allows me to absorb energy when I’m
creating more than I need -- then I can consume that energy on the other side. It’s better
for everybody.
Gardner: We are entering an era where we have the agility to optimize utilization and
reduce our total costs. The thing is that it varies from region to region. There are some
areas where compliance is a top requirement. There are others where energy issues are
a top requirement because of cost.
What’s going on in terms of global cross-pollination? Are we seeing different markets
react to their power and thermal needs in different ways? How can we learn from that?
Global differences, normalized
Madara: If you look at the size of data centers around the world, the data centers in the
U.S. are generally much larger than in Europe. And what’s in Europe is much larger than
what we have in other developed countries. So, there are a couple of things, as you
mentioned, energy availability, cost of energy, the size of the market and the users that it
serves. We may be looking at more edge data centers in very underserved markets,
including in underdeveloped countries.
So, you are going to see the size of the data center and the technology used potentially
differ to better fit the needs of specific markets and applications. Across the globe,
certain regions will have different requirements with regard to security and sustainability.
Even though we have these potential
differences, we can meet the end-user needs
to right-size the IT resources in that region.
We are all more common than we are different
in many respects. We all have needs for
security, we all have needs for efficiency, it
may just be to different degrees.
Panfil: There are different regional agency requirements, different governmental
regulations that companies have to comply with. And so what we find, Dana, is that what
our customers are trying to do is normalize their designs. I won’t say they are
standardizing their design because standardization says I am going to deploy exactly the
same way everywhere in the world. I am a fan of Kit Kats, and Kit Kats are not the same
globally; they vary by region. The same is true for data centers.
So, when you look at how the customers are trying to deal with the regional and agency
differences that they have to live with, what they find themselves doing is trying to
normalize their designs as much as they possibly can globally, realizing that they might
not be able to use exactly the same power configuration or exactly the same thermal
configuration. But we also see pockets where different technologies are moving to the
forefront. For example, China has data centers that are running at high-voltage DC, 240
volts DC, while we have always had 48-volt DC IT applications in the Americas and in Europe.
And so when we look at the application of DC, for example, there used to be a debate: is it AC or DC? Well, it's not an "or," it's an "and." Most of the customers we talk to, for
example, in Asia are deploying high-voltage DC and have some form of hybrid AC plus
DC deployment. They are doing it so that they can speed their application deployments.
In the Americas, the Open Compute Project (OCP) deploys either 12 or 48 volts to the
rack. I look at it very simply. We have been seeing a move from 2N architecture to N+1 architecture in the power world for a decade; this is nothing more than adopting the N+1 architecture at the rack level versus the 2N architecture at the rack level.
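Part of why the distribution voltage matters is simple physics: for a fixed rack power, current falls as voltage rises, and resistive loss falls with the square of the current. A back-of-the-envelope sketch, using an assumed rack load and an assumed busbar resistance rather than any measured figures, makes the point:

    # Why distribution voltage matters: I^2 * R conduction loss at fixed power.
    # The load and resistance values are hypothetical illustrations.
    RACK_POWER_W = 15_000   # assumed rack load
    BUSBAR_OHMS = 0.002     # assumed end-to-end distribution resistance

    for volts in (12, 48, 240):
        current = RACK_POWER_W / volts       # I = P / V
        loss_w = current ** 2 * BUSBAR_OHMS  # P_loss = I^2 * R
        print(f"{volts:>3} V DC: {current:7.1f} A, {loss_w:8.1f} W lost")

Running it shows roughly 3,125 W lost at 12 V versus about 8 W at 240 V for the same assumed resistance, which is one reason higher-voltage distribution keeps coming up.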
And so what we see is when folks are trying to, number one, increase the speed;
number two, increase their utilization; number three, lower their total cost, they are going
to deploy infrastructures that are most advantageous for either the IT appliances that
they are deploying or for the IT applications that they are running, and it’s not the same
for everybody, right, Steve?
You and I have been around the planet way too many times; you are a million miler, and so am I. It's amazing how a city might be completely different in a different time zone, but once you walk into that data center, you see how very consistent they have gotten, even though they have done it completely independently from anybody else.
Madara: Correct!
Consistency lowers costs and risks
Gardner: A lot of what we have talked about boils down to a need to preserve speed-to-value while managing total cost of ownership (TCO). What is there about these multiple trends that people can consider when it comes to getting the right balance, the right equilibrium, between TCO and that all-important speed-to-value?
Madara: Everybody strives to drive cost down. The more you can drive down the cost of the infrastructure, the more you can do to develop more edge applications.
I think we are seeing a very rapid rate of change in driving cost down. Yet we still have a lot of stranded capacity out there in the marketplace. And people are making decisions to take that capacity back without impacting risk, but I think they can do it faster.
Peter mentioned standardization. Standardization helps drive speed, whether it's normalization or similarity. What allows people to move fast is repeating what they are doing instead of building snowflake data centers, where every new one is different.
Repeating allows you to build a supply base ecosystem where everybody has the same goal, knows what to do, and can be partners in driving out cost and in driving speed. Those are some of the key elements as we go forward.
Gardner: Peter, when we look to that standardization, you also allow for more seamless
communication from core to cloud to edge. Why is that important, and how can we better
add intelligence and seamless communication among and between all these different
distributed data centers?
Panfil: When we normalize designs globally, we take a look at the regional differences, sort out what the regional differences have to be, and then put a proof-of-concept deployment in place. And out of that comes a consistent method of procedure.
When we talk about managing the data center effectively and efficiently, first of all, you
have to know what you have. And second, you have to know what it’s doing. And so, we
are seeing more folks normalizing their designs and getting consistency. They can then start looking at how much of their available design capacity they are actually using, both on a normal basis and on a peak basis, and then they can determine how much of that they are willing to use.
We have some customers who are very risk-averse. They stay in the 2N world, which is
a 50 percent maximum utilization. We applaud them for it because they are not going to
miss a transaction.
There are others who will say, “I can live with the availability that an N+1 architecture
gives me. I know I am going to have to be prepared for more failures. I am going to have
to figure out how to mitigate those failures.”
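The utilization ceilings here fall straight out of the redundancy arithmetic: a 2N design keeps a full duplicate of every module, so loading past 50 percent means giving up redundancy, while an N+1 design with N active modules sharing one spare can safely run up to N/(N+1) of installed capacity. A quick sketch of that arithmetic:

    # Maximum utilization while preserving redundancy, by power architecture.
    def max_utilization(architecture: str, n: int = 1) -> float:
        """Fraction of installed capacity usable without giving up redundancy."""
        if architecture == "2N":
            return 0.5          # a full duplicate set caps loading at 50 percent
        if architecture == "N+1":
            return n / (n + 1)  # n active modules share one redundant spare
        raise ValueError(architecture)

    print(max_utilization("2N"))        # 0.5
    print(max_utilization("N+1", n=4))  # 0.8 -- five modules, one redundant

So moving from 2N to N+1 at the same installed capacity raises the safe utilization ceiling, which is exactly the trade against failure exposure described above.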
So they are working constantly at figuring out how to monitor what they have and figure
out what the equipment is doing, and how they can best optimize the performance. We
talked earlier about battery runtimes, for example; sometimes they run short and sometimes they run long.
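As a rough illustration of why runtime varies, the simplest first-order estimate just divides usable battery energy by the load, so the same battery string looks short under peak load and long under light load. The figures below are hypothetical, and real battery chemistry derates at high discharge rates:

    # First-order battery runtime estimate; figures are hypothetical.
    USABLE_KWH = 50.0  # assumed usable UPS battery energy

    def runtime_minutes(load_kw: float) -> float:
        # Linear approximation; real batteries derate at high discharge rates.
        return USABLE_KWH / load_kw * 60.0

    print(round(runtime_minutes(300.0), 1))  # 10.0 minutes at peak load
    print(round(runtime_minutes(150.0), 1))  # 20.0 minutes at half that load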
As these companies get into this step and repeat function, they are going to get
consistency of their methods of procedure. They’re going to get consistency of how their
operations teams run their physical infrastructure. They are going to think about running
their equipment in ways that are nontraditional today but will become the norm in the next
generation of data centers. And then they are going to look at us and say, “Okay, now
that I have normalized my design, can I use a rapid deployment configuration? Can I put it on a skid, in a container? Can I drop it in place as a complete data center?"
Well, we build it one piece of equipment at a time and stitch it all together. The question you asked about monitoring is interesting, because we talked to a major company just last month. Steve and I were visiting them at their site. And they said, "You know
what? We spend an awful lot of time figuring out how our building management system
and our data exchange happens at the site. Could Vertiv do some of that in the factory?
Could you configure our data acquisition systems? Could you test them there in the
factory? Could we know that when the stuff shows up on site that it’s doing the things
that it’s supposed to be doing instead of us playing hunt and peck to figure out what the
issues are?”
We said, “Of course.” So we are adding that capability now into our factory testing
environment. What we see is a move up the evolutionary scale. Instead of buying
separate boxes, we are seeing them buying solutions -- and those solutions include both
monitoring and controls.
Steve didn’t even get a chance to mention the industry-leading Vertiv Liebert® iCOM™
control for thermal. These controls and monitoring systems allow them to increase their
utilization rates because they know what they have and what it’s doing.
Gardner: It certainly seems to me, with all that we have said today, that the data center
status quo just can't stand. Change and improvement are inevitable. Let's close out with
your thoughts on why people shouldn’t be standing still; why it’s just not acceptable.
Innovation is inevitable
Madara: At the end of the day, the IT world is changing rapidly. Whether in the cloud or down at the edge, the infrastructure needs to adjust to those needs. They need to be able to cut enough out of the cost structure. There is always a demand to drive cost down.
If we don’t change with the world around us, if we don’t meet the requirements of our
customers, things aren't going to work out -- and somebody else is going to take it and
go for it.
Panfil: Remember, it’s not the big that eats the
small, it’s the fast that eats the slow.
Madara: Yes, right.
Panfil: And so, what I have been telling folks is, you've got to go. The technology is there.
The technology is there for you to cut your cost, improve your speed, and increase
utilization. Let’s do it. Otherwise, somebody else is going to do it for you.
Gardner: I’m afraid we’ll have to leave it there. We have been exploring the forces
shaping data center decisions and how that’s extending compute resources to new
places with the challenging goals of speed, agility, and efficiency.
And we have learned how enterprises and service providers alike are seeking a new balance between the need for low latency and optimal utilization through workload placement.
So please join me in thanking our guests, Peter Panfil, Vice President of Global Power at
Vertiv. Thank you so much, Peter.
Panfil: Thanks for having me. I appreciate it.
Gardner: And we have also been joined by Steve Madara, Vice President of Global
Thermal at Vertiv. Thanks so much, Steve.
Madara: You’re welcome, Dana.
Gardner: And a big thank you as well to our audience for joining us for this sponsored
BriefingsDirect data centers strategies interview. I’m Dana Gardner, Principal Analyst at
Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored discussions.
Thanks again for listening. Please pass this along to your community, and do come back
next time.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.
Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.
You may also be interested in:
• How smart IT infrastructure has evolved into the era of data centers-as-a-service
• The next line of defense—How new security leverages virtualization to counter
sophisticated threats
• Expert Panel Explores the New Reality for Cloud Security and Trusted Mobile Apps
Delivery
• How IT innovators turn digital disruption into a business productivity force multiplier
• Cerner’s lifesaving sepsis control solution shows the potential of bringing more AI-
enabled IoT to the healthcare edge
• How containers are the new basic currency for pay as you go hybrid IT
• How rapid machine learning at the racing edge accelerates Venturi Formula E Team to
top-efficiency wins
• Data-driven and intelligent healthcare processes improve patient outcomes while making
the IT increasingly invisible
• Citrix and HPE team to bring simplicity to the hybrid core-cloud-edge architecture