This document discusses data center micro-segmentation using a software-defined data center (SDDC) approach built on VMware NSX network virtualization. Key points:
- An SDDC with NSX allows fine-grained network segmentation down to individual VMs through automated provisioning of security policies. This micro-segmentation improves security but was previously difficult to implement.
- NSX isolates virtual networks from one another by default, with no configuration required. It also supports segmentation within a virtual network through distributed firewalling, separating network tiers such as web, app, and database.
- NSX firewalling throughput of 20 Gbps per host meets the performance needs of micro-segmentation, and automation addresses the operational challenge of managing thousands of policies.
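The tier separation described above can be pictured with a minimal policy model. This is an illustrative sketch only (the tier names, rule tuples, and default-deny evaluator are invented for the example and are not NSX's actual policy format):

```python
# Minimal sketch of tier-based micro-segmentation: explicit allow rules
# between tiers, default deny everywhere else. Rule format is hypothetical.

RULES = [
    # (source tier, destination tier, port, action)
    ("web", "app", 8080, "allow"),
    ("app", "db", 3306, "allow"),
]

def evaluate(src_tier, dst_tier, port):
    """Default-deny: traffic passes only if an explicit allow rule matches."""
    for src, dst, p, action in RULES:
        if (src, dst, p) == (src_tier, dst_tier, port):
            return action
    return "deny"

print(evaluate("web", "app", 8080))  # -> allow (web tier may reach app tier)
print(evaluate("web", "db", 3306))   # -> deny (web may not reach the database directly)
```

The point of the sketch is the default-deny posture: the web tier can talk to the app tier, but cannot reach the database directly, even though all three tiers may share the same virtual network.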
WHITE PAPER

Table of Contents
Executive Summary
The Software Defined Data Center is the Future
The SDDC is More Agile, More Flexible, and More Secure
The SDDC – A Weapon, not a Target
The Dawning of the Truly Micro-segmented Data Center Network
Performance
Automation
Native Security in NSX-Powered SDDC: Isolation and Segmentation
Isolation
Segmentation
Segmentation with advanced security service insertion, chaining and traffic steering
Cost
More Secure Data Centers – the Software Defined New Normal
Executive Summary
The software-defined data center (SDDC), while well understood architecturally, is beginning to reveal
benefits beyond agility, speed, and efficiency as organizations deploy it and discover further areas of
improvement. One critical driver of SDDC deployment is security.
When enterprises and public sector IT organizations embrace SDDC and virtualize compute, network,
and storage, they automate provisioning and greatly reduce time-to-market for IT applications and
services. They also streamline and de-risk infrastructure moves, adds, and changes. This new
operations model brings an additional, somewhat fortuitous benefit: customers who build their SDDC
with the automation and "baked-in" security of VMware's NSX platform have discovered significant
security gains. Many organizations are moving to an increasingly fine-grained network segmentation
approach (e.g., Forrester Research's Zero Trust network architecture) for their data center networks,
in response to the growing incidence of attackers moving freely inside the enterprise data center
perimeter. These approaches wrap security controls around much smaller groups of resources, often
down to a small cluster of virtualized resources or even individual VMs. Micro-segmentation has long
been understood to be a security best practice, but it has been difficult to apply in traditional
environments. The inherent security and automation capabilities of the NSX platform make
micro-segmentation operationally feasible in the enterprise data center for the first time.
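One way to picture why automation makes micro-segmentation operationally feasible: per-VM policy can be derived from workload metadata instead of being hand-written for thousands of machines. The sketch below uses an invented tag and rule format purely for illustration; it is not VMware's API:

```python
# Sketch: derive concrete per-VM firewall rules from tier tags, so adding a
# tagged VM automatically yields its policy. Tag/rule format is invented.

VMS = {
    "vm-web-01": {"tier": "web"},
    "vm-web-02": {"tier": "web"},
    "vm-db-01": {"tier": "db"},
}

# Tier-level intent, written once by the security team.
INTENT = {("web", "db"): ["tcp/3306"]}

def generate_rules(vms, intent):
    """Expand tier-level intent into concrete per-VM allow rules."""
    rules = []
    for (src_tier, dst_tier), services in intent.items():
        sources = [v for v, meta in vms.items() if meta["tier"] == src_tier]
        targets = [v for v, meta in vms.items() if meta["tier"] == dst_tier]
        for src in sources:
            for dst in targets:
                for svc in services:
                    rules.append((src, dst, svc, "allow"))
    return rules

rules = generate_rules(VMS, INTENT)
```

Here two web VMs and one database VM expand into two concrete rules; a security operator maintains one line of intent, and the expansion keeps pace as VMs are added or removed.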
VMware NSX provides three modes of security for data center networks: fully isolated virtual networks,
segmented virtual networks (via high-performance, fully automated firewalling native to the NSX
platform), and segmentation with advanced security services with our security partners. Examples of
partner integration include Palo Alto Networks for network segmentation with next-generation firewalls
or Rapid7 for vulnerability scanning.
When it comes to the business case, network micro-segmentation is not only operationally feasible
using VMware NSX, but cost-effective, enabling the deployment of security controls inside the data
center network for a fraction of the hardware cost.
Many large data centers now count security among the first major benefits they realize from the
software-defined data center. In the very near future, a more secure data center will become the new
normal.
The Software Defined Data Center is the Future
A Software Defined Data Center (SDDC) is an architectural approach to data center design that
leverages a fundamental principle of computer science: abstraction. Operating systems, higher-level
programming languages, networking protocols, and most recently server virtualization are all examples
of abstractions whose introductions resulted in major industry innovation cycles over the past 25 years.
The introduction of an abstraction layer allows systems and services above and below the abstraction
layer to operate and innovate independently, while maintaining agreed-upon communication paths and
exposing services between layers through well-defined interfaces. An SDDC approach applies the
principles of abstraction to deliver an entire data center construct in software, decoupling service
delivery from the underlying physical infrastructure. This allows the underlying hardware to be utilized
as generalized pools of compute, network and storage capacity which can be combined, consumed and
repurposed programmatically, without modification to the hardware.
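The idea of hardware as generalized capacity that software carves up and repurposes can be sketched as follows. This is an illustrative toy model, not a real SDDC platform API (real platforms expose far richer constructs):

```python
# Sketch: hardware as generalized pools of capacity that software allocates,
# releases, and repurposes programmatically, with no change to the hardware.

class Pool:
    """A generic capacity pool (e.g., vCPUs, GB of storage)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def allocate(self, amount):
        if self.used + amount > self.capacity:
            raise RuntimeError("pool exhausted")
        self.used += amount
        return amount

    def release(self, amount):
        self.used -= amount

compute = Pool(capacity=128)   # vCPUs across the cluster
storage = Pool(capacity=4096)  # GB across the cluster

# "Repurposing" is just releasing capacity from one consumer and handing it
# to another, entirely in software.
compute.allocate(16)   # first workload
compute.release(16)    # workload retired
compute.allocate(32)   # capacity reassigned to a new workload
```

The hardware never changes; only the software's bookkeeping of who consumes which slice of the pool does, which is the decoupling the paragraph above describes.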
The SDDC approach has been proven by many of the largest, most agile and efficient data centers in
the world, including Google, Facebook and Amazon. Over the past 10 years, these “mega data center”
operators have engineered an SDDC abstraction layer into their custom applications and platforms,
allowing them to automate almost every aspect of data center operations while completely decoupling
from the underlying compute, network and storage hardware. This decoupling dramatically reduces both
the capital and operational expense of their physical infrastructure and allows them to deliver services
orders of magnitude faster than most enterprise IT organizations.
Today, enterprise IT can achieve the same level of agility and efficiency as “mega data centers” in their
own data centers, without modification to their existing hardware infrastructure.
Figure 1 - Intelligence is moved into software to create an abstraction layer between software and the
underlying physical infrastructure. Large data centers have been doing this for a decade by putting
intelligence in their custom application or platform software. Today enterprise data centers can achieve the
same decoupling by leveraging software in the data center virtualization layer.
VMware has built the data center abstraction layer into its NSX network virtualization platform. The
platform is based on a distributed system controller combined with the traditional hypervisor and
vSwitch to allow the entire data center construct to be faithfully reproduced non-disruptively in software,
independent of the existing physical infrastructure. The VMware NSX platform has been proven in
production deployments, some more than three years old, and is now being deployed at two of the top
three service providers in the world, four of the top five global financial services companies, and more
than 100 enterprise-class data centers in almost every business sector, including healthcare,
manufacturing, retail, consumer products, banking, insurance, transportation, federal, state and local
government, and high tech.
The SDDC is More Agile, More Flexible, and More Secure
An SDDC approach takes the benefits of virtualization and automation and extends them to incorporate the
entire data center construct. The ability to programmatically create, snapshot, move, delete and restore
virtual machines in software transformed the operational model of compute for IT. Now, an SDDC
approach allows IT to programmatically create, snapshot, move, delete and restore an entire data
center construct of compute, storage, and network in software. Data center automation, self-service IT,
and a complete transformation of the network operational model have proven to be huge benefits of an
SDDC approach. In deployments, business and IT leadership agree that an SDDC approach delivers
measurable differences in IT speed, agility, and competitive advantage. IT operations leaders quickly
benefit from automated change management and simplification of the underlying hardware
configuration and management. Perhaps most profoundly, the SDDC approach gives infrastructure and
security teams investment flexibility (build to the mean and burst to hybrid), investment protection
(utilize existing hardware), increased utilization, and previously unattainable levels of
security in the data center. In fact, security has proven to be one of the most compelling applications of
the SDDC platform.
The SDDC – A Weapon, not a Target
At first glance, most IT network security professionals will view a new approach like an SDDC as a new
potential target. The reality is, the impact to the way IT does security is far greater (and more positive)
than the changes to what needs to be secured. In other words, for IT security teams, SDDC is more of
a weapon than a target. An SDDC approach actually delivers a platform that inherently addresses
some fundamental architectural limitations in data center design, which have restricted security
professionals for decades.
Consider the trade-off that is often made between context and isolation in traditional security
approaches. Often, in order to gain context we place controls in the host operating system. This
approach allows us to see what applications and data are being accessed and what users are using the
system, resulting in good context. However, because the control sits in the attack domain, the first thing
an attacker will do is disable the control. This is bad isolation. This approach is tantamount to putting
the on/off switch for a home alarm system on the outside of the house. An alternative approach, which
trades context for isolation, places the control in the physical infrastructure. This approach isolates the
control from the resource it’s securing, but has poor context because IP addresses, ports and protocols
are very bad proxies for user, application, or transaction context. Furthermore, there has never been a
ubiquitous enforcement layer built into the infrastructure…until now.
The data center virtualization layer used by the SDDC offers the ideal location to achieve both context
and isolation, combined with ubiquitous enforcement. Controls operating in the data center virtualization
layer leverage secure host introspection, the ability to provide agentless, high definition host context,
while remaining isolated in the hypervisor, safe from the attack being attempted.
The ideal position of the data center virtualization layer between the application and the physical
infrastructure combined with automated provisioning and management of network and security policies,
kernel embedded performance, distributed enforcement, and scale-out capacity is on the verge of
completely transforming data center security and allowing data center security professionals to achieve
levels of security that in the past were operationally infeasible.
The Dawning of the Truly Micro-segmented Data Center Network
The perimeter-centric network security strategy for enterprise data centers has proven to be
inadequate. Modern attacks exploit this perimeter-only defense, hitching a ride with authorized users,
then moving laterally within the data center perimeter from workload to workload with few or no controls
to block their propagation. Many of the recent public breaches have exemplified this – starting with
spearphishing or social engineering, leading to malware, vulnerability exploits, command and control,
and unfettered lateral movement within the data center until the attackers find what they are looking for
– which is then exfiltrated.
Micro-segmentation of the data center network can be a huge help to limit that unauthorized lateral
movement, but hasn’t been operationally feasible in traditional data center networks. Why?
Traditional and even advanced next-generation firewalls implement controls as physical or virtual
“choke points” on the network. As application workload traffic is directed to pass through these control
points, rules are enforced and packets are either blocked or allowed to pass through. Using the
traditional firewall approach to achieve micro-segmentation quickly reaches two key operational barriers
– throughput capacity and operations/change management. The first, capacity, can be overcome at a
cost. It is possible to buy enough physical or virtual firewalls to deliver the capacity required to achieve
micro-segmentation. However, the second, operations, increases exponentially with the number of
workloads and the increasingly dynamic nature of today’s data centers. If firewall rules need to be
manually added, deleted and/or modified every time a new VM is added, moved or decommissioned,
the rate of change quickly overwhelms IT operations. It is this barrier that has been the demise of most
security teams' best-laid plans to realize a comprehensive micro-segmentation or "Zero-trust" strategy.
A VMware SDDC approach leverages the NSX network virtualization platform to offer several significant
advantages over traditional network security approaches: automated provisioning, automated
move/add/change for workloads, distributed enforcement at every virtual interface, and in-kernel,
scale-out firewalling performance, distributed to every hypervisor and baked into the platform.
Performance
It’s important to note that the firewalling performance offered in the NSX platform is not intended to
replace hardware firewall platforms used for North-South perimeter defense. The performance capacity
of hardware firewall platforms is designed to control traffic flowing from hundreds or thousands of
workloads entering or leaving the data center perimeter.
That said, the firewalling performance and capacity of the NSX platform is more than impressive. The
NSX platform delivers 20Gbps of firewall throughput and supports over 80K connections per second,
per host. This capacity applies only to the VMs on that host's hypervisor, and every time another host is
added to the SDDC platform, another 20Gbps of throughput capacity is added.
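The scale-out arithmetic above can be sketched in a few lines. This is a simple illustration using the per-host figures cited in this paper (20Gbps and 80K connections per second per host); the function name is ours, not an NSX API:

```python
# Sketch: aggregate distributed-firewall capacity grows linearly with host
# count, using the per-host figures cited above (20Gbps, 80K conn/s).
PER_HOST_GBPS = 20
PER_HOST_CPS = 80_000

def aggregate_capacity(num_hosts: int) -> tuple[int, int]:
    """Return (total Gbps, total connections/sec) across the cluster."""
    return num_hosts * PER_HOST_GBPS, num_hosts * PER_HOST_CPS

gbps, cps = aggregate_capacity(32)   # a hypothetical 32-host cluster
print(gbps, cps)                     # 640 Gbps, 2,560,000 conn/s
```

Because enforcement is distributed, this capacity sits next to every workload rather than at a central choke point.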
Automation
The automated provisioning and move/add/change enables the correct firewall policies to be
provisioned when a workload is programmatically created and those policies follow the workload as it is
moved anywhere in the data center or between data centers. And if the application is ever deleted, its
security policies are removed from the system with it. This eliminates the key barrier that has made
the delivery of a true micro-segmentation solution infeasible.
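The lifecycle described above can be sketched conceptually: rules are keyed to the workload rather than to an IP address or network location, so they follow a VM on move and disappear with it on delete. This is an illustrative model only; the class and method names are hypothetical, not NSX APIs:

```python
# Hypothetical sketch of the automated policy lifecycle: policies are
# bound to the workload identity, not its location.
class PolicyEngine:
    def __init__(self):
        self.policies = {}               # vm_id -> list of firewall rules

    def provision(self, vm_id, rules):
        self.policies[vm_id] = rules     # created with the workload

    def move(self, vm_id, new_host):
        # Nothing to reconfigure: enforcement is at the virtual interface,
        # so the policy travels with the VM regardless of host.
        return self.policies[vm_id]

    def decommission(self, vm_id):
        self.policies.pop(vm_id, None)   # rules removed with the VM

engine = PolicyEngine()
engine.provision("web-01", ["allow tcp/443 from any", "deny all"])
assert engine.move("web-01", "host-07") == ["allow tcp/443 from any", "deny all"]
engine.decommission("web-01")
assert "web-01" not in engine.policies   # no stale rules left behind
```

The contrast with a traditional firewall is that no human ever edits a rule base in this flow; provisioning, move, and cleanup are all side effects of the workload's own lifecycle.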
Furthermore, the NSX partner ecosystem can also take advantage of the distribution and automation
capabilities of the SDDC/NSX platform to enable enterprises to apply a combination of different partner
capabilities by chaining advanced security services together and enforcing different services based on
different security situations. For example, a workload may be provisioned with standard firewalling
policies, which allow or restrict its access to other types of workloads. The same policy may also define
that if a vulnerability is detected on the workload during the course of normal vulnerability scanning, a
more restrictive firewalling policy would apply, restricting the workload to access by only those tools
used to remediate the vulnerabilities. All automated, always on, without human intervention.
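The vulnerability-driven policy switch described above amounts to a conditional rule set. A minimal sketch, with illustrative rule strings and a hypothetical selector function (no real NSX or partner APIs are shown):

```python
# Sketch of the conditional-policy pattern: a standard policy applies
# normally, and a more restrictive quarantine policy takes over when a
# scan flags a vulnerability. All names and rules are illustrative.
STANDARD = ["allow web-tier -> app-tier tcp/8080", "deny all"]
QUARANTINE = ["allow remediation-tools -> workload", "deny all"]

def effective_policy(vulnerability_detected: bool) -> list[str]:
    return QUARANTINE if vulnerability_detected else STANDARD

assert effective_policy(False) == STANDARD
assert effective_policy(True) == QUARANTINE  # only remediation tools may connect
```

The point is that the transition is declarative and automatic: the scanner's finding changes the workload's security group membership, and the restrictive policy applies without a ticket or a manual rule change.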
The combination of performance and automation delivered by the NSX platform allows operationally
feasible micro-segmentation to be designed and implemented all the way down to every virtual
interface.
Native Security in NSX-Powered SDDC: Isolation and Segmentation
The VMware NSX platform inherently delivers three levels of security in data centers – isolation,
segmentation, and segmentation with advanced services.
Isolation
Isolation is the foundation of most network security, whether for compliance, containment or simply
keeping development, test and production environments from interacting. While manually configured
and maintained routing, ACLs and/or firewall rules on physical devices have traditionally been used to
establish and enforce isolation, isolation and multi-tenancy are inherent to network virtualization. Virtual
networks are isolated from one another and from the underlying physical network by
default, delivering the security principle of least privilege. No physical subnets, no VLANs, no ACLs, no
firewall rules are required to enable this isolation. This is worth repeating…NO configuration required.
Virtual networks are created in isolation and remain isolated unless specifically connected together.
Any isolated virtual network can be made up of workloads distributed anywhere in the data center.
Workloads in the same virtual network can reside on the same or separate hypervisors. Additionally,
workloads in several isolated virtual networks can reside on the same hypervisor. One very useful
example: isolation between virtual networks allows for overlapping IP addresses, making it possible to
have isolated development, test and production virtual networks, each with different application
versions, but with the same IP addresses, all operating at the same time, all on the same underlying
physical infrastructure.
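The overlapping-address example above can be pictured as separate namespaces. This toy model (plain Python dictionaries, nothing NSX-specific) shows why identical addressing in dev, test, and production never collides:

```python
# Sketch: each isolated virtual network is its own address namespace,
# so identical IP plans can run side by side on shared infrastructure.
virtual_networks = {
    "dev":  {"web": "10.0.1.10", "db": "10.0.2.20"},
    "test": {"web": "10.0.1.10", "db": "10.0.2.20"},
    "prod": {"web": "10.0.1.10", "db": "10.0.2.20"},
}

# An address is only meaningful inside its own virtual network, so the
# duplicates never conflict:
assert virtual_networks["dev"]["web"] == virtual_networks["prod"]["web"]
assert len(virtual_networks) == 3    # three isolated copies of one subnet
```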
Virtual networks are also isolated from the underlying physical infrastructure. Because traffic between
hypervisors is encapsulated, physical network devices operate in a completely different address space
than the workloads connected to the virtual networks. For example, a virtual network could support IPv6
application workloads on top of an IPv4 physical network. This isolation protects the underlying
physical infrastructure from any possible attack initiated by workloads in any virtual network. Again, all
of this is independent from any VLANs, ACLs, or firewall rules that would traditionally be required to
create this isolation.
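The address-space separation above follows from how overlay encapsulation works: the workload's frame is wrapped in an outer header addressed between hypervisor tunnel endpoints, so the physical network forwards on outer addresses only and never sees workload addressing. A conceptual sketch with hypothetical field names (not an actual VXLAN header layout):

```python
# Conceptual sketch of overlay encapsulation: the inner (workload) frame
# rides inside an outer header addressed between hypervisors.
def encapsulate(inner_frame: dict, src_vtep: str, dst_vtep: str, vni: int) -> dict:
    return {
        "outer_src": src_vtep,   # hypervisor tunnel endpoint (IPv4 here)
        "outer_dst": dst_vtep,
        "vni": vni,              # identifies the virtual network
        "payload": inner_frame,  # workload traffic, even IPv6 over IPv4
    }

pkt = encapsulate({"src": "2001:db8::a", "dst": "2001:db8::b"},
                  "192.0.2.11", "192.0.2.12", vni=5001)
# Physical switches forward on the outer IPv4 addresses only:
assert pkt["outer_dst"] == "192.0.2.12"
assert pkt["payload"]["src"].startswith("2001:db8")  # inner IPv6 is opaque
```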
Segmentation
Related to isolation, but applied within a multi-tier virtual network, is segmentation. Traditionally,
network segmentation is a function of a physical firewall or router, designed to allow or deny traffic
between network segments or tiers. For example, segmenting traffic between a web tier, application tier
and database tier. Traditional processes for defining and configuring segmentation are time consuming
and highly prone to human error, resulting in a large percentage of security breaches. Implementation
requires deep and specific expertise in device configuration syntax, network addressing, application
ports and protocols.
Network segmentation, like isolation, is a core capability of the VMware NSX network virtualization platform.
A virtual network can support a multi-tier network environment, meaning multiple L2 segments with L3
segmentation or micro-segmentation on a single L2 segment using distributed firewalling defined by
workload security policies. As in the example above, these could represent a web tier, application tier
and database tier. Physical firewalls and access control lists deliver a proven segmentation function,
trusted by network security teams and compliance auditors. Confidence in this approach for cloud data
centers, however, has been shaken, as more and more attacks, breaches and downtime are attributed
to human error in manual network security provisioning and change management processes.
In a virtual network, network services (L2, L3, ACL, firewall, QoS, etc.) that are provisioned with a
workload are programmatically created and distributed to the hypervisor vSwitch. Network services,
including L3 segmentation and firewalling, are enforced at the virtual interface. Communication within a
virtual network never leaves the virtual environment, removing the requirement for network
segmentation to be configured and maintained in the physical network or firewall.
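A minimal sketch of tier-to-tier segmentation for the web/app/db example above, enforced at each VM's virtual interface with a default-deny posture. The rule table and port numbers are illustrative only:

```python
# Sketch: three-tier segmentation as a small default-deny rule table.
RULES = [
    ("any", "web", 443,  "allow"),   # clients reach the web tier over HTTPS
    ("web", "app", 8080, "allow"),   # web tier calls the app tier
    ("app", "db",  3306, "allow"),   # app tier queries the database
]

def decide(src_tier: str, dst_tier: str, port: int) -> str:
    for src, dst, p, action in RULES:
        if src in (src_tier, "any") and dst == dst_tier and p == port:
            return action
    return "deny"                    # default deny: no unlisted path exists

assert decide("web", "app", 8080) == "allow"
assert decide("web", "db", 3306) == "deny"   # lateral web -> db path blocked
```

Note the second assertion: even if the web tier is compromised, there is no direct path to the database tier, which is exactly the lateral-movement containment micro-segmentation aims for.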
Segmentation with advanced security service insertion, chaining and traffic steering
The base VMware NSX network virtualization platform provides basic stateful inspection firewalling
features to deliver segmentation within virtual networks. In some environments, there is a requirement
for more advanced network security capabilities. In these instances, customers can leverage the SDDC
platform to distribute, enable and enforce advanced network security services in a virtualized network
environment. The NSX platform distributes network services into the vSwitch to form a logical pipeline
of services applied to virtual network traffic. Third party network services can be inserted into this logical
pipeline, allowing physical or virtual services to be consumed in the logical pipeline.
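The logical pipeline above can be modeled as an ordered chain of services, each of which sees the traffic in turn and can short-circuit the rest. The service names and verdict strings below are placeholders for third-party insertions, not real product interfaces:

```python
# Sketch of a logical service pipeline: services run in order, and a
# non-"pass" verdict from any service ends the chain.
def run_pipeline(packet: dict, services: list) -> str:
    for service in services:
        verdict = service(packet)
        if verdict != "pass":
            return verdict           # e.g. an IPS drop stops processing
    return "forward"                 # all services passed the traffic

firewall = lambda p: "pass" if p.get("port") in (443, 8080) else "drop"
ips      = lambda p: "drop" if p.get("signature_match") else "pass"

chain = [firewall, ips]              # insertion order defines the pipeline
assert run_pipeline({"port": 443}, chain) == "forward"
assert run_pipeline({"port": 443, "signature_match": True}, chain) == "drop"
assert run_pipeline({"port": 23}, chain) == "drop"
```

Steering in this model is simply choosing which chain a given workload's traffic traverses, which is why one service's result can determine whether later services run at all.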
Every security team uses a unique combination of network security products to meet the needs of their
environment. The VMware NSX platform is being leveraged by VMware’s entire ecosystem of security
solution providers. Network security teams are often challenged to coordinate network security services
from multiple vendors in relationship to each other. Another powerful benefit of the NSX approach is its
ability to build policies that leverage NSX service insertion, chaining and steering to drive service
execution in the logical services pipeline, based on the result of other services, making it possible to