LANs are constantly evolving; build your XYZ Account network with that evolution baked in.
Extreme Networks brings XYZ Account simplicity, agility, and optimized performance to your most strategic business asset. The data center is critically important to business operations in the enterprise, but often organizations have difficulty leveraging their data centers as a strategic business asset. At Extreme Networks, we focus on providing an Intelligent Enterprise Data Center Network that’s purpose-built for enterprise requirements. Our OneFabric Data Center Solution:
XOS "can be like an elastic fabric" for the XYZ Account network…
Demand for application availability has changed how applications are hosted in today's data center. Evolutionary changes have occurred throughout the various elements of the data center, starting with server and storage virtualization and continuing to network virtualization. The motivations for server virtualization were initially cost reduction and redundancy, but have since evolved toward greater scalability and agility within the data center. Data center LAN technologies have taken a similar path: first targeting redundancy, then a more scalable fabric within and between data centers.
As vendors continue to tout networking architectures that decouple software from hardware, bare-metal switches are moving into the spotlight. These switches, built on merchant silicon, deliver a lower-cost and more flexible switching alternative. Extreme Purple Metal switches are open enough to let our customers choose a network architecture based on their specific needs without going all the way to bare metal. We believe in the disaggregation of traditional enterprise networking, and Extreme uses merchant silicon rather than custom ASICs. Custom ASICs have fallen behind: unless a vendor can build silicon that competes with merchant parts, there is no point in designing custom ASICs.
IBM NeXtScale - the next generation of dense computing
transtec HPC solutions with IBM NeXtScale are highly dense systems for the workloads that are currently growing fastest, such as social media, analytics, technical computing and cloud applications. NeXtScale has been developed with standard components and provides up to three times as many cores in a single rack unit (1U) as previous generations.
The increasing use of this workload and delivery model generates increased demands on data centres. Operators are on the look-out for new technologies that can deal with the current demands with the highest possible level of performance and the lowest possible level of power consumption. NeXtScale is the latest addition to the transtec IBM x86 portfolio: Developed to allow applications with the power of a "supercomputer" to run in data centres – via a simple, flexible and open architecture.
This is the latest version of the slides based on my book "Solaris Performance and Tuning" that has been extended to include Linux and many other more recent topics. It has been presented innumerable times, most recently at the CMG conference, Usenix 08 and LISA 08, and this version will be presented at Usenix 09, San Diego on June 16th, along with the Free Tools slides.
NVMe PCIe and TLC V-NAND: It's About Time - Dell World
With an explosion in data and the relentless growth in demand for information, identifying a much more efficient means of storage has become extremely important. In this session, we will cover the key drivers behind the need for faster and more efficient storage. NVMe, a standardized protocol for PCIe-based storage, is giving users the huge leap in bandwidth required for demanding applications. Samsung, who makes the fastest NVMe SSDs on the market, will cover the benefits enabled by such technology, in areas such as fraud prevention and surgical procedures.
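The bandwidth leap the abstract describes comes largely from swapping the SATA link for multiple PCIe lanes. A rough back-of-the-envelope sketch, using assumed nominal line rates and encoding overheads (illustrative figures, not vendor-measured numbers):

```python
# Illustrative bandwidth comparison: SATA 3.0 vs. NVMe over PCIe 3.0 x4.
# Line rates and encoding efficiencies are assumed nominal values.
SATA3_GBPS = 6.0          # SATA 3.0 line rate, Gb/s
SATA_EFFICIENCY = 0.8     # 8b/10b encoding overhead
PCIE3_LANE_GBPS = 8.0     # PCIe 3.0 per-lane line rate, Gb/s
PCIE_EFFICIENCY = 0.985   # 128b/130b encoding overhead

def usable_gb_per_s(line_rate_gbps, efficiency, lanes=1):
    """Usable payload bandwidth in GB/s after encoding overhead."""
    return line_rate_gbps * efficiency * lanes / 8.0

sata = usable_gb_per_s(SATA3_GBPS, SATA_EFFICIENCY)
nvme_x4 = usable_gb_per_s(PCIE3_LANE_GBPS, PCIE_EFFICIENCY, lanes=4)
print(f"SATA 3.0:     {sata:.2f} GB/s")
print(f"NVMe PCIe x4: {nvme_x4:.2f} GB/s (~{nvme_x4 / sata:.1f}x)")
```

Even before protocol-level gains such as deeper queues, the raw link alone gives NVMe over a x4 slot roughly a sixfold bandwidth advantage under these assumptions.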
The technology behind flash drives – NAND memory – will be spotlighted in this presentation. Memory manufacturers have improved NAND’s value by migrating from single-level-cell to multi-level-cell designs, but the most significant evolution will be a marriage of triple-level-cell and V-NAND flash manufacturing technologies. Samsung will also provide an overview of the prospects for TLC V-NAND with mobile device manufacturers, while examining the strong potential for a much wider TLC V-NAND market in data centers.
Find out how the unique architecture of the innovative, next-generation Cisco MDS 9396S switch enables you to design a high-performing, scalable Fibre Channel SAN. Learn best practices for supporting flash storage applications as well as a wide variety of deployment scenarios. See how the new Cisco MDS 9396S can meet your SAN challenges today and tomorrow as we reveal:
• Architectural innovations inside the Cisco MDS 9396S
• Enterprise-class features and scale options
• Design and deployment scenarios
• Customer-tested best practices for implementation
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
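The headline figures above are internally consistent; a quick sketch checks the arithmetic, assuming the common configuration of 15 active SMT8 cores per chip (the core count per chip is an assumption, not stated in the abstract):

```python
# Sanity check of the POWER10 thread count quoted above.
# Assumes 15 active cores per chip, each running 8-way SMT (SMT8).
cores_per_chip = 15
threads_per_core = 8          # SMT8
max_chips = 16                # maximum chips per system

threads_per_chip = cores_per_chip * threads_per_core
system_threads = threads_per_chip * max_chips
print(system_threads)  # 1920 simultaneous threads of execution
```

15 cores x 8 threads x 16 chips reproduces the 1920 simultaneous threads claimed for a fully configured system.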
Eliminating SAN Congestion Just Got Much Easier - webinar, Nov 2015 - Tony Antony
Today's storage area networks (SANs) face tremendous pressure from the phenomenal growth of digital information and the need to access it quickly and efficiently. Worldwide data is projected to grow tenfold by 2020. It's little wonder, then, that storage administrators rank slow drain and related SAN congestion issues as their number-one concern. If not addressed in a timely fashion, these can have a domino effect, even degrading the performance of totally unrelated applications.
Find out how the Cisco Data Center Network Manager tool provides centralized monitoring and reporting of slow drain conditions across your entire fabric, enabling you to easily pinpoint the exact sources of congestion. Discover how these solutions maximize the performance of your existing SAN as we reveal:
•Common causes of slow drain
•Best practices for avoiding congestion
•Tools for Cisco Nexus and MDS switches that speed detection and recovery
•Recent innovations that fully automate resolution
Mechanical Simulations for Electronic Products - Ansys
As electronic devices become smaller and more ubiquitous, the printed circuit boards and components that drive them face increasing power densities and evermore complexity. To ensure product reliability and performance, accurate and detailed analysis methodologies are necessary.
IBM SAN Volume Controller Performance Analysis - brettallison
Introduction
Storage Problems and Limitations with Native Storage
SVC Overview
SVC Physical and Logical Overview
Performance and Scalability Implications
Types of Problems
Performance Analysis Techniques
Performance Analysis Tools for SVC
Performance Analysis Metrics for SVC
Online Banking Example
Why Hitachi Virtual Storage Platform does so well in a mainframe environment… - Hitachi Vantara
Hitachi VSP is a new paradigm in enterprise array performance. In this session we will discuss how the architecture of VSP enhances its box-wide performance. The results of performance testing with synthetic host I/O generators and the PAI/O driver will also be presented.
A Dataflow Processing Chip for Training Deep Neural Networks - inside-BigData.com
In this deck from the Hot Chips conference, Chris Nicol from Wave Computing presents: A Dataflow Processing Chip for Training Deep Neural Networks.
Watch the video: https://wp.me/p3RLHQ-k6W
Learn more: https://wavecomp.ai/
and
http://www.hotchips.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
White Paper: CEVA-XM4 Intelligent Vision Processor - CEVA, Inc.
A change has come to consumer electronics. Once confined to the desktop, processing-intensive algorithms for image enhancement, computational photography and computer vision have moved en masse to camera-ready smartphones, tablets, wearables and other embedded mobile devices. This movement has already hit the limits of today's underlying hardware's ability to keep pace in terms of performance, space and energy efficiency, yet we are only seeing the tip of the iceberg.
A clear and tangible indicator of recent advances in mobile imaging and vision that are pushing these limits of design is the dual-camera smartphone, with its accompanying sensor and signal-chain processing for 3D vision and scanning, along with many other image-enhancement features. While consumers may believe they are coming closer to the ideal camera-plus-phone converged solution, designers and equipment manufacturers understand that compromises have been made as the increasingly advanced algorithms are simply relying upon the pre-existing hardware.
This hardware, typically comprising a CPU and a GPU, was not designed to support such processing-intensive imaging algorithms, so developers are forced to compromise on features and image quality to match the processing capabilities of the hardware. Even so, the total application continues to consume too much power and drastically shortens battery life, more than even an undemanding user will tolerate.
As newer and more-complex algorithms develop to meet both consumer demand for increased functionality as well as manufacturers’ need for differentiation, an alternate approach to the underlying vision processing architecture is required if the delicate balance between functionality and acceptable battery life is to be maintained. This alternate approach relies on the adoption of dedicated, on-chip vision processors that are able to cope with both current and future complex imaging and vision algorithms. CEVA-XM4 is exactly that, a fully programmable processor that was designed from the ground up to accelerate the most demanding image-processing and computer-vision algorithms.
This document supplies an overview of the CEVA-XM4 processor’s capabilities, architecture, features, target applications, use cases and code examples.
Virtualisation For Network Testing & Staff Training - APNIC
Virtualisation For Network Testing & Staff Training, by Philip Smith.
A presentation given at APRICOT 2016’s Network Infrastructure session on 24 February 2016.
Data Center Network Reference PoV, 2016 v2 - Jeff Green
Control and Security - Constant risks to the network, and ultimately to XYZ Account, are unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how XYZ Account data is accessed. Our centralized management system will leverage data in the network to understand application use.
Maintaining high quality user experience (Single-Pane-of-Glass Control from BYOD to the Data Center).
Minimizing risk from consumer products and mobile devices (automation of routine tasks).
Identifying root-cause of service outage (Make decisions about your network based on analytics, not assumptions.).
Business alignment - Over time, the proliferation of devices has created unnecessary complexity. Control Center delivers centralized visibility and granular control of network resources. One click can equal a thousand actions when you manage your network. Control Center can even manage beyond Extreme Networks switching, routing, and wireless hardware to deliver standards-based control of other vendors’ network equipment.
Transform complex network data into actionable information (Gain visibility from data in your network).
Centralize and simplify the definition, management, and enforcement of policies (Detect anomalies and get alerts based on real network behavior).
Manage third-party devices to provide a complete picture of the entire infrastructure in a heterogeneous network environment (Balance CapEx and OpEx and decrease complexity).
Three fundamental building blocks of a data center network automation solution:
Orchestration (OpenStack, vRealize-NSX, DCM)
Overlay (VXLAN, NVGRE..)
Underlay (traditional L2/L3 protocols, OSPF, MLAG etc…)
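The overlay layer above works by wrapping each tenant's L2 frame in an outer header that the underlay routes normally. As a minimal sketch of the VXLAN case (header layout per RFC 7348; the function name is ours, for illustration):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348) for a given VNI.

    Flags byte 0x08 marks the VNI field as valid; all reserved
    fields are zero. The 24-bit VNI sits in bits 8-31 of the
    second 32-bit word, so it is shifted left by 8.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!B3xI", 0x08, vni << 8)

# The overlay frame is the original L2 frame prefixed with this
# header, then carried in UDP (destination port 4789) across the
# routed L3 underlay (OSPF, MLAG, etc.).
hdr = vxlan_header(5001)
print(hdr.hex())
```

This is why the underlay can stay simple: it only ever sees routed UDP packets, while tenant segmentation lives in the 24-bit VNI, giving roughly 16 million overlay segments versus 4094 traditional VLANs.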
CHALLENGES AND PAIN POINTS IN ENTERPRISE IT
• Meeting the growing expectations of users in a mobile-first world
• Flexibility vs. Security: more devices and applications on the network challenge security and control
• Cost vs. Capability
• Reliability vs. Growth
• Managing the network is too complex and time consuming
• Enterprise mobility/constant connectivity: the ability to access company servers, databases, and the network in all facilities of a company is crucial to daily business
• State-of-the-art security is required to prevent unauthorized access to personal information
• The ability to control the content accessible to individuals is required, with network functions and limitations varying based on the role of the individual
The network plays a critical role in establishing a consistent, high-quality user experience. The network must provide more than basic connectivity; it must transform into a strategic business asset. As with a television, radio or telephone, we expect technology to just work and deliver a great experience. For instance, if the network or technology does not allow us to share our experience socially, we are left with a negative perception, and the expectation for what constitutes a great experience is different for each person. It is up to IT to keep up and deliver excellent experiences. Our solution is truly integrated for both wired and wireless, deployable in private and public cloud environments, and aligned with the mega trends shaping the market: Cloud, Mobile and, of course, Social.
Until recently, that kind of good design had primarily been found in consumer-facing apps. Great network design relies on getting users hooked on the product. It is no longer good enough for enterprise apps simply to work; they need a great user experience as well. Lacking that, there is probably an alternative tool that is easier to use and gets the same business results. Application Analytics intelligence provides IT with visibility into, and control of, applications and websites (including related sub-sites) resident in all parts of the network, from the wired or wireless edge all the way through the core and data center, as well as application traffic from the enterprise to the private cloud, public cloud or any service on the internet.
Network and Application Response Time Management – Distinguishes network performance from application performance, so IT can tell whether slowness originates in the network or in the application itself.
Proactive Security and Compliance – Provides IT with the ability to monitor and restrict application usage and website access based on specific parameters. For example, a known web browser version that poses security risks could be restricted.
Contextual Information with Depth and Granularity – Associate additional contextual information such as who, what, where, when and how with any application.
ExtremeAnalytics provides us with a global view of the overall health of the network from a single pane of glass. It’s the first stop in the troubleshooting process when network issues are discovered and it enables us to pinpoint issues and drill down to a specific closet or client for fast resolution. It is the industry’s very first and only – patent pending – solution to transform the Network into a Strategic Business Asset – by enabling the mining of network-based business events and strategic information that help business leaders make faster and more effective decisions. It does this all from a centralized command control center that combines Network Management with Business Analytics, and at unprecedented scale (100M sessions) and scope.
Extreme is rethinking the data plane, the control plane, and the management plane. Extreme is a better mousetrap that delivers new features, advanced function, and wire-speed performance. Our switches deliver deterministic performance independent of load or of which features are enabled. All Extreme switches are based on XOS, the industry's first and only truly modular operating system. A modular OS provides higher availability of critical network resources: by isolating each critical process in its own protected memory space, a single failed process cannot take down the entire switch, and application modules can be loaded and unloaded without rebooting the switch. This is the level of functionality that users have come to expect from other technology. Reaching the twenty-million-port milestone is a significant achievement that demonstrates the effectiveness of our network solutions, with rich features, innovative software and integrated support for secure convergence. VoIP/Unified Communications/Infrastructure/SIP Trunking (SBC) – Because of strong ROI, investment in this segment remains on a very strong growth trajectory.
Enterprises depend on modular switching solutions for all aspects of the enterprise network: in the enterprise core and data center, in the distribution layer that lies between the core and wiring closet, and in the wiring closet itself. Modular solutions provide port diversity and density that fixed solutions simply cannot match. There are also high-capacity modular solutions that only the largest enterprises and institutions use for high-density and high-speed deployments. Modular solutions are generally much more expensive than their fixed cousins, especially in situations where density or flexibility is not required.

Fixed-configuration stackable switches are typically cost-optimized, but they offer no real port diversity on an individual switch. Port diversity means the availability of different port types, such as fiber versus copper ports. Stackable switches have gotten better at offering port diversity, but they still cannot match their modular cousins. Many of these products now offer high-end features such as 802.3af PoE, QoS, and multi-layer intelligence that in the past were found only on modular switches, thanks to the proliferation of third-party merchant silicon in the fixed-configuration market. Generally, a stack of fixed-configuration switches can be managed as a single virtual entity. Fixed-configuration switches generally cannot be used to provision an entire large enterprise; instead they are mostly used at the edge or departmental level as a low-cost alternative to modular products.
Assumptions:
Ethernet is Open
Active/Active in the Fabric
Therefore:
Open at the Edge
Active/Active at the edge
Next Generation Ethernet
Next Generation Ethernet is a platform that should deliver all of the previous functional requirements under one hood. I have grouped the generations this way because Cisco has a different purpose-built product line for each of the four waves of technology. Counter to that, Extreme offers a single platform on which a customer can build his network. Extreme does not require different switches to address different convergence requirements, which would be cost-prohibitive and complicated for most customers. Simply put, to disrupt the Cisco market, Extreme must deliver more with less.
The IEEE is pushing Ethernet to unimaginable speeds, with the 40/100 Gigabit Ethernet standard expected to be ratified in 2010 and Terabit Ethernet on the drawing board for 2015. Here's a timeline showing key milestones in the growth of Ethernet. Standards-compliant products are expected to ship in the second half of next year, not long after the expected June 2010 ratification of the 802.3ba standard.
Complexity - Complex systems are a special type of chaotic system. They display a very interesting type of emergent behavior called, logically enough, complex adaptive behavior. But we are getting ahead of ourselves; we need to back up a bit and describe a fundamental behavior that occurs at the granular level and leads to complex adaptive behavior: self-organization. Complex adaptive behavior is the name given to this forming, falling apart, and reforming behavior. Specifically, it is defined as many agents working in parallel to accomplish a goal. It is conflict-ridden, very fluid, and very positive. The hallmark of emergent, complex adaptive behavior is that it brings about a change from the starting point that is different not just in degree but in kind. In biology a good example is the emergence of consciousness; another is the Manhattan Project and the development of the atomic bomb. Below is a checklist that helps facilitate a qualitative assessment of the level of complexity. It is written in everyday language to facilitate use by a broad range of stakeholders and team members; in other words, it stays away from jargon, which can be the kiss of death when requesting information from people.
The Checklist
Not sure how the project will get done;
Many stakeholders, teams, and sub-teams;
Too many vendors;
New vendors;
New client;
Team members are geographically dispersed;
End-users are geographically dispersed;
Many organizations;
Many cultures (professional, organizational, sociological);
Many languages (professional, organizational, sociological);
High risk;
Lack of quality, best characterized by a lack of acceptance criteria;
Lack of clear requirements;
Too many tasks;
Arbitrary budget or end date;
Inadequate resources;
Leading-edge technology;
New, unproven application of existing technology;
High degree of interconnectedness (professional, technological, political, sociological).
We see the need for IT to use best-of-breed applications in an open-standards network, but organizations often lack the unified management or staff to efficiently maintain a complex network.
I am fortunate enough to speak with CIOs and IT Directors like yourself, and they tell me that while they may be happy with their current vendor, they face challenges such as:
· Transforming their network architecture into a strategic business asset
· Difficulty gaining strategically valuable insight into network usage
· Implementing and controlling BYOD
· Increased volume of devices on the network
· Poor correlation of data and management solutions between third party technologies
· Frustration due to lack of network visibility and application analytics
· Controlling guest and rogue devices on-boarding the network
· Bad user experience causing a poor perception of IT competence
As I mentioned, I’m with Extreme Networks. We are a global leader in high performance wired and wireless networking hardware and software solutions, presently working with ABC University on their stadium wi-fi.
Extreme Manufacturing Solutions
Operations Performance Analytics (OPA)
Business alignment - Over time, the proliferation of devices has created unnecessary complexity. Control Center delivers centralized visibility and granular control of network resources. One click can equal a thousand actions when you manage your network. Control Center can even manage beyond Extreme Networks switching, routing, and wireless hardware to deliver standards-based control of other vendors’ network equipment.
Pairing assets with intelligent sensors to gather, analyze, and communicate data is driving enormous new efficiencies in manufacturing and business operations. Just as in the consumer markets, where the first generation of personal fitness monitors and smart home devices leverage data sets to influence and shape events in the physical world, so too are operational efficiencies borne by the Internet of Things (IoT) generating high returns in manufacturing.
According to McKinsey, “business-to-business applications will account for nearly 70 percent of the value … from IoT in the next ten years.” The firm estimates that of the nearly $11 trillion a year in economic value generated globally, ‘nearly $5 trillion [will] be generated almost exclusively in B2B settings, including factories… such as those in manufacturing, agriculture, and even healthcare environments; work sites across mining, oil and gas, and construction; and, finally, offices.’
More informed decision-making and optimized operations across the extended supply chain are only some of the benefits. Wireless sensors, whether measuring hydrogen levels in the soil or temperature variables on the production line, are eliminating blind spots in traditional manufacturing processes and delivering a constant flow of data that optimize workflows. And while manufacturers have leveraged data in discrete applications for Manufacturing Execution Systems (MES) and Enterprise Manufacturing Intelligence (EMI) systems for years, the growth of sensors, real-time dashboards, cloud-applications, and mobile technologies are delivering new degrees of actionable intelligence to the precise location at the precise time it can be optimally leveraged.
Yet this goal of seamlessly moving data across plant and business functions, and applying analytical tools to enable new insights, requires a new degree of visibility into the performance of manufacturing applications, networks, and systems. Traditionally monitoring tools used in factory environments are often isolated, closed, proprietary, and offer only a keyhole view of IT system performance.
Places in the network (featuring policy) - Jeff Green
Networks of the Future will be about a great user experience, devices and things…
In an industry that's already well defined, Extreme Networks' recent announcement of the Automated Campus is a significant advance in networking. For the first time, all the essential technologies, products, procedures, and support are gathered together and integrated. All too often, the piecemeal growth strategy typically applied in network evolutions results in too many tools, procedures, and techniques; this patchwork-quilt approach precludes fast responsiveness and optimal operations-staff productivity, and sacrifices the accuracy and efficiency required to keep end-users productive.
The most important opportunity for governments to improve efficiency today is in boosting the productivity of both end-users and network operators. The automated campus must address the productivity of network planners and of network operations managers and staff. The often-significant number of elements required in an installation can demand significant staff time and can, consequently, have an adverse impact on operating expenses (OpEx). While it is possible to build traditional networks that get the job done when running correctly and optimally, they often carry such high operating expenses that cost becomes the overriding factor controlling the evolution of the campus network. The Automated Campus will allow XYZ Account to address all of these issues and concerns. A key goal must be for XYZ Account to reduce the number of "moving parts" required to build and operate any campus and to introduce a level of simplicity and automation that will address your future.
Extreme’s strategy for campus automation begins with rethinking the way networks are designed, deployed, and managed. Extreme’s fabric-based networks enable faster configuration and troubleshooting; as a result, there is less opportunity for misconfiguration. Automation solutions designed to enhance security often force network managers to accept complexity and degraded resilience in order to secure the network to local policies. Should a breach occur, containment to that segment protects the more sensitive parts of the network, resulting in a true dead end for the hacker. With Extreme’s Automated Campus, services can easily be defined and provisioned on the fly without disruption, and network operators specify which services are allowed or prohibited across the network.
Where is the 6 GHz beef?
The low number of channels available today forces users to share bandwidth and creates congestion: each client station must wait to transmit (or receive) while devices, Access Points and Stations, share the same channel. To describe the impact of 6 GHz Wi-Fi, let us borrow the catchphrase "Where's the beef?". As a visual aid, picture a hamburger bun with the 2.4 GHz and 5 GHz spectrum in the middle. The picture below may exaggerate a twenty-year spectrum limitation, but the visual expresses the potential of the 6 GHz range to deliver the spectrum beef.
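A rough calculation makes the "beef" concrete. The sketch below counts how many non-overlapping channels of each width fit in each band, using US spectrum allocations as a stated assumption and ignoring guard bands and DFS exclusions, so the counts are slightly optimistic:

```python
# Back-of-envelope channel math per Wi-Fi band (US allocations assumed;
# guard bands and regulatory exclusions ignored, so counts are approximate).
BANDS_MHZ = {
    "2.4 GHz": 83.5,    # in practice only 3 non-overlapping 20 MHz channels
    "5 GHz": 500.0,     # approximate usable U-NII spectrum
    "6 GHz": 1200.0,    # U-NII-5 through U-NII-8
}

def channels(band_mhz: float, width_mhz: int) -> int:
    """Non-overlapping channels of a given width that fit in a band."""
    return int(band_mhz // width_mhz)

for band, mhz in BANDS_MHZ.items():
    print(band, {w: channels(mhz, w) for w in (20, 40, 80, 160)})

# The headline: 6 GHz alone fits 7 full 160 MHz channels, while 2.4 GHz
# cannot fit even one.
```

The point of the exercise: 6 GHz roughly doubles the total Wi-Fi spectrum in one step, and it is the wide 80/160 MHz channels that matter for modern high-throughput clients.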
The next generation ethernet gangster (part 3) - Jeff Green
The original competitors in the Ethernet market remind me of gang members, each with unique advantages for winning over their turf. Over the past few years, Extreme assembled seven gangsters from a variety of backgrounds, each with strengths to perform a mission and deliver a new level of value to our customers. Extreme has adopted a gangster strategy, going against the grain of the market leader, and so far it has been a winning one. When market leaders proposed proprietary solutions, Extreme went open Linux with "superspec." When they pushed DNA and its additional complexity, Extreme responded by rethinking the way networks are designed, deployed, and managed without vendor lock-in. Finally, when they tied services and licensing together with Cisco One, Extreme responded with added flexibility in licensing, services, and Extreme-as-a-Service.
The next generation ethernet gangster (part 3) - Jeff Green
Today Extreme can be more aggressive, with confidence in knowing we can compete with anyone in the market. As the #1 market alternative, there are three critical reasons for including Extreme in your technology considerations: our end-to-end portfolio, our fabric, and our customer service. We are moving Extreme from a reactive, tactical vendor to a pro-active, strategic partner. When Extreme gets a seat at the table, and we bring our unique “sizzle,” we are the customer’s choice. Our customer retention rate is unmatched in the industry, according to Gartner.
Jeff Green
Extreme Networks
jgreen@extremenetworks.com
Mobile (772) 925-2345
https://prezi.com/view/BFLC71PVkoYVKBOffPAv/
The ubiquity of heavy-tailed distributions on the Internet implies an interesting feature of Internet traffic: most (e.g., 80%) of the traffic is actually carried by only a small number of connections (elephants), while the remaining, much larger number of connections are very small in size or lifetime (mice). In a fair network environment, short connections expect relatively faster service than long connections. For these reasons, short TCP flows are generally more conservative than long flows and thus tend to get less than their fair share when they compete for bottleneck bandwidth. In this paper, we propose to give preferential treatment to short flows with help from an Active Queue Management (AQM) policy inside the network. We also rely on the proposed Differentiated Services (Diffserv) architecture [3] to classify flows into short and long at the edge of the network. More specifically, we maintain the length of each active flow (in packets) at the edge routers and use it to classify incoming packets.
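The edge classification described above can be sketched in a few lines: keep a per-flow packet counter and mark a packet as belonging to a "long" flow once its counter crosses a threshold. The threshold value and the 5-tuple flow key below are illustrative assumptions, not the paper's parameters:

```python
# Sketch of mice/elephant classification at an edge router: per-flow packet
# counts decide the DiffServ class of each incoming packet.
from collections import defaultdict

THRESHOLD = 20  # packets; an illustrative cutoff between "short" and "long"

flow_len = defaultdict(int)  # packets seen per active flow

def classify(flow_key) -> str:
    """Count this packet against its flow and return 'short' or 'long'."""
    flow_len[flow_key] += 1
    return "short" if flow_len[flow_key] <= THRESHOLD else "long"

# The first 20 packets of a flow are treated as short; later ones as long.
flow = ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
marks = [classify(flow) for _ in range(25)]
print(marks.count("short"), marks.count("long"))  # 20 5
```

A real implementation would also age out idle flows so the state table stays bounded; that bookkeeping is omitted here.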
Fortinet Firewall Integration - User to IP Mapping and Distributed Threat Response
· Accurate user-ID-to-IP mapping eliminates potential attacks and provides reliable, out-of-the-box user information to firewalls
· Improves security by blocking or limiting user access at the point of entry without impacting other users
· More accurate network mapping for dynamic policy enforcement and reporting
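The first bullet boils down to a binding table the firewall can trust. The sketch below shows the idea, assuming a NAC engine that signals authentication and disconnect events; the function names are hypothetical, not Fortinet or Extreme APIs (real deployments use interfaces such as Fortinet's FSSO):

```python
# Minimal sketch of a user-ID-to-IP binding table kept in sync with NAC
# session events, so the firewall always sees current user identities.
user_by_ip: dict[str, str] = {}

def on_authenticate(user: str, ip: str) -> None:
    """NAC learned a user at an IP; record the binding for the firewall."""
    user_by_ip[ip] = user

def on_disconnect(ip: str) -> None:
    """Session ended; revoke the binding so a stale IP cannot be abused."""
    user_by_ip.pop(ip, None)

on_authenticate("alice", "10.1.2.3")
print(user_by_ip.get("10.1.2.3"))  # alice
on_disconnect("10.1.2.3")
print(user_by_ip.get("10.1.2.3"))  # None
```

Revoking the binding on disconnect is the part that matters for security: it is what lets the firewall block or limit a user at the point of entry without touching anyone else's sessions.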
In an industry that's already well defined, Extreme Networks' recent announcement of the Automated Branch is a significant advance in networking. For the first time, all the essential technologies, products, procedures, and support are gathered together and integrated. All too often, the piecemeal growth strategy historically applied in organizational network evolution results in too many tools, procedures, and techniques at work, precluding fast responsiveness, optimal operations-staff productivity, and the accuracy and efficiency required to keep end-users productive.
This reference design helps organizations design and configure a small to midsize data center (between 2 and 60 server racks) at headquarters or a server room at a remote site. You will learn how to configure the data center core, aggregation, and access switches for connectivity to the servers and the campus network.
The Avaya Fabric Connect data center design supports high-speed 10 Gbps Ethernet-connected servers. The design can easily scale server bandwidth with link aggregation, and servers can be connected to one or more switches to provide the level of availability required for the services delivered by the host. The design also supports legacy and low-traffic servers that need 1 Gbps Ethernet connectivity.
The reference design presented in this guide is based on common network requirements and provides a tested starting point for network engineers to design and deploy an Avaya data center network. This guide does not document every possible option and feature used to design and deploy networks; instead it presents the tested and recommended options that will meet the majority of customer needs.
This design uses Avaya Fabric Connect in order to provide benefits over traditional data center design.
IT departments face several challenges in today’s data center:
· Data center traffic flow is not the same as campus traffic flow. Over 80% of the traffic is east-west, server-to-server, vs. north-south, client-to-server, like in a campus.
· Server virtualization allows a virtual machine or workload to be located anywhere in the physical data center. Data center networks can make it difficult to extend virtual local area networks (VLANs) and subnets anywhere in the data center.
· Server virtualization means that new services can be brought online in minutes or migrated in real time. Reconfiguring the network to support this is difficult because it can interrupt other services.
· Server virtualization means that the load on a physical box is much higher. Physical servers regularly host 10-50 workloads, driving network utilization well past 1 Gbps.
Audio video ethernet (avb cobra net dante) - Jeff Green
AVB fits low-cost, small-form-factor products such as this microphone. The overall trend is that music no longer lives on shelves or in CD racks, but on hard drives in home computers, and increasingly in the cloud. This brings its own unique problems, not in the encoding system used or the storage technology, but in distributing the audio from the storage media to the speakers. AVB features are all enabled by global and port-level configuration. Connecting these elements is the AVB-enabled switch (in the graphic above, the Extreme Networks® Summit® X440), whose role is to provide support for the control protocols. AVB is Ethernet's next stage of convergence, delivering pitch-perfect audio and crystal-clear video seamlessly over the network.
IP/Ethernet is bringing simplicity and features to audio and video as it has brought to services like VoIP, Storage and many more
High quality, perfectly synchronized A/V until now has been difficult to maintain
Standards work by the IEEE and the AVB standard changes everything, creating interoperability and mass-market equipment pricing
Benefits of AVB - Delivers predictable latency and precise synchronization, maximizing the functionality of AV – time synchronization and quality or service
Reduced complexity and Ease of use through interoperability between devices
Streamlines complex network setup and management; the infrastructure negotiates and manages the network for optimal prioritized media transport
AV traffic can co-exist with non-AV traffic on same Ethernet infrastructure
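The "predictable latency" benefit above has a concrete number behind it: the IEEE 802.1BA AVB profile budgets Stream Reservation Class A traffic at a worst-case 2 ms across a 7-hop network. The quick calculation below turns that into a per-hop figure:

```python
# AVB SR Class A latency budget (per IEEE 802.1BA): 2 ms end-to-end
# across up to 7 hops. Each switch must stay within its per-hop share.
CLASS_A_LATENCY_MS = 2.0
MAX_HOPS = 7

per_hop_us = CLASS_A_LATENCY_MS * 1000 / MAX_HOPS
print(f"{per_hop_us:.0f} µs per hop")  # 286 µs per hop
```

That sub-300 µs per-hop bound, enforced by credit-based shaping rather than best-effort queuing, is what lets AV traffic share the wire with ordinary data without drifting out of sync.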
Role-based control at XYZ Account - XYZ Account can identify devices and apply policies based on device type all the way down to the port or the AP. Policies can change dynamically based on the device a user is connecting with and where that user is located. Extreme Networks provides infrastructure that delivers customizable prioritization and scalable capacity via configurable, built-in intelligence, ensuring a comprehensive, superior-quality experience. Furthermore, when deployed with Extreme Wireless, XYZ Account can configure the network to ensure applications receive the bandwidth they require, while still limiting or preventing high-speed streaming of music, video, or even games.
XOS Performance - Separation between control and forwarding planes - The "SDN Classic" model, as illustrated by this graphic from the Open Networking Foundation, offers many potential benefits:
In the forwarding plane, all switching and feature implementation, such as deep packet inspection, QoS scheduling, MAC learning, and filtering, is performed in dedicated ASIC hardware
Wire-speed performance across the entire product line (backplane resources, packet/frame forwarding rate, bits-per-second throughput). Local switching on all line cards at no additional cost, increasing throughput and reducing latency. Dedicated stacking interfaces, and stacking over fiber.
Low latency with Exceptional QoS
We build networks to deliver on today’s Experience Economy. Extreme Networks combines high performance wired and wireless hardware with a software-defined architecture that makes it simple, fast and smart for the user to connect with their device of choice. We provide a comprehensive portfolio, including Campus Mobility and Data Center solutions, which allow our customers to deliver a positive and consistent experience to each and every user in their environment. As SDN excitement grew, the term software-defined was adopted by marketers and applied liberally to all kinds of products and technologies: software-defined storage, software-defined security, software-defined data center.
What technologies allow me to do this today?
Key Features: Loop free load balancing, density, L2 overlays
VXLAN fabric in EXOS / EOS
MLAG: L2 Leaf/Spine with two spine members
VPLS: L2 Leaf/Spine for HPC deployments
SPB-V: S/K-Series for small enterprise data center
Evolution ExtremeFabric: fully automated
Why VXLAN? It’s a really easy L2-over-L3 transport
MLAG technology Leaf/Spine Fabric
MLAG is a special case of Leaf/Spine with only two spine members and everything on L2 (We kill the spanning tree and maintain state between the spines) – We’ve been leading in MLAG for a while
VPLS technology Leaf/Spine Fabric
We have successfully built VPLS mesh Leaf/Spine networks for HPC deployments
Key Features: Loop free load balancing, density, L2 overlays
We need more scale!
21.x / 22.x bring some interesting new features that fix this
NEW with 21.1: The Scalable Layer 2 Fabric with VxLAN Technology
VXLAN – Overlay on routing for efficient load balancing and reachability
OSPF extensions massively simplify deployment
The Layer 2 traffic tunnels over any Layer 3 network
Can be used in any topology, but highest performance is Leaf/Spine
Removes the limitation on transit overlay in the spine
Easy setup, small configuration
X670-G2 and X770, S and K, and will be available on X870 at launch
Scale to 2592 10G ports (X670-G2-72, 1:1), 512 40G (X770, 1:1)
Available on EOS and EXOS NOW
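The 2592-port figure quoted above can be sanity-checked with simple arithmetic, assuming each X670-G2-72 leaf splits its 72 x 10G ports evenly between access and spine uplinks at 1:1:

```python
# Back-of-envelope leaf/spine scale: access ports available in a fabric
# given the leaf port count, number of leaves, and oversubscription ratio.
def fabric_access_ports(leaf_ports: int, n_leaves: int,
                        oversub: float = 1.0) -> int:
    """Total access ports; oversub is the access:uplink bandwidth ratio."""
    uplinks = leaf_ports / (1 + oversub)       # ports reserved for the spine
    return int(n_leaves * (leaf_ports - uplinks))

# 72 leaves of X670-G2-72 at 1:1 -> the 2592 x 10G figure quoted above.
print(fabric_access_ports(72, 72))  # 2592
```

The spine count and model (how those 36 uplinks per leaf are distributed across X770s) is a separate sizing exercise; the formula only shows where the headline port number comes from.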
NEW with EXOS 22.x and EOS 8.81: Future Fabric Technology
The Secret Sauce is the Control Plane, not the Encapsulation
Host Route Distribution decoupled from the Underlay protocol
Use MultiProtocol-BGP (MP-BGP) on the Leaf nodes to distribute internal Host/Subnet Routes and external reachability information
Route-Reflectors deployed for scaling purposes
VXLAN terminates its tunnels on VTEPs (Virtual Tunnel End Point).
Each VTEP has two interfaces: one provides a bridging function for local hosts, and the other has an IP identity in the core network for VXLAN encapsulation/decapsulation.
VXLAN Encapsulation and De-encapsulation occur on T2
Bridging and Gateway are independent of the port type (1/10/40G ports)
Encapsulation happens on the egress port
Decapsulation happens on the ingress port
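The encapsulation itself is small. A sketch of the 8-byte VXLAN header a VTEP prepends before handing the frame to UDP/IP, per RFC 7348; the helper names here are illustrative, not an Extreme API:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header: flags byte with the I bit set,
    24 reserved bits, the 24-bit VNI, and a final reserved byte."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    header = struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decap(packet: bytes):
    """Strip the header on the remote VTEP; recover (vni, inner frame)."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", packet[:8])
    assert flags & 0x08, "I flag must be set for a valid VNI"
    return int.from_bytes(vni_bytes, "big"), packet[8:]
```

The 24-bit VNI is what allows ~16 million isolated segments versus 4094 VLANs.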
Service Oriented Architecture
2 or 3 layer network to Leaf & Spine
High density and bandwidth required
Layer 3 ECMP
No oversubscription
Low and uniform delay characteristic
Wire & configure once network
Uniform network configuration
Workload Mobility
Workload Placement
Segmentation
Scale
Automation & Programmability
L2 + L3 Connectivity
Physical + Virtual
Open
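The Layer 3 ECMP point above can be sketched: a hash of the flow's 5-tuple picks one equal-cost uplink, so one flow stays in order on one path while different flows spread across the fabric. Illustrative Python, not switch firmware (real switches hash in silicon):

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, uplinks):
    """Pick a spine uplink by hashing the flow's 5-tuple: deterministic
    per flow (packets stay in order), spread across flows."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return uplinks[digest % len(uplinks)]

spines = ["spine1", "spine2", "spine3", "spine4"]
flow = ("10.0.0.1", "10.0.1.9", "tcp", 49152, 443)
# The same flow always lands on the same uplink:
assert ecmp_next_hop(*flow, spines) == ecmp_next_hop(*flow, spines)
```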
Where Does Networking Fit In? To gain the full benefits of cloud computing and virtualization and achieve a business agile IT infrastructure, organizations need a reliable, high-performance data center networking infrastructure with built-in investment protection. Several technology inflection points are coming together that are fundamentally changing the way networks are architected, deployed and operated both in the public cloud as well as the private cloud. From performance, to scale, to virtualization support and automation to simplified orchestration, the requirements are rapidly changing and driving new approaches to building data center networks.
With Extreme Networks, IT can manage more with less. Automated intelligence and analytics for compliance, forensics, and traffic patterns translate into reduced help desk calls. Businesses can predict costs and return on investment, and increase employee productivity by securely onboarding BYOD, increasing both customer and employee satisfaction. A constant risk to the network, and ultimately the business, are unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how sensitive data is accessed.
What is SDN? What software-defined networking really means has evolved dramatically and now includes automation and virtualization. Hardware is still a critical component in data center networking equipment, but the influence of switch software shouldn’t be overlooked. When everyone began to get excited about SDN a few years ago, we thought of it as only one thing: the separation of network control from network data packet handling. Traditional networks had already started down this path, with the addition of controller cards to manage line cards in scalable chassis-based switches, and with various data center fabric technologies. SDN took the idea to its logical end, removing the need for the controller and the packet handlers to be on the same backplane or even from the same vendor.
Cost. Reducing costs in the data center and contributing to corporate profitability is an increasingly important trend in today’s economic climate. For example, energy costs for the data center are increasing at 12% a year. Moreover, increased application requirements such as 100% availability necessitate additional hardware and services to manage storage and performance thus raising total cost of ownership.
An alternative to the core/aggregation/access network topology has emerged, known as leaf-spine, or a Distributed Core. In a leaf-spine architecture, a series of leaf switches form the access layer and are fully meshed to a series of spine switches. Think of the spine switches as the core, but instead of one large, chassis-based switching platform, the spine is composed of many high-throughput Layer 3 switches with high port density. The mesh ensures that access-layer switches are no more than one hop away from one another, minimizing latency and the likelihood of bottlenecks between them. When networking vendors speak of an Ethernet fabric, this is generally the sort of topology they have in mind.
Haven’t we spent the last few decades disaggregating datacenter architecture? And if so, what does disaggregation mean now; is it something different? Strictly speaking, to “disaggregate” means to divide a whole into its component parts.
Data Center Aggregation/Core Switch
The proposed solution must provide a high-density chassis-based switch solution that meets the requirements provided below. Your response should describe how your offering would meet these requirements. Vendors must provide clear and concise responses; illustrations can be provided where appropriate. Any additional feature descriptions for your offering can be provided, if applicable.
• Must offer a chassis-based switch solution that provides eight I/O module slots, two management module slots and four fabric module slots. Must support a variety of I/O modules providing support for 1GbE, 10GbE, 40GbE and 100GbE interfaces. Please describe the recommended switching solution and the available I/O modules.
• Switch must offer switching capacity up to 20.48 Tbps. Please describe the performance levels for the recommended switching solution.
• Switch system must support high availability for the hardware preventing single points of failure. Please describe the high availability features.
• It is preferred that the 10 Gigabit Ethernet modules will also be able to accept standard Gigabit SFP transceivers. Please describe the capability of your switch.
• Must support N+1 redundant power supplies
• Must support N+1 redundant fan trays
• Must support a modular operating system that is common across the entire switching profile. Please describe the OS and advantages.
Multi-Rate 1/2.5/5/10 Gigabit Edge PoE++
Multi-Rate Spine Leaf Design (10/25/40/50/100 Gigabit)
Web-scale for the rest of us...
Web-Scale for The Enterprise (Any Scale upgrades).
• SLAs with Agility (Storage Pools and Containers).
• Security, Control & Analytics (Data follows a VM as it moves).
• Predictable Scale (I/O & data locality are critical).
X460-G2 (Advanced L3 1-40G) Multirate Option
PoE
Fiber
DC
Policy
Fit: The Swiss Army Knife of Switches
Half Duplex
½ & ½
3 Models
This is where 10G on existing copper Cat5e and Cat6 extends the life of the installed cable plant. Great for 1:N Convergence.
X620 (10G Copper or Fiber)
Speed Next Gen Edge
Lowered TCO via Limited Lifetime Warranty
XYZ Account Design Goals
• Fractional consumption and predictable scale
(Distributed everything).
• No single point of failure (Always-on systems).
• Extensive automation and rich analytics.
XYZ Account Fundamental Assumptions..
• Unbranded x86 servers: fail-fast systems
• All intelligence and services in software
• Linear, predictable scale-out
CAPEX or OPEX (you choose)?
Reduced Risk (just witness or take action)
Time is the critical Factor with XYZ Account Services...
Infrastructure
Business model
Ownership
Considerations
Management
Location
• 32 x 100Gb
• 64 x 50Gb
• 128 x 25Gb
• 128 x 10Gb
• 32 x 40Gb
96 x 10GbE Ports (via 4x10Gb breakout)
8 x 10/25/40/
50/100G
10G
Next Gen: Spine Leaf
X670 & X770 - Hyper Ethernet
Common Features
• Data Center Bridging (DCB) features
• Low ~600 ns chipset latency in cut-through mode
• Same PSUs and fans as X670s (front-to-back or back-to-front airflow), AC or DC
X670-G2-72x (10GbE Spine Leaf): 72 x 10GbE
X670-48x-4q (10GbE Spine Leaf): 48 x 10GbE & 4 QSFP+
QSFP+
40G DAC
Extreme Feature Packs
Core
Edge
AVB
OpenFlow
Advanced Edge
1588 PTP
MPLS
Direct Attach
Optics License
Extreme Switches include the license they normally need. Like any other software platform you have an upgrade path.
QSFP28
100G DAC
Disaggregated Switch
Purple Metal
XoS as a Platform. Network as a Platform...
Distributed Everything (no proprietary tech).
Always-on Operations (Spine-leaf Resilience).
Extensive Automation (rich analytics).
Purposed for
Broadcom (ASICs)
XYZ Account Business Value
XoS Platform
Config L2/L3
Analytics
Any OS
Any Bare Metal Switch
Policy
Disaggregated Switch
Bare - Grey
Web-Scale
Configuration
consistency ..
What constitutes a Software Defined Data Center (SDDC)? Abstract, pool, and automate across...
XYZ Account Strategic Asset
Initial Configuration Tasks...
• Multi-chassis LAG (LACP)
• Routing configuration (VRRP/HSRP)
• STP (Instances/mapping) VLANs
Recurring configuration...
• VRRP/HSRP (Advertise new subnets)
• Access lists (ACLs)
• VLANs (Adjust VLANs on trunks).
• VLANs STP/MST mapping
• Add VLANs on uplinks
• Add VLANs to server ports
Control Plane
Logical
Data Plane
Physical
compute
network
storage
Logical Router 1
VXLAN 5001
Logical Router 2
VXLAN 5002
Logical Router 3
VXLAN 5003
MAC table
ARP table
VTEP table
Controller Directory
VTEP
DHCP/DNS
Policy
Edge Services
VM VM VM VM VM
Who?
Where?
When?
What device?
How?
Quarantine / Remediate
Allow
Authentication
NAC Server
Summit
Netsite
Advanced
NAC Client
Joe Smith
XYZ Account
Access
Controlled
Subnet
Enforcement
Point
Network
Access
Control
This is where: if X + Y, then Z...
• LLDP-MED
• CDPv2
• ELRP
• ZTP
If user matches a defined attribute value,
then place user into a defined ROLE (with its ACL and QoS).
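The rule above ("if user matches a defined attribute value, then place user into a defined role") can be sketched as a small rule table. The attribute names, roles, and ACL/QoS values below are hypothetical illustrations, not a NAC product schema:

```python
# Hypothetical roles, each carrying the ACL and QoS settings to apply
ROLES = {
    "employee": {"acl": "permit-corp", "qos": "best-effort"},
    "voice":    {"acl": "permit-voice-vlan", "qos": "expedited"},
    "unknown":  {"acl": "quarantine", "qos": "scavenger"},
}

# Hypothetical rules: (attribute, value, role), evaluated in order
RULES = [
    ("device_type", "ip-phone", "voice"),
    ("user_group", "staff", "employee"),
]

def assign_role(attributes: dict) -> str:
    """If the user/device matches a defined attribute value, place it
    into the corresponding role; otherwise quarantine ('unknown')."""
    for attr, value, role in RULES:
        if attributes.get(attr) == value:
            return role
    return "unknown"
```

A port is then configured from `ROLES[assign_role(...)]`, which is why "a port is what it is" only for as long as the attached user or device matches a rule.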
A port is what it is because?
Datacenter
Evolution
1990s: Client-Server
2000s: Virtualization
2010s: Cloud (Public Cloud)
Intelligent
Software
Roadblocks
• Silos
• Complexity
• Scaling
Application
Experience
Full Context
App
App
Analytics
App
Stop the
finger-pointing
Application Network Response.
Flow or Bit-Bucket Collector
3 million Flows
Sensors
X460 IPFix sensor: 4000 flows (2048 ingress, 2048 egress)
Sensor PV-FC-180, S or K Series (CoreFlow2: 1 million flows)
Flow-based Access Points: from the controller, 8K flows per AP (24K flows on a C35)
Flows
Why not do this in the network?
6 million Flows
Business Value
Context BW IP HTTP:// Apps
Platform Automation Control Experience Solution Framework
Is your network faster today than it was 3 years ago? Going forward it should deliver more, faster, different.
DIY Fabric for the DIY Data Center
Three fundamental building blocks for a Data Center Network Automation Solution:
• Orchestration (OpenStack, vRealize, ESX, NSX, MS Azure, ExtremeConnect)
• Overlay (VXLAN, NVGRE, ...)
• Underlay (traditional L2/L3 protocols: OSPF, MLAG, etc.)
How is a traditional Aggregated Technology like a Duck? A duck can swim, walk and fly, but does none of them especially well...
XoS fn(x,y,z) is like an elastic Fabric, where x = compute, y = memory/storage, z = I/O bandwidth.
• You can never have enough.
• Customers want Scale. made easy.
• Hypervisor integration.
The next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage and networking. The application is always the driver.
Summit
Cisco ACI
HP Moonshot
XYZ Account Data Center
Chassis V Spline
Fabric Modules (Spine)
I/O Modules (Leaf)
Spine
Leaf
Proven value with legacy approach:
• Cannot access line cards.
• No L2/L3 recovery inside.
• No access to Fabric.
Disaggregated value...
• Control Top-of-Rack Switches
• L2/L3 protocols inside the Spline
• Full access to Spine Switches
No EGO, Complexity or Vendor Lock-in.
Fat-Tree:
• Traditional 3-tier model (less cabling).
• Link speeds must increase at every hop (less predictable latency).
• Common in chassis-based architectures (optimized for North/South traffic).
Clos / Cross-Bar:
• Every Leaf is connected to every Spine (efficient utilization, very predictable latency).
• Always two hops to any leaf (more resiliency, flexibility and performance).
• Friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
ONE to ONE: Spine Leaf. This is where a Fabric outperforms the Big Uglies.
The XYZ Account handshake layer:
• This is where convergence needs to happen: LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
• Virtualization happens with VXLAN and VMotion (control by the overlay).
• N-plus-one fabric design needs to happen here (delivers simple, no-vanity future proofing, no-forklift migrations, interop between vendors, and hitless operation).
The XYZ Account Ethernet Expressway Layer: deliver massive scale...
• This is where low latency is critical; switch as quickly as you can. Do not slow down the core; keep it simple (disaggregated Spline + one Big Ugly).
• Elastic Capacity: today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks.
• Availability: the state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades; easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Legacy Challenges: complex/slow/expensive scale-up and scale-out, vendor lock-in, proprietary (HW, SW) vs. commodity.
Spline (Speed)
Active-Active redundancy
fn(x,y,z): the next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage and networking.
• This is where you can never have enough.
• Customers want scale made easy.
• Hypervisor integration with cloud simplicity.
Start Small; Scale as You Grow
This is where you can simply add Extreme Leaf Clusters:
• Each cluster is independent (including servers, storage, database & interconnects).
• Each cluster can be used for a different type of service.
• Delivers a repeatable design which can be added as a commodity.
XYZ Account Spine / Leaf: Cluster, Cluster, Cluster
BGP Route-Reflector (RR), iBGP Adjacency
This is where VXLAN (Route Distribution) comes in. Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
• All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
• VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
• Route-Reflectors are deployed for scaling purposes: easy setup, small configuration.
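The scaling argument for route reflectors is easy to quantify: a full iBGP mesh needs n(n-1)/2 sessions, while reflectors need only one session per leaf per reflector. A sketch of the arithmetic (the 48-leaf, two-reflector example is illustrative):

```python
def ibgp_sessions(n_leaves: int, full_mesh: bool, n_reflectors: int = 2) -> int:
    """iBGP session count: a full mesh needs n*(n-1)/2 sessions, while
    route reflectors reduce that to one session per leaf per reflector."""
    if full_mesh:
        return n_leaves * (n_leaves - 1) // 2
    return n_leaves * n_reflectors

print(ibgp_sessions(48, full_mesh=True))   # full mesh: 1128 sessions
print(ibgp_sessions(48, full_mesh=False))  # two reflectors: 96 sessions
```

Adding a leaf to the mesh adds n-1 new sessions; adding a leaf behind reflectors adds only as many sessions as there are reflectors, which is why the design stays "easy setup, small configuration" as the fabric grows.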
(Diagram: VMs attach to VTEPs at each end; VXLAN-over-UDP tunnels use the existing IP network and can be traffic-engineered "like ATM or MPLS".)
Dense 10GbE interconnect using breakout cables, copper or fiber. (Diagram: VMs running App 1, App 2, App 3.)
Intel, Facebook, OCP
Facebook 4-Post Architecture: each leaf or rack switch has up to 48 10G downlinks. Segmentation or multi-tenancy without routers.
• Each spine has 4 uplinks, one to each leaf (4:1 oversubscription).
• Enables insertion of services without sprawl (analytics for fabric and application forensics).
• No routers at spine; one failure reduces cluster capacity to 75%.
(5 S's) Needs to be Scalable, Secure, Shared, Standardized, and Simplified.
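Oversubscription ratios like the 4:1 above are just bandwidth arithmetic: host-facing capacity divided by fabric-facing capacity. A sketch; the 12-uplink example is illustrative, chosen to hit 4:1, not Facebook's actual build:

```python
def oversubscription(downlinks: int, down_gbps: float,
                     uplinks: int, up_gbps: float) -> float:
    """Leaf oversubscription = host-facing bandwidth / fabric-facing
    bandwidth. 1:1 is nonblocking; higher ratios trade cost for
    possible congestion under full east/west load."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# Illustrative: 48 x 10G down with 12 x 10G of uplink capacity -> 4.0 (4:1)
print(oversubscription(48, 10, 12, 10))
```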
Network (Fit)
Network (Fit) Overlay Control
The XYZ Account VXLAN forwarding plane for NSX control:
• This is where logical switches span across physical hosts and network switches. Application continuity is delivered with scale. Scalable multi-tenancy across the data center.
• Enabling L2 over L3 infrastructure: pool resources from multiple data centers with the ability to recover from disasters faster.
• Address network sprawl with a VXLAN overlay. Deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
NSX Architecture and Components (VMware NSX Control Plane):
• Management plane delivered by the NSX Manager.
• Control plane: NSX Controller manages logical networks and data plane resources.
• Extreme delivers an open, high-performance data plane with scale.
CORE
CAMPUS
(Diagram: core/campus wiring, with 48-port N3K-C3064PQ leaf faceplates.)
X870-32c
10Gb
Aggregation
High
Density
10Gb
Aggregation
10Gb/40Gb
Aggregation
High Density 25Gb/50Gb
Aggregation
X770 X870-96x-8c
100Gb
Uplinks
X670-G2
100Gb
Uplinks
Server PODs
770 / 870 Spine
Data Center – Private Cloud
vC-1 vC-2
…
vC-N
XYZ Account provides the VXLAN forwarding plane for NSX control:
• Logical switches span physical hosts and network switches, delivering application continuity and scalable multi-tenancy across the data center.
• L2 over L3 infrastructure – pool resources from multiple data centers with the ability to recover from disasters faster.
• Address network sprawl with a VXLAN overlay, plus deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane)
The management plane is delivered by NSX Manager; the control plane, by the NSX Controller, which manages logical networks and data-plane resources. Extreme delivers an open, high-performance data plane at scale.
NSX Architecture and Components
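As a rough illustration of the L2-over-L3 idea above, a VTEP can be modeled as a lookup from tenant MAC addresses to the underlay IP of the hosting tunnel endpoint. This is a minimal sketch; the table contents and function name are hypothetical, not NSX APIs.

```python
# Minimal sketch (assumed values): an NSX-style overlay maps each
# tenant (VNI) MAC address to the underlay IP of its VXLAN tunnel endpoint.
FORWARDING_TABLE = {
    # (vni, mac) -> vtep_underlay_ip
    (5001, "00:50:56:aa:01:01"): "10.0.1.11",
    (5001, "00:50:56:aa:01:02"): "10.0.2.12",
    (5002, "00:50:56:bb:02:01"): "10.0.1.11",  # same VTEP, isolated tenant
}

def next_hop(vni, dst_mac):
    """Return the underlay IP to tunnel toward, or None to flood/learn."""
    return FORWARDING_TABLE.get((vni, dst_mac))

print(next_hop(5001, "00:50:56:aa:01:02"))  # 10.0.2.12
```

Note that the same destination MAC under a different VNI resolves independently, which is the isolation property the bullets above describe.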
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
• Scale – XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of applications and services.
• Flexibility – the infrastructure must be able to host heterogeneous and interoperable technologies.
• Business – the business model costs might be optimized for operating expenses or toward capital investment.
Cloud Computing (Control Plane)
• On-Premise – you manage the full stack: Networking, Storage, Servers, Virtualization, O/S, Middleware, Runtime, Data, and Applications.
• Infrastructure as a Service (IaaS) – the vendor manages Networking, Storage, Servers, and Virtualization; you manage O/S, Middleware, Runtime, Data, and Applications.
• Platform as a Service (PaaS) – the vendor additionally manages O/S, Middleware, and Runtime; you manage Data and Applications.
• Software as a Service (SaaS) – the vendor manages the entire stack.
Deployment models – Public, Private, MSP – delivered across the FABRIC.
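The division of responsibility across the stack above can be sketched as a small lookup; the layer names follow the stack shown, while the function and constant names are illustrative, not an API.

```python
# Which stack layers the vendor manages under each service model
# (per the stack above; names here are illustrative, not an API).
STACK = ["Networking", "Storage", "Servers", "Virtualization",
         "O/S", "Middleware", "Runtime", "Data", "Applications"]

VENDOR_MANAGED = {
    "On-Premise": 0,   # vendor manages nothing
    "IaaS": 4,         # vendor manages up through Virtualization
    "PaaS": 7,         # vendor manages up through Runtime
    "SaaS": 9,         # vendor manages the entire stack
}

def you_manage(model):
    """Layers the customer still manages under a given service model."""
    return STACK[VENDOR_MANAGED[model]:]

print(you_manage("PaaS"))  # ['Data', 'Applications']
```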
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
• ExpressRoute connections don't go over the public Internet, so they offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from its existing WAN, such as a multiprotocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
The key impact of this cloud model for the customer is a move from managing physical servers to the logical management of data storage through policies.
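When sizing a private circuit for the data-transfer scenario above, a back-of-the-envelope time estimate is often enough; the link speeds and efficiency factor below are illustrative assumptions, not quoted ExpressRoute tiers.

```python
# Back-of-the-envelope sketch: time to move a dataset over candidate
# link speeds (values are illustrative, not quoted ExpressRoute SKUs).
def transfer_hours(dataset_gb, link_mbps, efficiency=0.8):
    """Hours to move dataset_gb over a link_mbps circuit at the
    given protocol efficiency (1 GB = 8000 megabits)."""
    seconds = dataset_gb * 8000 / (link_mbps * efficiency)
    return seconds / 3600

for mbps in (200, 1000, 10000):
    print(f"{mbps:>6} Mb/s -> {transfer_hours(500, mbps):.1f} h for 500 GB")
```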
Compute / Storage
Data Center Architecture Considerations
(Client request → Compute → Cache → Database → Storage → response)
• North-south traffic: oversubscription of up to 200:1; client requests plus server responses account for roughly 20% of traffic.
• East-west traffic: lookup and storage flows account for roughly 80% of traffic; inter-rack latency around 150 microseconds.
• Scale: up to 20 racks; non-blocking two-tier designs are optimal.
[Diagram: virtualized compute racks (VMs) feeding cache, database, and storage tiers.]
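The oversubscription figure above is just the ratio of downstream host-facing bandwidth to upstream uplink bandwidth per switch; a minimal sketch (the port counts below are hypothetical examples):

```python
# Oversubscription = downstream capacity / upstream capacity per leaf.
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 48 x 10G server ports with a single 10G uplink -> 48:1
print(oversubscription(48, 10, 1, 10))   # 48.0
# 48 x 10G with 4 x 100G uplinks -> 1.2:1, close to non-blocking
print(oversubscription(48, 10, 4, 100))  # 1.2
```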
Purchase "vanity free"
This is where Open Compute might allow companies to purchase "vanity free." Previous, outdated data center designs support more monolithic computing.
• Low density – the X620 might help XYZ Account avoid stranded ports.
• Availability – dual X620s can be deployed to minimize the impact of maintenance.
• Flexibility – the X620 can support both 1G and 10G to servers and storage.
One RACK Design (closely coupled, nearly coupled, or loosely coupled)
Shared combo ports: 4x10GBASE-T and 4xSFP+; 100Mb/1Gb/10GBASE-T.
The monolithic datacenter is dead.
[One-rack diagram: servers and storage dual-homed to redundant Summit switches, with a separate Summit management switch.]
Open Compute – Two Rack Design
This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers – each server is exactly one hop from any other server.
• Avoid stranded ports – designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports.
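The stranded-port arithmetic above can be sketched directly; the rack mixes are hypothetical examples consistent with the 16-24 range quoted.

```python
# Stranded ports = leaf ports purchased but left unused by the rack's mix.
def stranded(leaf_ports, nodes_per_rack, ports_per_node=1):
    used = nodes_per_rack * ports_per_node
    return max(leaf_ports - used, 0)

# A 48-port leaf facing racks of 24-32 single-homed nodes strands 16-24 ports.
print(stranded(48, 32))  # 16
print(stranded(48, 24))  # 24
```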
Two RACK
[Two-rack diagram, typical spline setup: each rack's servers and storage dual-homed to a redundant central pair of Summit switches, with a Summit management switch per rack.]
Open Compute: Eight Rack POD Design
Typical spline setup: Eight Rack POD (Leaf / Spine)
[Eight-rack POD diagram: each rack's servers and storage connect to leaf Summit switches, which uplink to a redundant Summit spine; a management switch serves each rack.]
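A quick way to size the fabric cabling for a POD like this is leaf count times spine count, since each leaf links to every spine; the counts below are hypothetical, not a bill of materials.

```python
# In a leaf-spine POD every leaf links to every spine,
# so fabric cables = leaves * spines * links_per_pair.
def fabric_links(leaves, spines, links_per_pair=1):
    return leaves * spines * links_per_pair

# An 8-rack POD with one leaf per rack and 2 spines:
print(fabric_links(8, 2))     # 16 fabric cables
print(fabric_links(8, 2, 2))  # 32 with doubled uplinks
```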
Chassis vs. Spline
A chassis hides its fabric modules (spine) and I/O modules (leaf) inside the box. Proven value with the legacy approach, but:
• No access to the line cards.
• No L2/L3 recovery inside.
• No access to the fabric.
Disaggregated value:
• Control of the top-of-rack switches.
• L2/L3 protocols run inside the spline.
• Full access to the spine switches.
(No ego, complexity, or vendor lock-in.)
Fat-Tree
• Traditional 3-tier model (less cabling).
• Link speeds must increase at every hop (less predictable latency).
• Common in chassis-based architectures (optimized for north/south traffic).
Clos / Cross-Bar
• Every leaf is connected to every spine (efficient utilization, very predictable latency).
• Always two hops to any other leaf (more resiliency, flexibility, and performance).
• Friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
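The "always two hops" property of the Clos fabric above can be checked mechanically with a toy model (this is an illustration, not vendor tooling): because every leaf connects to every spine, any two leafs always share a spine.

```python
import itertools

# Toy Clos model: every leaf connects to every spine, so any
# leaf-to-leaf path is leaf -> shared spine -> leaf: exactly 2 hops.
def leaf_to_leaf_hops(leaves, spines):
    adj = {leaf: set(spines) for leaf in leaves}    # full mesh to spines
    for a, b in itertools.combinations(leaves, 2):
        shared = adj[a] & adj[b]                    # any common spine?
        assert shared, "fabric is not fully meshed"
    return 2                                        # hops via shared spine

print(leaf_to_leaf_hops(["leaf1", "leaf2", "leaf3"], ["s1", "s2"]))  # 2
```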
The XYZ Account handshake layer:
• This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
• Virtualization happens with VXLAN and vMotion (controlled by the overlay).
• N+1 fabric design needs to happen here (delivers simple, no-vanity future-proofing, no-forklift migrations, interop between vendors, and hitless operation).
This is where a fabric outperforms the big uglies.
ONE to ONE: Spine / Leaf
The XYZ Account Ethernet expressway layer delivers massive scale:
• This is where low latency is critical – switch as quickly as you can. Do not slow down the core; keep it simple (disaggregated spline plus one big ugly).
• Elastic capacity – today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks.
• Availability – the state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades; easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Spine
Leaf
Legacy challenges:
• Complex / slow / expensive
• Scale-up and scale-out
• Vendor lock-in
• Proprietary (HW, SW) vs. commodity
Fabric Modules (Spine)
I/O Modules (Leaf)
Spline (Speed)
Active - Active redundancy
fn(x,y,z) – The next convergence will be collapsing datacenter designs into smaller, elastic form factors for compute, storage and networking.
• This is where you can never have enough.
• Customers want scale made easy.
• Hypervisor integration with cloud simplicity.
Start small; scale as you grow.
This is where you can simply add Extreme Leaf Clusters:
• Each cluster is independent (including servers, storage, database & interconnects).
• Each cluster can be used for a different type of service.
• Delivers a repeatable design which can be added as a commodity.
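The "repeatable commodity unit" idea is easy to quantify: capacity grows linearly with the number of clusters. A small sketch with assumed example numbers (4 leaves per cluster, 48 server ports per leaf – not a sizing guide):

```python
# Hedged sketch: independent leaf clusters scale server capacity linearly.
def total_server_ports(clusters: int, leaves_per_cluster: int,
                       server_ports_per_leaf: int) -> int:
    return clusters * leaves_per_cluster * server_ports_per_leaf

# Growing from 1 to 3 clusters of 4 leaves x 48 ports each:
print([total_server_ports(n, 4, 48) for n in (1, 2, 3)])
```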
XYZ Account Spine
Leaf
Cluster Cluster Cluster
Egress
Scale
Ingress
Active / Active
BGP Route Reflector (RR)
iBGP adjacency
This is where VXLAN (route distribution) comes in. Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
• All IP/BGP based (Virtual eXtensible Local Area Network). Host route distribution is decoupled from the underlay protocol.
• VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
• Route reflectors are deployed for scaling purposes – easy setup, small configuration.
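On the wire, the isolation the bullets describe comes from an 8-byte VXLAN header carrying a 24-bit VNI, carried over UDP (destination port 4789) between VTEPs. A sketch of the header layout per RFC 7348:

```python
# VXLAN header sketch (RFC 7348): flags byte 0x08 (I bit), 24-bit VNI.
import struct

def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is 24 bits")
    # First 32-bit word: flags (0x08 = valid-VNI bit) + reserved zeros.
    # Second 32-bit word: VNI in the upper 24 bits + reserved byte.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    flags, word = struct.unpack("!II", header)
    assert flags >> 24 == 0x08, "I flag must be set"
    return word >> 8

print(parse_vni(vxlan_header(5001)))  # 5001
```

Everything outside this header is ordinary IP/UDP, which is why host-route distribution can be decoupled from the underlay.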
Traffic engineer “like ATM or MPLS”
UDP (start / stop)
Use the existing IP network
[Diagram: VMs attached to a pair of VTEPs; dense 10GbE interconnect using breakout cables, copper or fiber; workloads grouped as App 1, App 2, App 3]
Intel, Facebook, OCP
Facebook 4-post architecture – each leaf (rack) switch has up to 48 10G downlinks. Segmentation or multi-tenancy without routers.
• Each leaf has 4 uplinks – one to each spine (4:1 oversubscription).
• Enables insertion of services without sprawl (analytics for fabric and application forensics).
• No routers at the spine; one spine failure reduces cluster capacity to 75%.
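The 75% figure is just the ECMP arithmetic: with four spines and one uplink to each, losing a spine removes one of four equal-cost paths.

```python
# Quick check of the availability claim: surviving spines / total spines.
def remaining_capacity(spines: int, failed: int) -> float:
    return (spines - failed) / spines

print(remaining_capacity(4, 1))  # 0.75
```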
(5 S's) Needs to be Scalable, Secure,
Shared, Standardized, and Simplified.
Network (Fit) Overlay Control
The XYZ Account VXLAN forwarding plane for NSX control:
• This is where logical switches span physical hosts and network switches. Application continuity is delivered with scale – scalable multi-tenancy across the data center.
• Enabling L2 over an L3 infrastructure – pool resources from multiple data centers, with the ability to recover from disasters faster.
• Address network sprawl with a VXLAN overlay. Deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane)
• Management plane delivered by the NSX Manager.
• Control plane: the NSX Controller manages logical networks and data-plane resources.
• Extreme delivers an open, high-performance data plane with scale.
NSX Architecture and Components
CORE
CAMPUS
[Diagram: core/campus wiring – stacked 48-port edge switches (ports 01–42 with Gb 1–4 combo uplinks) uplinked to a pair of N3K-C3064PQ switches (48 x 10G + 4 x QSFP+)]
X870-32c
10Gb
Aggregation
High
Density
10Gb
Aggregation
10Gb/40Gb
Aggregation
High Density 25Gb/50Gb
Aggregation
X770 X870-96x-8c
100Gb
Uplinks
X670-G2
100Gb
Uplinks
Server PODs
770 / 870 Spine
Data Center – Private Cloud
vC-1 vC-2
…
vC-N
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
• Scale – XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
• Flexibility – the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
• Business – the business model costs might be optimized for operating expenses or toward capital investment.
Cloud Computing (Control Plane)
(On-Premise)
Infrastructure
(as a Service)
Platform
(as a Service)
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
You manage
Managed by vendor
Managed by vendor
You manage
You manage
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Software
(as a Service)
Managed by vendor
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Public
Private
MSP
FABRIC
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
• ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
• Transferring data between on-premises systems and Azure can yield significant cost benefits.
• XYZ Account can establish connections to Azure at an ExpressRoute location, such as an exchange provider facility, or connect directly to Azure from an existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
Cloud – the key impact of this model for the customer is a move from managing physical servers to a focus on logical management of data storage through policies.
Compute Storage
Data Center Architecture
Considerations
Compute
Cache
Database
Storage
Client
Response
• 80% north-south traffic. Oversubscription: up to 200:1 (client request + server response = 20% of traffic).
• Inter-rack latency: 150 microseconds. Lookup + storage = 80% of traffic.
• Scale: up to 20 racks (non-blocking 2-tier designs are optimal).
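The numbers above fit together as a back-of-the-envelope model: if only 20% of total traffic crosses the client boundary, the client-facing links can be heavily oversubscribed while the fabric stays non-blocking for the 80% east-west share. A sketch with an assumed 10 Tbps aggregate (illustration only):

```python
# Hedged traffic model: uplink bandwidth needed at a given oversubscription.
def required_uplink_gbps(total_gbps: float, north_south_fraction: float,
                         oversubscription: float) -> float:
    return total_gbps * north_south_fraction / oversubscription

# 10 Tbps total, 20% north-south, 200:1 oversubscription toward clients:
print(required_uplink_gbps(10_000, 0.20, 200))  # 10.0
```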
VM
VM VM
VM
Purchase "vanity free"
This is where Open Compute might allow companies to purchase "vanity free". Previous, outdated data center designs support more monolithic computing.
• The low-density X620 might help XYZ Account avoid stranded ports.
• Availability – dual X620s can be deployed to minimize the impact of maintenance.
• Flexibility – the X620 can support both 1G and 10G to servers and storage.
One RACK Design
Closely
coupled
Nearly
coupled
Loosely
coupled
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
The monolithic datacenter
is dead.
[Diagram: one-rack design – dual Summit switches, management switch, servers and storage]
Open Compute - Two Rack Design
This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
• With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
• Fewer hops between servers – the important thing is that each server is precisely one hop from any other server.
• Avoid stranded ports – designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations might have anywhere from 16 to 24 stranded ports.
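The stranded-port figure in the last bullet is just the gap between fixed leaf size and what a rack actually uses. A sketch (rack fill levels are assumed examples):

```python
# Hedged sketch: ports bought but left empty on one fixed-size leaf switch.
def stranded_ports(leaf_ports: int, used_ports: int) -> int:
    return max(leaf_ports - used_ports, 0)

# Racks filling 24-32 of a 48-port leaf strand 16-24 ports each:
print([stranded_ports(48, n) for n in (24, 28, 32)])
```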
Two RACK
[Diagram: two-rack design – each rack with dual Summit switches, management switch, servers and storage]
Typical spline setup
Open Compute: Eight-Rack POD Design
This is where the typical spline setup scales to an eight-rack POD.
Leaf
Spine
[Diagram: eight-rack POD – each rack with dual Summit leaf switches, management switch, servers and storage, uplinked to the spine]
2. OPEX Components of Converged Environment
Security
Compliance
Automation
Operations
Compute
Storage
Networking
X Y
Z
Pooled compute, network,
and storage capacity
XYZ Account 2017 Design
CAPEX Components of Converged Environment (2016 design: 10G compute, memory and storage)
• Cores: 6 / 12 / 16 / 20
• Memory: 64GB / 128GB / 192GB / 256GB / 512GB
• Spindles (SSD): 3.6TB / 4.8TB / 6TB / 8TB / 10TB
• Network: 10G RJ45 / SFP+ / QSFP+ / QSFP28
Jeff Green
2017
Rev. 1
South
Legend
Legend
10G Passive (PN 10306 ~ 5m, 10307~ 10M)
10G SFP+ active copper cable (up to 100m)
40G Passive (PN 10321 ~3m, 10323~ 5m)
40G Active (PN 10315~10M, 10316 ~20m, 10318~ 100m)
40G Fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m)
10G Passive (PN 10304 ~1m, 10305~3m, 10306~5m)
SFP+ DAC Cables
QSFP+ DAC Cables
10 LRM 220m (720ft/plus mode conditioning) (PN 10303)
10GBASE-T over Class E Cat 6 (55M) (10G)
10GBASE-T over Class E Cat 6a or 7 (100M) (10G)
10 SR over OM3 (300M) or OM4 (400M) (PN 10301)
10 LR over single mode (10KM) 1310nm (PN 10302)
10 ER over single mode (40KM) 1550nm (PN 10309)
10 ZR over single mode (80KM) 1550nm (PN 10310)
802.3bz 10GBASE-T (100M) for Cat 6 (5G)
10G Fiber
10G Copper
802.3bz 10GBASE-T (100M) for Cat 5e (2.5G)
Prescriptive Services 10G / 40G
Overlay
Overall Architecture
SDN
NSX
Underlay
ACI
Other
Spine-Leaf
MLAG
NEXUS
Other
Applications
Automated provisioning
and configuration,
Intelligence in software
Manual Slow
Extreme Core 10G
Extreme Edge PoE
25G / 50G /100G
QSFP28 DACs (Passive Cables)
LR4 - Up to 10 Km on Single Mode.
2 Km lower cost module (Lite).
Wavelengths (1295.56, 1300.05, 1304.58,1309.14 nm).
QSFP28 QSFP28 DACs (Active Cables)
10411 - 100Gb, QSFP28-QSFP28 DAC, 1m
10413 - 100Gb, QSFP28-QSFP28 DAC, 3m
10414 - 100Gb, QSFP28-QSFP28 DAC, 5m
10421 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 1m
4x25 DACS
1x1 DAC
10423 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 3m
10424 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 5m
10426- 100Gb, QSFP28– x SFP28 (2x50Gb) DAC breakout, 1m
10428 - 100Gb, QSFP28– x SFP28 (2x50Gb) DAC breakout, 3m
2X50 DACs
100G => 4 x 25G lanes
10434 - 100Gb, QSFP28-QSFP28 DAC, 5m
10435 - 100Gb, QSFP28-QSFP28 DAC, 7m
10436 - 100Gb, QSFP28-QSFP28 DAC, 10m
10441 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 5m
4x25 DACS
1x1 DAC
10442 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 7m
10443 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 10m
10437 - 100Gb, QSFP28-QSFP28 DAC, 20m
10444 - 100Gb, QSFP28– x SFP28 (4x25Gb) DAC breakout, 20m
Extreme Data Center Switch Options (10, 25, 40, 50, 100G)
Layer 2 multi-chassis port channel (vPC or MLAG)
ISSU for a redundant pair – less than 2000 ms impact for the upgrade.
[Diagram: two 100G spines forming the core, with control/border nodes and 4 x 100G uplinks; leaf groups serve campus and resnet]
Minimum MAC address table size should be 256K. ARP table capacity should support a minimum of 64K users in a single VLAN. Deep interface buffers, or intelligent buffer management. VXLAN.
Scale up
Data center / campus / resnet
Spine-leaf delivers the interconnect for distributed compute workloads.
X870-32c Spine/Leaf Switch
• 32 x 10/25/40/50/100GbE QSFP28 ports
• 96 x 10GbE ports (via 24 ports of 4x10Gb breakout) plus 8 x 10/25/40/50/100GbE ports
X690 10Gb leaf switches, enabled with 100Gb – new 10Gb leaf aggregation switches for fiber and 10GBASE-T applications with 100Gb Ethernet.
• Enabled with 40Gb & 100Gb high-speed uplinks
• Shares power supply and fan modules with the X870
• Stacks with the X870 using SummitStack-V400
460 Multirate / V400 Port Extender
X620 Multirate – shared combo ports: 4x10GBASE-T & 4xSFP+, 100Mb/1Gb/10GBASE-T
ExtremeFabric – a way to simplify network design & operation:
• Fabric transparent to end devices
• Combines the fabric elements into a single domain; the fabric appears as a single device
• Policy and overlays applied at the fabric edge
• No subnets, VLANs, or VRFs required within the fabric
• Zero-touch configuration
Make the
Network act like
~~~~
Extreme Data Center Switch Options (10, 25, 40, 50, 100G)
Data Center Fabric
[Diagram: two 100G spine pairs (scale out) with leafs serving ISP 1, ISP 2, residential housing, hot spots in the local town, the main campus, and the university]
Everything in the spine-leaf is just two hops away. A separate path is available to each spine, with the same latency on each path.
Spine-leaf delivers the interconnect for distributed compute workloads (data center / campus / resnet).
Organizing Compute, Management & Edge
Edge Leaf
L3 to DC Fabric
L2 to External Networks
Compute Clusters Infrastructure Clusters (Edge, Storage,
vCenter and Cloud Management
System)
WAN
Internet
L3
L2
L3
L2
Leaf
Spine
L2 VLANs
f or bridging
A single vCenter Server manages all Management, Edge and Compute clusters.
• NSX Manager is deployed in the Management cluster and paired to the vCenter Server.
• NSX Controllers can also be deployed into the Management cluster.
• Reduces vCenter Server licensing requirements.
Separation of compute, management and Edge functions has the following design advantages, including managing the life-cycle of resources for compute and Edge functions:
• Ability to isolate and develop span of control
• Capacity planning – CPU, memory & NIC
• Upgrade & migration flexibility
Automation control over areas or functions that require frequent changes: app-tier, micro-segmentation & load balancer. Three technology areas require consideration:
• Interaction with the physical network
• Overlay (VXLAN) impact
• Integration with vSphere clustering
Registration or
Mapping
WebVM
WebVM
VM
VM WebVM
Compute Cluster
WebVM VM
VM
Compute
A
vCenter Server
NSX Manager NSX
Controller
Compute
B
Edge and Control VM
Edge Cluster
Management Cluster
Preparation – NetSight – Operation
Convergence 3.0 (automation / seconds)
Flexibility and choice
Traditional Networking Configuration Tasks
L3
L2
Initial configuration
• Multi-chassis LAG
• Routing configuration
• SVIs/RVIs
• VRRP/HSRP
• LACP
• VLANs
Recurring configuration
• SVIs/RVIs
• VRRP/HSRP
• Advertise new subnets
• Access lists (ACLs)
• VLANs
• Adjust VLANs on trunks
• VLANs STP/MST mapping
• Add VLANs on uplinks
• Add VLANs to server port
NSX is AGNOSTIC to the underlay network: L2, L3, or any combination. Only TWO requirements: IP connectivity and an MTU of 1600.
Network & Security Services in Software
WAN/Internet
L3
L2
POD A
L3
L2
POD B
VLAN X Stretch VLAN Y Stretch
L3 topologies & design considerations. With XoS 670 cores, L2 interfaces by default can send and receive IP packets as large as 9214 bytes (no configuration required). L3 interfaces by default can send and receive IP packets as large as 1500 bytes. Configuration step for L3 interfaces: change the MTU to 9214 (the “mtu” command), after which IP packets as large as 9214 bytes can be sent and received.
• L3 ToR designs run a dynamic routing protocol between leaf and spine; BGP, OSPF or IS-IS can be used.
• Each rack advertises a small set of prefixes (unique VLAN/subnet per rack), with equal-cost paths to the other racks' prefixes.
• The switch provides default-gateway service for each VLAN subnet.
• 802.1Q trunks carry a small set of VLANs for VMkernel traffic.
• The rest of the session assumes an L3 topology.
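The MTU numbers above tie back to the VXLAN overlay: encapsulation adds outer Ethernet + IP + UDP + VXLAN headers (50 bytes for IPv4) around the guest frame, which is why NSX asks for at least 1600 bytes in the underlay. A quick headroom check:

```python
# Hedged MTU check: VXLAN adds 50 bytes of overhead around a guest frame
# (outer Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8).
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

def underlay_mtu_ok(underlay_mtu: int, guest_mtu: int = 1500) -> bool:
    # Counting the outer Ethernet header gives a conservative check.
    return underlay_mtu >= guest_mtu + VXLAN_OVERHEAD

print(underlay_mtu_ok(1600), underlay_mtu_ok(9214), underlay_mtu_ok(1500))
```

A default 1500-byte L3 interface fails the check, which is the motivation for the 9214-byte configuration step.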
L3
L2
XYZ Account (Spine)
CORE 1 CORE 2
Wi-Fi, Analytics, Security, Policy
Extreme's platform:
• Lync traffic engineering with Purview analytics service insertion
• Multi-tenant network automation and orchestration
• Self-provisioned network slicing (proof-of-concept implementation)
A better experience through simpler solutions that deliver long-term value:
• Products – one wired and wireless platform
• Customer care – strong first-call resolution
NSX Controllers Functions
Logical Router 1 – VXLAN 5000
Logical Router 2 – VXLAN 5001
Logical Router 3 – VXLAN 5002
Controller VXLAN directory service: MAC table, ARP table, VTEP table
This is where NSX will provide XYZ Account one control
plane to distribute network information to ESXi hosts.
NSX Controllers are clustered for scale out and high
availability.
• Network information is distributed across nodes in a
Controller Cluster (slicing)
• Remove the VXLAN dependency on multicast
routing/PIM in the physical network
• Provide suppression of ARP broadcast traffic in
VXLAN networks
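The directory service described above can be sketched as a toy per-VNI table store; the class and method names here are illustrative, not NSX APIs:

```python
# Hypothetical sketch of the controller-side directory service: each logical
# switch (VNI) keeps ARP and MAC/VTEP tables so an ARP request can be answered
# locally instead of being flooded across the VXLAN transport network.

from dataclasses import dataclass, field

@dataclass
class VniDirectory:
    """Per-VNI tables held by the controller cluster (simplified)."""
    arp: dict = field(default_factory=dict)   # IP  -> MAC
    mac: dict = field(default_factory=dict)   # MAC -> VTEP IP

class ControllerCluster:
    def __init__(self):
        self.vnis = {}  # VNI -> VniDirectory

    def learn(self, vni: int, ip: str, mac: str, vtep: str) -> None:
        d = self.vnis.setdefault(vni, VniDirectory())
        d.arp[ip] = mac
        d.mac[mac] = vtep

    def suppress_arp(self, vni: int, target_ip: str):
        """Return the MAC for target_ip if known, so the ARP broadcast
        never has to leave the host."""
        d = self.vnis.get(vni)
        return d.arp.get(target_ip) if d else None

cluster = ControllerCluster()
cluster.learn(5002, "10.20.10.51", "00:50:56:aa:bb:01", "10.20.10.11")
print(cluster.suppress_arp(5002, "10.20.10.51"))  # answered from the table
print(cluster.suppress_arp(5002, "10.20.10.99"))  # unknown -> None, would flood
```

The "slicing" bullet corresponds to spreading these per-VNI tables across the controller nodes.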
SERVER FARM (Leafs)
[Diagram: repeated leaf racks, each containing Servers, Storage, a Summit management switch, and paired Summit ToR switches]
[Diagram: COMPUTE WORKLOAD racks (Servers, Management, paired Summit switches) alongside Services and Connectivity racks that add Media Servers, Routers, Firewalls, and PBXs]
[Diagram: VXLAN 5002 over the VXLAN transport network. Host 1 (VTEP2, 10.20.10.11) and Host 2 (VTEP3, 10.20.10.12; VTEP4, 10.20.10.13) each run a vSphere Distributed Switch carrying VMs MAC1 through MAC4]
When deployed, VXLAN automatically creates a port-group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means that the same IP subnets are also used across racks for a given type of traffic. For a given host, only one VDS is responsible for VXLAN traffic, and a single VDS can span multiple clusters.
Transport Zone, VTEP, Logical Networks, and VDS:
• The VTEP VMkernel interface belongs to a specific VLAN-backed port-group created dynamically during cluster VXLAN preparation
• One or more VDS can be part of the same Transport Zone (TZ)
• A given Logical Switch can span multiple VDS
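The Transport Zone relationships above can be modeled in a few lines; this is an illustrative sketch of the concepts, not the NSX API:

```python
# Illustrative model of the relationships just described: a Transport Zone
# groups one or more VDS, and a Logical Switch defined in that zone spans
# every VDS that is a member of the zone.

class TransportZone:
    def __init__(self, name: str):
        self.name = name
        self.vds_members = set()       # VDS names in this zone
        self.logical_switches = {}     # switch name -> VNI

    def add_vds(self, vds: str) -> None:
        self.vds_members.add(vds)

    def create_logical_switch(self, name: str, vni: int) -> None:
        self.logical_switches[name] = vni

    def span_of(self, switch: str) -> set:
        """A logical switch spans all VDS that are part of the zone."""
        if switch not in self.logical_switches:
            raise KeyError(switch)
        return set(self.vds_members)

tz = TransportZone("tz-datacenter")     # hypothetical names
tz.add_vds("vds-compute")
tz.add_vds("vds-edge")
tz.create_logical_switch("web-tier", 5002)
print(tz.span_of("web-tier"))  # spans both VDS in the zone
```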
[Diagram: vSphere host (ESXi) connected over an 802.1Q VLAN trunk to an L3 ToR switch with routed ECMP uplinks]

VLAN  Traffic   Host interface   Default gateway (SVI)
66    Mgmt      10.66.1.25/26    SVI 66: 10.66.1.1/26
77    vMotion   10.77.1.25/26    SVI 77: 10.77.1.1/26
88    VXLAN     10.88.1.25/26    SVI 88: 10.88.1.1/26
99    Storage   10.99.1.25/26    SVI 99: 10.99.1.1/26

The span of each VLAN is confined to the rack.
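The addressing plan above can be sanity-checked with the standard-library ipaddress module; the subnets come straight from the table, while the check itself is illustrative:

```python
# Verify that each host interface and its SVI gateway share the same /26
# subnet, as the per-rack VLAN plan above requires.

import ipaddress

PLAN = {
    # VLAN: (host address, SVI gateway)
    66: ("10.66.1.25/26", "10.66.1.1/26"),   # Mgmt
    77: ("10.77.1.25/26", "10.77.1.1/26"),   # vMotion
    88: ("10.88.1.25/26", "10.88.1.1/26"),   # VXLAN
    99: ("10.99.1.25/26", "10.99.1.1/26"),   # Storage
}

for vlan, (host, gw) in PLAN.items():
    h = ipaddress.ip_interface(host)
    g = ipaddress.ip_interface(gw)
    # Host and its default gateway must land in the same subnet.
    assert h.network == g.network, f"VLAN {vlan}: host and SVI subnets differ"
    print(f"VLAN {vlan}: {h.network} OK, gateway {g.ip}")
```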
Traditional control
LDAP, NAC, DHCP, RADIUS, Captive Portal, DNS, MDM
(XYZ Account services, user repositories, or corporate control)
Cloud-based control
NAC, Analytics, Netsite
Management Cluster (Control)
Single Rack Connectivity
[Diagram: one leaf with the L3/L2 boundary at the ToR; an 802.1Q trunk into the routed DC fabric carries the VMkernel VLANs and the VLANs for management VMs]
Dual Rack Connectivity
[Diagram: two leaves, each terminating its own L2 domain of VMkernel VLANs and management-VM VLANs against the routed DC fabric]
Extreme/VMware deployment considerations – the management cluster is typically provisioned on a single rack.
• The single-rack design still requires redundant uplinks from each host to the ToR, carrying the management VLANs
• A dual-rack design adds resiliency (handling single-rack failure scenarios), which may be a requirement for a highly available design
• In a small design, the management and Edge clusters are typically collapsed; exclude the management cluster from VXLAN preparation
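The redundant-uplink rule above can be expressed as a trivial validation; host and ToR names here are hypothetical:

```python
# Sketch of the design rule: every management host must have at least two
# uplinks to distinct ToR switches, even in the single-rack design.

def has_redundant_uplinks(uplinks: list) -> bool:
    """A host is redundantly attached if it reaches >= 2 distinct ToRs."""
    return len(set(uplinks)) >= 2

hosts = {
    "mgmt-esx-01": ["tor1", "tor2"],
    "mgmt-esx-02": ["tor1"],           # single-attached: violates the design
    "mgmt-esx-03": ["tor1", "tor1"],   # two ports, one ToR: still a violation
}

for name, uplinks in hosts.items():
    status = "OK" if has_redundant_uplinks(uplinks) else "missing redundant uplink"
    print(name, status)
```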
ToR #1 / ToR #2
[Diagram: vCenter Server, NSX Manager, and Controllers 1-3 dual-homed to ToR #1 and ToR #2]
NSX Manager is deployed as a virtual appliance with 4 vCPU and 12 GB of RAM per node. Consider reserving memory for vCenter to ensure good Web Client performance. The appliance configuration cannot be modified.
Extreme Networks – compute, storage, and networking integration...
Extreme Networks – control, analytics, and security integration...