Control and Security - A constant risk to the network, and ultimately to XYZ Account, comes from unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how XYZ Account data is accessed. Our centralized management system will leverage data in the network to understand application use.
Maintaining a high-quality user experience (single-pane-of-glass control from BYOD to the data center).
Minimizing risk from consumer products and mobile devices (automation of routine tasks).
Identifying the root cause of service outages (make decisions about your network based on analytics, not assumptions).
Business alignment - Over time, the proliferation of devices has created unnecessary complexity. Control Center delivers centralized visibility and granular control of network resources. One click can equal a thousand actions when you manage your network. Control Center can even manage beyond Extreme Networks switching, routing, and wireless hardware to deliver standards-based control of other vendors’ network equipment.
Transform complex network data into actionable information (Gain visibility from data in your network).
Centralize and simplify the definition, management, and enforcement of policies (Detect anomalies and get alerts based on real network behavior).
Manage third-party devices to provide a complete picture of the entire infrastructure in a heterogeneous network environment (Balance CapEx and OpEx and decrease complexity).
Three fundamental building blocks of a Data Center Network Automation Solution:
Orchestration (OpenStack, vRealize/NSX, DCM)
Overlay (VXLAN, NVGRE, etc.)
Underlay (traditional L2/L3 protocols: OSPF, MLAG, etc.)
CHALLENGES AND PAIN POINTS IN ENTERPRISE IT
• Meeting the growing expectations of users in a mobile-first world
• Flexibility vs. Security: more devices and applications on the network challenge security and control
• Cost vs. Capability
• Reliability vs. Growth
• Managing the network is too complex and time consuming
• Enterprise mobility/constant connectivity: the ability to access company servers, databases, and the network in all facilities of a company is crucial to daily business
• State-of-the-art security is required to prevent access to personal information
• The ability to control the content accessible to individuals with varying network functions, with limitations based on the role of the individual
The network plays a critical role in establishing a consistent and high-quality user experience. The network must provide more than basic connectivity; it must transform to become a strategic business asset. As with a television, radio, or telephone, we expect technology to just work and deliver a great experience. For instance, if the network or technology does not allow us to share our experience socially, we are left with a negative perception; the expectation for what constitutes a great experience is different for each person. It is up to IT to keep up and deliver excellent experiences. Our solution is truly integrated for both wired and wireless, deployable in private and public cloud environments, and aligned with the mega trends shaping the market: Cloud, Mobile, and of course Social as well.
Until recently, that kind of good design had primarily been found in consumer-facing apps. Great network design relies on getting users hooked on the product. It is no longer good enough for enterprise apps to simply work; they need a great user experience as well. Lacking that, there is probably an alternative tool that is easier to use and gets the same business results. Application Analytics intelligence provides IT with visibility into and control of the applications and websites (including related sub-sites) resident in all parts of the network, from the wired or wireless edge through the core and data center, as well as application traffic from the enterprise to the private cloud, public cloud, or any service on the internet.
Network and Application Response Time Management – Separates network performance from application performance, so IT can see whether a slowdown originates in the network or in the application itself.
Proactive Security and Compliance – Provides IT with the ability to monitor and restrict application usage and website access based on specific parameters. For example, a known web browser version that poses security risks could be restricted.
Contextual Information with Depth and Granularity – Associate additional contextual information such as who, what, where, when and how with any application.
ExtremeAnalytics provides us with a global view of the overall health of the network from a single pane of glass. It is the first stop in the troubleshooting process when network issues are discovered, enabling us to pinpoint issues and drill down to a specific closet or client for fast resolution. It is the industry's first and only (patent-pending) solution to transform the network into a strategic business asset, enabling the mining of network-based business events and strategic information that helps business leaders make faster and more effective decisions. It does all of this from a centralized command and control center that combines network management with business analytics, at unprecedented scale (100M sessions) and scope.
The ubiquitous heavy-tailed distributions in the Internet imply an interesting feature of Internet traffic: most (e.g., 80%) of the traffic is actually carried by only a small number of connections (elephants), while the remaining large number of connections are very small in size or lifetime (mice). In a fair network environment, short connections expect relatively faster service than long connections. For these reasons, short TCP flows are generally more conservative than long flows and thus tend to get less than their fair share when they compete for bottleneck bandwidth. In this paper, we propose to give preferential treatment to short flows with help from an Active Queue Management (AQM) policy inside the network. We also rely on the proposed Differentiated Services (Diffserv) architecture [3] to classify flows into short and long at the edge of the network. More specifically, we maintain the length of each active flow (in packets) at the edge routers and use it to classify incoming packets.
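As a concrete illustration of the classification step described above, here is a minimal Python sketch of an edge-router classifier that counts packets per active flow and marks a flow as long once it crosses a threshold. The threshold value and the flow-key fields are illustrative assumptions, not values from the paper.

```python
from collections import defaultdict

# Illustrative threshold: flows longer than this many packets are "elephants".
ELEPHANT_THRESHOLD = 20  # hypothetical value; a real deployment would tune this

flow_packet_counts = defaultdict(int)  # per-flow packet counters kept at the edge

def classify_packet(flow_id):
    """Classify a packet as belonging to a short (mouse) or long (elephant) flow.

    The edge router counts packets per active flow; once a flow exceeds the
    threshold, its packets are marked as 'long' so the AQM policy inside the
    network can give preferential treatment to the remaining short flows.
    """
    flow_packet_counts[flow_id] += 1
    return "long" if flow_packet_counts[flow_id] > ELEPHANT_THRESHOLD else "short"

# Example: the first packets of a flow are treated as short; later ones as long.
for _ in range(25):
    label = classify_packet(("10.0.0.1", "10.0.0.2", 6, 12345, 80))
print(label)  # -> 'long' once the flow has exceeded the threshold
```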
In an industry that's already well defined, Extreme Networks' recent announcement of the Automated Campus is a significant advance in networking. For the first time, all the essential technologies, products, procedures, and support are gathered together and integrated. All too often, the piecemeal growth strategy historically applied to organizational network evolution results in too many tools, procedures, and techniques at work, precluding fast responsiveness, optimal operations staff productivity, and the accuracy and efficiency required to keep end users productive as well.
The most important opportunity today is boosting the productivity of both end users and network operators. The Automated Campus must address the productivity of network planners and of network operations managers and staff. The often-significant number of elements required in an installation can demand significant staff time and can consequently have an adverse impact on operating expenses (OpEx). While it is possible to build traditional networks that get the job done when running correctly and optimally, they often embody such high operating expenses that cost becomes the overriding factor controlling the evolution of the campus network overall. The Automated Campus will allow XYZ Account to address all these issues and concerns. A key goal here must be, of course, to reduce the number of "moving parts" required to build and operate any campus.
Extreme's strategy for campus automation begins with rethinking the way networks are designed, deployed, and managed. Extreme's fabric-based networks enable faster configuration and troubleshooting; as a result, there is less opportunity for misconfiguration. Many automation solutions designed to enhance security force network managers to accept complexity and degraded resilience in order to meet local security policies. Should a breach occur, containment within that segment protects the more sensitive parts of the network, resulting in a true dead end for the hacker. With Extreme's Automated Campus, services can easily be defined and provisioned on the fly without disruption, and network operators specify which services are allowed or prohibited across the network.
XOS Performance - Separation between control and forwarding planes - The "SDN Classic" model, as illustrated by this graphic from the Open Networking Foundation, offers many potential benefits:
In the forwarding plane, all switching and feature implementation, such as deep packet inspection, QoS scheduling, MAC learning, and filtering, is performed in dedicated ASIC hardware.
Wire-speed performance across the entire product line (backplane resources, packet/frame forwarding rate, bits-per-second throughput). Local switching on all line cards at no additional cost, increasing throughput and reducing latency. Dedicated stacking interfaces, and stacking over fiber.
Low latency with Exceptional QoS
We build networks to deliver on today’s Experience Economy. Extreme Networks combines high performance wired and wireless hardware with a software-defined architecture that makes it simple, fast and smart for the user to connect with their device of choice. We provide a comprehensive portfolio, including Campus Mobility and Data Center solutions, which allow our customers to deliver a positive and consistent experience to each and every user in their environment. As SDN excitement grew, the term software-defined was adopted by marketers and applied liberally to all kinds of products and technologies: software-defined storage, software-defined security, software-defined data center.
What technologies allow me to do this today?
Key Features: Loop free load balancing, density, L2 overlays
VXLAN fabric in EXOS / EOS
MLAG: L2 Leaf/Spine with two spine members
VPLS: L2 Leaf/Spine for HPC deployments
SPB-V: S/K-Series for small enterprise data center
Evolution: ExtremeFabric, fully automated
Why VXLAN? It's a really easy L2-over-L3 transport
MLAG technology Leaf/Spine Fabric
MLAG is a special case of Leaf/Spine with only two spine members and everything on L2 (we kill spanning tree and maintain state between the spines). We've been leading in MLAG for a while.
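To make the leaf/spine sizing concrete, here is a back-of-the-envelope Python sketch for estimating access-port capacity and oversubscription in a two-tier fabric. All port counts and speeds in the example are hypothetical parameters, not product specifications.

```python
def fabric_capacity(num_spines, spine_ports_per_switch, leaf_ports,
                    uplinks_per_leaf, uplink_gbps=40, access_gbps=10):
    """Back-of-the-envelope sizing for a two-tier leaf/spine fabric.

    Illustrative assumptions (not product specs): uplinks are spread evenly
    across the spines, and total spine port count caps the number of leaves.
    Returns (total access ports, oversubscription ratio per leaf).
    """
    access_ports_per_leaf = leaf_ports - uplinks_per_leaf
    max_leaves = (num_spines * spine_ports_per_switch) // uplinks_per_leaf
    oversub = (access_ports_per_leaf * access_gbps) / (uplinks_per_leaf * uplink_gbps)
    return max_leaves * access_ports_per_leaf, oversub

# Hypothetical two-spine MLAG design: two 64-port spines, 56-port leaves
# using 8 uplinks each (4 to each spine).
ports, ratio = fabric_capacity(num_spines=2, spine_ports_per_switch=64,
                               leaf_ports=56, uplinks_per_leaf=8)
print(ports, f"{ratio:.1f}:1")  # 768 access ports at 1.5:1 oversubscription
```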
VPLS technology Leaf/Spine Fabric
We have successfully built VPLS mesh Leaf/Spine networks for HPC deployments
Key Features: Loop free load balancing, density, L2 overlays
We need more scale!
21.x / 22.x bring some interesting new features that fix this
NEW with 21.1: The Scalable Layer 2 Fabric with VXLAN Technology
VXLAN – Overlay on routing for efficient load balancing and reachability
OSPF extensions massively simplify deployment
The Layer 2 traffic tunnels over any Layer 3 network
Can be used in any topology, but highest performance is Leaf/Spine
Removes the limitation on transit overlay in the spine
Easy setup, small configuration
Available on the X670-G2 and X770, and on the S- and K-Series; will be available on the X870 at launch
Scale to 2,592 10G ports (X670-G2-72, 1:1) or 512 40G ports (X770, 1:1)
Available on EOS and EXOS NOW
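To show why VXLAN makes L2-over-L3 transport easy, here is a minimal Python sketch that builds the 8-byte VXLAN header defined in RFC 7348 and wraps an Ethernet frame in it; the outer UDP/IP headers (destination port 4789), not built here, are what the routed underlay load-balances on. The frame contents below are placeholders.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0 carries the flags (0x08 = 'VNI present'); bytes 4-6 carry the
    24-bit VXLAN Network Identifier; the remaining bytes are reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def encapsulate(inner_ethernet_frame, vni):
    """L2-over-L3: the original Ethernet frame rides inside UDP/IP.

    The fabric only needs IP reachability between VTEPs, which is why
    the overlay can run over any L3 topology, Leaf/Spine included.
    """
    return vxlan_header(vni) + inner_ethernet_frame

frame = b"\x00" * 60                 # placeholder Ethernet frame
packet = encapsulate(frame, vni=10042)
print(len(packet))                   # 68: 8-byte VXLAN header + inner frame
```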
NEW with EXOS 22.x and EOS 8.81: Future Fabric Technology
What is Flow-Based Switching?
A flow is a unidirectional conversation between end devices,
described by as many of the following fields contained in the packet as the configured layer allows:
(L2) – SA, DA, Port, VLAN, EtherType
(L3) – adds SIP, DIP, Protocol, TOS/DSCP
(L4) – adds L4 Source, L4 Destination
The packet processor operates at L2, L3, or L4 depending on switch configuration, providing context for network traffic:
Who, Where, What
So what? Every unique flow can be counted and controlled!
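A minimal Python sketch of the flow-key construction described above: depending on the configured layer, the key is built from the L2, L3, and L4 fields, and a counter per unique key is what makes every flow countable and controllable. The packet is modeled as a plain dict purely for illustration.

```python
from collections import Counter

def flow_key(pkt, layer=4):
    """Build the unidirectional flow key the packet processor matches on.

    layer=2 uses SA/DA/VLAN/EtherType; layer=3 adds src/dst IP, protocol,
    and DSCP; layer=4 adds the transport-layer ports.
    """
    key = (pkt["src_mac"], pkt["dst_mac"], pkt["vlan"], pkt["ethertype"])
    if layer >= 3:
        key += (pkt["src_ip"], pkt["dst_ip"], pkt["proto"], pkt["dscp"])
    if layer >= 4:
        key += (pkt["src_port"], pkt["dst_port"])
    return key

flows = Counter()  # every unique flow can be counted...
pkt = {"src_mac": "aa:bb", "dst_mac": "cc:dd", "vlan": 10, "ethertype": 0x0800,
       "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "proto": 6, "dscp": 0,
       "src_port": 49152, "dst_port": 443}
flows[flow_key(pkt)] += 1
print(flows.most_common(1))  # ...and controlled, e.g., rate-limited per flow
```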
Question: why are we calling L2 MAC forwarding "switching"? Several decades ago we did one of three things to a packet: repeated it (the frame was not touched), bridged it (L2 MAC forwarding), or routed it (L3 forwarding followed by L2 forwarding). The term "switch" is a marketing term that really just describes a multiport bridge, in my opinion, and we have had those for many, many years.
CORD aims to bring the data center economy and cloud agility to the service provider networks and is an end-to-end solution for the next generation central offices. CORD leverages three related technologies: SDN, NFV, and Cloud and builds on merchant silicon, white boxes and open-source platforms such as ONOS, OpenStack, and XOS. ON.Lab, AT&T and partners demonstrated CORD POC at ONS2015 and are now building a CORD POD for a market trial.
The CORD thought leaders and developers introduce CORD, explain the motivation from a service provider perspective, discuss CORD architecture, related services and key use cases including vOLT, vSG and vRouter.
Topics of Discussion
>>> CORD Introduction
>>> Motivation from a Service Provider Perspective
>>> CORD Architecture
>>> Use cases: vOLT, vSG and vRouter
>>> CORD Future Plans
LANs are constantly evolving; build the XYZ Account network with that evolution baked in…
Extreme Networks brings XYZ Account simplicity, agility, and optimized performance to your most strategic business asset. The data center is critically important to business operations in the enterprise, but often organizations have difficulty leveraging their data centers as a strategic business asset. At Extreme Networks, we focus on providing an Intelligent Enterprise Data Center Network that’s purpose-built for enterprise requirements. Our OneFabric Data Center Solution:
XOS "can be like an elastic fabric" for the XYZ Account network…
Demand for application availability has changed how applications are hosted in today's data center. Evolutionary changes have occurred throughout the various elements of the data center, starting with server and storage virtualization and then network virtualization. Motivations for server virtualization were initially associated with massive cost reduction and redundancy but have now evolved to focus on greater scalability and agility within the data center. Data-center-focused LAN technologies have taken a similar path, first targeting redundancy and then a more scalable fabric within and between data centers.
As vendors continue to tout networking architectures that decouple software from hardware, bare-metal switches are moving into the spotlight. These switches, built on merchant silicon, deliver a lower-cost and more flexible switching alternative. Extreme's "purple metal" switches are open enough to allow our customers to choose their network architecture based on their specific needs without going all the way to bare metal. We believe in the disaggregation of traditional enterprise networking. Extreme uses merchant silicon rather than custom ASICs. Custom ASICs have fallen behind; unless a vendor can build them to compete against merchant silicon, there is no point in doing custom ASICs.
Extreme Manufacturing Solutions
Operations Performance Analytics (OPA)
Pairing assets with intelligent sensors to gather, analyze, and communicate data is driving enormous new efficiencies in manufacturing and business operations. Just as in the consumer markets, where the first generation of personal fitness monitors and smart home devices leverage data sets to influence and shape events in the physical world, so too are operational efficiencies borne by the Internet of Things (IoT) generating high returns in manufacturing.
According to McKinsey, “business-to-business applications will account for nearly 70 percent of the value … from IoT in the next ten years.” The firm estimates that of the nearly $11 trillion a year in economic value generated globally, ‘nearly $5 trillion [will] be generated almost exclusively in B2B settings, including factories… such as those in manufacturing, agriculture, and even healthcare environments; work sites across mining, oil and gas, and construction; and, finally, offices.’
More informed decision-making and optimized operations across the extended supply chain are only some of the benefits. Wireless sensors, whether measuring hydrogen levels in the soil or temperature variables on the production line, are eliminating blind spots in traditional manufacturing processes and delivering a constant flow of data that optimizes workflows. And while manufacturers have leveraged data in discrete applications for Manufacturing Execution Systems (MES) and Enterprise Manufacturing Intelligence (EMI) systems for years, the growth of sensors, real-time dashboards, cloud applications, and mobile technologies is delivering new degrees of actionable intelligence to the precise location at the precise time it can be optimally leveraged.
Yet this goal of seamlessly moving data across plant and business functions, and applying analytical tools to enable new insights, requires a new degree of visibility into the performance of manufacturing applications, networks, and systems. The monitoring tools traditionally used in factory environments are often isolated, closed, and proprietary, and offer only a keyhole view of IT system performance.
We see the need for IT to use best-of-breed applications in an open-standards network, but organizations often lack the unified management or staff to efficiently maintain a complex network.
We help our customers transform their network architecture into a strategic business asset. I am fortunate enough to speak with CIOs and IT Directors like yourself, and they tell me that while they may be happy with their current vendor situation, they face challenges such as:
· Difficult and complex to gain strategically valuable insight into network usage
· Implementing and controlling BYOD
· Increased volume of devices on the network
· Poor correlation of data and management solutions between third party technologies
· Frustration due to lack of network visibility and application analytics
· Controlling guest and rogue devices on-boarding the network
· Bad user experience causing a poor perception of IT competence
As I mentioned, I'm with Extreme Networks. We are a global leader in high-performance wired and wireless networking hardware and software solutions, presently working with ABC University on their stadium Wi-Fi.
Extreme is rethinking the data plane, the control plane, and the management plane. Extreme is a better mousetrap that delivers new features, advanced functions, and wire-speed performance. Our switches deliver deterministic performance independent of load or of which features are enabled. All Extreme switches are based on XOS, the industry's first and only truly modular operating system. A modular OS provides higher availability of critical network resources: by isolating each critical process in its own protected memory space, a single failed process cannot take down the entire switch, and application modules can be loaded and unloaded without rebooting the switch. This is the level of functionality that users have come to expect from other technology. Reaching the twenty-million-port milestone is a significant achievement, demonstrating the effectiveness of our network solutions, with rich features, innovative software, and integrated support for secure convergence. VoIP/Unified Communications/Infrastructure/SIP Trunking (SBC) – because of strong ROI, investment in this segment remains on a very strong growth trajectory.
Enterprises depend on modular switching solutions for all aspects of the enterprise network: in the enterprise core and data center, in the distribution layer that lies between the core and the wiring closet, and in the wiring closet itself. Modular solutions provide port diversity and density that fixed solutions simply cannot match. There are also high-capacity modular solutions that only the largest enterprises and institutions use for high-density and high-speed deployments. Modular solutions are generally much more expensive than their fixed cousins, especially in situations where density or flexibility is not required. Fixed-configuration stackable switches are typically cost-optimized, but they offer no real port diversity on an individual switch. Port diversity means the availability of different port types, such as fiber versus copper ports. Stackable switches have gotten better at offering port diversity, but they still cannot match their modular cousins. Many of these products now offer high-end features, such as 802.3af PoE, QoS, and multi-layer intelligence, that were only found on modular switches in the past. This is due to the proliferation of third-party merchant silicon in the fixed-configuration market. Generally, a stack of fixed-configuration switches can be managed as a single virtual entity. Fixed-configuration switches generally cannot be used to provision an entire large enterprise; instead, they are mostly used at the edge or departmental level as a low-cost alternative to modular products.
Assumptions:
Ethernet is Open
Active/Active in the Fabric
Therefore:
Open at the Edge
Active/Active at the edge
Ready to Assist You
Extreme Networks® Global Technical Assistance Centers (GTAC) provide 24x7x365 worldwide coverage. These centers are the focal point of contact for post-sales technical and network-related questions or issues. GTAC will create a Case number and manage and track all aspects of the Case until it is resolved.
The Extreme Networks GTAC team provides personalized assistance via web, email, or phone to quickly address your questions or issues. This document explains the levels of service available, shows you how to identify the level of service in effect, and guides you to preparing the information you need before you contact your Technical Assistance Center. This document describes:
· How to submit various requests
· What happens to your request
· How to follow the progress of your request
· How to escalate your request, if necessary
Extreme Networks is committed to continuously evolving and developing service programs and business practices to meet the unique needs of each customer. As part of delivering world-class networking solutions, we strive to be a long-term service partner that exceeds your expectations and helps you achieve success.
We hope you find this document to be a useful tool in your day-to-day interaction with the Extreme Networks service and support organization. To obtain the most recent version of this document and to stay abreast of our service policies, visit the Extreme Networks website at http://extremenetworks.com/support/policies.
For general questions or comments regarding this document or to recommend improvements, contact svcmktg@extremenetworks.com. For questions regarding service contracts, contact your sales representative or email svc@extremenetworks.com.
Understanding Your Support Options
Customers who have product covered under product warranty or have purchased an ExtremeWorks® Service Contract are entitled to use GTAC. You can check the status of your support contracts on the
Where Does Networking Fit In? To gain the full benefits of cloud computing and virtualization and achieve a business agile IT infrastructure, organizations need a reliable, high-performance data center networking infrastructure with built-in investment protection. Several technology inflection points are coming together that are fundamentally changing the way networks are architected, deployed and operated both in the public cloud as well as the private cloud. From performance, to scale, to virtualization support and automation to simplified orchestration, the requirements are rapidly changing and driving new approaches to building data center networks.
With Extreme Networks, IT can manage more with less. Automated intelligence and analytics for compliance, forensics, and traffic patterns translates into reduced help desk calls. Businesses can predict costs and return on investment, and increase employee productivity by securely onboarding BYOD, increasing both customer and employee satisfaction. A constant risk to the network, and ultimately the hospital, are unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how patient data is accessed.
What is SDN? What software-defined networking really means has evolved dramatically and now includes automation and virtualization. Hardware is still a critical component in data center networking equipment, but the influence of switch software shouldn’t be overlooked. When everyone began to get excited about SDN a few years ago, we thought of it as only one thing: the separation of network control from network data packet handling. Traditional networks had already started down this path, with the addition of controller cards to manage line cards in scalable chassis-based switches, and with various data center fabric technologies. SDN took the idea to its logical end, removing the need for the controller and the packet handlers to be on the same backplane or even from the same vendor.
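As a toy illustration of that separation, the Python sketch below models a controller that holds the network-wide view and installs match/action rules into switch flow tables, while the data plane does nothing but table lookups; a table miss is punted back to the controller. This is a conceptual sketch, not any vendor's API.

```python
# A toy separation of control and data planes: the controller computes
# match->action rules and installs them into switch flow tables; the
# data plane then forwards by table lookup only. Purely illustrative.

class FlowTable:
    """Data plane: a dumb match/action lookup, no topology knowledge."""
    def __init__(self):
        self.rules = {}

    def install(self, match, action):
        self.rules[match] = action      # pushed down by the controller

    def forward(self, dst):
        return self.rules.get(dst, "punt-to-controller")  # table miss

class Controller:
    """Control plane: holds the network-wide view, programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, dst, ports_by_switch):
        for switch, port in ports_by_switch.items():
            self.switches[switch].install(dst, f"output:{port}")

switches = {"leaf1": FlowTable(), "spine1": FlowTable()}
ctl = Controller(switches)
ctl.program_path("10.0.0.5", {"leaf1": 12, "spine1": 3})
print(switches["leaf1"].forward("10.0.0.5"))   # output:12
print(switches["leaf1"].forward("10.0.0.9"))   # punt-to-controller
```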
Cost. Reducing costs in the data center and contributing to corporate profitability is an increasingly important trend in today’s economic climate. For example, energy costs for the data center are increasing at 12% a year. Moreover, increased application requirements such as 100% availability necessitate additional hardware and services to manage storage and performance thus raising total cost of ownership.
Next Generation Ethernet
Next Generation Ethernet is a platform that should deliver all of the previous functional requirements under one hood. I have grouped the generations this way because Cisco has different purpose-built product lines for each of the four waves of technology. Counter to that, Extreme offers a platform solution for a customer to build their network on. Extreme does not require different switches to address different convergence requirements; that would be cost-prohibitive for most customers, and complicated. Simply put, to disrupt Cisco's market, Extreme must deliver more with less.
The IEEE is pushing Ethernet to unimaginable speeds, with the 40/100 Gigabit Ethernet standard expected to be ratified in 2010 and Terabit Ethernet on the drawing board for 2015. Here's a timeline showing key milestones in the growth of Ethernet. Standards-compliant products are expected to ship in the second half of next year, not long after the expected June 2010 ratification of the 802.3ba standard.
Complexity - Complex systems are a special type of chaotic system. They display a very interesting type of emergent behavior called, logically enough, complex adaptive behavior. But we are getting ahead of ourselves; we need to back up a bit and describe a fundamental behavior that occurs at the granular level and leads to complex adaptive behavior: self-organization. Complex adaptive behavior is the name given to this forming, falling apart, reforming, falling apart behavior. Specifically, it is defined as many agents working in parallel to accomplish a goal. It is conflict-ridden, very fluid, and very positive. The hallmark of emergent, complex adaptive behavior is that it brings about a change from the starting point that is different not just in degree but in kind. In biology, a good example of this is the emergence of consciousness. Another example is the Manhattan Project and the development of the atomic bomb. Below is a checklist that helps facilitate a qualitative assessment of the level of complexity. It is in everyday language to facilitate use by a broad range of stakeholders and team members; in other words, it stays away from jargon, which can be the kiss of death when requesting information from people.
The Checklist
Not sure how the project will get done;
Many stakeholders, teams, and sub-teams;
Too many vendors;
New vendors;
New client;
Team members are geographically dispersed;
End-users are geographically dispersed;
Many organizations;
Many cultures (professional, organizational, sociological);
Many languages (professional, organizational, sociological);
High risk;
Lack of quality, best characterized by lack of acceptance criteria;
Lack of clear requirements;
Too many tasks;
Arbitrary budget or end date;
Inadequate resources;
Leading-edge technology;
New, unproven application of existing technology;
High degree of interconnectedness (professional, technological, political, sociological).
Where is the 6 GHz beef?
The low number of channels available today forces users to share available bandwidth and creates congestion. As each client station waits to transmit (or receive) data, congestion is caused by devices, access points and stations alike, sharing the same channel. To better describe the impact of 6 GHz Wi-Fi, let us borrow the catchphrase "Where's the beef?". As a visual aid, begin with a hamburger bun with the 2.4 GHz and 5 GHz spectrum in the middle. The picture below may exaggerate a 20-year spectrum limitation, but the visual expresses the potential of the 6 GHz range to deliver the spectrum beef.
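The channel arithmetic behind the "beef" can be sketched in a few lines of Python. The usable-spectrum figures below are approximate U.S. regulatory numbers, and the integer division slightly overstates the real channel counts, which lose a little to guard bands (59 rather than 60 20 MHz channels in 6 GHz, for example).

```python
# Rough channel arithmetic behind the "spectrum beef" argument.
# Usable spectrum per band (MHz) is approximate and regulatory-domain
# dependent; the 6 GHz figure assumes the full U.S. allocation.
BANDS_MHZ = {"2.4 GHz": 70, "5 GHz": 500, "6 GHz": 1200}

def channels(band_mhz, width_mhz):
    """Non-overlapping channels of a given width that fit in the band."""
    return band_mhz // width_mhz

for band, mhz in BANDS_MHZ.items():
    counts = {w: channels(mhz, w) for w in (20, 40, 80, 160)}
    print(band, counts)
# 6 GHz alone yields roughly 59 x 20 MHz or 7 x 160 MHz channels, i.e.,
# more clean spectrum than 2.4 GHz and 5 GHz combined: that's the "beef".
```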
Places in the Network (Featuring Policy)
Networks of the Future will be about a great user experience, devices and things…
In an industry that's already well defined, Extreme Networks' recent announcement of the Automated Campus is a significant advance in networking. For the first time, all the essential technologies, products, procedures, and support are gathered together and integrated. All too often, the piecemeal growth strategy typically applied in network evolutions results in too many tools, procedures, and techniques. The patchwork-quilt approach precludes fast responsiveness and optimal operations staff productivity, and sacrifices the accuracy and efficiency required to keep end users productive as well.
The most important opportunity for governments to improve efficiency today is in boosting the productivity of both end users and network operators. The Automated Campus must address the productivity of network planners and of network operations managers and staff. The often-significant number of elements required in an installation can demand significant staff time and can, consequently, have an adverse impact on operating expenses (OpEx). While it is possible to build traditional networks that get the job done when running correctly and optimally, they often embody such high operating expenses that cost becomes the overriding factor controlling the evolution of the campus network. The Automated Campus will allow XYZ Account to address all these issues and concerns. A key goal must be for XYZ Account to reduce the number of "moving parts" required to build and operate any campus, introducing a level of simplicity and automation that will address your future.
Fortinet Firewall Integration - User to IP Mapping and Distributed Threat Response
· Accurate user-ID-to-IP mapping eliminates potential attacks and provides reliable, out-of-the-box user information to firewalls
· Improves security by blocking/limiting user access at the point of entry without impacting other users
· More accurate network mapping for dynamic policy enforcement and reporting
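A minimal sketch of how such a user-to-IP mapping can drive distributed threat response: authentication events populate the map, and a firewall's IP-based verdict is translated into a block at the user's point of entry. All names and events here are hypothetical; the real integration uses the NAC and firewall products' own interfaces.

```python
# Illustrative user-to-IP mapping for distributed threat response.
# The NAC layer learns who is behind each IP at authentication time;
# a firewall verdict is translated back into a block at the point of
# entry, so other users on the segment are untouched.

user_by_ip = {}   # learned from authentication events (802.1X, captive portal)

def on_auth(ip, user, edge_switch, port):
    """Record the identity and entry point behind an IP address."""
    user_by_ip[ip] = {"user": user, "switch": edge_switch, "port": port}

def on_firewall_alert(offending_ip):
    """Map the firewall's IP-based verdict to an identity and entry point."""
    entry = user_by_ip.get(offending_ip)
    if entry is None:
        return "unknown endpoint: quarantine VLAN"
    return f"block {entry['user']} on {entry['switch']} port {entry['port']}"

on_auth("10.1.2.3", "jdoe", "edge-sw-07", 14)
print(on_firewall_alert("10.1.2.3"))  # block jdoe on edge-sw-07 port 14
```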
In an industry that's already well defined, Extreme Networks' recent announcement of the Automated Branch is a significant advance in networking. For the first time, all the essential technologies, products, procedures, and support are gathered together and integrated. All too often, the piecemeal growth strategy historically applied to organizational network evolution results in too many tools, procedures, and techniques at work, precluding fast responsiveness, optimal operations staff productivity, and the accuracy and efficiency required to keep end users productive as well.
This reference design helps organizations design and configure a small to midsize data center (between 2 and 60 server racks) at headquarters or a server room at a remote site. You will learn how to configure the data center core, aggregation, and access switches for connectivity to the servers and the campus network.
The Avaya Fabric Connect data center design supports high-speed 10 Gbps Ethernet-connected servers. The design can easily scale server bandwidth with link aggregation, and servers can be connected to one or more switches to provide the level of availability required for the services delivered by the host. The design also supports legacy and low-traffic servers that need 1 Gbps Ethernet connectivity.
The reference design presented in this guide is based on common network requirements and provides a tested starting point for network engineers to design and deploy an Avaya data center network. This guide does not document every possible option and feature used to design and deploy networks, but instead presents the tested and recommended options that will meet the majority of customer needs.
This design uses Avaya Fabric Connect in order to provide benefits over traditional data center design.
IT departments face several challenges in today’s data center:
· Data center traffic flow is not the same as campus traffic flow. Over 80% of the traffic is east-west, server-to-server, vs. north-south, client-to-server, like in a campus.
· Server virtualization allows a virtual machine or workload to be located anywhere in the physical data center. Data center networks can make it difficult to extend virtual local area networks (VLANs) and subnets anywhere in the data center.
· Server virtualization means that new services can be brought online in minutes or migrated in real time. Reconfiguring the network to support this is difficult because it can interrupt other services.
· Server virtualization means that the load on a physical box is much higher. Physical servers regularly host 10-50 workloads, driving network utilization well past 1 Gbps.
Audio/Video Ethernet (AVB, CobraNet, Dante) - Jeff Green
AVB fits low-cost, small-form-factor products such as this microphone. The overall trend is that music no longer lives on shelves or in CD racks, but on hard drives in home computers, and increasingly in the cloud. This brings its own unique problems, not in the encoding system used or the storage technology, but in distributing the audio from the storage media to the speakers. AVB features are all enabled by global and port-level configuration. Connecting these elements is the AVB-enabled switch (in the graphic above, the Extreme Networks® Summit® X440); the role of the switch is to provide support for the control protocols. AVB is Ethernet's next stage of convergence, delivering pitch-perfect audio and crystal-clear video seamlessly over the network.
IP/Ethernet is bringing simplicity and features to audio and video as it has brought to services like VoIP, Storage and many more
High quality, perfectly synchronized A/V until now has been difficult to maintain
Standards work by the IEEE and the AVB standard changes everything, creating interoperability and mass-market equipment pricing
Benefits of AVB - Delivers predictable latency and precise synchronization, maximizing the functionality of AV - time synchronization and quality of service
Reduced complexity and Ease of use through interoperability between devices
Streamlines complex network set-up and management, the Infrastructure negotiates and manages the network for optimal prioritized media transport
AV traffic can co-exist with non-AV traffic on same Ethernet infrastructure
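To make "predictable latency" concrete: IEEE 802.1BA Class A streams are guaranteed no more than 2 ms of latency across up to 7 hops. A minimal sketch of that budget arithmetic follows; the even per-hop split is our assumption for illustration, since the standard bounds only the end-to-end total.

# AVB Class A latency budget (IEEE 802.1BA): <= 2 ms over up to 7 hops.
CLASS_A_BUDGET_MS = 2.0
MAX_HOPS = 7

def per_hop_budget_ms(hops: int = MAX_HOPS) -> float:
    """Even split of the Class A budget across hops (illustrative only)."""
    return CLASS_A_BUDGET_MS / hops

print(f"{per_hop_budget_ms():.3f} ms per hop")  # ~0.286 ms per switch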
Role-based control at XYZ Account - XYZ Account can identify devices and apply policies based on device type, all the way down to the port and/or the AP. Policies can dynamically change based on the device a user is connecting with and where that user is located. Extreme Networks provides infrastructure to deliver customizable prioritization and scalable capacity via configurable and built-in intelligence, ensuring a comprehensive, superior-quality experience. Furthermore, when deployed with Extreme Wireless, XYZ Account can configure the network to ensure applications receive the bandwidth they require, while still limiting or preventing high-speed streaming of music, video, or even games.
The Secret Sauce is the Control Plane, not the Encapsulation
Host Route Distribution decoupled from the Underlay protocol
Use MultiProtocol-BGP (MP-BGP) on the Leaf nodes to distribute internal Host/Subnet Routes and external reachability information
Route-Reflectors deployed for scaling purposes
VXLAN terminates its tunnels on VTEPs (Virtual Tunnel End Point).
Each VTEP has two interfaces, one is to provide bridging function for local hosts, the other has an IP identification in the core network for VXLAN encapsulation/decapsulation.
VXLAN Encapsulation and De-encapsulation occur on T2
Bridging and Gateway are independent of the port type (1/10/40G ports)
Encapsulation happens on the egress port
Decapsulation happens on the ingress port
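For reference, a minimal sketch of the encapsulation itself, per RFC 7348: an 8-byte VXLAN header carrying a 24-bit VNI, fronted by an outer UDP header to port 4789. The helper below is illustrative, not any particular switch's data path.

import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: I flag set, 24-bit VNI, reserved bits zero."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08 << 24           # I flag set -> VNI field is valid
    return struct.pack("!II", flags, vni << 8)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    # The VTEP prepends the VXLAN header; the outer UDP/IP/Ethernet headers
    # (source VTEP IP -> destination VTEP IP, UDP dst 4789) are added beneath.
    return vxlan_header(vni) + inner_ethernet_frame

print(len(encapsulate(b"\x00" * 64, vni=5000)))  # 72 = 8-byte header + frame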
Service Oriented Architecture
2 or 3 layer network to Leaf & Spine
High density and bandwidth required
Layer 3 ECMP
No oversubscription
Low and uniform delay characteristic
Wire & configure once network
Uniform network configuration
Workload Mobility
Workload Placement
Segmentation
Scale
Automation & Programmability
L2 + L3 Connectivity
Physical + Virtual
Open
An alternative to the core/aggregation/access network topology has emerged, known as leaf-spine (also called a distributed core). In a leaf-spine architecture, a series of leaf switches forms the access layer, and these switches are fully meshed to a series of spine switches. You can think of the spine switches as the core, but instead of a large, chassis-based switching platform, the spine is composed of many high-throughput, high-port-density Layer 3 switches. The mesh ensures that access-layer switches are no more than one hop away from one another, minimizing latency and the likelihood of bottlenecks between them. When networking vendors speak of an Ethernet fabric, this is generally the sort of topology they have in mind.
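The key sizing decision in such a fabric is the leaf oversubscription ratio: server-facing bandwidth versus spine-facing bandwidth. A quick illustrative calculation (the port counts are example values, not a recommendation):

def oversubscription(downlinks: int, downlink_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing to spine-facing bandwidth on one leaf."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Example: 48 x 10G server ports against 4 x 40G uplinks -> 3:1 oversubscribed.
print(oversubscription(48, 10, 4, 40))   # 3.0
# A non-blocking leaf needs uplink capacity equal to downlink capacity.
print(oversubscription(48, 10, 12, 40))  # 1.0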
Haven't we spent the last few decades disaggregating data center architecture? And if so, what does disaggregation mean now - is it something different? Strictly speaking, to "disaggregate" means to divide something into its component parts.
Data Center Aggregation/Core Switch
The proposed solution must provide a high-density, chassis-based switch solution that meets the requirements provided below. Your response should describe how your offering would meet these requirements. Vendors must provide clear and concise responses; illustrations can be provided where appropriate. Any additional feature descriptions for your offering can be provided, if applicable.
• Must offer a chassis-based switch solution that provides eight I/O module slots, two management module slots and four fabric module slots. Must support a variety of I/O modules providing support for 1GbE, 10GbE, 40GbE and 100GbE interfaces. Please describe the recommended switching solution and the available I/O modules.
• Switch must offer switching capacity up to 20.48 Tbps. Please describe the performance levels for the recommended switching solution.
• Switch system must support high availability for the hardware preventing single points of failure. Please describe the high availability features.
• It is preferred that the 10 Gigabit Ethernet modules will also be able to accept standard Gigabit SFP transceivers. Please describe the capability of your switch.
• Must support N+1 redundant power supplies
• Must support N+1 redundant fan trays
• Must support a modular operating system that is common across the entire switching profile. Please describe the OS and advantages.
If the number of spine switches were merely doubled, the effect of a single switch failure would be halved. With 8 spine switches, a single switch failure causes only a 12.5% reduction in available bandwidth, so in modern data centers people build networks with anywhere from 4 to 32 spine switches. With a leaf-spine network, every server is exactly the same distance from every other server - three switch hops, to be precise. The benefit of this architecture is that you can simply add more spines and leaves as you expand the cluster without any recabling, and you also get more predictable latency between nodes.
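A worked version of that failure arithmetic (assuming each leaf spreads equal-speed uplinks across all spines via ECMP):

def capacity_after_failures(num_spines: int, failed: int = 1) -> float:
    """Fraction of fabric bandwidth remaining when `failed` spines are down,
    assuming each leaf has one equal-speed uplink per spine (ECMP)."""
    return (num_spines - failed) / num_spines

for spines in (2, 4, 8, 16):
    print(spines, f"{capacity_after_failures(spines):.1%}")
# 2 -> 50.0%, 4 -> 75.0%, 8 -> 87.5% (a 12.5% loss), 16 -> 93.8%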
As a trend, disaggregation seems most useful for very large companies like Facebook and Google, or for cloud providers; it does not necessarily have significant implications for small or midsize businesses. Historically, however, technology has a way of trickling down from its pioneering phase, when it exists only within large companies with tremendous resources, to becoming standardized across the board.
An experience is a personal and emotional event we remember. Every experience is measured against expectations we conceive and create in our minds; it is personal, and therefore remains a moving, evolving target in every scenario. When our experience concludes and the moment has passed, the outcome remains in our memory. Think about what makes you happy when connecting with your own device, and then think about what makes you really upset when things are hard, complicated, and slow. If users have a bad experience in any one of these areas (simple, fast, and smart), they are likely to leave, share their negative experience, and potentially never return. Users may forget facts or details about their computing environment, but they find it difficult to forget the feeling behind a bad network experience. When something goes wrong with the network or an application, do you always get the blame?
So what can ultra-low, consistent latency deliver? Low latency is a requirement for intensive, time-critical applications. Latency is measured on a port-to-port basis: once a frame is received on an ingress port, how long does it take the frame to traverse the internal switching infrastructure and leave an egress port? The Summit X670 top-of-rack switch supports latency of around 800-900 nanoseconds, while the BlackDiamond X8 chassis can switch frames in as little as 3 microseconds. We're big believers in the value of disaggregation - of breaking down traditional data center technologies into their core components so we can build new systems that are more flexible, more scalable, and more efficient. This approach has guided Facebook from the beginning, as it has grown and expanded its infrastructure to connect more than 1.28 billion people around the world.
Flatter networks. Traditional data center networks have a minimum of three tiers: top of rack (ToR), aggregation and core. Often, there is more than one aggregation tier, meaning the data center could have three or more network tiers. When network traffic is primarily best effort, this is sufficient. But as more mission-critical, real-time traffic flows into the data center, it becomes critical that organizations move to two-tier networks.
An increase in east-west traffic flows. Legacy data center networks are designed for traffic to flow from the edge of the network into the core and then back to the edge in a north-south direction. Today, however, factors such as workforce mobility, Hadoop, big data and other applications are driving east-west traffic flows from server to server.
Virtualization of other IT assets. Historically, compute resources such as processor, memory and storage were resident in the server itself. Over time, more and more of these resources are being put into “pools” that can be accessed on demand. In this case, the data center network becomes a “fabric” that acts as the backplane for the virtualized data center.
Data Center Network Reference PoV - Jeff Green, 2016, v2
Multi-Rate 1/2.5/5/10 Gigabit Edge PoE++
Multi-Rate Spine-Leaf Design (10/25/40/50/100 Gigabit)
X440-G2 (L3 - Value 1G to 10G)
PoE | Fiber | DC | Policy
SummitStack-V (WITHOUT any additional license required).
Upgradeable 10GbE (PN 16542 or 16543).
Policy built-in (simplicity with multi-auth).
EXOS 21.1 or higher.
Value with Automation - first Extreme switch to support Cloud Value.
X460-G2 (Advanced L3 1-40G), Multirate Option
PoE | Fiber | DC | Policy
Fit - The Swiss Army Knife of Switches
Half Duplex | ½ & ½ | 3 Models
This is where 10G on existing copper Cat5e and Cat6 extends the life of the installed cable plant. Great for 1:N convergence.
X620 (10G Copper or Fiber)
Speed - Next Gen Edge
Lowered TCO via Limited Lifetime Warranty
Wallplate AP | AP + Camera | Outdoor Wave 2 | Multi-Gigabit | Wireless High Density
6-pack or Wedge (Facebook)
Extreme Support
XoS Platform - Config L2/L3, Analytics, Policy
Any OS, Any Bare Metal Switch - Disaggregated Switch
CAPEX or OPEX (you choose)?
Reduced Risk (just witness or take action)
Time is the critical factor with XYZ Account services...
Considerations: Infrastructure | Business model | Ownership | Management | Location
32 x 100Gb | 64 x 50Gb | 128 x 25Gb | 128 x 10Gb | 32 x 40Gb
96 x 10GbE ports (via 4x10Gb breakout)
8 x 10/25/40/50/100G | 10G
Next Gen: Spine Leaf - X670 & X770 (Hyper Ethernet)
Common features:
Data Center Bridging (DCB) features.
Low ~600 nsec chipset latency in cut-through mode.
Same PSUs and fans as X670s (front-to-back or back-to-front), AC or DC.
X670-G2-72x (10GbE Spine Leaf): 72 x 10GbE.
X670-48x-4q (10GbE Spine Leaf): 48 x 10GbE & 4 x QSFP+.
QSFP+ 40G DAC
Extreme Feature Packs: Core, Edge, AVB, OpenFlow, Advanced Edge, 1588 PTP, MPLS, Direct Attach, Optics License.
Extreme switches include the license they normally need; like any other software platform, you have an upgrade path.
QSFP28 100G DAC
Thin & Crunchy: XoS platform with one track of software. Speed with features (simple). Metro functionality like ATM or SONET. Flexible horizontal or vertical stacking. Purposed for Broadcom (ASICs).
So What, Who cares? Deliver XYZ Account the value of HP with the feature function of Cisco.
XYZ Account Business Value - Why Extreme?
Summit: policy delivers automation.
Thick & Chewy: know and control the who, what, when, where, and the user experience across your XYZ Account network.
Control with insight...
Why Enterasys? XYZ Account Strategic Asset.
Custom ASICs: S & K Series. Chantry. Motorola AirDefense.
So What, Who cares? Flow-based switching. Simplicity with policy. Wired and wireless. 100% insourced support. Today you get both: control.
So What, Who cares?
Fit | Speed | Unique Value | Unique Control
Summit G2
Yesterday - Cabletron changed the game with structured wiring (remember vampire taps, coax Ethernet, etc.).
Today - Extreme delivers structured networking.
Policy (Summit): Who? Where? When? What device? How?
Quarantine / Remediate / Allow.
Authentication: NAC Server, Summit, NetSight Advanced, NAC Client.
Example: user Joe Smith at XYZ Account lands in an access-controlled subnet; the switch is the enforcement point for Network Access Control.
This is where: if X + Y, then Z...
LLDP-MED, CDPv2, ELRP, ZTP.
If a user matches a defined attribute value, then place the user into a defined ROLE (with its ACL and QoS).
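A toy sketch of that if-attribute-then-role evaluation (the rule table and attribute names are invented for illustration; real policy is defined in the management platform):

# Hypothetical rule table: attribute predicates -> role carrying ACL/QoS actions.
RULES = [
    ({"auth": "802.1X", "device": "ip-phone"},   "voice"),      # strict QoS
    ({"auth": "802.1X", "group": "engineering"}, "employee"),
    ({"auth": "mac-auth"},                       "iot-restricted"),
]
DEFAULT_ROLE = "quarantine"  # unknown endpoints get remediated

def assign_role(attributes: dict) -> str:
    """Return the first role whose predicate is a subset of the attributes."""
    for predicate, role in RULES:
        if predicate.items() <= attributes.items():
            return role
    return DEFAULT_ROLE

print(assign_role({"auth": "802.1X", "group": "engineering", "user": "jsmith"}))
# -> "employee": the port is what it is because the user matched this rule.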
A port is what it is because...?
This is where you easily identify the impact and source of interference problems.
Detailed Forensic Analysis - device, threats, associations, traffic, signal, and location trends.
Record of Wireless Issues - network trend analysis; historical analysis of intermittent wireless problems; performance trends.
Spectrum Analysis for Interference Detection - real-time spectrograms; proactive detection of application-impacting interference.
Visualize RF Coverage - real-time RF visualizations; proactive monitoring and alerting of coverage problems.
ADSP for faster root-cause forensic analysis for SECURITY & COMPLIANCE: event sequence; classify interference sources; side-by-side comparative analysis.
AirDefense.
Application Experience - full context: App Analytics. Stop the finger-pointing: application network response.
Flow or Bit Bucket Collector.
Collector: 3 million flows (scaling to 6 million flows).
Sensors:
X460 IPFIX: 4000 flows (2048 ingress, 2048 egress).
Sensor PV-FC-180, S or K Series (CoreFlow 2: 1 million flows).
Flow-based access points: from the controller, 8K flows per AP (C35: 24K flows).
Why not do this in the network?
Business Value - Context: BW, IP, HTTP://, Apps.
Platform | Automation | Control | Experience | Solution Framework
Is your network faster today than it was 3 years ago? Going forward it should deliver more, faster, different.
X430-G2 (L2 - 1G to 10G), PoE.
Distribute content from a single source to hundreds of displays.
Ethernet as a utility (PoE): injectors up to 75 Watts.
XYZ Account Data Center
Chassis vs. Spline
Fabric Modules (Spine) / I/O Modules (Leaf)
Chassis - proven value with a legacy approach: cannot access line cards; no L2/L3 recovery inside; no access to the fabric.
Spline - disaggregated value: control of top-of-rack switches; L2/L3 protocols inside the spline; full access to spine switches.
No EGO, complexity, or vendor lock-in.
Fat-Tree - traditional 3-tier model: less cabling; link speeds must increase at every hop (less predictable latency); common in chassis-based architectures (optimized for north/south traffic).
Clos / Cross-Bar - every leaf is connected to every spine (efficient utilization, very predictable latency); always two hops to any leaf (more resiliency, flexibility, and performance); friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
The XYZ Account handshake layer: this is where convergence needs to happen - LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting). Virtualization happens with VXLAN and vMotion (control by the overlay). An N-plus-one fabric design needs to happen here (it delivers simple, no-vanity future-proofing, no-forklift migrations, interop between vendors, and hitless operation).
This is where a fabric outperforms the Big Uglies. ONE to ONE: Spine Leaf.
The XYZ Account Ethernet expressway layer: deliver massive scale. This is where low latency is critical - switch as quickly as you can. DO NOT slow down the core; keep it simple (disaggregated spline + one Big Ugly).
Elastic capacity - today's XYZ Account spines are tomorrow's leafs; dial in the bandwidth to your specific needs with the number of uplinks.
Availability - the state of the network is kept in each switch; no single point of failure. Seamless XYZ Account upgrades; easy to take a single switch out of service.
(Cloud Fabric) Disaggregation - Spine / Leaf. Legacy challenges: complex, slow, expensive; scale-up and scale-out; vendor lock-in; proprietary (HW, SW) vs. commodity.
Fabric Modules (Spine) / I/O Modules (Leaf)
Spline (Speed)
Active-Active redundancy. fn(x,y,z): the next convergence will be collapsing data center designs into smaller, elastic form factors for compute, storage, and networking. This is where you can never have enough: customers want scale made easy, and hypervisor integration with cloud simplicity.
[Diagram: paired L2/L3 boundaries at each leaf.]
Start small; scale as you grow. This is where you can simply add Extreme leaf clusters. Each cluster is independent (including servers, storage, database, and interconnects), and each cluster can be used for a different type of service. This delivers a repeatable design that can be added as a commodity.
[Diagram: XYZ Account spine over leaf clusters; ingress/egress scale; active/active VMs.]
This is where: VXLAN (route distribution). BGP route reflectors (RR) with iBGP adjacencies.
Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective. It is all IP/BGP based (Virtual eXtensible Local Area Network), with host route distribution decoupled from the underlay protocol. VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments. Route reflectors are deployed for scaling purposes - easy setup, small configuration.
[Diagram: traffic engineering "like ATM or MPLS" - VXLAN-over-UDP tunnels between VTEPs carried over the existing IP network, with VMs attached at each end.]
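Why route reflectors keep the configuration small, in one calculation: a full iBGP mesh needs n(n-1)/2 sessions, while a reflector design needs roughly one session per client (illustrative arithmetic only):

def full_mesh_sessions(n: int) -> int:
    """iBGP full mesh: every speaker peers with every other speaker."""
    return n * (n - 1) // 2

def rr_sessions(n_clients: int, n_reflectors: int = 2) -> int:
    # Each client peers with every reflector; reflectors mesh among themselves.
    return n_clients * n_reflectors + full_mesh_sessions(n_reflectors)

print(full_mesh_sessions(64))  # 2016 sessions for 64 leaf VTEPs, full mesh
print(rr_sessions(64, 2))      # 129 sessions with two route reflectors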
Dense 10GbE interconnect using breakout cables, copper or fiber.
[Diagram: racks of VMs hosting App 1, App 2, and App 3.]
Intel, Facebook, OCP - the Facebook 4-post architecture: each leaf or rack switch has up to 48 x 10G downlinks; segmentation or multi-tenancy without routers. Each spine has 4 uplinks - one to each leaf (4:1 oversubscription). Enables insertion of services without sprawl (analytics for fabric and application forensics). No routers at the spine; one failure reduces cluster capacity to 75%. (5 S's) Needs to be Scalable, Secure, Shared, Standardized, and Simplified.
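The 75% figure is the same 1/N arithmetic shown earlier, with N = 4; a minimal check, reusing capacity_after_failures from the sketch above:

# Four cluster switches, one failed: 3 of 4 uplink paths remain.
print(capacity_after_failures(4))  # 0.75 -> cluster capacity drops to 75%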
Network (Fit) - Overlay Control
The XYZ Account VXLAN forwarding plane for NSX control: this is where logical switches span physical hosts and network switches. Application continuity is delivered with scale - scalable multi-tenancy across the data center. Enabling L2 over L3 infrastructure: pool resources from multiple data centers with the ability to recover from disasters faster. Address network sprawl with a VXLAN overlay; deeper integration with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane) - NSX architecture and components: the management plane is delivered by the NSX Manager; the control-plane NSX Controller manages logical networks and data-plane resources; Extreme delivers an open, high-performance data plane with scale.
CORE / CAMPUS
[Diagram: Data Center - Private Cloud topology. 770/870 spine (X770, X870-96x-8c) with 100Gb uplinks; X670-G2 and X870-32c leaf switches; 10Gb, 10Gb/40Gb, and high-density 25Gb/50Gb aggregation tiers down to server PODs and virtual clusters vC-1, vC-2, ... vC-N.]
This is where XYZ Account must first have the ability to scale with customer demand, delivering more than just disk space and processors.
Scale - XYZ Account must be able to seamlessly fail over, scale up, scale down, and optimize management of the applications and services.
Flexibility - the XYZ Account infrastructure must be able to host heterogeneous and interoperable technologies.
Business - the business model's costs might be optimized for operating expenses or toward capital investment.
Cloud Computing (Control Plane)
Stack layers, top to bottom: Applications, Data, Runtime, Middleware, O/S, Virtualization, Servers, Storage, Networking.
On-Premise: you manage the entire stack.
Infrastructure (as a Service): the vendor manages virtualization, servers, storage, and networking; you manage the O/S and everything above it.
Platform (as a Service): the vendor manages everything except the applications and data, which you manage.
Software (as a Service): the vendor manages the entire stack.
Public | Private | MSP - FABRIC
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises. ExpressRoute connections don't go over the public Internet; they offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections. Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits. XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from an existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane) - Cloud: the key impact of this model for the customer is a move from managing physical servers to a focus on logical management of data storage through policies.
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure
datacenters and XYZ Account infrastructure on or off premises.
ExpressRoute connections don't go over the public Internet. They offer more reliability, faster
speeds, and lower latencies, and higher security than typical Internet connections.
XYZ Account can transfer data between on-premises systems and Azure can yield significant
cost benefits.
XYZ Account can establishing connections to Azure at an ExpressRoute location, such as an
Exchange provider facility, or directly connect to Azure from your existing WAN network, such as
a multi-protocol label switching (MPLS) VPN, provided by a network service provider
Microsoft Assure (Control Plane)
Cloud The key impact of this model
for the customer is a move from
managing physical servers to focus on
logical management of data storage
through policies.
Overlay Control
The XYZ Account the VxLan forwarding plane for NSX control:
This is where logical switches span across physical hosts and network switches. Application
continuity is delivered with scale. Scalable Multi-tenancy across data center.
Enabling L2 over L3 Infrastructure - Pool resources from multiple data centers with the ability to
recover from disasters faster.
Address Network Sprawl with an VXLAN overlay. Deeper Integration with infrastructure and
operations partners, integrations, and frameworks for IT organizations.
Vmware NSX (Control Plane)
Management Plane deliver
by the NSX Manager.
Control Plane NSX Controller
Manages Logical networks
and data plane resources.
Extreme delivers an open
high performance data
plane with Scale
NSX Architecture and Components
CORE
CAMPUS
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
X870-32c
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
10Gb
Aggregation
High
Density
10Gb
Aggregation
10Gb/40Gb
Aggregation
High Density 25Gb/50Gb
Aggregation
X770 X870-96x-8c
100Gb
Uplinks
X670-G2
100Gb
Uplinks
Server PODs
770 / 870 Spine
Data Center – Private Cloud
vC-1 vC-2
…
vC-N
This is where XYZ Account must first it must have the ability to scale with customer demand,
delivering more than just disk space and processors.
Scale – XYZ Account must have be able the to seamlessly failover, scale up, scaled down and
optimize management of the applications and services.
Flexibility - The infrastructure XYZ Account must have the ability to host heterogeneous and
interoperable technologies.
Business - The business model costs might be optimized for operating expenses or towards
capital investment.
Cloud Computing (Control Plane)
(On-Premise)
Infrastructure
(as a Service)
Platform
(as a Service)
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
Youmanage
Managedbyvendor
Managedbyvendor
Youmanage
Youmanage
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Software
(as a Service)
Managedbyvendor
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Public
Private
MSP
F
A
B
R
I
C
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure
datacenters and XYZ Account infrastructure on or off premises.
ExpressRoute connections don't go over the public Internet. They offer more reliability, faster
speeds, and lower latencies, and higher security than typical Internet connections.
XYZ Account can transfer data between on-premises systems and Azure can yield significant
cost benefits.
XYZ Account can establishing connections to Azure at an ExpressRoute location, such as an
Exchange provider facility, or directly connect to Azure from your existing WAN network, such as
a multi-protocol label switching (MPLS) VPN, provided by a network service provider
Microsoft Assure (Control Plane)
Cloud The key impact of this model
for the customer is a move from
managing physical servers to focus on
logical management of data storage
through policies.
Overlay Control
The XYZ Account the VxLan forwarding plane for NSX control:
This is where logical switches span across physical hosts and network switches. Application
continuity is delivered with scale. Scalable Multi-tenancy across data center.
Enabling L2 over L3 Infrastructure - Pool resources from multiple data centers with the ability to
recover from disasters faster.
Address Network Sprawl with an VXLAN overlay. Deeper Integration with infrastructure and
operations partners, integrations, and frameworks for IT organizations.
Vmware NSX (Control Plane)
Management Plane deliver
by the NSX Manager.
Control Plane NSX Controller
Manages Logical networks
and data plane resources.
Extreme delivers an open
high performance data
plane with Scale
NSX Architecture and Components
CORE
CAMPUS
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
X870-32c
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
01
05
10
15
20
25
30
35
40
02
03
04
06
07
08
09
11
12
13
14
16
17
18
19
21
22
23
24
26
27
28
29
31
32
33
34
36
37
38
39
41
42
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
2GbGb 1 Gb 3 Gb 4
21
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
N3K-C3064PQ-FASTAT
ID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 1 2 3 4
10Gb
Aggregation
High
Density
10Gb
Aggregation
10Gb/40Gb
Aggregation
High Density 25Gb/50Gb
Aggregation
X770 X870-96x-8c
100Gb
Uplinks
X670-G2
100Gb
Uplinks
Server PODs
770 / 870 Spine
Data Center – Private Cloud
vC-1 vC-2
…
vC-N
This is where XYZ Account must first it must have the ability to scale with customer demand,
delivering more than just disk space and processors.
Scale – XYZ Account must have be able the to seamlessly failover, scale up, scaled down and
optimize management of the applications and services.
Flexibility - The infrastructure XYZ Account must have the ability to host heterogeneous and
interoperable technologies.
Business - The business model costs might be optimized for operating expenses or towards
capital investment.
Cloud Computing (Control Plane)
(On-Premise)
Infrastructure
(as a Service)
Platform
(as a Service)
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
Storage
Servers
Networking
O/S
Middleware
Virtualization
Data
Applications
Runtime
Youmanage
Managedbyvendor
Managedbyvendor
Youmanage
Youmanage
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Software
(as a Service)
Managedbyvendor
Storage
Servers
Networking
O/S
Middleware
Virtualization
Applications
Runtime
Data
Public
Private
MSP
F
A
B
R
I
C
This is where Azure ExpressRoute lets XYZ Account create private connections between Azure datacenters and XYZ Account infrastructure, on or off premises.
ExpressRoute connections don't go over the public Internet. They offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections.
Transferring data between on-premises systems and Azure over ExpressRoute can yield significant cost benefits.
XYZ Account can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from its existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
Microsoft Azure (Control Plane)
The key impact of this cloud model for the customer is a move from managing physical servers to the logical management of data storage through policies.
Compute and Storage
Data Center Architecture Considerations (compute, cache, database, storage; client request and response):
80% of traffic is north-south.
Oversubscription: up to 200:1 (client request plus server response make up roughly 20% of traffic).
Inter-rack latency: ~150 microseconds.
Lookups to storage account for ~80% of traffic.
Scale: up to 20 racks (non-blocking two-tier designs are optimal).
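To make the 200:1 oversubscription figure concrete, a minimal sketch (the port counts are hypothetical, chosen only to show how such a ratio arises):

```python
# Oversubscription = offered downlink bandwidth / available uplink bandwidth.
servers_per_rack = 40
server_gbps = 10
uplinks, uplink_gbps = 2, 1        # legacy 1G uplinks

down = servers_per_rack * server_gbps   # 400 Gb/s offered southbound
up = uplinks * uplink_gbps              # 2 Gb/s available northbound
print(f"oversubscription {down / up:.0f}:1")   # -> 200:1
```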
Purchase "vanity free"
This is where..
Open Compute might allow companies to
purchase "vanity free". Previous outdated
data center designs support more
monolithic computing.
Low density X620 might help XYZ
Account to avoid stranded ports.
Availability - Dual X620s can be
deployed to minimize impact to
maintenance.
Flexibility of the X620 can offer
flexibility to support both 1G and 10G to
servers and storage.
One RACK Design
Closely
coupled
Nearly
coupled
Loosely
coupled
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
The monolithic datacenter
is dead.
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
ServersServers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Purchase "vanity free"
This is where..
Open Compute might allow companies to
purchase "vanity free". Previous outdated
data center designs support more
monolithic computing.
Low density X620 might help XYZ
Account to avoid stranded ports.
Availability - Dual X620s can be
deployed to minimize impact to
maintenance.
Flexibility of the X620 can offer
flexibility to support both 1G and 10G to
servers and storage.
One RACK Design
Closely
coupled
Nearly
coupled
Loosely
coupled
Shared Combo Ports
4x10GBASE-T & 4xSFP+
100Mb/1Gb/10GBASE-T
The monolithic datacenter
is dead.
Servers
Storage
Summit
Management
Switch
Summit
Summit
Storage
Management
Servers
Open Compute – Two-Rack Design
This is where XYZ Account can reduce OPEX and leverage a repeatable solution.
With the spline setup, XYZ Account can put redundant switches in the middle and link each server to those switches.
Fewer hops between servers – The important point is that each server is exactly one hop from any other server.
Avoid stranded ports – Designs often have a mix of fat and skinny nodes. If XYZ Account deploys 48-port leaf switches, many configurations will strand anywhere from 16 to 24 ports (see the sketch after this section).
Two-rack layout (typical spline setup): each rack pairs servers and storage with a Summit management switch and redundant Summit switches.
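The 16-to-24 stranded-port claim is simple arithmetic on a 48-port leaf; a small sketch (the rack profiles are hypothetical examples, not XYZ Account's inventory):

```python
LEAF_PORTS = 48    # fixed-configuration leaf switch

racks = {                 # cabled ports per rack profile
    "fat-node rack": 24,  # e.g. 24 2U servers -> 24 ports used
    "skinny rack": 32,    # e.g. 32 1U servers -> 32 ports used
}

for name, used in racks.items():
    stranded = LEAF_PORTS - used
    print(f"{name}: {used} used, {stranded} stranded "
          f"({stranded / LEAF_PORTS:.0%} of the leaf)")
```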
Open Compute – Eight-Rack POD Design
This is where the two-rack building block scales out: a typical spline setup repeated as an eight-rack POD, with leaf switches in each rack and a common spine. Each rack keeps the same pairing of servers, storage, a Summit management switch, and redundant Summit leafs.
Chassis vs. Spline
Chassis: fabric modules act as the spine and I/O modules as the leaf. Proven value with the legacy approach, but you cannot access the line cards, there is no L2/L3 recovery inside, and there is no access to the fabric.
Spline: disaggregated value. You control the top-of-rack switches, L2/L3 protocols run inside the spline, and you get full access to the spine switches (no ego, complexity, or vendor lock-in).
Fat-Tree: the traditional 3-tier model (less cabling). Link speeds must increase at every hop (less predictable latency). Common in chassis-based architectures (optimized for north/south traffic).
Clos / Cross-Bar: every leaf is connected to every spine (efficient utilization, very predictable latency). Always two hops to any leaf (more resiliency, flexibility, and performance). Friendlier to east/west traffic (the uplink to the rest of the network is just another leaf).
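The Clos properties listed above fall out of the wiring rule itself; a brief sketch (switch counts are hypothetical):

```python
# In a 2-tier Clos every leaf connects to every spine, so any two leafs are
# exactly two hops apart and ECMP fans out across all spines.
spines, uplink_gbps = 4, 40

ecmp_paths = spines                    # one leaf->spine->leaf path per spine
leaf_uplink = spines * uplink_gbps     # per-leaf northbound capacity

print(f"{ecmp_paths} equal-cost paths between any two leafs")
print(f"{leaf_uplink} Gb/s of uplink per leaf; adding a spine adds both")
```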
The XYZ Account handshake layer:
This is where convergence needs to happen – LAN/SAN, FCoE, ETS. Stop or allow whatever you can (efficient multicasting).
Virtualization happens here with VXLAN and vMotion (controlled by the overlay).
N+1 fabric design needs to happen here: it delivers simple, no-vanity future-proofing, no-forklift migrations, interoperability between vendors, and hitless operation.
This is where a fabric outperforms the big uglies: one-to-one spine to leaf.
The XYZ Account Ethernet expressway layer: deliver massive scale.
This is where low latency is critical; switch as quickly as you can. Do not slow down the core – keep it simple (a disaggregated spline plus one big ugly).
Elastic capacity – Today's XYZ Account spines are tomorrow's leafs. Dial in the bandwidth to your specific needs with the number of uplinks.
Availability – The state of the network is kept in each switch; there is no single point of failure. Seamless XYZ Account upgrades; it is easy to take a single switch out of service.
(Cloud Fabric) Disaggregation
Legacy challenges: complex, slow, and expensive; scale-up and scale-out limits; vendor lock-in; proprietary hardware and software versus commodity.
In a chassis, the fabric modules are the spine and the I/O modules are the leaf; the spline collapses both for speed, with active-active redundancy.
The next convergence will collapse datacenter designs into smaller, elastic form factors for compute, storage, and networking.
This is where you can never have enough: customers want scale made easy, with hypervisor integration and cloud simplicity. Each leaf pairs an L2 edge with L3 routing toward the spine.
Start Small; Scale as You Grow
This is where you can simply add Extreme leaf clusters:
Each cluster is independent (including servers, storage, database, and interconnects).
Each cluster can be used for a different type of service.
This delivers a repeatable design that can be added as a commodity.
The XYZ Account spine runs active/active, scaling both ingress and egress across the leaf clusters, with BGP route-reflectors (RR) terminating the iBGP adjacencies.
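Why route-reflectors? Session count. A short sketch of the scaling difference (leaf counts are hypothetical):

```python
# Full-mesh iBGP needs a session per pair; with route-reflectors each
# client peers only with the RRs.
def full_mesh(n: int) -> int:
    return n * (n - 1) // 2

def with_rrs(n: int, rrs: int = 2) -> int:
    return n * rrs + full_mesh(rrs)    # clients-to-RRs plus RR-to-RR

for leafs in (8, 32, 128):
    print(f"{leafs} leafs: mesh={full_mesh(leafs)}, 2xRR={with_rrs(leafs)}")
```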
This is where VXLAN (route distribution) comes in.
Why VXLAN? It flattens the network to a single tier from the XYZ Account end-station perspective.
It is all IP/BGP based (Virtual eXtensible Local Area Network), with host route distribution decoupled from the underlay protocol.
VXLAN's goal is to allow dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments.
Route-reflectors are deployed for scaling purposes – easy setup, small configuration.
Traffic can be engineered "like ATM or MPLS", yet it rides UDP over the existing IP network: VTEPs encapsulate VM-to-VM traffic across a dense 10GbE interconnect (breakout cables, copper or fiber), keeping each application's segment (App 1, App 2, App 3) isolated.
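For reference, the 8-byte header a VTEP prepends is fixed by RFC 7348; a minimal sketch building it (the VNI value reuses a logical-switch number from this document):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: I-flag + 24-bit VNI (RFC 7348)."""
    flags = 0x08 << 24             # I-flag set -> VNI field is valid
    return struct.pack("!II", flags, vni << 8)

# A VTEP prepends outer Ethernet/IP/UDP (dst port 4789) to this header and
# then the original L2 frame; the underlay only routes the outer IP/UDP.
print(vxlan_header(5002).hex())    # -> 0800000000138a00
```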
Intel, Facebook, OCP
Facebook 4-post architecture – Each leaf or rack switch has up to 48 10G downlinks. Segmentation and multi-tenancy are delivered without routers.
Each leaf has 4 uplinks – one to each spine (4:1 oversubscription).
Enables insertion of services without sprawl (analytics for fabric and application forensics).
No routers at the spine: one failure reduces cluster capacity to 75%.
(5 S's) The design needs to be Scalable, Secure, Shared, Standardized, and Simplified.
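The 75% figure is straightforward uplink arithmetic; a minimal check:

```python
# Each leaf has one uplink per spine, so losing a spine removes 1/4 of
# every leaf's uplink bandwidth in the 4-post design described above.
spines = 4
for failed in range(3):
    print(f"{failed} spine(s) down -> {(spines - failed) / spines:.0%} capacity")
```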
Network (Fit) Overlay Control
The XYZ Account VXLAN forwarding plane under NSX control:
This is where logical switches span physical hosts and network switches. Application continuity is delivered at scale, with scalable multi-tenancy across the data center.
Enabling L2 over L3 infrastructure – pool resources from multiple data centers, with the ability to recover from disasters faster.
Address network sprawl with a VXLAN overlay, and integrate more deeply with infrastructure and operations partners, integrations, and frameworks for IT organizations.
VMware NSX (Control Plane)
The management plane is delivered by the NSX Manager.
The control plane is the NSX Controller, which manages logical networks and data-plane resources.
Extreme delivers an open, high-performance data plane with scale.
NSX Architecture and Components
CORE / CAMPUS
[Figure: the core/campus view of the same fabric – X770/X870-32c spine, X670-G2 leafs, 100Gb uplinks, N3K-C3064PQ top-of-rack switches, 10Gb/40Gb and high-density 25Gb/50Gb aggregation, and server PODs for the Data Center – Private Cloud (vC-1 … vC-N).]
2. Heading
XYZ Account 2016 Design – Extreme Edge PoE, Extreme Core 10G (1G / 2.5G/5G / 10G / 40G)
Jeff Green, 2016, Rev. 1, Florida
Legend
PoE: 802.3at (PoE+) delivers 30W over Cat5e; UPOE delivers 60W with no cabling change from PoE+ (Cat5e).
NBASE-T Alliance copper max distances (known as IEEE 802.3bz):
Cat 7 shielded – 100 m
Cat 6a shielded – 100 m
Cat 6a unshielded – 100 m
Cat 6 shielded – 100 m
Cat 6 unshielded – 55 m
Needs the correct UTP, patch panel, and adapter.
Greenfield: Cat 6a (2.5G, 5G & 10G) to 100 m; Cat 6 (2.5G, 5G & 10G) to 55 m. Brownfield: Cat 5e (2.5G & 5G) to 100 m.
Requires an X620 or X460 switch for multi-rate support, plus a client that supports multi-rate.
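As a planning aid, the reach figures above can be kept as data. This sketch transcribes the unshielded numbers (shielded Cat 6 reaches 100 m per the NBASE-T table); it is an illustration, not a cabling specification:

```python
# Max run length in meters, keyed by (cable, speed), per the legend above.
reach_m = {
    ("cat5e", "2.5G"): 100, ("cat5e", "5G"): 100,                 # brownfield
    ("cat6",  "2.5G"): 55,  ("cat6",  "5G"): 55,  ("cat6",  "10G"): 55,
    ("cat6a", "2.5G"): 100, ("cat6a", "5G"): 100, ("cat6a", "10G"): 100,
}

def max_run(cable: str, speed: str):
    return reach_m.get((cable.lower(), speed))   # None = not supported

print(max_run("Cat6", "10G"))    # -> 55
```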
SFP+ DAC cables: 10G passive (PN 10304 ~1m, 10305 ~3m, 10306 ~5m, 10307 ~10m); 10G SFP+ active copper cable (up to 100m).
QSFP+ DAC cables: 40G passive (PN 10321 ~3m, 10323 ~5m); 40G active (PN 10315 ~10m, 10316 ~20m, 10318 ~100m); 40G fan-out (PN 10321 ~3m, 10322 ~5m, PN 10GB-4-F10-QSFP ~10m, PN 10GB-4-F20-QSFP ~20m).
10G fiber: 10G LRM 220m (720ft, plus mode conditioning) (PN 10303); 10G SR over OM3 (300m) or OM4 (400m) (PN 10301); 10G LR over single mode (10km) 1310nm (PN 10302); 10G ER over single mode (40km) 1550nm (PN 10309); 10G ZR over single mode (80km) 1550nm (PN 10310).
10G copper: 10GBASE-T over Class E Cat 6 (55m); 10GBASE-T over Class E Cat 6a or 7 (100m); 802.3bz 10GBASE-T (100m) for Cat 6 (5G); 802.3bz 10GBASE-T (100m) for Cat 5e (2.5G).
1G fiber (50 µm): OM3 50 µm (550m/SX) laser, LC (PN 10051H); OM4 50 µm (550m/SX) 2km, LC (PN 10051H).
1G fiber (62.5 µm): OM1 62.5 µm (FDDI 220m/OM1), LC (PN 10051H); OM2 62.5 µm (ATM 275m/OM2), LC (PN 10051H).
Single-fiber transmission uses only one strand of fiber for both transmit and receive (1310nm and 1490nm for 1Gbps; 1310nm and 1550nm for 100Mbps).
LX (MMF 220 & 550m), SMF 10km, LC (PN 10052H); ZX SMF 70km, LC (PN 10053H); 10/100/1000 (UTP to 100m) SFP (PN 1070H).
40G optics (+ fan-out): SR4 at least 100m OM3 MMF (PN 10319); SR4 at least 125m OM4 MMF (PN 10319); LR4 at least 10km SMF, LC (PN 10320); LM4 140m MMF or 1km SMF, LC (PN 10334); ER4 40km SMF, LC (PN 10335) – internal CWDM transits four wavelengths over a single fiber; LR4 parallel SM, 10km SMF, MPO (PN 10326); MPO to 4 x LC fan-out 10m (PN 10327) for use with PN 10326 (MPO to 4 x LC duplex connectors, SMF).
QSFP-SFPP-ADPT – QSFP to SFP+ adapter.
25/50/100G: CR10 > 10m over copper cable (10x10 Gb/s, Twinax, 7m); SR10 > 100m over OM3 MMF (10x10 Gb/s multimode, 100m); SR10 > 125m over OM4 MMF (10x10 Gb/s, 100m, data center); LR4 > 10km over SMF (4x25 Gb/s SMF/WDM, 10km, campus); ER4 > 40km over SMF (4x25 Gb/s SMF/WDM, 40km, metro).
Optics and DAC cables: Extreme Networks restricts the integration of non-qualified 3rd-party optical devices in 40G and 100G product environments unless you purchase the EXOS 3rd Party 40G/100G Optics feature license to allow such integration.
Proprietary keyed optics (CWDM):
Model Number      Description
10GB-LR271-SFPP   10Gb CWDM LR, SM, channel 1271nm, LC
10GB-LR291-SFPP   10Gb CWDM LR, SM, channel 1291nm, LC
10GB-LR311-SFPP   10Gb CWDM LR, SM, channel 1311nm, LC
10GB-LR331-SFPP   10Gb CWDM LR, SM, channel 1331nm, LC
MUX-CWDM-01       4-channel O-band CWDM mux/demux
MUX-RACK-01       Rack mount kit for MUX-CWDM-01
40GB-LR4-QSFP     40Gb 40GBASE-LR4, SM 10km, LC
Notes:
Organizing Compute, Management & Edge
Edge Leaf
L3 to DC Fabric
L2 to External Networks
Compute Clusters Infrastructure Clusters (Edge, Storage,
vCenter and Cloud Management
System)
WAN
Internet
L3
L2
L3
L2
Leaf
Spine
L2 VLANs
f or bridging
Single vCenter Server to manage all Management, Edge and Compute Clusters
NSX Manager deployed in the Mgmt Cluster and paired to the vCenter Server
NSX Controllers can also be deployed into the Management Cluster
Reduces vCenter Server licensing requirements
Separation of compute, management and Edge function with following design
advantage. Managing life-cycle of resources for compute and Edge functions.
Ability to isolate and develop span of control
Capacity planning – CPU, Memory & NIC
Upgrades & migration flexibility
Automation control over area or function that requires frequent changes. app-
tier, micro-segmentation & load-balancer. Three areas of technology require
considerations.
Interaction with physical network
Overlay (VXLAN) impact
Integration with vSphere clustering
Registration or
Mapping
WebVM
WebVM
VM
VM WebVM
Compute Cluster
WebVM VM
VM
Compute
A
vCenter Server
NSX Manager NSX
Controller
Compute
B
Edge and Control VM
Edge Cluster
Management Cluster
Organizing Compute, Management & Edge
Edge Leaf
L3 to DC Fabric
L2 to External Networks
Compute Clusters Infrastructure Clusters (Edge, Storage,
vCenter and Cloud Management
System)
WAN
Internet
L3
L2
L3
L2
Leaf
Spine
L2 VLANs
f or bridging
Single vCenter Server to manage all Management, Edge and Compute Clusters
NSX Manager deployed in the Mgmt Cluster and paired to the vCenter Server
NSX Controllers can also be deployed into the Management Cluster
Reduces vCenter Server licensing requirements
Separation of compute, management and Edge function with following design
advantage. Managing life-cycle of resources for compute and Edge functions.
Ability to isolate and develop span of control
Capacity planning – CPU, Memory & NIC
Upgrades & migration flexibility
Automation control over area or function that requires frequent changes. app-
tier, micro-segmentation & load-balancer. Three areas of technology require
considerations.
Interaction with physical network
Overlay (VXLAN) impact
Integration with vSphere clustering
Registration or
Mapping
WebVM
WebVM
VM
VM WebVM
Compute Cluster
WebVM VM
VM
Compute
A
vCenter Server
NSX Manager NSX
Controller
Compute
B
Edge and Control VM
Edge Cluster
Management Cluster
Preparation, Netsite, Operation
Convergence 3.0 (automation in seconds) – flexibility and choice.
Traditional networking configuration tasks (L2/L3):
Initial configuration: multi-chassis LAG, routing configuration, SVIs/RVIs, VRRP/HSRP, LACP, VLANs.
Recurring configuration: SVIs/RVIs, VRRP/HSRP, advertising new subnets, access lists (ACLs), VLANs, adjusting VLANs on trunks, VLAN-to-STP/MST mapping, adding VLANs on uplinks, adding VLANs to server ports.
NSX is AGNOSTIC to the underlay network: L2 or L3 or any combination. There are only TWO requirements: IP connectivity, and an MTU of 1600.
Network & Security Services in Software
WAN/Internet
L3
L2
POD A
L3
L2
POD B
VLAN X Stretch VLAN Y Stretch
L3 Topologies & Design Considerations. With XoS 670 Cores L2
Interfaces by default IP packet as large as 9214 Bytes can
be sent and received (no configuration is required). L3
interfaces by default IP packet as large as 1500 Bytes can
be sent and received. Configuration step for L3 interfaces:
change MTU to 9214 “mtu ” command) IP packet as
large as 9214 Bytes can be sent and received
L3 ToR designs have dynamic routing protocol between
leaf and spine.
BGP, OSPF or ISIS can be used
Rack advertises small set of prefixes
(Unique VLAN/subnet per rack)
Equal cost paths to the other racks prefixes.
Switch provides default gateway service for each VLAN
subnet
801.Q trunks with a small set of VLANs for VMkernel
traffic
Rest of the session assumes L3 topology
L3
L2
Network & Security Services in Software
WAN/Internet
L3
L2
POD A
L3
L2
POD B
VLAN X Stretch VLAN Y Stretch
L3 Topologies & Design Considerations. With XoS 670 Cores L2
Interfaces by default IP packet as large as 9214 Bytes can
be sent and received (no configuration is required). L3
interfaces by default IP packet as large as 1500 Bytes can
be sent and received. Configuration step for L3 interfaces:
change MTU to 9214 “mtu ” command) IP packet as
large as 9214 Bytes can be sent and received
L3 ToR designs have dynamic routing protocol between
leaf and spine.
BGP, OSPF or ISIS can be used
Rack advertises small set of prefixes
(Unique VLAN/subnet per rack)
Equal cost paths to the other racks prefixes.
Switch provides default gateway service for each VLAN
subnet
801.Q trunks with a small set of VLANs for VMkernel
traffic
Rest of the session assumes L3 topology
L3
L2
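Where does the 1600-byte requirement come from? VXLAN's outer headers; a quick sketch (header sizes are the standard VXLAN values, the guest MTU is an assumption):

```python
# VXLAN adds ~50 bytes of outer headers, so a 1500-byte guest frame no
# longer fits a 1500-byte underlay MTU.
OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN = 14, 20, 8, 8
overhead = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN    # 50 bytes

guest_mtu = 1500
print(f"underlay must carry {guest_mtu + overhead}-byte frames")
# 1600 leaves headroom; jumbo (9214 on the X670 L3 interfaces above) is safer.
```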
XYZ Account (Spine): CORE 1 and CORE 2.
Preparation, Netsite, Operation – Convergence 3.0 (automation in seconds), flexibility and choice.
Extreme's platform: Wi-Fi, analytics, security, and policy; Lync traffic engineering with Purview analytics; service insertion; multi-tenant networks; automation and orchestration; self-provisioned network slicing (proof-of-concept implementation).
Better experience through simpler solutions that deliver long-term value: one wired and wireless platform, and customer care with strong first-call resolution.
NSX Controller Functions
Each logical router (Logical Router 1/2/3 on VXLAN 5000/5001/5002) is backed by the controller's VXLAN directory service: the MAC table, ARP table, and VTEP table.
This is where NSX provides XYZ Account one control plane to distribute network information to ESXi hosts.
NSX Controllers are clustered for scale-out and high availability.
Network information is distributed across the nodes in a Controller cluster (slicing).
This removes the VXLAN dependency on multicast routing/PIM in the physical network and provides suppression of ARP broadcast traffic in VXLAN networks.
SERVER FARM (Leafs)
[Figure: rows of leaf racks, each pairing servers and storage with a Summit management switch and redundant Summit switches; services racks add media servers, routers, firewalls, and PBXs. Panels: COMPUTE WORKLOAD, COMPUTE WORKLOAD, Services and Connectivity.]
[Figure: two vSphere hosts on a VXLAN transport network – Host 1 (VTEP2, 10.20.10.11) and Host 2 (VTEP3, 10.20.10.12; VTEP4, 10.20.10.13) carrying VXLAN 5002 with VMs MAC1–MAC4 behind vSphere Distributed Switches.]
When VXLAN is deployed it creates an automatic port-group whose VLAN ID must be the same per VDS. If the fabric is L2, this usually means that the same IP subnets are also used across racks for a given type of traffic. For a given host, only one VDS is responsible for VXLAN traffic. A single VDS can span multiple clusters.
Transport Zone, VTEP, logical networks, and VDS:
The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during the cluster VXLAN preparation.
One or more VDS can be part of the same TZ.
A given logical switch can span multiple VDS.
vSphere host (ESXi) to L3 ToR switch: routed uplinks (ECMP) carry a VLAN trunk (802.1Q) with the following span of VLANs:
VLAN 66, Mgmt: 10.66.1.25/26, DGW 10.66.1.1 (SVI 66: 10.66.1.1/26)
VLAN 77, vMotion: 10.77.1.25/26, GW 10.77.1.1 (SVI 77: 10.77.1.1/26)
VLAN 88, VXLAN: 10.88.1.25/26, DGW 10.88.1.1 (SVI 88: 10.88.1.1/26)
VLAN 99, Storage: 10.99.1.25/26, GW 10.99.1.1 (SVI 99: 10.99.1.1/26)
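A quick sketch of the per-rack addressing plan above, using only the standard library (it derives the /26s and SVI gateways shown):

```python
import ipaddress

vlans = {66: "Mgmt", 77: "vMotion", 88: "VXLAN", 99: "Storage"}

for vid, role in vlans.items():
    net = ipaddress.ip_network(f"10.{vid}.1.0/26")
    gw = net.network_address + 1                  # the SVI, e.g. 10.66.1.1
    print(f"VLAN {vid} ({role}): {net}, gateway {gw}, "
          f"{net.num_addresses - 2} usable hosts")
```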
Traditional control: LDAP, NAC, DHCP, RADIUS, captive portal, DNS, and MDM – XYZ Account services, user repositories, or corporate control.
Cloud-based control: NAC, Analytics, and Netsite in the management cluster (control).
Single-rack connectivity: the leaf terminates L3 with L2 below, carrying the VMkernel VLANs and the VLANs for management VMs over an 802.1Q trunk into the routed DC fabric.
Dual-rack connectivity: two leafs, each L3 over L2, with the same VMkernel and management VLANs trunked to both racks.
Extreme/VMware deployment considerations – This is where the management cluster is typically provisioned on a single rack.
The single-rack design still requires redundant uplinks from each host to the ToR carrying the VLANs for management.
A dual-rack design gives increased resiliency (handling single-rack failure scenarios), which may be required for a highly available design.
Typically in a small design the management and Edge clusters are collapsed. Exclude the management cluster from VXLAN preparation.
ToR #1 and ToR #2 serve the vCenter Server, NSX Manager, and Controllers 1–3.
The NSX Manager is deployed as a virtual appliance: 4 vCPU and 12 GB of RAM per node. Consider reserving memory for vCenter to ensure good Web Client performance. Configurations cannot be modified.
Extreme Networks: compute, storage, and networking integration; control, analytics, and security integration.
3. Heading
XYZ Account 2016 Design – Extreme Edge PoE, Extreme Core 10G (1G / 2.5G/5G / 10G / 40G)
Jeff Green, 2016, Rev. 1, Florida
Identify design principles and implementation strategies. Start from service requirements and leverage standardization (design should be driven by today's and tomorrow's service requirements).
Standardization limits technical and operational complexity and the related costs. Develop a reference model based on principles; principles enable consistent choices in the long run.
Leverage best practices and proven expertise. Streamline your capability to execute and your operational effectiveness (unleash the capabilities provided by enabling technologies).
Virtual Router 1 (VoIP) – virtualized services for application delivery
Virtual Router 2 (Oracle) – virtualized services for application delivery
Virtual Router 3 (Wireless LAN) – virtualized services for application delivery
Virtual Router 4 (PACs) – virtualized services for application delivery
[Chart: number of assets/ports versus maintenance and operational costs – next-generation operations, pay-as-you-go, and the savings against a reference architecture.]
Data center Network as a Service (NaaS)
Multiple-vCenter design – XYZ Account design with multiple NSX domains:
A management cluster hosts the management vCenter plus one vCenter and one NSX Manager per NSX domain (VC / NSX Manager A, VC / NSX Manager B). Each domain has its own Edge cluster (NSX Controllers, Edge and control VMs) and compute clusters (Compute A, Compute B) running the web and application VMs.
Following VMware best practice, the management cluster is managed by a dedicated vCenter Server (Mgmt VC); a separate vCenter Server in the management cluster manages the Edge and compute clusters.
The NSX Manager is also deployed into the management cluster and paired with this second vCenter Server. Multiple NSX Manager/vCenter Server pairs (separate NSX domains) can be deployed.
NSX Controllers must be deployed into the same vCenter Server the NSX Manager is attached to; therefore the Controllers are usually also deployed into the Edge cluster.
XYZ Account (Primary) and XYZ Account (DR Site)
[Figure: both sites share the same build – CORE 1 and CORE 2 with Preparation/Netsite/Operation logical switches above a server farm of leaf racks (servers, storage, Summit management and redundant Summit switches), compute-workload racks, and services-and-connectivity racks (media servers, routers, firewalls, PBXs). NSX logical routers 1–3 on VXLAN 5000–5002 rely on the controller's VXLAN directory service (MAC, ARP, and VTEP tables) held by Controllers 1–3, the NSX Manager, and the vCenter Server behind ToR #1/#2; VTEPs traffic-engineer "like ATM or MPLS" over UDP on the existing IP network between the sites.]
XYZ Account NSX Transport Zone (TZ): a collection of VXLAN-prepared ESXi clusters.
Normally a TZ defines the span of logical switches (Layer 2 communication domains), and a given logical switch can span multiple VDS.
A VTEP (VXLAN Tunnel EndPoint) is a logical VMkernel interface that connects to the TZ to encapsulate and decapsulate VXLAN traffic. One or more VDS can be part of the same TZ.
The VTEP VMkernel interface belongs to a specific VLAN-backed port-group dynamically created during cluster VXLAN preparation.
Overlay considerations: Ethernet Virtual Interconnect (EVI) can be deployed for active/active data centers over any network. This is where careful attention is required, because the different data plane (an additional header) makes jumbo frames a must-have, and the technology will continue to evolve.
Scalability extends beyond the 802.1Q VLAN limitation to 16M services/tenants.
For L2 extension, VXLAN is the de-facto solution from VMware; standardization around the control plane is still work in progress (even if BGP EVPNs are here).
Encapsulation over IP delivers the ability to cross L3 boundaries; as a result, the design above becomes one big L3 domain with L2 processing. EVI provides additional benefits:
Transport agnostic.
Up to 16 active/active DCs.
Active/active VRRP default gateways for VMs.
STP outages remain local to each DC.
Improved WAN utilization by dropping unknown frames and providing ARP suppression.
The EVI tunnel rides across the physical underlay network between the data centers.
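The 16M tenant figure follows directly from the header fields; a one-line check:

```python
# 802.1Q carries a 12-bit VLAN ID; the VXLAN VNI is 24 bits.
vlan_ids = 2 ** 12 - 2      # 4094 usable VLANs (0 and 4095 reserved)
vxlan_vnis = 2 ** 24        # ~16.7M VXLAN network identifiers
print(f"802.1Q: {vlan_ids:,} segments; VXLAN: {vxlan_vnis:,} segments")
```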