Places in the network (featuring policy) – Jeff Green
Networks of the Future will be about a great user experience, devices and things…
In an industry that’s already well defined, Extreme Networks’ recent announcement of the Automated Campus is a significant advance in networking. For the first time, all the essential technologies, products, procedures, and support are gathered together and integrated. All too often, the piecemeal growth strategy typically applied in network evolution results in too many tools, procedures, and techniques. This patchwork-quilt approach precludes fast responsiveness and optimal operations-staff productivity, and sacrifices the accuracy and efficiency required to keep end-users productive as well.
The most important opportunity to improve efficiency for governments today is in boosting the productivity of both end-users and network operators. The automated campus must address the productivity of network planners and of network operations managers and staff. The often-significant number of elements required in an installation can demand significant staff time and can, consequently, have an adverse impact on operating expenses (OpEx). While it is possible to build traditional networks that, when running correctly and optimally, get the job done, they often carry such high operating expenses that cost becomes the overriding factor controlling the evolution of the campus network. The Automated Campus will allow XYZ Account to address all of these issues and concerns. A key goal must be for XYZ Account to reduce the number of “moving parts” required to build and operate any campus and to introduce a level of simplicity and automation that will address your future needs.
Extreme’s strategy for Campus Automation begins with rethinking the way networks are designed, deployed, and managed. Extreme’s fabric-based networks enable faster configuration and troubleshooting; as a result, there is less opportunity for misconfiguration. Many automation solutions designed to enhance security force network managers to accept complexity and degraded resilience in order to secure the network to meet local policies. With Extreme’s approach, should a breach occur, containment to the affected segment protects the more sensitive parts of the network, presenting a true dead end to the attacker. With Extreme’s Automated Campus, services can easily be defined and provisioned on the fly without disruption, and network operators specify which services are allowed or prohibited across the network.
The ubiquitous heavy-tailed distributions in the Internet imply an interesting feature of Internet traffic: most (e.g., 80%) of the traffic is actually carried by only a small number of connections (elephants), while the remaining, much larger number of connections are very small in size or lifetime (mice). In a fair network environment, short connections expect relatively faster service than long connections. For these reasons, short TCP flows are generally more conservative than long flows and thus tend to get less than their fair share when they compete for bottleneck bandwidth. In this paper, we propose to give preferential treatment to short flows with help from an Active Queue Management (AQM) policy inside the network. We also rely on the proposed Differentiated Services (Diffserv) architecture [3] to classify flows into short and long at the edge of the network. More specifically, we maintain the length of each active flow (in packets) at the edge routers and use it to classify incoming packets.
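The edge-router classification described above can be sketched as a simple per-flow packet counter: flows below a length threshold are treated as mice, flows above it as elephants. The 5-tuple key and the 100-packet threshold below are illustrative assumptions, not values from the paper:

```python
from collections import defaultdict

# Assumed cutoff: flows longer than this many packets are "elephants".
ELEPHANT_THRESHOLD = 100

# Per-flow packet counts maintained at the edge router.
flow_lengths = defaultdict(int)

def classify_packet(five_tuple):
    """Count this packet against its flow and return the flow's class.

    five_tuple: (src_ip, dst_ip, src_port, dst_port, protocol) -- a
    hypothetical flow key for this sketch.
    """
    flow_lengths[five_tuple] += 1
    if flow_lengths[five_tuple] > ELEPHANT_THRESHOLD:
        return "elephant"
    return "mice"
```

A flow stays in the mice class for its first 100 packets and is reclassified once it crosses the threshold, which matches the paper's idea of marking packets by current flow length at the edge.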
Extreme is rethinking the data plane, the control plane, and the management plane. Extreme offers a better mousetrap, delivering new features, advanced functions, and wire-speed performance. Our switches deliver deterministic performance independent of load or of which features are enabled. All Extreme switches are based on XOS, the industry’s first and only truly modular operating system. A modular OS provides higher availability of critical network resources: by isolating each critical process in its own protected memory space, a single failed process cannot take down the entire switch, and application modules can be loaded and unloaded without rebooting the switch. This is the level of functionality that users have come to expect from other technology. Reaching the twenty-million-port milestone is a significant achievement, demonstrating the effectiveness of our network solutions, with rich features, innovative software, and integrated support for secure convergence. VoIP/Unified Communications/Infrastructure/SIP Trunking (SBC): because of strong ROI, investment in this segment remains on a very strong growth trajectory.
Enterprises depend on modular switching solutions for all aspects of the enterprise network: in the enterprise core and data center, in the distribution layer that lies between the core and the wiring closet, and in the wiring closet itself. Modular solutions provide port diversity and density that fixed solutions simply cannot match. There are also high-capacity modular solutions that only the largest enterprises and institutions use for high-density and high-speed deployments. Modular solutions are generally much more expensive than their fixed cousins, especially in situations where density or flexibility is not required. Fixed-configuration stackable switches are typically cost-optimized, but they offer no real port diversity on an individual switch. Port diversity means the availability of different port types, such as fiber versus copper ports. Stackable switches have gotten better at offering port diversity, but they still cannot match their modular cousins. Many of these products now offer high-end features, such as 802.3af PoE, QoS, and multi-layer intelligence, that were previously found only on modular switches; this is due to the proliferation of third-party merchant silicon in the fixed-configuration market. Generally, a stack of fixed-configuration switches can be managed as a single virtual entity. Fixed-configuration switches generally cannot be used to provision an entire large enterprise; instead, they are mostly used at the edge or departmental level as a low-cost alternative to modular products.
Assumptions:
Ethernet is Open
Active/Active in the Fabric
Therefore:
Open at the Edge
Active/Active at the edge
If the number of spine switches is doubled, the effect of a single switch failure is halved. With 8 spine switches, a single switch failure causes only a 12.5% reduction in available bandwidth. So, in modern data centers, people build networks with anywhere from 4 to 32 spine switches. With a leaf-spine network, every server is exactly the same distance from every other server: three port hops, to be precise. The benefit of this architecture is that you can simply add more spines and leaves as you expand the cluster, without any recabling. Intuition Systems will also get more predictable latency between nodes.
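The arithmetic behind these figures is straightforward: with leaf uplinks spread evenly across N spines, losing one spine removes 1/N of the fabric bandwidth. A minimal sketch:

```python
def bandwidth_loss_on_spine_failure(num_spines):
    """Fraction of fabric bandwidth lost when one spine switch fails,
    assuming leaf uplinks are distributed evenly across all spines."""
    return 1.0 / num_spines

# Doubling the spine count halves the impact of a single failure:
# 4 spines -> 25% loss, 8 spines -> 12.5% loss.
```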
As a trend, disaggregation seems most useful for very large companies like Facebook and Google, or for cloud providers; it does not necessarily have significant implications for small or medium-sized businesses. Historically, however, technology has a way of trickling down from the pioneering phase, when it exists only within large companies with tremendous resources, to becoming standardized across the board.
Building efficient 5G NR base stations with Intel® Xeon® Scalable Processors – Michelle Holley
Speaker: Daniel Towner, System Architect for Wireless Access, Intel Corporation
5G brings many new capabilities over 4G, including higher bandwidths, lower latencies, and more efficient use of radio spectrum. However, these improvements require a large increase in computing power in the base station. Fortunately, the Xeon Scalable Processor series (Skylake-SP) recently introduced by Intel has a new high-performance instruction set, Intel® Advanced Vector Extensions 512 (Intel® AVX-512), which is capable of delivering the compute needed to support the exciting new world of 5G.
In his talk, Daniel will give an overview of the new capabilities of the Intel AVX-512 instruction set and show why they are so beneficial for supporting 5G efficiently. The most obvious difference is that Intel AVX-512 has double the compute performance of previous generations of instruction sets. Perhaps surprisingly, though, it is the addition of brand-new instructions that can make the biggest improvements. The new instructions allow software algorithms to become more efficient, thereby enabling even more effective use of the improvements in computing performance and leading to very high-performance 5G NR software implementations.
Flexible Data Centre Fabric - FabricPath/TRILL, OTV, LISP and VXLAN – Cisco Canada
This presentation will discuss the evolving Data Centre Fabric, FabricPath, VXLAN, LISP, LISP Host Mobility, OTV LAN Extension, Mobility with Extended Subnets and Nexus Fabric.
Building DataCenter networks with VXLAN BGP-EVPN – Cisco Canada
The session covers the requirements and approaches for deploying the underlay, the overlay, and the inter-fabric connectivity of data center networks, or fabrics. Within the VXLAN BGP-EVPN based overlay, we focus on forwarding and control-plane functions that are critical to the architecture's operational simplicity in achieving scale, small failure domains, and consistent configuration. To complete the overlay view of VXLAN BGP-EVPN, we go inside BGP and its EVPN address family and extend to how multiple DC fabrics can be interconnected, either as stretched fabrics or with true DCI. The session concludes with a brief overview of manageability functions, network-orchestration capabilities, and multi-tenancy details. This advanced session is intended for network, design, and operations engineers, from enterprises to service providers.
The Secret Sauce is the Control Plane, not the Encapsulation
Host Route Distribution decoupled from the Underlay protocol
Use MultiProtocol-BGP (MP-BGP) on the Leaf nodes to distribute internal Host/Subnet Routes and external reachability information
Route-Reflectors deployed for scaling purposes
VXLAN terminates its tunnels on VTEPs (VXLAN Tunnel End Points).
Each VTEP has two interfaces: one provides a bridging function for local hosts; the other has an IP address in the core network used for VXLAN encapsulation/decapsulation.
VXLAN Encapsulation and De-encapsulation occur on T2
Bridging and Gateway are independent of the port type (1/10/40G ports)
Encapsulation happens on the egress port
Decapsulation happens on the ingress port
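The encapsulation step above amounts to prepending the 8-byte VXLAN header defined in RFC 7348. The sketch below shows only that header, omitting the outer UDP (destination port 4789), IP, and Ethernet layers a real VTEP would add; function names are illustrative:

```python
import struct

def vxlan_encapsulate(vni, inner_frame):
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    Header layout: 1 byte of flags (0x08 = VNI field is valid), 3 reserved
    bytes, a 24-bit VNI, and 1 more reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

def vxlan_decapsulate(packet):
    """Strip the VXLAN header and return (vni, inner_frame)."""
    flags_word, vni_word = struct.unpack("!II", packet[:8])
    return vni_word >> 8, packet[8:]
```

On ingress the VTEP bridges the inner frame into the local segment identified by the VNI; on egress it wraps the frame so the Layer 3 core only ever sees the outer headers.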
Service Oriented Architecture
2 or 3 layer network to Leaf & Spine
High density and bandwidth required
Layer 3 ECMP
No oversubscription
Low and uniform delay characteristic
Wire & configure once network
Uniform network configuration
Workload Mobility
Workload Placement
Segmentation
Scale
Automation & Programmability
L2 + L3 Connectivity
Physical + Virtual
Open
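The Layer 3 ECMP item in the list above is typically realized by hashing each flow's 5-tuple to select an uplink, so all packets of a flow follow the same path and arrive in order. A minimal sketch (the hash function choice is an assumption for illustration; real switch ASICs use their own hardware hashes):

```python
import hashlib

def ecmp_next_hop(five_tuple, num_paths):
    """Pick one of num_paths equal-cost uplinks by hashing the flow 5-tuple.

    Hashing (rather than round-robin) keeps every packet of a flow on the
    same path, preserving in-order delivery within the flow.
    """
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths
```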
XOS Performance - Separation between control and forwarding planes - The "SDN Classic" model, as illustrated by this graphic from the Open Networking Foundation, offers many potential benefits:
In the forwarding plane, all switching and feature implementation, such as deep packet inspection, QoS scheduling, MAC learning and filtering, etc., is performed in dedicated ASIC hardware.
Wire-speed performance across the entire product line (backplane resources, packet/frame forwarding rate, bits-per-second throughput). Local switching on all line cards at no additional cost, increasing throughput and reducing latency. Dedicated stacking interfaces, and stacking over fiber.
Low latency with Exceptional QoS
We build networks to deliver on today’s Experience Economy. Extreme Networks combines high performance wired and wireless hardware with a software-defined architecture that makes it simple, fast and smart for the user to connect with their device of choice. We provide a comprehensive portfolio, including Campus Mobility and Data Center solutions, which allow our customers to deliver a positive and consistent experience to each and every user in their environment. As SDN excitement grew, the term software-defined was adopted by marketers and applied liberally to all kinds of products and technologies: software-defined storage, software-defined security, software-defined data center.
What technologies allow me to do this today?
Key Features: Loop free load balancing, density, L2 overlays
VXLAN fabric in EXOS / EOS
MLAG: L2 Leaf/Spine with two spine members
VPLS: L2 Leaf/Spine for HPC deployments
SPB-V: S/K-Series for small enterprise data center
Evolution ExtremeFabric: fully automated
Why VxLAN? It’s a really easy L2 over L3 transport
MLAG technology Leaf/Spine Fabric
MLAG is a special case of leaf/spine with only two spine members and everything at L2 (we eliminate spanning tree and maintain state between the spines). We’ve been leading in MLAG for a while.
VPLS technology Leaf/Spine Fabric
We have successfully built VPLS mesh Leaf/Spine networks for HPC deployments
Key Features: Loop free load balancing, density, L2 overlays
We need more scale!
21.x / 22.x bring some interesting new features that fix this
NEW with 21.1: The Scalable Layer 2 Fabric with VxLAN Technology
VXLAN – Overlay on routing for efficient load balancing and reachability
OSPF extensions massively simplify deployment
The Layer 2 traffic tunnels over any Layer 3 network
Can be used in any topology, but highest performance is Leaf/Spine
Removes the limitation on transit overlay in the spine
Easy setup, small configuration
X670-G2 and X770, S and K, and will be available on X870 at launch
Scale to 2592 10G ports (X670-G2-72, 1:1), 512 40G (X770, 1:1)
Available on EOS and EXOS NOW
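The port-scale figures quoted above follow from two-tier leaf-spine arithmetic: at 1:1 oversubscription each leaf dedicates half its ports to hosts and half to spine uplinks, so a fabric of identical n-port switches tops out at n²/2 host-facing ports. A sketch of that calculation:

```python
def max_nonblocking_ports(ports_per_switch):
    """Maximum host-facing ports in a two-tier leaf-spine fabric built from
    identical n-port switches at 1:1 oversubscription.

    Each of n leaves uses n/2 ports for hosts and n/2 for spine uplinks;
    n/2 spines each connect all n leaves, giving n * n/2 host ports.
    """
    n = ports_per_switch
    return n * n // 2

# 72-port X670-G2-72 leaves: 72 * 36 = 2592 10G ports, as quoted above.
# 32x40G X770 leaves: 32 * 16 = 512 40G ports.
```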
NEW with EXOS 22.x and EOS 8.81: Future Fabric Technology
Operationalizing EVPN in the Data Center: Part 2 – Cumulus Networks
In the second of our two-part series on EVPN, Cumulus Networks Chief Scientist Dinesh Dutt dives into more technical details of network routing, EVPN use cases, and best practices for operationalizing EVPN in the data center.
To view the recording of this webinar, visit http://go.cumulusnetworks.com/l/32472/2017-09-23/95t7xh
Network Configuration Example: Configuring CoS to Support an MC-LAG on an FCo... – Juniper Networks
This NCE provides a step-by-step procedure for configuring class of service (CoS) for Fibre Channel over Ethernet (FCoE) transit switch traffic across a multichassis link aggregation group (MC-LAG) that connects two QFX Series switches.
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more – inside-BigData.com
In this deck from the 2018 Swiss HPC Conference, Erez Cohen from Mellanox presents: Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more.
"While InfiniBand, RDMA and GPU-Direct are an HPC mainstay, these advanced networking technologies are increasingly becoming a core differentiator for the data center. Within just a few short years, we have gone from only a handful of bleeding-edge industrial leaders emulating classic HPC disciplines to almost every commercial market adopting HPC technologies and disciplines en masse. Additionally, with the rampant adoption of demanding workloads like machine learning, providers from the cloud to on-premise are now deploying the same advanced networking technologies and delivering the same core capabilities and performance as traditional HPC environments. The same data centers embracing AI are also driving increased adoption of complex technologies, including containers and virtualization, that must also be optimized for performance, profit, and operational efficiency. In this talk we explore how high-performance networking has emerged from HPC to become the critical path for the cloud, machine learning and much more."
Watch the video: https://wp.me/p3RLHQ-ixP
Learn more: http://mellanox.com
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This presentation discusses the design and evaluation of two open-source implementations of the LTE EPC, one based on SDN principles and the other based on NFV, and presents a performance comparison of the two approaches. Speaker: Mythili Vutukuru
Where Does Networking Fit In? To gain the full benefits of cloud computing and virtualization and achieve a business agile IT infrastructure, organizations need a reliable, high-performance data center networking infrastructure with built-in investment protection. Several technology inflection points are coming together that are fundamentally changing the way networks are architected, deployed and operated both in the public cloud as well as the private cloud. From performance, to scale, to virtualization support and automation to simplified orchestration, the requirements are rapidly changing and driving new approaches to building data center networks.
With Extreme Networks, IT can manage more with less. Automated intelligence and analytics for compliance, forensics, and traffic patterns translates into reduced help desk calls. Businesses can predict costs and return on investment, and increase employee productivity by securely onboarding BYOD, increasing both customer and employee satisfaction. A constant risk to the network, and ultimately the hospital, are unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how patient data is accessed.
What is SDN? What software-defined networking really means has evolved dramatically and now includes automation and virtualization. Hardware is still a critical component in data center networking equipment, but the influence of switch software shouldn’t be overlooked. When everyone began to get excited about SDN a few years ago, we thought of it as only one thing: the separation of network control from network data packet handling. Traditional networks had already started down this path, with the addition of controller cards to manage line cards in scalable chassis-based switches, and with various data center fabric technologies. SDN took the idea to its logical end, removing the need for the controller and the packet handlers to be on the same backplane or even from the same vendor.
Cost. Reducing costs in the data center and contributing to corporate profitability is an increasingly important trend in today’s economic climate. For example, energy costs for the data center are increasing at 12% a year. Moreover, increased application requirements, such as 100% availability, necessitate additional hardware and services to manage storage and performance, thus raising the total cost of ownership.
This reference design helps organizations design and configure a small to midsize data center (between 2 and 60 server racks) at headquarters or a server room at a remote site. You will learn how to configure the data center core, aggregation, and access switches for connectivity to the servers and the campus network.
The Avaya Fabric Connect data center design supports high-speed 10 Gbps Ethernet-connected servers. The design can easily scale server bandwidth with link aggregation, and servers can be connected to one or more switches in order to provide the level of availability required for the services delivered by the host. The design also supports legacy and low-traffic servers that need 1 Gbps Ethernet connectivity.
The reference design presented in this guide is based on common network requirements and provides a tested starting point for network engineers to design and deploy an Avaya data center network. This guide does not document every possible option and feature used to design and deploy networks, but instead presents the tested and recommended options that will meet the majority of customer needs.
This design uses Avaya Fabric Connect in order to provide benefits over traditional data center design.
IT departments face several challenges in today’s data center:
· Data center traffic flow is not the same as campus traffic flow. Over 80% of the traffic is east-west, server-to-server, vs. north-south, client-to-server, like in a campus.
· Server virtualization allows a virtual machine or workload to be located anywhere in the physical data center. Data center networks can make it difficult to extend virtual local area networks (VLANs) and subnets anywhere in the data center.
· Server virtualization means that new services can be brought online in minutes or migrated in real time. Reconfiguring the network to support this is difficult because it can interrupt other services.
· Server virtualization means that the load on a physical box is much higher. Physical servers regularly host 10-50 workloads, driving network utilization well past 1 Gbps.
Cloud Network Virtualization with Juniper Contrail (buildacloud)
Description: This session covers Contrail technology, including its architecture, capabilities and use cases, followed by a demonstration of a current Contrail implementation on CloudStack/OpenStack.
Parantap works as a Sr. Director of Solutions Engineering for Contrail Product within Juniper. Before Juniper, Parantap led the network architecture team for Microsoft Online Services (Windows Azure, MS Bing). Prior to Microsoft, Parantap worked as a core engineering manager for UUNet Technologies building Internet backbones.
Building Data Center Networks with VXLAN BGP-EVPN (Cisco Canada)
The session covers the requirements and approaches for deploying the underlay, the overlay, and the inter-fabric connectivity of data center networks or fabrics. Within the VXLAN BGP-EVPN based overlay, we focus on the forwarding and control plane functions that are critical to operating the architecture simply while achieving scale, small failure domains and consistent configuration. To complete the overlay view of VXLAN BGP-EVPN, we go inside BGP and its EVPN address family and extend to how multiple DC fabrics can be interconnected, either as stretched fabrics or with true DCI. The session concludes with a brief overview of manageability functions, network orchestration capabilities and multi-tenancy details. This advanced session is intended for network, design and operations engineers from enterprises to service providers.
The Secret Sauce is the Control Plane, not the Encapsulation
Host Route Distribution decoupled from the Underlay protocol
Use MultiProtocol-BGP (MP-BGP) on the Leaf nodes to distribute internal Host/Subnet Routes and external reachability information
Route-Reflectors deployed for scaling purposes
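As a rough illustration of why route reflectors matter for scale (the topology sizes here are made up, not taken from any vendor design guide): without reflection, iBGP requires a full mesh among the leaf nodes, while a pair of spine route reflectors reduces that to one session per leaf per RR.

```python
def ibgp_sessions(num_leaves: int, route_reflectors: int = 0) -> int:
    """iBGP session count: a full mesh needs n*(n-1)/2 sessions;
    with route reflectors, each leaf peers only with the RRs."""
    if route_reflectors == 0:
        return num_leaves * (num_leaves - 1) // 2
    return num_leaves * route_reflectors

print(ibgp_sessions(64))                      # full mesh across 64 leaves: 2016 sessions
print(ibgp_sessions(64, route_reflectors=2))  # two spine RRs: 128 sessions
```

The session count is why large EVPN fabrics almost always place route reflectors on the spines rather than full-meshing the leaves.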
VXLAN terminates its tunnels on VTEPs (VXLAN Tunnel End Points).
Each VTEP has two interfaces: one provides a bridging function for local hosts, and the other has an IP identity in the core network for VXLAN encapsulation/decapsulation.
VXLAN Encapsulation and De-encapsulation occur on T2
Bridging and Gateway are independent of the port type (1/10/40G ports)
Encapsulation happens on the egress port
Decapsulation happens on the ingress port
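To make the encapsulation step concrete, here is a minimal sketch of the VXLAN header handling described above, following the RFC 7348 layout (8 bytes: a flags byte where 0x08 marks the VNI as valid, three reserved bytes, a 3-byte VNI, and a final reserved byte). In a real VTEP the hardware also adds the outer Ethernet/IP/UDP headers (UDP port 4789), which this sketch omits.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.
    '!B3xI' = flags byte, 3 reserved bytes, then a 32-bit field whose
    upper 3 bytes carry the VNI (hence the << 8 shift)."""
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header and return (vni, inner_frame)."""
    flags, vni_field = struct.unpack("!B3xI", packet[:8])
    assert flags & 0x08, "VNI-valid flag not set"
    return vni_field >> 8, packet[8:]
```

A quick round trip shows the tunnel is transparent to the inner frame: encapsulate with VNI 10100, decapsulate, and the original bytes come back unchanged.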
Service Oriented Architecture
2 or 3 layer network to Leaf & Spine
High density and bandwidth required
Layer 3 ECMP
No oversubscription
Low and uniform delay characteristic
Wire & configure once network
Uniform network configuration
Workload Mobility
Workload Placement
Segmentation
Scale
Automation & Programmability
L2 + L3 Connectivity
Physical + Virtual
Open
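The Layer 3 ECMP item in the list above can be illustrated with a small sketch: a hash of the flow's 5-tuple picks one of the equal-cost spine paths, so packets of one flow never reorder while different flows spread across the fabric. CRC32 stands in here for whatever hash function a switch ASIC actually uses.

```python
import zlib

def ecmp_next_hop(src_ip: str, dst_ip: str, src_port: int,
                  dst_port: int, proto: str, paths: list) -> str:
    """Pick an equal-cost path by hashing the flow 5-tuple: stable per
    flow (no reordering), statistically even across flows."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return paths[zlib.crc32(key) % len(paths)]

spines = ["spine1", "spine2", "spine3", "spine4"]
# The same flow always maps to the same spine:
a = ecmp_next_hop("10.0.1.5", "10.0.2.9", 49152, 443, "tcp", spines)
b = ecmp_next_hop("10.0.1.5", "10.0.2.9", 49152, 443, "tcp", spines)
assert a == b
```

Because the choice is purely a function of the 5-tuple, no per-flow state is kept in the switch, which is what lets ECMP scale to millions of flows.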
XoS Performance - Separation between control and forwarding planes - The "SDN Classic" model, as illustrated by this graphic from the Open Networking Foundation, offers many potential benefits:
In the forwarding plane all switching and feature implementation, such as deep packet inspection, QoS scheduling, MAC learning and filtering, etc., are performed in dedicated ASIC hardware.
Wire-speed performance across the entire product line (backplane resources, packet/frame forwarding rate, bits-per-second throughput). Local switching on all line cards at no additional cost, increasing throughput and reducing latency. Dedicated stacking interfaces, and stacking over fiber.
Low latency with Exceptional QoS
We build networks to deliver on today’s Experience Economy. Extreme Networks combines high performance wired and wireless hardware with a software-defined architecture that makes it simple, fast and smart for the user to connect with their device of choice. We provide a comprehensive portfolio, including Campus Mobility and Data Center solutions, which allow our customers to deliver a positive and consistent experience to each and every user in their environment. As SDN excitement grew, the term software-defined was adopted by marketers and applied liberally to all kinds of products and technologies: software-defined storage, software-defined security, software-defined data center.
What technologies allow me to do this today?
Key Features: Loop free load balancing, density, L2 overlays
VXLAN fabric in EXOS / EOS
MLAG: L2 Leaf/Spine with two spine members
VPLS: L2 Leaf/Spine for HPC deployments
SPB-V: S/K-Series for small enterprise data center
Evolution ExtremeFabric: fully automated
Why VxLAN? It’s a really easy L2 over L3 transport
MLAG technology Leaf/Spine Fabric
MLAG is a special case of Leaf/Spine with only two spine members and everything on L2 (We kill the spanning tree and maintain state between the spines) – We’ve been leading in MLAG for a while
VPLS technology Leaf/Spine Fabric
We have successfully built VPLS mesh Leaf/Spine networks for HPC deployments
Key Features: Loop free load balancing, density, L2 overlays
We need more scale!
21.x / 22.x bring some interesting new features that fix this
NEW with 21.1: The Scalable Layer 2 Fabric with VxLAN Technology
VXLAN – Overlay on routing for efficient load balancing and reachability
OSPF extensions massively simplify deployment
The Layer 2 traffic tunnels over any Layer 3 network
Can be used in any topology, but highest performance is Leaf/Spine
Removes the limitation on transit overlay in the spine
Easy setup, small configuration
Available on X670-G2 and X770, S- and K-Series, and will be available on X870 at launch
Scale to 2592 10G ports (X670-G2-72, 1:1), 512 40G (X770, 1:1)
Available on EOS and EXOS NOW
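One plausible way to arrive at a figure like the 2592 x 10G ports quoted above (our illustration, not the vendor's sizing math): at 1:1 oversubscription a 72-port leaf devotes half its ports, 36, to servers, and 72 such leaves yield 2592 server-facing ports.

```python
def fabric_server_ports(leaf_ports: int, num_leaves: int,
                        oversubscription: float = 1.0) -> int:
    """Server-facing ports in a leaf/spine fabric. With oversubscription
    ratio o (downlink:uplink bandwidth), a leaf devotes o/(1+o) of its
    ports to servers; 1:1 (non-blocking) means an even half/half split."""
    downlinks = int(leaf_ports * oversubscription / (1 + oversubscription))
    return downlinks * num_leaves

print(fabric_server_ports(72, 72))       # 1:1, 72 leaves of 72 ports -> 2592
print(fabric_server_ports(72, 72, 3.0))  # relaxing to 3:1 -> 3888
```

The same arithmetic shows the usual trade-off: accepting oversubscription buys more server ports from the same hardware at the cost of contended uplinks.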
NEW with EXOS 22.x and EOS 8.81: Future Fabric Technology
Operationalizing EVPN in the Data Center: Part 2 (Cumulus Networks)
In the second of our two-part series on EVPN, Cumulus Networks Chief Scientist Dinesh Dutt dives into more technical details of network routing, EVPN use cases, and best practices for operationalizing EVPN in the data center.
To view the recording of this webinar, visit http://go.cumulusnetworks.com/l/32472/2017-09-23/95t7xh
Network Configuration Example: Configuring CoS to Support an MC-LAG on an FCo... (Juniper Networks)
This NCE provides a step-by-step procedure for configuring class of service (CoS) for Fibre Channel over Ethernet (FCoE) transit switch traffic across a multichassis link aggregation group (MC-LAG) that connects two QFX Series switches.
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more (inside-BigData.com)
In this deck from the 2018 Swiss HPC Conference, Erez Cohen from Mellanox presents: Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more.
"While InfiniBand, RDMA and GPU-Direct are an HPC mainstay, these advanced networking technologies are increasingly becoming a core differentiator for the data center. In fact, within just a few short years, where once only a handful of bleeding-edge industrial leaders emulated classic HPC disciplines, today almost every commercial market is adopting HPC technologies and disciplines en masse. Additionally, with the rampant adoption of demanding workloads like machine learning, cloud and on-premises providers are now deploying the same advanced networking technologies and delivering the same core capabilities and performance as traditional HPC environments. These same data centers embracing AI are also driving the increased adoption of complex technologies, including containers and virtualization, that must also be optimized for performance, profit and operational efficiency. In this talk we explore how high performance networking has emerged from HPC to become the critical path for the cloud, machine learning and much more."
Watch the video: https://wp.me/p3RLHQ-ixP
Learn more: http://mellanox.com
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This presentation discusses the design and evaluation of two open-source implementations of the LTE EPC, one based on SDN principles and the other based on NFV, and presents a performance comparison of the two approaches. Speaker: Mythili Vutukuru
Where Does Networking Fit In? To gain the full benefits of cloud computing and virtualization and achieve a business agile IT infrastructure, organizations need a reliable, high-performance data center networking infrastructure with built-in investment protection. Several technology inflection points are coming together that are fundamentally changing the way networks are architected, deployed and operated both in the public cloud as well as the private cloud. From performance, to scale, to virtualization support and automation to simplified orchestration, the requirements are rapidly changing and driving new approaches to building data center networks.
With Extreme Networks, IT can manage more with less. Automated intelligence and analytics for compliance, forensics, and traffic patterns translates into reduced help desk calls. Businesses can predict costs and return on investment, and increase employee productivity by securely onboarding BYOD, increasing both customer and employee satisfaction. A constant risk to the network, and ultimately the hospital, are unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how patient data is accessed.
What is SDN? What software-defined networking really means has evolved dramatically and now includes automation and virtualization. Hardware is still a critical component in data center networking equipment, but the influence of switch software shouldn’t be overlooked. When everyone began to get excited about SDN a few years ago, we thought of it as only one thing: the separation of network control from network data packet handling. Traditional networks had already started down this path, with the addition of controller cards to manage line cards in scalable chassis-based switches, and with various data center fabric technologies. SDN took the idea to its logical end, removing the need for the controller and the packet handlers to be on the same backplane or even from the same vendor.
This presentation by Westermo’s Technical Lead Engineers Dakota Diehl and Benjamin Campbell, is an integral part of the Westermo webinar on January 30th 2020, covering the basics of industrial networking. https://www.westermo.com/news-and-events/webinars/learn-the-basics-of-industrial-ethernet-communications
The webinar, including this presentation, aimed to teach the basics of industrial ethernet communications and computer networking. Starting from the ground up, it covered the basics of how network connections work, and how one computer talks to another.
And first of all, a chance to hone your skills. It's fine if you feel in over your head; we all did at some point. This next step is about pushing through that fear and getting ready to tackle something as hard as the 200-301. If you get stuck, reach out. If you see others stuck, help them.
How is this article going to help you? Apart from giving you a brief glimpse of the exam's topics and structure, we will also help you find efficient preparation materials. Cisco's website is a great starting point, but you shouldn't limit yourself to it. Even if you have never heard of them, you should try practice exam questions, as they may become your secret tool to a passing score on the 200-301. But now, let's start with the exam details.
Do not face your CCNA 200-301 exam without proper guidance, only to regret it later if you fail the real Cisco Certified Network Associate exam; many people have been there. Let us help you with your Cisco 200-301 CCNA real exam preparation, so you can be ready for your Cisco Certified Network Associate (CCNA) 200-301 exam.
Where is the 6 GHz beef?
The low number of channels available today forces users to share the available bandwidth and creates congestion: as each client station waits to transmit (or receive) data, devices, access points and stations contend for the same channel. To describe the impact of 6 GHz Wi-Fi, let us borrow the catchphrase "Where's the beef?". As a visual aid, begin with a hamburger bun with the 2.4 GHz and 5 GHz spectrum in the middle. The picture below may exaggerate a 20-year spectrum limitation, but the visual expresses the potential of the 6 GHz range to deliver the spectrum beef.
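A back-of-the-envelope way to quantify the "spectrum beef": the 6 GHz band adds roughly 1200 MHz of new spectrum. The sketch below ignores guard bands and regulatory exclusions, which trim the real counts slightly (e.g. 59 rather than 60 usable 20 MHz channels).

```python
def channels_in_band(band_mhz: int, channel_width_mhz: int) -> int:
    """Non-overlapping channels that fit in a band, ignoring guard bands
    and regulatory exclusions (real-world counts run slightly lower)."""
    return band_mhz // channel_width_mhz

# Roughly 1200 MHz of new 6 GHz spectrum:
print(channels_in_band(1200, 20))   # 60  (vs. just 3 non-overlapping in 2.4 GHz)
print(channels_in_band(1200, 80))   # 15  wide channels finally become plentiful
print(channels_in_band(1200, 160))  # 7
```

The 80 and 160 MHz rows are the point: wide channels were scarce enough in 5 GHz that neighbors collided constantly, while 6 GHz makes them practical to deploy densely.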
The next generation ethernet gangster (part 3) (Jeff Green)
The original competitors in the Ethernet market remind me of gang members who each had their unique advantages to win over their turf. Over the past few years, Extreme assembled seven gangers from a variety of backgrounds with their strengths to perform a mission and deliver a new level of value to our customers. Extreme has adopted a gangster strategy going against the grain of the market leader. So far, the gangster strategy has been a winning strategy. When market leaders were proposing proprietary solutions, Extreme went open Linux with "superspec." When they pushed DNA and its additional complexity, Extreme responded by re-thinking the way networks are designed, deployed, and managed without vendor lock-in. Finally, when they tied services and licensing together with Cisco One, Extreme responded with added flexibility in licensing, services, and Extreme-as-a-service.
The next generation ethernet gangster (part 2) (Jeff Green)
The next generation ethernet gangster (part 1) (Jeff Green)
The next generation ethernet gangster (part 3) (Jeff Green)
Today Extreme can be more aggressive, with confidence in knowing we can compete with anyone in the market. As the #1 market alternative, there are three critical reasons for including Extreme in your technology considerations: our end-to-end portfolio, our fabric, and our customer service. We are moving Extreme from a reactive, tactical vendor to a proactive, strategic partner. When Extreme gets a seat at the table and we bring our unique “sizzle,” we are the customer’s choice. Our customer retention rate is unmatched in the industry, according to Gartner.
Jeff Green
Extreme Networks
jgreen@extremenetworks.com
Mobile (772) 925-2345
https://prezi.com/view/BFLC71PVkoYVKBOffPAv/
Fortinet Firewall Integration - User to IP Mapping and Distributed Threat Response
· Accurate User ID to IP mapping eliminates potential attacks and provides reliable, out-of-the-box user information to firewalls
· Improves security by blocking/limiting user access at the point of entry without impacting other users
· More accurate network mapping for dynamic policy enforcement and reporting
In an industry that’s already well defined, Extreme Networks’ recent announcement of the Automated Branch is a significant advance in networking. For the first time, all the essential technologies, products, procedures and support are gathered together and integrated. All too often, the piecemeal growth strategy historically applied in organizational network evolution results in too many tools, procedures, and techniques at work, precluding fast responsiveness, optimal operations staff productivity, and the degree of accuracy and efficiency required to keep end-users productive as well.
LANs are constantly evolving; build your XYZ Account network with that baked in…
Extreme Networks brings XYZ Account simplicity, agility, and optimized performance to your most strategic business asset. The data center is critically important to business operations in the enterprise, but often organizations have difficulty leveraging their data centers as a strategic business asset. At Extreme Networks, we focus on providing an Intelligent Enterprise Data Center Network that’s purpose-built for enterprise requirements. Our OneFabric Data Center Solution:
XoS “can be like an elastic Fabric” for XYZ Account Network…
Demand for application availability has changed how applications are hosted in today’s datacenter. Evolutionary changes have occurred throughout the various elements of the data center, starting with server and storage virtualization and network virtualization. Motivations for server virtualization were initially associated with massive cost reduction and redundancy but have now evolved to focus on greater scalability and agility within the data center. Data center focused LAN technologies have taken a similar path; with a goal of redundancy and then to create a more scalable fabric within and between data centers.
As vendors continue to tout networking architectures that decouple software from hardware, bare-metal switches are moving into the spotlight. These switches are built on merchant silicon and deliver a lower-cost, more flexible switching alternative. Extreme’s Purple Metal switches are open enough to allow our customers to choose their network architecture based on their specific needs without going all the way to bare metal. We believe in the disaggregation of traditional enterprise networking. Extreme uses merchant silicon versus custom ASICs. Custom ASICs have fallen behind; unless a vendor can build silicon that competes with merchant silicon, there’s no point in doing custom ASICs.
Audio video ethernet (avb cobra net dante)Jeff Green
AVB fits low-cost, small-form-factor products such as this microphone. The overall trend is that music no longer lives on shelves or in CD racks, but on hard drives in home computers, and increasingly in the cloud. This brings about its own unique problems, not in the encoding system used, or the storage technology, but in distributing the audio from the storage media to the speakers. AVB features are all enabled by global and port-level configuration. Connecting these elements is the AVB-enabled switch (in the graphic above, the Extreme Networks® Summit® X440). The role of the switch is to provide support for the control protocols. AVB is Ethernet’s next stage of convergence, delivering pitch-perfect audio and crystal-clear video seamlessly over the network.
IP/Ethernet is bringing simplicity and features to audio and video as it has brought to services like VoIP, Storage and many more
High quality, perfectly synchronized A/V until now has been difficult to maintain
Standards work by the IEEE on the AVB standard changes everything, creating interoperability and mass-market equipment pricing
Benefits of AVB - Delivers predictable latency and precise synchronization, maximizing the functionality of AV – time synchronization and quality of service
Reduced complexity and ease of use through interoperability between devices
Streamlines complex network set-up and management; the infrastructure negotiates and manages the network for optimal prioritized media transport
AV traffic can co-exist with non-AV traffic on same Ethernet infrastructure
Role-based control at the XYZ Account - XYZ Account can identify devices and apply policies based on device type, all the way down to the port and/or the AP. Policies can dynamically change based on the device a user is connecting with and where that user is located. Extreme Networks provides infrastructure to deliver customizable prioritization and scalable capacity via configurable and built-in intelligence, ensuring a comprehensive, superior quality experience. Furthermore, when deployed with ExtremeWireless, XYZ Account can configure the network to ensure applications receive the bandwidth they require, while still limiting or preventing high-speed streaming of music, video, or even games.
An alternative to the core/aggregation/access layer network topology has emerged, known as leaf-spine. In a leaf-spine architecture, a series of leaf switches form the access layer. These switches are fully meshed to a series of spine switches. This architecture, also known as a Distributed Core, has two main components: spine switches and leaf switches. You can think of the spine switches as the core, but instead of being a large, chassis-based switching platform, the spine is composed of many high-throughput Layer 3 switches with high port density. The mesh ensures that access-layer switches are no more than one hop away from one another, minimizing latency and the likelihood of bottlenecks between access-layer switches. When networking vendors speak of an Ethernet fabric, this is generally the sort of topology they have in mind.
Haven’t we spent the last few decades disaggregating data center architecture? And if so, what does disaggregation mean now – is it something different? Strictly speaking, to “disaggregate” means to divide a whole into its component parts.
Data Center Aggregation/Core Switch
The proposed solution must provide a high-density chassis based switch solution that meets the requirements provided below. Your response should describe how your offering would meet these requirements. Vendors must provide clear and concise responses, illustrations can be provided where appropriate. Any additional feature descriptions for your offering can be provided, if applicable.
• Must offer a chassis-based switch solution that provides eight I/O module slots, two management module slots and four fabric module slots. Must support a variety of I/O modules providing support for 1GbE, 10GbE, 40GbE and 100GbE interfaces. Please describe the recommended switching solution and the available I/O modules.
• Switch must offer switching capacity up to 20.48 Tbps. Please describe the performance levels for the recommended switching solution.
• Switch system must support high availability for the hardware preventing single points of failure. Please describe the high availability features.
• It is preferred that the 10 Gigabit Ethernet modules will also be able to accept standard Gigabit SFP transceivers. Please describe the capability of your switch.
• Must support N+1 redundant power supplies
• Must support N+1 redundant fan trays
• Must support a modular operating system that is common across the entire switching profile. Please describe the OS and advantages.
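As a quick sanity check on the chassis numbers above (our arithmetic, not the vendor's spec sheet): 20.48 Tbps spread across eight I/O slots leaves 2.56 Tbps of full-duplex capacity per slot, enough to drive, for example, a 24-port 100GbE module at line rate.

```python
def slot_budget_tbps(switching_capacity_tbps: float, io_slots: int) -> float:
    """Switching capacity available per I/O slot if shared evenly."""
    return switching_capacity_tbps / io_slots

per_slot = slot_budget_tbps(20.48, 8)
print(round(per_slot, 2))          # 2.56 Tbps per slot
# A hypothetical 24 x 100GbE module needs 2.4 Tbps, so it fits at line rate:
assert 24 * 0.1 <= per_slot
```

This is the kind of arithmetic worth running against every response to the RFP: a chassis whose per-slot budget is below the sum of its module's port speeds is oversubscribed by design.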
If the number of spine switches were merely doubled, the effect of a single switch failure would be halved. With 8 spine switches, a single switch failure causes only a 12% reduction in available bandwidth. So, in modern data centers, people build networks with anywhere from 4 to 32 spine switches. With a leaf-spine network, every server on the network is exactly the same distance away from all other servers – three port hops, to be precise. The benefit of this architecture is that you can just add more spines and leaves as you expand the cluster and you don't have to do any recabling. You will also get more predictable latency between the nodes.
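The resilience arithmetic above is easy to verify: with leaf uplinks spread evenly across N spines by ECMP, one failed spine removes 1/N of the uplink bandwidth.

```python
def bandwidth_after_failure(num_spines: int, failed: int = 1) -> float:
    """Fraction of leaf uplink bandwidth remaining when `failed` spines
    are down, assuming ECMP spreads uplinks evenly across all spines."""
    return (num_spines - failed) / num_spines

print(bandwidth_after_failure(2))   # 0.5   -> two spines: lose half
print(bandwidth_after_failure(4))   # 0.75  -> doubling spines halves the hit
print(1 - bandwidth_after_failure(8))  # 0.125 -> the ~12% figure quoted above
```

The 12% figure in the text is this 1/8 = 12.5% loss rounded down; the same formula shows why 16 or 32 spines make any single failure nearly invisible.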
As a trend, disaggregation seems to be most useful for very large companies like Facebook and Google, or cloud providers. The technology does not necessarily have significant implications for small- or medium-sized businesses. Historically, however, technology has a way of trickling down from the pioneering phase of existing only within large companies with tremendous resources to becoming more standardized across the board.
An experience is a personal and emotional event we remember. Every experience is established based upon pre-determined expectations we conceive and create in our minds. It’s personal, and therefore, remains a moving and evolving target in every scenario. When our experience concludes and the moment has passed, the outcome remains in our memory. Think about what makes you happy when connecting with your own device, and then think about what makes you really upset when things are hard, complicated, and slow. If the user has a bad experience in any one of these areas (simple, fast, and smart), they are likely to leave, share their negative experience, and potentially never return. Users might forget facts or details about their computing environment, but they find it difficult to forget the feeling behind a bad network experience. When something goes wrong with the network or an application, do you always get the blame?
So what can ultra-low, consistent latency deliver? Low latency is a requirement for intensive, time-critical applications. Latency is measured on a port-to-port basis: once a frame is received on an ingress port, how long does it take the frame to pass through the internal switching infrastructure and leave an egress port? The Summit X670 top-of-rack switch supports latency of around 800-900 nanoseconds, while the BlackDiamond chassis, BDX8, can switch frames in as little as 3 microseconds. We’re big believers in the value of disaggregation – of breaking down traditional data center technologies into their core components so we can build new systems that are more flexible, more scalable, and more efficient. This approach has guided Facebook from the beginning, as we’ve grown and expanded our infrastructure to connect more than 1.28 billion people around the world.
Flatter networks. Traditional data center networks have a minimum of three tiers: top of rack (ToR), aggregation and core. Often, there is more than one aggregation tier, meaning the data center could have three or more network tiers. When network traffic is primarily best effort, this is sufficient. But as more mission-critical, real-time traffic flows into the data center, it becomes critical that organizations move to two-tier networks.
An increase in east-west traffic flows. Legacy data center networks are designed for traffic to flow from the edge of the network into the core and then back to the edge in a north-south direction. Today, however, factors such as workforce mobility, Hadoop, big data and other applications are driving east-west traffic flows from server to server.
Virtualization of other IT assets. Historically, compute resources such as processor, memory and storage were resident in the server itself. Over time, more and more of these resources are being put into “pools” that can be accessed on demand. In this case, the data center network becomes a “fabric” that acts as the backplane for the virtualized data center.
In today’s Experience Economy, networks must provide a great user experience, meeting each individual’s personal expectations. Users do not care about what happens behind the scenes to make everything work; in fact, users don’t even consider it until something breaks. People living in today’s Experience Economy care about simply connecting to a video, where the network is smart enough to remember who they are without a lot of hassle connecting, and then providing a blisteringly fast connection so that there is no interruption to the video stream.
Where Does Networking Fit In? To gain the full benefits of cloud computing and virtualization and achieve a business agile IT infrastructure, organizations need a reliable, high-performance data center networking infrastructure with built-in investment protection. Several technology inflection points are coming together that are fundamentally changing the way networks are architected, deployed and operated both in the public cloud as well as the private cloud. From performance, to scale, to virtualization support and automation to simplified orchestration, the requirements are rapidly changing and driving new approaches to building data center networks.
With Extreme Networks, IT can manage more with less. Automated intelligence and analytics for compliance, forensics, and traffic patterns translates into reduced help desk calls. Businesses can predict costs and return on investment, and increase employee productivity by securely onboarding BYOD, increasing both customer and employee satisfaction. A constant risk to the network, and ultimately the hospital, are unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how patient data is accessed.
What is SDN? What software-defined networking really means has evolved dramatically and now includes automation and virtualization. Hardware is still a critical component in data center networking equipment, but the influence of switch software shouldn’t be overlooked. When everyone began to get excited about SDN a few years ago, we thought of it as only one thing: the separation of network control from network data packet handling. Traditional networks had already started down this path, with the addition of controller cards to manage line cards in scalable chassis-based switches, and with various data center fabric technologies.
Large venues like stadiums or concert halls are challenging environments for Wi-Fi deployments. Most of today’s phones and tablets carry Wi-Fi interfaces, so a safe assumption is that at least one device per person in a stadium has one. Monetizing those Wi-Fi interfaces with real-time information about the event in the venue, targeted advertising, internet access, and multimedia and social applications can create new revenue for the owner of the venue, if executed properly.
9.) Audio Video Ethernet (AVB, CobraNet, Dante)
Replacing a crossbar switch with ‘virtual’ IP packet switching – the ability to expand video-over-IP systems ‘one piece at a time’ and the decentralized nature of the matrix make the technology very compelling for any size or scope of AV project. AV-over-IP is the transport of AV signals over a standard Ethernet network, including:
HD Video (e.g. HDMI, DVI)
Audio
Control Signals (e.g. IR)
Peripheral Signals (e.g. USB)
Does Dante require special switches? No. We strongly recommend that Gigabit switches be used due to the clear advantages in performance and scalability.
Does Dante require a dedicated network infrastructure? No, a dedicated network infrastructure is not required. Dante-enabled devices can happily coexist with other equipment making use of the network, such as general purpose PCs sending and receiving email and other data.
Does Dante require any special network infrastructure? No, special network infrastructure is not required. Since Dante is based upon universally accepted networking standards, Dante-enabled devices can be connected using inexpensive off-the-shelf Ethernet switches and cabling.
What features are important when purchasing a switch? Dante makes use of standard Voice over IP (VoIP) Quality of Service (QoS) switch features, to prioritize clock sync and audio traffic over other network traffic. VoIP QoS features are available in a variety of inexpensive and enterprise Ethernet switches. Any switches with the following features should be appropriate for use with Dante:
Gigabit ports for inter-switch connections
Quality of Service (QoS) with 4 queues
Diffserv (DSCP) QoS, with strict priority
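As a rough illustration of how strict-priority queueing orders this traffic: the sketch below assumes the DSCP values commonly cited in Dante guidance (56 for clock sync, 46 for audio) – verify them against current Audinate documentation before relying on them.

```python
# Hedged sketch of strict-priority dequeueing by DSCP, the behavior a
# Dante-friendly switch applies so clock sync and audio preempt email.
# DSCP values here are assumptions from commonly cited Dante guidance.
import heapq

DSCP_PRIORITY = {56: 0, 46: 1, 8: 2, 0: 3}  # lower number = higher priority

def dequeue_order(packets):
    """Return packet names in strict-priority order (ties keep arrival order)."""
    heap = [(DSCP_PRIORITY.get(dscp, 3), i, name)
            for i, (name, dscp) in enumerate(packets)]
    heapq.heapify(heap)
    return [name for _, _, name in
            [heapq.heappop(heap) for _ in range(len(heap))]]

arrivals = [("email", 0), ("audio", 46), ("clock-sync", 56)]
print(dequeue_order(arrivals))  # ['clock-sync', 'audio', 'email']
```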
Totally new to AV over IT? This may help. If you have worked with any of the popular protocols, your time is better spent in other sessions. AV-over-IT methods vary in how they apply the OSI model. Audio Networking – one RJ45 connector and Cat 5 cable for dozens of signal paths. Switches can provide hardware time stamping, which allows synchronization, offsets, and corrections – all covered in IEEE 1588.
Ethernet Timing & Priority Standards - All audio over Ethernet protocols require Priority, Sequence, & Sync
Differentiated Services / Quality of Service (DiffServ, QoS)
Priority by data type (Clock Sync and Audio Packets over Email)
Traffic prioritized based upon tags in IP Header (Layer 3)
Priority number assigned by the managed switch to each packet
Real-time Transport Protocol (RTP)
Keeps data sequenced in the right order
Time stamp on UDP header
Works with RTCP (Real Time Control Protocol) for QoS and Sync
Variation: RTSP (Real Time Streaming Protocol) works on TCP and not UDP
Does not reserve resources or provide for quality of service
Precision Timing Protocol (PTP)
IEEE 1588
Sub-microsecond accuracy to synchronize subnets
Layer 2 - Switches provide hardware-based time stamping
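That sub-microsecond synchronization rests on a simple exchange of four timestamps between master and slave. A sketch of the standard offset/delay arithmetic, with made-up numbers and assuming a symmetric path:

```python
# Sketch of the IEEE 1588 offset/delay arithmetic (illustrative numbers).
# t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Assumes a symmetric path; returns (slave clock offset, one-way delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 100 time units ahead; true one-way delay is 10.
offset, delay = ptp_offset_and_delay(t1=1000, t2=1110, t3=1200, t4=1110)
print(offset, delay)  # 100.0 10.0
```

The hardware time stamping mentioned above matters because these formulas only recover the true offset when t1-t4 are captured at the wire, not after variable software queueing.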
1. Where Are We Coming From? L2 Bridged Networks
L2 networks did not scale. Why?
1. The MAC address – L2 addressing = MAC address. The MAC address is a flat address with no summarization or hierarchy possible.
2. No scalable control plane – with no addressing hierarchy possible, it was not possible to have a link-state protocol for L2 networks that could scale.
3. No L2 OAM tools.
4. Limited virtualization – only 802.1Q VLAN tagging.
2. SPB Provides Massive Simplification – Extreme L2 SPB Networks
Now an L2 SPB network scales:
1. MAC-in-MAC encapsulation – IEEE 802.1ah standard; removes current MAC address scalability limitations; separate customer vs. backbone demarcation.
2. Scalable control plane – IEEE 802.1aq standard; uses the IS-IS routing protocol, which works at L2.
3. L2 OAM tools – IEEE 802.1ag standard; Connectivity & Fault Management (CFM); used for OAM.
4. Designed for virtualization – 802.1ah introduces a Service ID (I-SID) which can scale to 16 million services.
The IP/SPB and SPBm protocol infrastructure runs over the Ethernet physical infrastructure. The two layers are horizontally independent: connectivity services are independent from the infrastructure, unlike the traditional protocol stack.
3. Today’s Network Using STP (Layer 2)
Some sort of loop prevention must be used, i.e. Spanning Tree, and enabled on all switches.
Spanning Tree will block ports based on cost to the root bridge – all available paths cannot be used.
With 50 MAC addresses on each side, 100 MAC addresses are learned on all switches!
VLAN and port members must be provisioned on all switches.
4. SPB
No Spanning Tree in the SPB core. Customer VLANs and services are provisioned only at the edge of the network.
VLAN provisioning is only required at the edge of the network: as simple as adding a VLAN and local ports and assigning a Service Identifier. Customer MAC learning happens only at the edge of the network; the core never learns C-MACs (MAC learning and flooding only at the edge, NOT in the core), so the core holds zero end-user MAC addresses.
6. Slide 6
Student Objectives
Upon completion of this module, you will be able to:
Describe transparent bridging.
Describe the flooding and learning port states.
Describe the forwarding and filtering port state.
Describe the forwarding database.
Identify the various FDB entry types.
Manage forwarding database entries.
Configure egress flooding.
Configure and verify the limit-learning feature.
Configure and verify the lock-learning feature.
Configure the Extreme link status monitor.
7. ISO Seven Layer Reference Model
Slide 7
L7 – APPLICATION
L6 – PRESENTATION
L5 – SESSION
L4 – TRANSPORT
L3 – NETWORK
L2 – DATA LINK
L1 – PHYSICAL
Layer | Description
7 | Application-level access to the network, file transfer, remote terminals
6 | Translation of data structures between differing architectures
5 | Provides for dialogue control between processes
4 | Provides for end-to-end connections between machines
3 | Where routing takes place
2 | Defines protocols for exchanging data frames
1 | Defines the standards for physical connections (the wire)
8. Slide 8
Collision Domain
All hosts accessing the same physical media
Host packets capable of colliding with each other
Shared Medium – A common Ethernet cable
9. Slide 9
Carrier Sense Multiple Access with Collision
Detection (CSMA/CD)
Carrier Sense
• Hosts sense if there is any current transmission in progress.
• If there is a transmission in progress, hosts wait until it is finished.
Multiple Access
• Multiple hosts can participate in the same domain / share the same media.
Collision Detection
• Two or more hosts can still transmit at exactly the same instant,
believing the media to be free.
• If a collision occurs:
The host sends a jamming signal to prevent any further transmission.
It waits a random amount of time before trying to retransmit.
• Allowed to retry up to 16 times.
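The retry behavior above is Ethernet's truncated binary exponential backoff; a minimal sketch, using the classic textbook values (wait window capped at 2^10 slots, 51.2 µs slot time for 10 Mb/s Ethernet):

```python
# Sketch of truncated binary exponential backoff: after the n-th
# collision a host waits a random number of slot times chosen from
# 0 .. 2^min(n, 10) - 1, and gives up after 16 attempts.
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mb/s Ethernet, in microseconds

def backoff_slots(attempt: int) -> int:
    """Random slot count to wait after the given collision attempt (1-based)."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)
    return random.randrange(2 ** k)

for attempt in (1, 2, 3):
    slots = backoff_slots(attempt)
    print(f"collision {attempt}: wait {slots} slots "
          f"({slots * SLOT_TIME_US:.1f} us)")
```

Note how the widening random window is what makes repeated collisions between the same two hosts increasingly unlikely.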
10. Slide 10
Transparent Bridges Used for LAN Segmentation
Bridges widely used to segment Ethernet collision domains
Switches perform the bridge segmentation function in hardware
[Diagram: Before – IPX and UNIX hosts share one segment with excessive delays. After – a bridge segments the IPX and UNIX traffic, giving acceptable delays and low utilization.]
11. Slide 11
802.1d Transparent Bridges
Used in Ethernet Networks
A talks to B – Packet remains in 1st collision domain
A talks to C – Bridge forwards packet to 2nd collision domain
A switch performs the bridging function in hardware
MAC Address based lookup table
[Diagram: hosts A and B (UNIX) in Collision Domain 1 and hosts C and D (IPX) in Collision Domain 2, connected by a bridge.]
12. Slide 12
Ethernet Frames
A bridge learns host locations from Source MAC address.
It makes forwarding decisions based on Destination MAC address.
Ethernet Frame:
Destination MAC (6 bytes) | Source MAC (6 bytes) | Type/Length (2 bytes) | Data – Payload/Padding (46 to 1500 bytes) | CRC (4 bytes)
64 bytes minimum; 1518 bytes maximum.
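The header fields a bridge reads can be unpacked directly from raw bytes; a small sketch using a made-up sample frame:

```python
# Sketch: unpacking the Ethernet header fields (destination MAC, source
# MAC, Type/Length) from raw bytes. The sample frame is illustrative.
import struct

def parse_ethernet_header(frame: bytes):
    """Return (dest_mac, src_mac, type_or_length) from a raw frame."""
    if len(frame) < 14:
        raise ValueError("frame shorter than the 14-byte Ethernet header")
    dst, src, type_len = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), type_len

frame = (bytes.fromhex("000130000001") +  # destination MAC
         bytes.fromhex("00013000000b") +  # source MAC
         b"\x08\x00" +                    # Type: 0x0800 (IPv4)
         b"\x00" * 46)                    # minimum payload/padding
print(parse_ethernet_header(frame))
# -> ('00:01:30:00:00:01', '00:01:30:00:00:0b', 2048)
```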
13. Slide 13
Bridge Functions
The bridge can be performing one of four functions:
• Flooding, Learning, Forwarding, Filtering
14. Slide 14
Flooding
In a newly configured network, host “0B” initiates communication with host “1E”. Because the destination is unknown, the packet is flooded to all of the interfaces, and host “0B” is learned on the inbound port.
Frame: 1E | 0B | T/L | Payload/Padding | CRC
Forwarding Table: 00:01:30:00:00:0B – Port 1 – Timer 0-300s
15. Slide 15
Forwarding
Host “1E” replies to host “0B”, and the packet is forwarded onto the destination port learned for “0B”. At the same time, the MAC address for “1E” is learned and added to the bridge table.
Frame: 0B | 1E | T/L | Payload/Padding | CRC
Forwarding Table: 00:01:30:00:00:1E – Port 6 – Timer 0-300s; 00:01:30:00:00:0B – Port 1 – Timer 0-300s
16. Slide 16
Filtering
When the destination MAC address matches the inbound port, the switch drops the packet at the port. This reduces traffic on the other ports within the broadcast domain (VLAN) and optimizes performance.
Frame: 0B | 0A | T/L | Payload/Padding | CRC
Forwarding Table: 00:01:30:00:00:0A – Port 1 – Timer 0-300s; 00:01:30:00:00:0B – Port 1 – Timer 0-300s
17. Slide 17
Forwarding Database
Maintains a record of the location of each of the host MAC
addresses.
Enables the switch to make forwarding decisions.
Entries are added dynamically by associating the source MAC field
of the Ethernet frame with the port number.
Has statically added entries. The administrator manually enters
MAC and port number fields.
Also known as the bridge table or FDB.
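The four bridge functions and the FDB can be modeled in a few lines; a minimal sketch (illustrative only, not ExtremeXOS internals), reusing the host names from the slides above:

```python
# Minimal learning-bridge sketch: one dict plays the role of the FDB, and
# forward() applies the four bridge functions -- learning, flooding,
# forwarding, and filtering.
class LearningBridge:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.fdb = {}  # MAC address -> port number

    def forward(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame egresses on."""
        self.fdb[src_mac] = in_port            # learning
        out_port = self.fdb.get(dst_mac)
        if out_port is None:                   # unknown destination: flood
            return [p for p in range(self.num_ports) if p != in_port]
        if out_port == in_port:                # same segment: filter (drop)
            return []
        return [out_port]                      # known destination: forward

bridge = LearningBridge(num_ports=6)
print(bridge.forward("0B", "1E", 1))  # 1E unknown -> flooded to ports 0,2,3,4,5
print(bridge.forward("1E", "0B", 6))  # 0B known on port 1 -> forwarded: [1]
print(bridge.forward("0A", "0B", 1))  # 0B on the inbound port -> filtered: []
```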
18. Slide 18
Forwarding Database Illustrated
L2 address entries consists of:
• MAC address, Port / Port ID, VLAN ID
FDB
20. FDB Entry Types
Dynamic entries
• Initially, all entries in the database are dynamic
Static entries
• Non-aging entries
Entries with an aging timer set to zero
• Permanent entries
Entered through the CLI and saved as permanent
Retained in the database after reset/power off
• Black hole entries
Created statically by the administrator
Created automatically by security features such as lock-learning
Configures FDB with specified source and/or destination MAC address to be
discarded
Slide 20
22. Displaying the FDB Table
To display the contents of the layer 2 Forwarding Database, use the
show fdb command:
show fdb
Slide 22
Results show MAC, VLAN, Age, Flags, and Port of each entry.
23. Adding Entries to the FDB
To add a static entry to the FDB, use the create fdbentry command:
create fdbentry <mac_addr> vlan <vlan_name>
[ports <port_list> | blackhole]
• Allows you to add a standard or blackhole entry to the FDB
Example commands
• Add a permanent static entry to the FDB:
create fdbentry 00:E0:2B:12:34:56 vlan finance port 3:4
• Add a black hole entry to the FDB:
create fdbentry 00:E0:2B:12:34:56 vlan finance blackhole
• Verify the results of the above commands:
show fdb
Slide 23
24. Removing Entries from the FDB
To remove static entries from the FDB, use the delete fdbentry
command:
delete fdbentry [all | <mac_address> [vlan <vlan name>]
To remove dynamic or black hole entries from the FDB, use the
clear fdb command:
clear fdb {<mac_address> | blackhole | ports <portlist> |
vlan <vlan name>}
Examples:
• Remove a permanent entry from the FDB:
delete fdbentry 00:E0:2B:12:34:56 vlan default
• Remove a dynamic entry from the FDB:
clear fdb 00:E0:2B:12:34:56
• To verify the results of the delete fdbentry or clear fdb command:
show fdb
Slide 24
26. Configuring MAC Address Learning
To control if a switch learns the source addresses of incoming packets,
use the disable / enable learning command.
Determines if the source MAC address of incoming packets will be added
to FDB.
• Defines if incoming packets with unknown source MAC addresses are dropped or
forwarded to the appropriate egress ports.
MAC address learning is enabled by default and is configured on a per-port
basis.
Examples
• To only forward packets with static FDB entries on port 5:
disable learning drop-packets port 5
• To forward all packets received on this port:
disable learning forward-packets port 5
• To view the MAC address learning configuration on port 5. The lowercase m flag
indicates that MAC address learning is enabled.
show ports 5 information
Slide 26
27. Configuring the FDB Aging Time
To configure how long the FDB maintains a dynamic entry in the FDB, use
the configure fdb agingtime command:
configure fdb agingtime <seconds>
• Default: 300 seconds (5 minutes)
• Range: 15 - 1,000,000 seconds
• A value of 0 indicates that entries should never be aged out
• The timer is restarted when a packet with a matching source MAC address is received
on the same port.
Examples
• To change the FDB agetime to an hour:
configure fdb agingtime 3600
• To ensure no entries in the FDB age out:
configure fdb agingtime 0
• To verify the agingtime value:
show fdb
Slide 27
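The aging behavior above can be sketched as follows (an illustrative model of the rule, not ExtremeXOS internals):

```python
# Sketch of FDB aging: a dynamic entry is removed once agingtime seconds
# pass without a frame from that source MAC arriving on the same port.
AGING_TIME = 300  # default agingtime, in seconds

def expired_entries(fdb, now):
    """fdb maps MAC -> (port, last_seen); return MACs due for removal."""
    return [mac for mac, (port, last_seen) in fdb.items()
            if now - last_seen >= AGING_TIME]

fdb = {"00:E0:2B:12:34:56": (3, 100),   # last seen at t=100s
       "00:E0:2B:AA:BB:CC": (5, 350)}   # refreshed at t=350s
print(expired_entries(fdb, now=420))    # only the stale entry ages out
```

Receiving a frame would simply overwrite `(port, last_seen)` for that MAC, which is the "timer is restarted" behavior described above.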
29. Describing Layer 2 Security Features
ExtremeXOS has three features that enhance Layer 2 security
• Egress Flood Control
Determines whether broadcast, multicast, or unknown unicast packets are
flooded.
• Limit-Learning
Limits the number of devices that can be learned.
• Lock-Learning
Freezes the FDB entries on a port / VLAN basis.
Once enabled, this feature does not allow new MAC address entries to be added
dynamically.
Configured by port or port / VLAN
• Egress Flooding Control - Port
• limit-learning - Port / VLAN
• lock-learning - Port / VLAN
Slide 29
30. Egress Flood Control
ExtremeXOS enables you to
manage the types of packets that
get flooded out to the network.
Egress flooding takes action on a
packet based on the packet
destination MAC address.
By default, egress flooding is
enabled.
You can enhance security and
privacy as well as improve
network performance by disabling
Layer 2 egress flooding on some
packets.
Slide 30
Disabling multicasting egress flooding does not affect those packets within an IGMP membership group
[Diagram: an EXOS switch (access VLAN) with Client 1 on access port 1, Client 2 on access port 2, and an uplink on port 3 to an ISP firewall / security proxy. With all_cast flooding disabled, clients will only see known unicast packets.]
31. Configuring Egress Flood Control
To control egress flooding, use the enable / disable flooding
command with the port option.
Examples
• To disable flooding of unknown unicast packets on port 1:
disable flooding unicast port 1
• To enable flooding of broadcast packets on all ports:
enable flooding broadcast port all
• To verify egress flooding configuration on port 1:
show port 1 info detail
Slide 31
The broadcast, multicast, and unicast parameters are available only on the BlackDiamond 8800 series switches,
SummitStack, and the Summit family of switches.
32. Configuring Limit-Learning
This security feature allows you to limit the number of MAC
addresses that can be dynamically-learned by using the configure
ports command with the limit-learning option:
• Allows the first N number of hosts.
• All hosts thereafter are denied access.
The traffic is blocked as a black hole entry.
Both ingress and egress.
• Based on source MAC address
Examples
• To limit the number of MAC addresses learned on port 1 for VLAN
accounting to three entries:
configure ports 1 vlan accounting learning-limit 3
• To remove the learning limit from port 1 for VLAN accounting:
configure ports 1 vlan accounting unlimited-learnings
Slide 32
[Diagram: the FDB on port 1 limited to three entries (MAC 1, MAC 2, MAC 3).]
33. Configuring Lock-Learning
To lock entries in the FDB, use the configure ports command with
the lock-learning option:
• The entries in the FDB are frozen into a locked static state.
• New dynamic FDB entries are inserted as black hole entries.
• You can either limit dynamic MAC FDB entries, or lock down the current
MAC FDB entries per port/VLAN, but not both.
Examples:
• To lock the FDB entries associated with port 4 and the accounting VLAN:
configure ports 4 vlan accounting lock-learning
• To unlock the FDB entries associated with port 4 and the accounting VLAN:
configure ports 4 vlan accounting unlock-learning
Slide 33
36. Extreme Link Status Monitoring (ELSM)
Extreme Networks' proprietary protocol that monitors network
health by detecting CPU and remote link failures
Detects switch CPU failures that could result in an ESRP or EAPS
loop in the network
Operates on a point-to-point basis and is configured on both sides
of the peer connections
When ELSM is down, data packets are neither forwarded nor
transmitted out of that port
Slide 36
37. Verifying Extreme Link Status Monitoring
show elsm ports 3
Slide 37
ELSM state can be Up, Down, Down-Wait, or Down-Stuck
38. Summary
You should now be able to:
Define transparent bridging.
Define the flooding and learning port states.
Define the forwarding and filtering port state.
Define the forwarding database.
Identify the various FDB entry types.
Manage forwarding database entries.
Configure egress flooding.
Configure and verify the limit-learning feature.
Configure and verify the lock-learning feature.
Configure the Extreme link status monitor.
Slide 38
39. Slide 39
Lab
Turn to the Layer 2 Forwarding Lab
in the ExtremeXOS™
Operations and Configuration - Lab Guide Rev. 12.1
and complete the hands-on portion of this module.
Imagine using our switching as a policy enforcement engine to manage your network. Extreme offers a carrier-class solution for the delivery of business and residential Ethernet services. Extreme Networks Metro Ethernet offerings enable service provider customers to offer a variety of business and residential Ethernet services on a resilient, high-performance, service-rich platform. Extreme switches use a hardware-based design, so the ISD will experience no performance penalty for running advanced features such as multicast, ACLs, and QoS. Extreme can deliver special service differentiation for the ISD.
The need for business continuity has placed a greater demand on today’s data networks – redundancy and reliability are imperative and the network must be able to support them. The network infrastructure must be able to achieve a high availability environment and continuous access to resources. For this reason, the networking industry has relied on the Spanning Tree Protocol (STP) in large Layer 2 networks to provide a certain level of redundancy. However, STP has proven inadequate to provide the level of resiliency required for real-time and mission critical applications. It is important to note that the entire industry has recognized that a new technology is needed to replace STP and many vendors are in the process of developing pre-standard technologies to meet that requirement.
The control plane is the part of the router architecture that is concerned with drawing the network topology, or the information in a (possibly augmented) routing table. In most cases, the routing table contains a list of destination addresses and the outgoing interface(s) associated with them. Control plane logic also can define certain packets to be discarded, as well as preferential treatment of certain packets for which a high quality of service is defined by such mechanisms as differentiated services.
A major function of the control plane is deciding which routes go into the main routing table. "Main" refers to the table that holds the unicast routes that are active. Multicast routing may require an additional routing table for multicast routes. Several routing protocols (e.g., IS-IS, OSPF, and BGP) maintain internal databases of candidate routes, which are promoted when a route fails or when a routing policy is changed.
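The promotion of candidate routes into the main table can be sketched as follows. This is an illustrative Python model only, not EXOS code; the protocol names and administrative distances are invented for the example.

```python
# Illustrative sketch: promoting candidate routes into the main routing table.
# Per prefix, the candidate with the lowest administrative distance wins.

CANDIDATES = [
    # (prefix, next_hop, protocol, administrative_distance)
    ("10.0.0.0/8",    "192.0.2.1", "ospf",  110),
    ("10.0.0.0/8",    "192.0.2.9", "bgp",   200),
    ("172.16.0.0/12", "192.0.2.5", "is-is", 115),
]

def build_main_table(candidates):
    """Keep, per prefix, the candidate with the lowest distance (the active route)."""
    table = {}
    for prefix, next_hop, proto, dist in candidates:
        best = table.get(prefix)
        if best is None or dist < best[2]:
            table[prefix] = (next_hop, proto, dist)
    return table

main_table = build_main_table(CANDIDATES)
# 10.0.0.0/8 is active via OSPF (distance 110); if the OSPF route fails,
# rebuilding the table promotes the BGP candidate.
```

Re-running the selection over the surviving candidates is exactly the "promoted when a route fails" behavior described above.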
Service providers began building metro Ethernet networks in the late 1990s to provide a cost-effective alternative to TDM-based leased lines and legacy switching technologies such as ATM and frame relay. Initially, they paid little attention to the issue of scaling the metro, because the networks were new and had few subscribers and small amounts of traffic. Since then, the popularity of metro Ethernet has grown tremendously, and leading analysts predict that 20% annual growth will continue in the coming years. To prepare their networks for the onset of many new subscribers and ever-rising volumes of traffic, service providers must be ready to scale today.
Carrier Ethernet networks are typically composed of three-tier systems, with switching equipment located at the customer edge, provider edge, and provider aggregation. Not all networks will use all three tiers. For example, an IPTV network may be deployed using only a provider aggregation switch at a provider point of presence, skipping the provider edge. The provider edge is the Central Office (CO) used for service delivery. There is some crossover between provider edge and provider aggregation. Depending on the size of the network and the physical geography of the deployment, a service provider may do aggregation at either the provider edge or at a larger provider aggregation site. The customer edge includes building basements and wiring closets where switches are deployed for business services, as well as multi-tenant apartment buildings for residential services.
The forwarding plane, sometimes called the data plane or user plane, defines the part of the router architecture that decides what to do with packets arriving on an inbound interface. Most commonly, it refers to a table in which the router looks up the destination address of the incoming packet and retrieves the information necessary to determine the path from the receiving element, through the internal forwarding fabric of the router, and to the proper outgoing interface(s). The IP Multimedia Subsystem architecture uses the term transport plane to describe a function roughly equivalent to the routing control plane. In certain cases, the table may specify that a packet is to be discarded. In such cases, the router may return an ICMP "destination unreachable" or other appropriate code. Some security policies, however, dictate that the router should drop the packet silently, in order that a potential attacker does not become aware that a target is being protected.
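The lookup-and-decide behavior described above can be sketched with a minimal longest-prefix-match table. This is an illustrative Python model, not router internals; the FIB contents here are invented for the example.

```python
# Illustrative forwarding-plane sketch: longest-prefix match on the destination
# address, returning the egress interface, or None for a discard decision.
import ipaddress

FIB = {
    ipaddress.ip_network("10.0.0.0/8"):   "port3",  # uplink
    ipaddress.ip_network("10.1.0.0/16"):  "port1",  # more-specific route
    ipaddress.ip_network("192.0.2.0/24"): None,     # None = discard the packet
}

def forward(dst: str):
    """Return the egress interface for dst, or None to drop the packet."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    if not matches:
        return None  # no route: drop (optionally return ICMP unreachable)
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return FIB[best]
```

For example, a packet to 10.1.2.3 egresses on port1 (the /16 beats the /8), while a packet to 192.0.2.10 is discarded, either with an ICMP error or silently, per policy.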
IEEE 802.1Q Data Plane Actions for XYZ Account… Providers are confronted with two distinct facets to metro Ethernet scalability. The first is subscriber scalability: the ability to seamlessly add large numbers of customers to the network without affecting operation. In general, the passage from the input interface directly to an output interface, through the fabric with minimum modification at the output interface, is called the fast path of the switch. If the packet needs significant processing, such as segmentation or encryption, it may go onto a slower path, which is sometimes called the services plane of the router. Service planes can make forwarding or processing decisions based on higher-layer information, such as a Web URL contained in the packet payload. The outgoing interface will encapsulate the packet in the appropriate data link protocol. Depending on the router software and its configuration, functions, usually implemented at the outgoing interface, may set various packet fields, such as the DSCP field used by differentiated services.
A Data Center with SPB and SDN Control for XYZ Account… A further consequence of SPBM's transparency in both the data plane and control plane is that it provides a perfect, "no compromise" delivery of the complete MEF 6.1 service set. This includes not only E-LINE and E-LAN constructs, but also E-TREE (hub-and-spoke) connectivity.
SPBV supports shortest path trees but SPBV also builds a spanning tree which is computed from the link state database and uses the Base VID. This means that SPBV can use this traditional spanning tree for computation of the Common and Internal Spanning Tree (CIST). The CIST is the default tree used to interwork with other legacy bridges. It also serves as a fall back spanning tree if there are configuration problems with SPBV. SPBV has been designed to manage a moderate number of bridges.
SPBM offers both the ideal multicast replication model, where packets are replicated only at fork points in the shortest path tree that connects members, and the less state intensive head end replication model where serial unicast packets are sent to all other members along the same shortest path first tree. These two models are selected by specifying properties of the service at the edge which affect the transit node decisions on multicast state installation.
This allows for a trade-off to be made between optimum transit replication points (with their larger state costs) vs. reduced core state (but much more traffic) of the head end replication model. These selections can be different for different members of the same Individual Service ID (I-SID) allowing different trade-offs to be made for different members.
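The trade-off described above can be made concrete with a small back-of-the-envelope sketch. This is illustrative Python with an idealized tree model, not actual SPBM behavior; the numbers are invented.

```python
# Back-of-the-envelope comparison of the two SPBM replication models for one
# I-SID with n members (idealized; for illustration only).

def head_end(n_members: int):
    """Head-end replication: the ingress sends a serial unicast to every other
    member. Returns (copies sent by the ingress, transit multicast state)."""
    return (n_members - 1, 0)

def shortest_path_tree(n_members: int, fork_points: int):
    """Tree replication: one copy leaves the ingress and is duplicated only at
    fork points, each of which must hold multicast state."""
    return (1, fork_points)

# For a 20-member I-SID: head-end sends 19 unicast copies but installs no
# transit state; the tree model sends 1 copy but needs state at each fork.
```

The per-member selectability noted above means a service could use head-end replication for low-traffic members while high-traffic members use the tree, trading traffic against core state within one I-SID.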
ExtremeXOS™ Operation and Configuration, Version 12.1 - Layer 2 Forwarding
ExtremeXOS™ Operation and Configuration, Version 12.1 - Layer 2 Forwarding Implementation
Basically, we have dynamic entries and static entries. Remember, dynamic entries are any entry that was learned automatically or dynamically by the switch based on the source MAC address and ingress port.
We have static entries, and in static entries, we have non-aging entries, permanent entries, and black hole entries.
Non-aging entries are simply entries with an aging time set to zero.
Permanent entries were manually entered on the CLI by the Administrator, and were saved as permanent. Permanent entries are retained in the database even through a power cycle or switch reboot.
Lastly, we have black hole entries. Again, black hole entries are created statically by the Administrator. The Administrator may have created an entry there for security, or to block undesired traffic on the network. For example, let’s say there was a denial of service attack being launched by a particular host, and the Administrator was able to determine that device’s MAC address. We can then go in and create a black hole entry that says “Any traffic coming into the switch sourced from this particular MAC address, simply discard that frame.” Additionally, black hole entries can be created automatically by some of Extreme Networks’ basic security features, such as lock-learning and limit-learning.
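The entry types just described can be summarized in a small model. This is an illustrative Python sketch of the concepts, not EXOS internals; the MAC addresses and structure are invented for the example.

```python
# Illustrative model of the FDB entry types: dynamic entries are learned from
# source MACs, permanent entries survive a reboot, and black hole entries
# cause matching traffic to be silently discarded.

DYNAMIC, NON_AGING, PERMANENT, BLACKHOLE = "dynamic", "non-aging", "permanent", "blackhole"

class FDB:
    def __init__(self):
        self.entries = {}  # mac -> (entry_type, port or None)

    def learn(self, src_mac, ingress_port):
        """Dynamic learning from the source MAC and ingress port of a frame."""
        if src_mac not in self.entries:          # never overwrite a static entry
            self.entries[src_mac] = (DYNAMIC, ingress_port)

    def add_blackhole(self, mac):
        """Administrator blocks a misbehaving host by MAC address."""
        self.entries[mac] = (BLACKHOLE, None)

    def ingress_allowed(self, src_mac):
        """Frames sourced from a black-holed MAC are simply discarded."""
        entry = self.entries.get(src_mac)
        return entry is None or entry[0] != BLACKHOLE

    def reboot(self):
        """Only permanent entries are retained through a power cycle."""
        self.entries = {m: e for m, e in self.entries.items() if e[0] == PERMANENT}
```

In the denial-of-service example above, `add_blackhole` on the attacker's MAC makes `ingress_allowed` return False for every frame it sources.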
Here you see an example of the output of
show fdb
There are a few important things to note here: On the left-hand side of the slide, you can see the MAC address of individual devices. Next, we see the VLAN column, where you can see all of the devices that are associated with the default VLAN. The Age column shows the amount of time that has elapsed since this particular device was last heard on the wire. And then, the physical port number; you see here that ports 2, 3, and 7 are in use.
The next command you can use to verify MAC security is:
show vlan <vlan_name> security
In this case, we’re looking at VLAN Default and you can see on port 7, we’ve locked learning on Port 7 and you can see that there was one dynamic entry in the FDB at the time that we locked learning on Port 7.
Extreme Link Status Monitoring (ELSM) is Extreme Networks’ proprietary protocol that monitors network health by detecting CPU and remote link failures. ELSM does this by sending hello messages between two ELSM peers.
Should one of the remote switch’s CPUs fail, we detect this by the fact that we are no longer receiving ELSM messages from that peer, in which case we actually block the link. This can be helpful in the case of, say, ESRP, in preventing dual ESRP masters. ELSM operates on a point-to-point basis, and must be configured on both sides of the peer connection. If ELSM is configured on only one end of the link, the port on the switch where ELSM is enabled will be set to a blocking state, and will only be set to a forwarding state once it actually starts communicating with its ELSM peer.
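The hello-driven port behavior just described can be sketched as a tiny state machine. This is an illustrative Python model of the idea only, not the ELSM protocol itself; the timeout value and state handling are invented for the example.

```python
# Illustrative ELSM-style sketch: a port forwards only while hellos from its
# peer keep arriving; if hellos stop (e.g., the remote CPU fails), it blocks.

HELLO_TIMEOUT = 3.0  # seconds without a hello before declaring the peer down

class ElsmPort:
    def __init__(self):
        self.state = "Down"     # a newly enabled port blocks until a peer is heard
        self.last_hello = None

    def receive_hello(self, now):
        self.last_hello = now
        self.state = "Up"       # peer heard: move to forwarding

    def tick(self, now):
        """Called periodically; blocks the port if the peer has gone quiet."""
        if self.last_hello is None or now - self.last_hello > HELLO_TIMEOUT:
            self.state = "Down"

    def forwarding(self):
        return self.state == "Up"
```

Note how a one-sided configuration falls out of the model: a port that never hears a hello stays Down (blocking), which matches the behavior described above.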
Use the show elsm ports command with the port number to determine the ELSM state.
You should now be able to:
Describe ExtremeXOS Layer 2 features
Describe basic Transparent Bridging
Define Flooding and Learning port states
Define the Forwarding and Filtering port states
Define the basics of the FDB
Create FDB entries
Configure and verify the limit-learning feature
Configure and verify the lock-learning feature
And configure and verify the settings of ELSM