This document provides an overview of Overlay Transport Virtualization (OTV), covering how it works and key deployment considerations.
OTV extends VLANs across multiple sites so that hosts in the same IP subnet remain reachable, without requiring a routing protocol between sites. It uses MAC routing and encapsulates Layer 2 frames for delivery to remote sites over multicast or unicast transport.
OTV edge devices run IS-IS to build adjacencies and exchange MAC address reachability. Frames are encapsulated at the ingress edge device and decapsulated at the egress edge device, and ARP entries for remote MACs are cached locally.
Deployment considerations include using M-Series line cards, enabling IGMPv3 on join interfaces, defining multiple data groups, and localizing FHRP protocols to each site to avoid suboptimal routing.
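As an illustrative sketch only (not taken from the source document), a minimal NX-OS OTV overlay configuration along these lines might look as follows; all interface names, group addresses, site identifiers, and VLAN ranges are hypothetical:

```
feature otv
otv site-identifier 0x1
otv site-vlan 99

interface Overlay1
  otv join-interface Ethernet1/1     ! uplink toward the IP core (IGMPv3 enabled below)
  otv control-group 239.1.1.1        ! multicast group carrying the IS-IS control plane
  otv data-group 232.1.1.0/28        ! data groups for site multicast traffic
  otv extend-vlan 100-110            ! VLANs stretched between sites
  no shutdown

interface Ethernet1/1
  ip address 10.0.0.1/30
  ip igmp version 3
  no shutdown
```

The control group carries MAC advertisements between edge devices, while the data-group range carries customer multicast between sites.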
This document summarizes port channels, virtual port channels (vPC), and multi-chassis etherchannel (MCEC) technologies. It discusses the basic design of vPC including components, initialization stages, best practices, and failure scenarios. Key points covered include vPC domains, roles, peer links, consistency checks, and configuration examples on Nexus 5000/7000/FEX platforms. Enhanced vPC (EvPC) and interactions with first hop redundancy protocols are also summarized.
Virtual port channels (vPC) allow links that are physically connected to two different switches to appear as a single port channel, avoiding STP blocking. Two switches are considered vPC peers and form a vPC domain. A peer link connects the two switches to synchronize information. A peer keepalive link provides a backup communication path if the peer link fails. VLANs allowed on the peer link are considered vPC VLANs.
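The components above can be sketched in a hedged NX-OS configuration fragment; the domain ID, keepalive addresses, and port-channel numbers are hypothetical:

```
feature vpc

vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
  role priority 100                  ! lower priority wins the primary role

interface port-channel1
  switchport mode trunk
  vpc peer-link                      ! carries vPC VLANs and synchronization traffic

interface port-channel20
  switchport mode trunk
  vpc 20                             ! vPC member port toward the downstream device
```

The same vPC number must be configured on both peers for the downstream device to see one logical port channel.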
OTV is a technology that provides Layer 2 extension between data centers without maintaining per-site tunnel state. It encapsulates Layer 2 traffic with an IP header and transports it across an IP network. OTV edge devices perform Layer 2 learning and forwarding within a data center and encapsulate traffic to extend Layer 2 segments across sites, using IP addresses and multicast groups for control-plane MAC address advertisement and data-plane traffic forwarding between sites.
This document discusses Cisco OTV (Overlay Transport Virtualization) and how it separates STP domains between sites, allows different STP technologies per site, handles multi-homing between sites using an Authoritative Edge Device (AED) to prevent loops, and optimizes the forwarding of different traffic types including unicast, multicast, broadcast, and ARP packets between sites while supporting MAC mobility. It also discusses how OTV isolates FHRP protocols between sites.
vPC allows links connected to two Nexus switches to appear as a single port channel to a third device. It provides advantages like eliminating STP blocked ports, using all available uplink bandwidth, and fast convergence upon failures. Configuring vPC involves the vPC peer switches, peer link, domain, and member ports. vPC avoids loops at the data plane layer. It can be used within a single data center for active-active server connectivity or between two data centers to extend VLANs across sites at layer 2. Object tracking allows vPC to modify its state based on peer link states.
This document provides information on various topics related to UCS security including system policies, high availability, system events, SNMP, firmware, and TAC information. It discusses how high availability is achieved in UCS through clustering of fabric interconnects, replication of data between nodes, and use of chassis EEPROM. It also describes fault states, severity levels, and how to view system events and collect TAC support information.
Community tech talk: Virtual Port Channel (vPC) operations and design best ... - crojasmo
This document discusses Nexus vPC (Virtual Port Channel) which allows links that are physically connected to two different Cisco Nexus switches to appear as a single port channel by using a virtual interface called a vPC. The key benefits of vPC include avoiding STP failures and providing redundancy. It also discusses vPC terminology, operation, configuration, verification and failure scenarios. The document concludes with recommendations for configuring vPC peer links, peer keepalive links and vPC member ports.
This document provides an overview and agenda for a presentation on VXLAN BGP EVPN technology. It begins with an introduction to VXLAN and EVPN concepts. It then outlines the agenda which includes explaining VXLAN configuration, EVPN configuration, underlay configuration, overlay configuration, and EVPN VXLAN service configuration. It also provides a sample migration from a legacy device configuration to a VXLAN BGP EVPN configuration. Various networking acronyms related to VXLAN and EVPN are defined. Sample vendor supported data center technologies and a VXLAN test topology are shown.
FabricPath is a Layer 2 technology from Cisco that provides multi-path Ethernet capabilities and eliminates the need for Spanning Tree Protocol. It combines the benefits of Layer 2 switching with greater scalability, availability, and loop prevention capabilities. FabricPath adds routing-like capabilities to Layer 2 switching such as all active links, fast convergence, and built-in loop avoidance mechanisms.
A novel way of creating overlay networks for OpenNebula is presented here, using BGP Ethernet VPN (EVPN) with VXLAN data-plane encapsulation. This provides scalable Layer 2 over IP networks.
Operationalizing EVPN in the Data Center: Part 2 - Cumulus Networks
In the second of our two-part series on EVPN, Cumulus Networks Chief Scientist Dinesh Dutt dives into more technical details of network routing, EVPN use cases, and best practices for operationalizing EVPN in the data center.
To view the recording of this webinar, visit http://go.cumulusnetworks.com/l/32472/2017-09-23/95t7xh
VXLAN BGP EVPN is a technology that uses VXLAN, BGP and EVPN to build multi-tenant IP fabrics. The document discusses VXLAN and EVPN concepts and acronyms, as well as providing sample configurations and outputs for a VXLAN BGP EVPN setup on Arista switches. Key technologies covered include VXLAN, VTEPs, VNIs, EVPN instances, MAC learning in the control plane, and the advantages of EVPN over traditional VXLAN.
The document discusses Ethernet VPN (EVPN) use cases and applications. It provides background on EVPN, describing how it uses BGP to advertise MAC addresses and next hops. EVPN supports multi-homing and provides integrated Layer 2 and Layer 3 forwarding. The document outlines several use cases for EVPN including data center and data center interconnect, service chaining using policy-based routing to virtual network functions, Internet exchange points, and provider VPNs.
VXLAN allows layer 2 segments to span layer 3 networks by encapsulating Ethernet frames within UDP packets. This allows virtual machines and servers to communicate securely across physical networks as if they were on the same local area network. VXLAN uses VXLAN Tunnel End Points and a VXLAN Network Identifier to encapsulate packets and identify virtual network segments. Up to 16 million virtual networks can be created, enabling data center tenants and workloads to be isolated from each other while residing on the same physical network.
VXLAN overlays Layer 2 networks on a Layer 3 underlay that uses IP routing. It creates virtual networks by encapsulating Layer 2 frames in UDP packets that are transported over the Layer 3 network, providing up to about 16 million virtual networks compared with roughly 4,000 usable with 12-bit VLAN IDs. VXLAN is used for virtual machine migration across data centers, disaster recovery, and network virtualization in the cloud. It works by having VXLAN Tunnel End Points (VTEPs) encapsulate and decapsulate frames for virtual networks identified by VXLAN Network Identifiers (VNIs).
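As a hedged sketch of the VLAN-to-VNI mapping and VTEP described above, a minimal NX-OS flood-and-learn VXLAN fragment might look like this; the VLAN, VNI, loopback, and multicast group are hypothetical:

```
feature nv overlay
feature vn-segment-vlan-based

vlan 100
  vn-segment 10100                   ! maps VLAN 100 to VXLAN VNI 10100

interface nve1
  no shutdown
  source-interface loopback0         ! VTEP source address in the Layer 3 underlay
  member vni 10100
    mcast-group 239.1.1.100          ! BUM traffic replicated via this group
```

An EVPN control plane would replace the multicast-based flood-and-learn behavior with BGP-advertised MAC reachability.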
Private VLANs allow splitting a regular VLAN into multiple "subdomains" to provide isolation between hosts at layer 2. The domains are isolated broadcast domains that require layer 3 forwarding to communicate. Primary, isolated, and community ports are defined for the sub-VLANs. Primary VLANs deliver frames downstream, isolated VLANs carry frames upstream, and community VLANs allow communication within the same group and to promiscuous ports. The configuration binds VLANs into a private VLAN domain, maps host ports to secondary VLANs, and maps a promiscuous port to all secondary VLANs to allow inter-subnet communication.
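The binding and mapping steps described above can be sketched as a hedged NX-OS fragment; all VLAN and interface numbers are hypothetical:

```
feature private-vlan

vlan 100
  private-vlan primary
  private-vlan association 101-102   ! binds the secondary VLANs to the primary
vlan 101
  private-vlan isolated
vlan 102
  private-vlan community

interface Ethernet1/1
  switchport mode private-vlan host
  switchport private-vlan host-association 100 101   ! isolated host port

interface Ethernet1/10
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 100 101-102        ! e.g. router or firewall port
```

Hosts in the isolated VLAN can reach only the promiscuous port, while community-VLAN hosts can also reach each other.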
Segment routing allows a node to steer a packet through an ordered list of segments encoded in the packet header. Segments represent instructions like forwarding through specific nodes or along certain paths. By encoding the path in packets, segment routing can compute paths centrally and reduce network state.
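As a hedged illustration of assigning a segment, an IOS XE-style fragment giving a node's loopback a prefix-SID might look like this; the prefix and index are hypothetical:

```
segment-routing mpls
 connected-prefix-sid-map
  address-family ipv4
   10.0.0.1/32 index 101 range 1    ! prefix-SID index for this node's loopback
```

Other routers then derive the node's label from the shared SRGB plus this index, so no per-path state is needed in the network.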
This document explains MPLS Layer 3 VPNs. It discusses how Layer 3 VPNs allow routing information to be shared between customer sites using protocols like OSPF and BGP across the service provider's MPLS network. It describes how Virtual Routing and Forwarding instances (VRFs), MP-BGP, Route Distinguishers (RDs), and Route Targets (RTs) work together to separate routing information for different customers and establish VPN connectivity between their sites while avoiding overlapping address spaces.
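The VRF, RD, and RT mechanics described above can be sketched in a hedged IOS-style PE fragment; the names, AS number, and addresses are hypothetical:

```
vrf definition CUSTOMER-A
 rd 65000:100                        ! route distinguisher keeps overlapping prefixes unique
 route-target export 65000:100
 route-target import 65000:100
 address-family ipv4
 exit-address-family

interface GigabitEthernet0/1
 vrf forwarding CUSTOMER-A           ! CE-facing interface placed in the VRF
 ip address 192.168.10.1 255.255.255.0

router bgp 65000
 address-family vpnv4                ! MP-BGP carries VPN routes between PEs
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-community extended
```

The RD makes customer prefixes globally unique inside MP-BGP, while the RTs control which VRFs import each route.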
Building Data Center Networks with VXLAN BGP-EVPN - Cisco Canada
The session covers the requirements and approaches for deploying the underlay, the overlay, and the inter-fabric connectivity of data center networks or fabrics. Within the VXLAN BGP-EVPN overlay, it focuses on the forwarding and control-plane functions that are critical to operating the architecture simply while achieving scale, small failure domains, and consistent configuration. To complete the overlay view of VXLAN BGP-EVPN, it goes inside BGP and its EVPN address family and extends to how multiple DC fabrics can be interconnected, either as stretched fabrics or with true DCI. The session concludes with a brief overview of manageability functions, network orchestration capabilities, and multi-tenancy details. This advanced session is intended for network design and operations engineers, from enterprises to service providers.
The document contains questions and answers about networking concepts such as VLANs, trunking, VTP, STP, wireless networking, and inter-VLAN routing. Based on the provided exhibit and configuration snippets, it tests the reader's understanding of switch and router configuration as well as common network design implementations.
The document discusses Virtual Private Routed Network (VPRN) services. VPRNs use BGP and MPLS to provide Layer 3 VPN connectivity between customer sites. Each VPRN has its own routing table maintained by provider edge (PE) routers. PE routers exchange routes for each VPRN using MP-BGP. Routes include a Route Distinguisher to identify the VPRN. Tunnels using MPLS or GRE carry customer traffic across the provider network to the correct PE router based on the route label. The document outlines requirements, protocols, and features used to implement VPRNs such as route reflectors, route redistribution, and CE connectivity checks.
In this presentation, we will discuss how IEEE standard 802.3ad and its implications allow third-party devices such as switches, servers, or any other networking device that supports trunking to interoperate with the distributed trunking switches (DTSs) seamlessly. Check out the webinar recording where this presentation was used: http://community.arubanetworks.com/t5/Wired-Intelligent-Edge-Campus/Technical-Webinar-LACP-and-distributed-LACP-ArubaOS-Switch/td-p/458170
Register for the upcoming webinars: https://community.arubanetworks.com/t5/Training-Certification-Career/EMEA-Airheads-Webinars-Jul-Dec-2017/td-p/271908
IP Infusion Application Note for 4G LTE Fixed Wireless Access - Dhiman Chowdhury
SKY Brazil, one of the largest pay-TV providers in Brazil with more than 5 million subscribers, created the world's first disaggregated 5G-ready Fixed Wireless Access (FWA) network, using IP Infusion's disaggregated Cell Site Gateway solution to serve 35,000 broadband subscribers.
To learn how the deployment was done, read this application note for details on the use case and the OcNOS configurations.
- VLAN Trunking Protocol (VTP) allows VLAN configurations to be consistently maintained across a common administrative domain to reduce complexity and inconsistencies when changes are made. VTP advertisements containing VLAN information are transmitted over trunk links and switches inherit the sending switch's VTP domain name. VTP uses revision numbers, starting at 0 and incrementing with each change, to determine if received information is more recent than the local configuration.
- The Point-to-Point Protocol (PPP) provides a standard method for transporting multi-protocol packets over point-to-point links. PPP establishes communication in three phases: Link Control Protocol (LCP) phase for link configuration, optional authentication phase using Password Authentication Protocol (PAP) or Challenge Handshake Authentication Protocol (CHAP), and Network Control Protocol (NCP) phase for layer 3 configuration.
- PAP transmits passwords in clear text, while CHAP uses an encrypted hash to authenticate peers without transmitting passwords. The document provides configuration examples for PPP, PAP, and CHAP authentication between two routers to establish a point-to-point link.
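A hedged sketch of the two-router CHAP setup mentioned above, in IOS-style configuration; hostnames, the shared secret, and addresses are hypothetical:

```
! R1
hostname R1
username R2 password 0 SECRET       ! peer's hostname; secret must match on both sides
interface Serial0/0
 ip address 10.1.1.1 255.255.255.252
 encapsulation ppp
 ppp authentication chap

! R2
hostname R2
username R1 password 0 SECRET
interface Serial0/0
 ip address 10.1.1.2 255.255.255.252
 encapsulation ppp
 ppp authentication chap
```

With CHAP, the secret itself never crosses the link; each side answers a challenge with an MD5 hash computed over the challenge and the shared secret.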
1. The document discusses OpenStack Neutron and Open vSwitch (OVS), describing their architecture and configuration. It explains that Neutron uses OVS to provide virtual networking and switching capabilities between virtual machines.
2. Key components of the Neutron-OVS architecture include the Neutron server, OVS agents on compute nodes, and the OVS daemon that implements the switch in the kernel and userspace.
3. The document also provides examples of configuring an OVS bridge and ports for virtual networking in OpenStack.
The document contains a 20 question multiple choice exam about networking technologies like HSRP, SNMP, VLANs, STP, QoS, VoIP, and security. The questions cover topics such as router redundancy protocols, switch configuration, trunking protocols, and network hardening techniques.
The document provides troubleshooting tips and techniques for Cisco Data center switches including the Cisco Nexus 7000, Catalyst 6500 VSS, and high CPU utilization issues. It discusses using commands like show processes cpu sorted, debug netdr capture, and show ip cef to troubleshoot traffic flow and switching paths. It also covers troubleshooting software upgrades on the Nexus 7000 and gathering core dumps and logs to debug process crashes.
This document provides an introduction to Open vSwitch (OVS), including what a virtual switch is, examples of virtual network topologies using OVS, the main components of OVS, and how to use OVS to build network topologies. It discusses features of OVS like visibility into inter-VM communication and support for tunnels. It also demonstrates OVS configurations for virtual machine to virtual machine communication using GRE tunnels and a demo topology with OVS bridges communicating over a GRE tunnel.
Pushing Packets - How Do the ML2 Mechanism Drivers Stack Up - James Denton
Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
This document provides an overview of Open vSwitch, a software-based virtual switch. It discusses what a virtual switch is, how Open vSwitch uses a userspace controller and kernel datapath to provide network abstractions. The document outlines Open vSwitch components like ovsdb-server and ovs-vswitchd, and demonstrates how to use Open vSwitch to build virtual network topologies with VMs, tunnels, and bridges. Examples of QoS configuration and a GRE tunnel demo are also presented.
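The GRE-tunnel topology described above can be approximated with a few `ovs-vsctl` commands run on each hypervisor; the bridge name, port names, and remote IP are hypothetical:

```
# on host A (host B mirrors this with remote_ip pointing back at A)
ovs-vsctl add-br br0
ovs-vsctl add-port br0 vnet0                      # attach a VM's tap interface
ovs-vsctl add-port br0 gre0 -- set interface gre0 \
    type=gre options:remote_ip=192.0.2.2          # GRE tunnel to the peer host
ovs-vsctl show                                    # verify bridge and port state
```

VMs attached to `br0` on either host then share one Layer 2 segment carried over the GRE tunnel.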
Speaker: Alessandro Legnani, Cisco CCIE and IP Network Architect at IT Global Consulting Srl
A perfect synthesis and synergy of a new routing protocol (and more) with good old, robust IPsec (without the IKE headaches). Why invent yet another form of tunneling for the data plane?
The above is the key to the success of the Cisco/Viptela SD-WAN solution, and it is what makes the solution enormously scalable and unique on the market.
Nicolai van der Smagt has been in the business of designing, implementing and running SP networks for over 15 years. He has worked with DOCSIS, DSL and FTTH operators. Nowadays, Nicolai is helping Infradata’s pan-European customers build better access, aggregation and core networks, but his focus is on the data center, SDN, NFV and the whitebox switching revolution. His motto: “Simplicity is sophistication”.
Topic of Presentation: SDN
Language: English
Abstract:
Open source SDN that actually works - today
OpenContrail is an open source (Apache 2.0 licensed) project that provides network virtualization in the data center using tried and tested open standards. It provides northbound APIs, integrates with OpenStack or CloudStack, and is available today!
In this slot we’ll show you the architecture and ideas behind the technology and how OpenContrail enables you to avoid the pitfalls that other (closed) SDN solutions bring. If time permits we’ll also demo the technology.
The document discusses software-defined networking (SDN) and OpenFlow, including:
1) OpenFlow allows the control logic to be separated from the forwarding hardware by defining an open interface between the two. This enables more flexible and programmable networks.
2) OpenFlow works by defining flows that match packets and actions that are applied to the matched packets. The flows are populated and managed by an external controller through the OpenFlow protocol.
3) OpenFlow is being deployed in over 100 organizations and is enabling network innovation through its programmable and customizable nature.
The document discusses several security features of the Nexus 1000v virtual switch:
- It supports features like IGMP snooping, DHCP snooping, Dynamic ARP inspection, IP Source Guard, and ACLs to provide layer 2 security for virtual machines.
- These features work similarly to physical switches, protecting the layer 2 network from unmanaged VMs, but they are configured through the virtual Ethernet module interfaces.
- Dynamic ARP inspection and IP Source Guard rely on entries in the DHCP snooping binding database to validate IP-MAC bindings and filter invalid traffic from untrusted ports.
This document discusses OpenStack SDN using Neutron and GRE tunneling. It explains that Neutron provides networking as a service and uses plugins like ml2 with Open vSwitch for SDN. GRE tunneling is used to encapsulate VM traffic between compute and network nodes. Network namespaces are used to create isolated virtual routers and DHCP servers without collisions on each node. The packet flow between an external network, routers, bridges and a VM is outlined.
The document provides an overview of IT network design and installation topics covered in a MaxWiFi training course, including network models, IP addressing, NAT, routing, DHCP, VLANs, wireless networking, and Cisco device configuration.
1. The document discusses provider-provisioned layer 2 MPLS VPNs, which allow customers to construct private networks over a shared infrastructure while maintaining independent addressing and routing.
2. Key components include customer edge routers, provider edge routers, and provider routers. The provider edge routers exchange VPN routing information and use MPLS to forward traffic across the shared core network.
3. Provisioning involves configuring customer edge devices and VPN forwarding tables at provider edges to map customer sites to MPLS labels for transport across the core.
This document summarizes an article about SDN, OpenFlow, and the ONF. It discusses how OpenFlow and SDN are emerging technologies that have the potential to enable network innovation and optimize costs. It also introduces the Open Networking Foundation (ONF) and how the community around SDN and OpenFlow has grown rapidly.
This document provides an overview of interface configuration and monitoring on Juniper networks devices. It discusses the naming conventions for interfaces, including Flexible PIC Concentrators (FPCs) and PICs. It also covers configuring various interface types such as Ethernet, VLAN, aggregated Ethernet, serial and loopback interfaces. The document demonstrates how to configure encapsulation types like HDLC, PPP and Frame Relay. It concludes with examples of commands to monitor interface status, descriptions, statistics and details.
The LS-S2318TP-EI-AC is a Huawei Ethernet switch that provides 16 10/100Mbps ports, 2 combo gigabit ports, and Layer 2 switching capabilities. It has a forwarding performance of 5.4Mpps, port switching capacity of 7.2Gbps, and backplane switching capacity of 32Gbps. The switch supports features such as VLAN, QoS, IPv6, multicast, port mirroring, and security functions. It has a compact dimensions of 442mm x 220mm x 43.6mm and power consumption of less than 14.5W.
20151222_Interoperability with ML2: LinuxBridge, OVS and SDNSungman Jang
The document discusses interoperability between OpenStack Networking (Neutron) and different layer 2 networking technologies like LinuxBridge, Open vSwitch (OVS), and Software-Defined Networking (SDN) using the Modular Layer 2 (ML2) plugin framework. It provides background on LinuxBridge and OVS, describes the expected scenario of using ML2 with LinuxBridge and OVS mechanism drivers and type drivers like flat, VLAN, GRE, and VXLAN. It also briefly discusses the role of the ML2 plugin and mechanism drivers in code.
The document discusses integrating OpenStack Networking (Neutron) with Software Defined Networking (SDN) controllers. It describes how Neutron can use an SDN controller like ONOS instead of traditional mechanism drivers like Open vSwitch. The key components that would need to be modified are the mechanism driver, service plugin, and configuration. Five virtual machines or host machines running specific OpenStack and ONOS services are also needed to demonstrate the integration between Neutron and an SDN controller.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
1. OTV1
OTV technology Introduction
OTV Operations
OTV Configuration and verification (N7K)
OTV unicast mode and its limitation
FHRP Localization and egress routing
Guidelines and limitation for deployment.
Overlay Transport Virtualization
2. Overlay Transport Virtualization
OTV is a Layer 2 VPN technology. OTV extends a VLAN from one site to another, so the same IP
address space can be used at both sites for the same VLAN. Some applications require the same
VLAN and IP subnet to be present at more than two sites.
Connecting more than two sites is difficult to manage with existing technologies (e.g. VPLS)
because of spanning-tree restrictions.
OTV introduces the concept of “MAC routing,” which means a control plane protocol is used to
exchange MAC reachability information between network devices providing LAN extension
functionality.
3. Overlay Transport Virtualization
In the data plane, the OTV edge device encapsulates the L2 frame in an IP payload at the Layer 3
edge and uses multicast to route encapsulated frames to the destination OTV edge device.
In the control plane, the OTV edge device uses a control multicast group to establish Level 1 IS-IS
adjacencies and uses the IS-IS protocol to advertise MAC addresses to OTV devices at other sites.
Depending on the upstream routing, the OTV edge device may or may not run a routing protocol,
but running one is not a requirement: the OTV edge device connects to the core as a host, not as a
router. If a routing protocol is required, enable only stub routing (stub area for OSPF or stub
router for EIGRP).
The OTV edge device filters unknown unicast frames; in other words, it does not forward unknown
unicast frames to other sites. The OTV edge device also sets the DF bit in the outer IP header
when it encapsulates an L2 frame.
The OTV edge device has a modified MAC address table that shows which IP address to use to reach
a remote MAC address at another site. This IP address is the IP address of the join interface at
the remote site.
The OTV edge device also caches ARP resolutions for MAC addresses that are not local to the site
and were learnt via the overlay, so that ARP and ND replies can be answered locally within the site.
The current implementation of the OTV shim header on the Nexus 7K uses MPLS-over-GRE-over-IP
encapsulation [2], but the draft RFC defines a UDP encapsulation method. [3]
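The encapsulation described above explains the reduced MTU (1400 bytes) seen on the overlay interface later in this deck. A minimal sketch of the arithmetic, assuming the commonly cited 42-byte OTV overhead (the inner Ethernet header carried in the payload, plus OTV shim and outer IP header); the byte counts are illustrative figures, not an exact header layout:

```python
# Hedged sketch: OTV encapsulation overhead arithmetic.
# The byte counts below are the commonly cited figures for OTV on the
# Nexus 7000; treat them as illustrative, not an exact header layout.
INNER_ETHERNET = 14   # original L2 header carried inside the payload
OTV_SHIM = 8          # OTV shim (overlay/VLAN information)
OUTER_IP = 20         # outer IPv4 header added by the edge device

OTV_OVERHEAD = INNER_ETHERNET + OTV_SHIM + OUTER_IP  # 42 bytes total

def required_core_mtu(host_mtu: int) -> int:
    """IP MTU the transport core must carry for a given host MTU."""
    return host_mtu + OTV_OVERHEAD

print(OTV_OVERHEAD)             # 42
print(required_core_mtu(1500))  # 1542: core MTU needed for 1500-byte hosts
print(1500 - OTV_OVERHEAD)      # 1458: usable host MTU if the core stays at 1500
```

This is why deployments either raise the core MTU toward jumbo frames or lower the host MTU inside the extended VLANs.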
4. OTV Terminologies
Overlay interface: a logical tunnel interface that encapsulates the frame in an IP packet.
Join interface: an L3 routed port that sends IGMP version 3 join messages.
Internal interface: an L2 trunk or access interface that runs spanning tree.
Site ID: a unique 24-bit value reserved for each site.
Site VLAN: a VLAN reserved for electing the OTV authoritative edge device for the site.
Control group: an ASM multicast address used to build OTV neighbor adjacencies and to exchange MAC
addresses with neighbors. Using the ASM group as a vehicle to transport the Hello messages allows the
edge devices to discover each other as if they were deployed on a shared LAN segment; this emulates a
shared medium to which all OTV edge devices are connected. [1]
Data group: to handle L2 multicast data traffic between sites, up to 8 ranges of IPv4 SSM multicast
group prefixes can be used by each site. Each OTV edge device creates Gs-to-Gd mappings in its
data-group mapping table.
The MAC address table of an OTV edge device is slightly modified to allow an overlay interface as a destination.
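The Gs-to-Gd mapping can be pictured as a hash from a site-local multicast group (Gs) onto one of the configured SSM delivery groups (Gd). A toy sketch, assuming the 232.5.6.0/28 data-group range from the configuration example later in this deck; the hash itself is purely illustrative, not the NX-OS algorithm:

```python
# Toy model of the data-group mapping table: a site-local multicast group
# (Gs) is mapped onto one of the SSM delivery groups (Gd) in the core.
# The modulo hash is illustrative only, not the NX-OS algorithm.
import ipaddress

data_group_range = ipaddress.ip_network("232.5.6.0/28")  # from the config example
delivery_groups = list(data_group_range)                 # 16 candidate Gd addresses

def map_gs_to_gd(site_group: str) -> str:
    gs = int(ipaddress.ip_address(site_group))
    return str(delivery_groups[gs % len(delivery_groups)])

print(map_gs_to_gd("239.1.1.1"))  # 232.5.6.1
```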
Site1-OTV1# sh mac add add 0007.eb49.7600
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen,+ - primary entry using vPC Peer-Link
VLAN MAC Address Type age Secure NTFY Ports
---------+-----------------+--------+---------+------+----+------------------
O 101 0007.eb49.7600 dynamic 0 F F Overlay0
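The table above can be pictured as an ordinary L2 table whose port column may instead point at the overlay plus the remote site's join-interface IP. A minimal sketch with hypothetical entries (not the NX-OS implementation):

```python
# Hypothetical MAC "routing" table: local MACs resolve to a physical port,
# remote MACs (learned via IS-IS) resolve to the overlay interface plus
# the remote site's join-interface IP.
mac_table = {
    ("101", "0000.0c12.3456"): {"port": "Eth2/3"},  # locally learned host
    ("101", "0007.eb49.7600"): {"port": "Overlay0", "next_hop_ip": "150.1.6.6"},
}

def forward(vlan: str, dst_mac: str) -> str:
    entry = mac_table.get((vlan, dst_mac))
    if entry is None:
        # OTV suppresses unknown unicast flooding across the overlay
        return "drop-toward-overlay"
    if entry["port"].startswith("Overlay"):
        return f"encapsulate -> {entry['next_hop_ip']}"
    return f"switch -> {entry['port']}"

print(forward("101", "0007.eb49.7600"))  # encapsulate -> 150.1.6.6
print(forward("101", "dead.beef.0001"))  # drop-toward-overlay
```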
5. OTV Neighbor Discovery
Step 1: Each OTV device sends an IGMPv3 join message through its join interface for the ASM control
group. This triggers PIM joins and builds the multicast tree for the OTV control group.
Step 2: The OTV control protocol sends a Hello message carrying the device's identity.
Steps 3 and 4: These Hello messages are replicated to all OTV devices that have joined the control
group.
Step 5: The receiving OTV edge devices decapsulate the packets.
Step 6: The Hellos are passed to the control-protocol process, which eventually builds neighbor
adjacencies over interface Overlay0. You can see them using show otv adjacency.
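The steps above can be modeled as hello replication over a shared group: every member's Hello reaches every other member, which is what lets the edge devices build adjacencies as if they sat on a common LAN segment. A toy simulation (device names are illustrative):

```python
# Toy model of OTV neighbor discovery over the ASM control group:
# every Hello sent to the group is replicated to all joined edge devices,
# each of which decapsulates it and records the sender as a neighbor.
control_group = set()   # edge devices joined via IGMPv3 on the join interface
adjacencies = {}        # per-device adjacency database

def join(device: str) -> None:
    control_group.add(device)
    adjacencies.setdefault(device, set())

def send_hello(sender: str) -> None:
    for receiver in control_group:
        if receiver != sender:
            adjacencies[receiver].add(sender)  # decapsulate, learn neighbor

for dev in ("N7K-5", "N7K-6"):
    join(dev)
for dev in ("N7K-5", "N7K-6"):
    send_hello(dev)

print(sorted(adjacencies["N7K-5"]))  # ['N7K-6']
```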
6. OTV configuration example (Nexus 7000)
! Site 1 (N7K-5)
feature otv
otv site-vlan 5
otv site-identifier 0x5
interface Overlay0
otv join-interface Ethernet2/1
otv control-group 233.1.1.1
otv data-group 232.5.6.0/28
otv extend-vlan 100
no shutdown
interface Ethernet2/1
description Join interface
ip address 150.1.5.5/24
ip igmp version 3
no shutdown
interface Ethernet2/3
description Internal interface
switchport
switchport mode trunk
no shutdown
! Site 2 (N7K-6)
feature otv
otv site-vlan 6
otv site-identifier 0x6
interface Overlay0
otv join-interface Ethernet2/1
otv control-group 233.1.1.1
otv data-group 232.5.6.0/28
otv extend-vlan 100
no shutdown
interface Ethernet2/1
description Join interface
ip address 150.1.6.6/24
ip igmp version 3
no shutdown
interface Ethernet2/3
description Internal interface
switchport
switchport mode trunk
no shutdown
7. Verification
N7K-5# show otv
OTV Overlay Information
Site Identifier 0000.0000.0005
Overlay interface Overlay0
VPN name : Overlay0
VPN state : UP
Extended vlans : 100 (Total:1)
Control group : 233.1.1.1
Data group range(s) : 232.5.6.0/28
Join interface(s) : Eth2/1 (150.1.5.5)
Site VLAN : 5 (up)
AED-Capable : No (No extended VLAN is operationally up)
Capability : Multicast-Reachable
N7K-5# sh otv adjacency
Overlay Adjacency database
Overlay-Interface Overlay0 :
Hostname System-ID Dest Addr Up Time State
N7K-6 0050.5689.1ff6 150.1.6.6 00:06:51 UP
8. Overlay Transport Virtualization
Verification commands
N7K-5# sh int overlay 0
Overlay0 is up
MTU 1400 bytes, BW 1000000 Kbit
Encapsulation OTV
Last link flapped 00:45:00
Last clearing of "show interface" counters never
Load-Interval is 5 minute (300 seconds)
RX
0 unicast packets 0 multicast packets
0 bytes 0 bits/sec 0 packets/sec
TX
0 unicast packets 0 multicast packets
0 bytes 0 bits/sec 0 packets/sec
N7K-5# sh otv arp-nd-cache
OTV ARP/ND L3->L2 Address Mapping Cache
Overlay Interface Overlay0
VLAN MAC Address Layer-3 Address Age Expires In
100 001a.a1ff.7d46 15.1.1.32 00:03:42 00:04:17
9. OTV Authentication methods
There are three methods of authentication, all of them key-chain based:
1. Overlay neighbor authentication – IS-IS neighbor authentication between sites
2. Route authentication – controls route injection
3. Site neighbor authentication – neighbor authentication within a site when
multihoming is used.
Authentication is useful when the multicast core is not under the same administrative
control. This is very similar to FabricPath authentication and other IS-IS
authentication methods.
The following example shows route authentication.
key chain OTV
key 0
key-string 7 070c22454b0d1a5546
otv-isis default
vpn Overlay0
otv isis authentication-type md5
otv isis authentication key-chain OTV
10. OTV Authentication methods
OTV Neighbor Authentication Configuration example.
key chain OTV
key 0
key-string 7 070c22454b0d1a5546
interface Overlay0
otv isis authentication-type md5
otv isis authentication key-chain OTV
N7K-5# sh otv isis interface overlay 0
OTV-IS-IS process: default VPN: Overlay0
Overlay0, Interface status: protocol-up/link-up/admin-up
IP address: none
IPv6 address: none
IPv6 link-local address: none
Index: 0x0001, Local Circuit ID: 0x01, Circuit Type: L1
Level1
Adjacency server (local/remote) : disabled / none
Adjacency server capability : multicast
Authentication type is MD5
Authentication keychain is OTV
Authentication check specified
LSP interval: 33 ms, MTU: 1400
Level Metric CSNP Next CSNP Hello Multi Next IIH
1 40 10 00:00:05 3 3 0.728284
Level Adjs AdjsUp Pri Circuit ID Since
1 1 1 64 N7K-5.01 * 00:53:44
11. OTV Unicast mode
Unicast OTV mode can be used in smaller deployments (2 or 3 sites) where there
is no multicast transport in the core.
One OTV edge device is selected as the adjacency server; it is configured
under the overlay interface.
The adjacency server maintains a list of all OTV edge devices that are part of the
same overlay VPN.
Every OTV edge device that wants to join a specific OTV logical overlay VPN needs
to first "register" with the adjacency server by sending OTV Hello messages
to it. All other OTV neighbor addresses are discovered dynamically through the
adjacency server.
When there is a MAC address table update at one site, it is unicast to every
OTV edge device in the given overlay VPN (head-end replication). The destination
IP address of each update packet is the join-interface IP address of a remote site,
as opposed to a single multicast address.
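Head-end replication can be sketched as a simple loop over the neighbor list learned from the adjacency server: one unicast copy per remote join-interface IP. The addresses below are illustrative:

```python
# Sketch of head-end replication in unicast mode: a MAC update is copied
# once per remote neighbor, addressed to each site's join-interface IP,
# instead of being sent once to a multicast control group.
neighbor_join_ips = ["150.1.5.5", "150.1.6.6", "150.1.7.7"]  # from the adjacency server

def replicate_update(update: str, local_ip: str):
    """Return the list of (dest_ip, payload) unicast copies to send."""
    return [(ip, update) for ip in neighbor_join_ips if ip != local_ip]

copies = replicate_update("MAC 0007.eb49.7600 in VLAN 100", local_ip="150.1.5.5")
print(len(copies))  # 2: one copy per remote edge device
```

The per-update cost grows linearly with the number of sites, which is why unicast mode is recommended only for small deployments.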
12. OTV Unicast mode Configuration example
Unicast OTV mode Configuration example.
interface Overlay0
otv join-interface Ethernet2/1
! Instead of a control group and data-group range, use the IP addresses of the adjacency servers
otv use-adjacency-server 150.1.5.5 150.1.6.6
otv extend-vlan 100-103
no shutdown
13. Authorative Edge Device (AED)
Each OTV site can have up to 2 edge devices for high availability, each of which can perform OTV
encapsulation. One device is elected the authoritative edge device (AED) for a given VLAN.
This election happens over the site VLAN.
The AED is responsible for forwarding traffic to and from the overlay VPN for its VLANs. For
example, if a host sends a broadcast, it reaches both OTV edge devices at the site, but only the
AED forwards the broadcast to the overlay VPN. Similarly, if broadcast traffic is received on both
OTV edge devices, only the AED for that VLAN forwards it to the internal interface.
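A commonly described way to picture the per-VLAN AED election is an even/odd split of the extended VLANs between the two edge devices, ordered by IS-IS system ID. The sketch below illustrates that idea only; it is not the exact NX-OS algorithm, and the system IDs are hypothetical:

```python
# Hedged sketch of per-VLAN AED election between two edge devices at a site.
# Commonly described rule: order the devices by IS-IS system ID; the lower
# one takes even VLANs and the higher one takes odd VLANs. Illustration
# only, not the exact NX-OS algorithm.
def elect_aed(vlan: int, system_ids: list) -> str:
    ordered = sorted(system_ids)          # ordinal 0 = lower system ID
    return ordered[vlan % len(ordered)]

edges = ["0050.5689.1ff6", "0026.9807.23c1"]  # hypothetical system IDs
print(elect_aed(100, edges))  # even VLAN -> lower system ID (0026.9807.23c1)
print(elect_aed(101, edges))  # odd VLAN  -> higher system ID (0050.5689.1ff6)
```

Splitting VLANs this way lets both devices carry overlay traffic while still guaranteeing a single forwarder per VLAN, which prevents loops and duplicate frames.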
14. FHRP Localization/Isolation
Each VLAN extended via OTV should have its gateway local to its site, i.e. FHRP
protocols should be filtered over OTV; otherwise suboptimal switching/routing will
occur. This scenario is likely to come up in the exam.
In a good design, all FHRP Hellos and the MAC addresses of the local gateways are filtered at
the OTV edge devices.
15. FHRP Localization/Isolation Configuration
Step 1: Filtering HSRP Hello messages
ip access-list HSRPv1-IP
10 permit udp any 224.0.0.2/32 eq 1985
ip access-list ALL-IPs
10 permit ip any any
vlan access-map FHRP-FILTER 10
match ip address HSRPv1-IP
action drop
vlan access-map FHRP-FILTER 50
match ip address ALL-IPs
action forward
vlan filter FHRP-FILTER vlan-list 100
16. FHRP Localization/Isolation
FHRP localization/Isolation configuration example for HSRP
Step 2: Filtering MAC address propagating to other site.
mac-list OTV-HSRP-MAC seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list OTV-HSRP-MAC seq 20 permit 0000.0000.0000 0000.0000.0000
route-map OTV-FHRP-FILTER permit 10
match mac-list OTV-HSRP-MAC
otv-isis default
vpn Overlay0
redistribute filter route-map OTV-FHRP-FILTER
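The mac-list above works because HSRP v1 virtual MACs all fall in the range 0000.0c07.acXX (the last octet is the group number), so masking off the last octet with ffff.ffff.ff00 matches every HSRP group. A small sketch of that wildcard match:

```python
# Sketch of the mac-list wildcard match: HSRP v1 virtual MACs are
# 0000.0c07.acXX, so a mask of ffff.ffff.ff00 matches any HSRP group
# while leaving ordinary host MACs alone.
def mac_to_int(mac: str) -> int:
    return int(mac.replace(".", ""), 16)

def matches(mac: str, pattern: str, mask: str) -> bool:
    m = mac_to_int(mask)
    return (mac_to_int(mac) & m) == (mac_to_int(pattern) & m)

# HSRP group 10's virtual MAC is filtered; a normal host MAC is not.
print(matches("0000.0c07.ac0a", "0000.0c07.ac00", "ffff.ffff.ff00"))  # True
print(matches("0007.eb49.7600", "0000.0c07.ac00", "ffff.ffff.ff00"))  # False
```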
17. Guidelines and consideration for deployment of OTV
Up to eight data-group ranges can be defined.
The L3 SVI (interface vlan) for a VLAN that is extended over OTV cannot be in the same VDC
as the OTV overlay.
OTV is supported only on M-series cards as of today.
IGMP version 3 must be enabled on the join interface when multicast mode is used.
The site VLAN has to be up and operational even if there is only one OTV edge device at a
given site.
There is no need to configure PIM on the join interface, because the OTV edge device connects
to the core as a host.
The simplest design can use just one overlay interface, but a more complex design can split
VLANs between overlays for load balancing.
In a given VDC, one overlay VPN can run in unicast mode while another overlay VPN runs in
multicast mode.
18. References
[1] Cisco Overlay Transport Virtualization Technology Introduction and Deployment Considerations
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/whitepaper/DCI3_OTV_Intro.html
[2] OTV Decoded – A Fancy GRE Tunnel
http://blog.ine.com/2012/08/17/otv-decoded-a-fancy-gre-tunnel/
[3] Overlay Transport Virtualization draft
http://tools.ietf.org/html/draft-hasmit-otv-04
[4] Cisco Nexus 7000 OTV configuration guide
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/OTV/config_guide/b_Cisco_Nexus_7000_Series_NX-OS_OTV_Configuration_Guide.html