Deploying Carrier Ethernet Features on Cisco ASR 9000
This document provides an overview and configuration instructions for deploying Carrier Ethernet services on the Cisco ASR 9000 router. It begins with an introduction to Carrier Ethernet and the Cisco ASR 9000 platform. It then covers the configuration of Ethernet Flow Points (EFPs) to classify and rewrite VLAN tags. The document details various Ethernet service types such as local connect, VPWS, VPLS, and bridging. It also provides sample configurations and verification commands for setting up point-to-point and multipoint services on the ASR 9000.
#7 Multiple Spanning Tree (MST)
Resilient Ethernet Protocol (REP)
Flexible Service Mapping
Ethernet Virtual Connection Infrastructure
Security Features
L2 transport over MPLS
VPLS/VPWS
Pseudowire redundancy
MPLS-TP
IP Unicast/Multicast
Fast Convergence
OAM
Ethernet Adaptation to MPLS and IP
Interworking VPLS with MST/REP
Ethernet/MPLS OAM
Inter-Chassis Communication Protocol (ICCP)
Multi-Chassis Link Aggregation
MST Access Gateway
#14 Let’s see how it works on the ASR 9000. Fundamentally, the ASR 9000 inherits the same EVC implementation as the Cisco 7600, so the two platforms are fully compatible with each other; only some of the CLI differs.
At the port level, you define an L2 subinterface, called an EFP. Typically, each EFP represents one logical endpoint of one service. VLAN tag classification is used to match frames to the EFP. The most typical case is one EFP = one customer VLAN, but the mapping is much more flexible than that; I will show you the details on the next slide.
Each EFP can then use the VLAN tag rewrite CLI to manipulate the VLAN tags.
Features such as QoS and ACLs can also be applied at the EFP level. Finally, the EFP is mapped into a service.
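The steps above can be sketched in IOS XR CLI as follows; the interface, VLAN, and bridge-domain names are illustrative, not from the original deck:

```
! L2 subinterface (EFP) on a physical port
interface GigabitEthernet0/0/0/1.100 l2transport
 ! VLAN tag classification: match customer VLAN 100
 encapsulation dot1q 100
 ! VLAN tag rewrite: pop the tag on ingress, push it back on egress
 rewrite ingress tag pop 1 symmetric
!
! Map the EFP into a service -- here, a bridge domain
l2vpn
 bridge group CUST
  bridge-domain BD100
   interface GigabitEthernet0/0/0/1.100
```

The same EFP could instead be mapped into a local connect or VPWS cross-connect under `l2vpn xconnect`; the EFP definition itself does not change.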
#31 This slide gives you a detailed view of the packet flow. It tries to put together everything we have talked about so far.
On the ingress line card NPU, the first-stage lookup takes place. For both L2 and L3 multicast, the NPU looks up the same FIB TCAM using (S,G) as the key for the longest match. The FIB table is organized by VRF ID in the case of L3 multicast, or by bridge-domain ID in the case of L2 multicast. The FIB lookup result is the local MGID, which points to the result table. The result table includes the FGID, the global MGID, and the RPF ID.
The NPU then performs the RPF check; if it passes, the packet is forwarded through the bridge FPGA and the Octopus fabric interface to the switch fabric.
Both the FGID and the global MGID are carried in the internal packet header and sent across the fabric to the egress line card.
The switch fabric replicates the packet based on the FGID.
On the egress line card, the fabric interface and bridge FPGA replicate the packet and forward it to the right NPU based on the MGID table.
On the egress NPU, there are two lookup passes. The first pass is the same as on the ingress NPU: it uses (S,G) as the key to look up the FIB TCAM. The result includes the OLIST count and the OLIST table ID.
The packet is then sent to the TM loopback replication engine, which replicates the packet the number of times indicated by the count it receives.
The replicated copies are sent back for the second lookup pass. The lookup key is the copy ID, the OLIST table ID, and the local MGID. The result is the egress IDB index.
The NPU then looks up the egress IDB to get all the adjacency information, such as MAC address and VLAN.
If the outgoing interface is a link bundle, the lookup returns a bundle ID, and the NPU looks up the LAG table to choose the egress port. Egress ACLs and QoS are applied in the second pass.
Keep in mind that this flow only shows the multicast-related lookups and replication. It does not include all the procedures, for example ACL, QoS, and TTL handling.
#50 Here is an IRB CLI example on the ASR 9000. As you can see, besides the L2 bridge ports and the VPLS or H-VPLS pseudowires in the bridge domain, we now add a routed interface, the BVI, which provides an L3 logical interface for all the L2 ports in that bridge domain.
With IRB, in addition to the normal bridging function, hosts that want to go outside of the bridge domain can be L3-routed via the BVI interface to other L3 interfaces.
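A minimal IOS XR sketch of such an IRB setup, assuming illustrative interface names, VPLS neighbor address, and PW ID (these are not from the original slide):

```
! BVI: the routed (L3) interface for the bridge domain
interface BVI100
 ipv4 address 10.1.1.1 255.255.255.0
!
l2vpn
 bridge group CUST
  bridge-domain BD100
   ! L2 bridge port (EFP)
   interface GigabitEthernet0/0/0/1.100
   ! VPLS pseudowire to a remote PE (illustrative address/PW ID)
   vfi VPLS100
    neighbor 192.0.2.2 pw-id 100
   !
   ! Attach the routed BVI to the bridge domain
   routed interface BVI100
```

Traffic between EFPs (or pseudowires) in BD100 is bridged as usual; traffic destined outside the bridge domain is routed through BVI100 like any other L3 interface.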