Speaker notes:
- A packet may be process switched if the next hop is not resolved.
- All XL line cards require a daughter card and a license to activate the feature.
- When connecting a Nexus 2000 to the M1 32-port line card, the FEX fabric ports have to be in the same port group and in shared mode, because the line card has one 10G MAC shared among the 4 ports of a port group. The FEX tags frames with a VN-Tag, and the ASIC needs to know that VN-Tag value before sending the frame to the replication engine. All FEX ports remain in the same VDC as their parent fabric ports. F1 series line cards are faster and lower latency than M1 series line cards. "Switch on chip" is the term used for the F1 card ASIC that manages two ports. FIB TCAM is not present on the F1 line card, but classification TCAM is; RACLs are not applied on F1 series line cards. There is no centralized arbitration, and no proxy routing is used to get fabric access for multicast forwarding. NetFlow support is only available on M1 line cards; 5.2 adds the ability to collect NetFlow entries on F1 series line cards. If Host A in VLAN 10 connected to an F1 line card wants to talk to Host B in VLAN 20 connected to the same line card, an L3 lookup on an M1 series line card is required (called proxy routing).
- The vPC secondary can be the PIM DR for some VLANs and the primary can be the PIM DR for other VLANs.
- Configuring the peer-keepalive in the management VRF is best practice, because you do not need to dedicate a 10G port for the peer-keepalive and the management port gives direct access to the CPU for the health check.
- If the firewalls are not running a routing protocol there is no problem at all: simply point static routes, or a static default route, at the HSRP VIP address on the Nexus 7000s. Do not use an L2 port-channel to attach routers to a vPC domain unless you statically route to the HSRP address.
- #1 design rule for vPC topologies: always dual-attach devices to both vPC peers to get predictable traffic flow. For L3 connections, use the routing protocol's ECMP.
- CPU checks per process and per line card:
    N7010B-Dist# show processes cpu | in ospf
    4973 173 2840 61 0.0% ospf
    15655 126 58 2178 0.0% ospfv3
    N7010B-Dist# sh processes cpu | grep "urib"
    4396 93 1868 49 0.0% urib
    N7010B-Dist# sh processes cpu | grep "u.rib"
    4410 74 24 3105 0.0% u6rib
    N7010B-Dist# attach module 1
    Attaching to module 1 ...
    To exit type 'exit', to abort type '$.'
    module-1# show processes cpu | grep "ipfib"
    1977 993938 6649066 149 0.0% ipfib
- Estimating routing memory:
    N7010A-Dist# show routing memory estimate routes 10000 next-hops 2
    Shared memory estimates:
    Current max 8 MB; 5197 routes with 16 nhs
    in-use 1 MB; 47 routes with 1 nhs (average)
    Configured max 8 MB; 5197 routes with 16 nhs
    Estimate 4 MB; 10000 routes with 2 nhs
    Variable overheads:
    14 bytes: per next hop per route in every MVPN enabled VRF
    24 bytes: per OSPF route in every VRF where OSPF is PE-CE protocol
    54 bytes: per EIGRP route in every VRF where EIGRP is PE-CE protocol
Transcript of "Cisco data center support"

Slide 1: Cisco Data Center Switches Support
Krunal Shah
Slide 2: Agenda
- 6500 troubleshooting and Sup2T introduction
- NX-OS troubleshooting
- Nexus 7000 troubleshooting tools and tips
- If time permits:
  - Demo: N7K ISSU software upgrade from 5.1 to 5.2
  - Demo: 6500 VSS
  - Demo: eFSU upgrade on a 6500 VSS switch
- Appendix: lab topology, IP addresses, and connections of the Nexus and 6500 VSS switches
- Feel free to ask questions during the presentation.
Slide 3: Troubleshooting – High CPU Utilization
- Check "show processes cpu sorted":
    VSS-SW2#show processes cpu sorted
    CPU utilization for five seconds: 3%/0%; one minute: 4%; five minutes: 4%
    PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
    3 5255096 102829462 51 1.67% 1.84% 1.69% 0 Exec
    503 649576 3701204 175 0.39% 0.20% 0.17% 0 Port manager per
    131 20 50 400 0.15% 0.01% 0.00% 1 Virtual Exec
    310 298888 4114102 72 0.07% 0.08% 0.08% 0 IP Input
- When CPU is high due to interrupts:
    debug netdr capture          ! start debugging packets processed by the CPU
    show netdr captured-packets  ! show captured packet headers
    debug netdr clear-capture    ! clear the capture buffer
    show ip cef switching statistics
    show ip traffic
- To check CPU utilization on the switch processor:
    CAT6500#remote command switch show proc cpu sorted
- To check CPU on DFC cards:
    CAT6500#attach module 2
    CAT6500-dfc2#show proc cpu sorted
Slide 4: Troubleshooting Traffic Flow
Three forwarding paths through the 6500 switch:
- Process switching path (show ip route): involves CPU processing (high CPU due to the "IP Input" process)
- Software-based CEF switching path (show ip cef): involves CPU processing (high CPU due to interrupts)
- Hardware-based (MLS) CEF switching path (show mls cef): no CPU involved in processing packets
Slide 5: Troubleshooting – Unicast Flow
- Example: Host A cannot reach Host B in the same VLAN, but it can reach another host, C, in the same VLAN 200.
- Get the source and destination IP and MAC addresses for both the working and the non-working flows.
  - Trace the MAC address hop by hop along the Layer 2 path:
      show mac-address-table address 0000.aaaa.bbbb vlan 200 all
  - Use the "all" keyword to see results from the forwarding engine.
- Host A in VLAN 100 cannot reach Host B in VLAN 200:
  - Check the routing table for the VLAN 100 and VLAN 200 entries; check that both SVIs are up.
  - Ping other IPs in the same subnet and the default gateway on both VLANs.
  - Check the CEF table and the hardware forwarding table:
      Cat6500-VSS-SW#show ip cef exact-route 172.18.100.13 172.18.200.1
      172.18.100.13 -> 172.18.200.1 => IP adj out of Vlan200, addr 172.18.200.1
      Cat6500-VSS-SW#show mls cef exact-route 172.18.100.13 62554 172.18.200.1 23 module 2
      Interface: Vl200, Next Hop: 172.18.200.1, Vlan: 200, Destination Mac: 0000.5e00.01c8
Slide 6: Troubleshooting – Unicast Flow
- Check the load-balancing algorithm for the incoming port's module; some software versions allow load balancing on a per-module basis.
    VSS-SW#show etherchannel load-balance module 2
    EtherChannel Load-Balancing Configuration:
    src-dst-ip vlan included
    mpls label-ip
    EtherChannel Load-Balancing Addresses Used Per-Protocol:
    Non-IP: Source XOR Destination MAC address
    IPv4: Source XOR Destination IP address
    IPv6: Source XOR Destination IP address
    MPLS: Label or IP
- The hash-result command shows which member link a given flow will take:
    VSS-SW#show etherchannel load-balance hash-result int port-channel 20 ip 172.18.200.13 172.18.200.1
    Computed RBH: 0x6
    Would select Te1/2 of Po20
Slide 7: Troubleshooting – ELAM Capture
- A way to capture a packet on the DBUS/RBUS of the supervisor.
- Enable "service internal".
- Select a valid ELAM ASIC:
    show platform capture elam asic
  Example: ... superman slot 5
  (selects the Superman ASIC on slot 5, the supervisor)
- Enter a valid trigger:
    show platform capture elam trigger
  Example: ... dbus ipv4 if ip_da=224.1.0.0[255.255.0.0]
  (captures any IPv4 packet with destination IP 224.1.0.0/16)
  ... rbus ip if ccc=mcast_l3_rw met3=0x0300[0xff00]
  (captures any IP packet generating an ML3_RW result with a MET3 pointer equal to 3xx)
  ... dbus help
  (returns an extensive list of DBUS fields you can use)
- Start the capture:
    show platform capture elam start
- Check the capture status:
    show platform capture elam status
- Print the captured data:
    show platform capture elam data
Slide 8: Virtual Switching System
- In a dual-sup VSS, the control plane is active/standby while the data plane remains active/active on both supervisors – a "Borg" architecture.
- Only the VS-S720-10G supervisor and 67XX line cards are supported in VSS mode.
- Best practice is to dual-attach devices to the VSS for deterministic traffic flow; the VSS always uses the local interfaces of a port-channel to forward traffic.
- Design tip: if an L3 port-channel between a router and the VSS switch is not possible, use equal-cost L3 links between the router and the switch.
- Sup2T and Sup720 cannot be mixed and matched in a single chassis or within a VSS pair.
- With quad sup (two sups in each chassis), one VSL link should be cross-connected between the supervisors. Quad sup only supports the active supervisor in one chassis and the standby in the other, so on SSO the switchover goes to the supervisor in the other chassis. Intra-chassis SSO is on the roadmap for the end of 2011.
Slide 9: Virtual Switching System
Switch 1 configuration:
    interface TenGigabitEthernet1/5
     no switchport
     no ip address
     mls qos trust cos
     channel-group 1 mode on
    interface TenGigabitEthernet2/1
     no switchport
     no ip address
     mls qos trust cos
     channel-group 1 mode on
    interface Port-channel1
     no switchport
     no ip address
     switch virtual link 1
     mls qos trust cos
     no mls qos channel-consistency
    !
    switch virtual domain 1
     switch mode virtual
     switch 1 priority 200
     mac-address use-virtual
    !
    VSS-SW1#switch convert mode virtual
Switch 2 configuration:
    interface TenGigabitEthernet1/5
     no switchport
     no ip address
     mls qos trust cos
     channel-group 1 mode on
    interface TenGigabitEthernet2/1
     no switchport
     no ip address
     mls qos trust cos
     channel-group 1 mode on
    interface Port-channel2
     no switchport
     no ip address
     switch virtual link 2
     mls qos trust cos
     no mls qos channel-consistency
    !
    switch virtual domain 1
     switch mode virtual
     switch 2 priority 100
     mac-address use-virtual
    !
    VSS-SW2#switch convert mode virtual
Slide 10: Virtual Switching System
- To check the health of the VSL link and the standby chassis:
    ping vslp output interface tengig 1/5/1
- In ROMMON the switch number can be seen with the "set" command, where SWITCH_NUMBER shows the switch number. You can set or change the value using "SWITCH_NUMBER=1".
- Prior to IOS 12.2(33)SXI3, the "switch accept mode virtual" command was required as the last step to finish the VSS configuration.
- Recommended config: "no mls qos channel-consistency" on the VSL port-channel interface.
- VSLP fast hello, IP-BFD, and Enhanced PAgP are the dual-active detection methods (see the sketch below). When the VSS is in recovery mode, do not change the configuration (not even "config t").
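Dual-active detection is configured under the virtual switch domain. A minimal sketch for the fast-hello method, assuming a dedicated non-VSL L2 link between the two chassis (the interface number here is hypothetical for this lab):

    ! Enable VSLP fast-hello dual-active detection
    switch virtual domain 1
     dual-active detection fast-hello
    !
    ! The fast-hello link must be a direct link separate from the VSL;
    ! Gi1/4/47 is a hypothetical port.
    interface GigabitEthernet1/4/47
     dual-active fast-hello
    !
    ! Verify with: show switch virtual dual-active summary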
Slide 11: Catalyst 6500 Supervisor 2T
- Sup2T uses the PFC4/MSFC5 and can push 80 Gbps per slot to the backplane fabric.
- The RP and SP are a single processor, which avoids the management overhead of two separate file systems.
- With the PFC4 on Sup2T, VPLS is done in hardware; no special SIP/SPA card is needed on the VPLS-facing interface.
- Supported only on E-series chassis; non-E chassis are now EoL.
- A new Connectivity Management Processor (CMP) has been added (the same CMP hardware as the N7K-SUP1) for management (console, reload, powering the system on/off).
- Improved NAT capability: the first NAT packet is also switched in hardware, as opposed to the Sup720, which handles the first NAT packet in software.
- VSS-capable with another Sup2T in a separate chassis.
(Slide photo labels: console and OOB management port, 2x 10G X2 modules, switch fabric, CMP)
Slide 12: Catalyst 6500 Supervisor 2T
(Architecture diagram; source: Supervisor 2T architecture, Cisco.com)
Slide 13: Sup2T-Compatible Line Cards
- All DFC4 (68XX and 69XX) line cards are compatible and supported with Sup2T. Selected CEF720 67XX and Classic 61XX line cards are compatible and supported with Sup2T.
- 65XX line cards are not compatible with Sup2T.
- Only the 6708 line card is not field-upgradeable to DFC4, due to an ASIC limitation; customers can use the 6908 instead. All other 67XX line cards are field-upgradeable to DFC4 using a daughter card.
- Sup2T uses the 12.2(49)SY code train, which does not support eFSU at this time. 15.X is the future code train for Sup2T and Sup720.
- For more info on line cards for Sup2T: http://bit.ly/q34K20
- Whitepaper on the Sup2T architecture: http://bit.ly/rehpv9
Slide 14: Nexus
Slide 15: NX-OS Modular Architecture
- The HA manager includes the system manager (analogous to the Linux init process), the Message and Transaction Service (an inter-process communication layer), and the Persistent Storage Service (PSS, a relational database of last known state, e.g. a checkpoint of a process).
    N7010B-Dist# show system internal ?
    mts     MTS statistics
    pss     Display pss information
    sysmgr  Internal state of System Manager
- NX-OS services checkpoint their runtime state to the PSS for recovery in the event of a failure.
(Diagram: Linux kernel; HA manager restarting a failed process; N7K data plane; PSS; services such as BGP, OSPF, PIM, TCP/UDP, IPv6, STP, HSRP, LACP; data plane streams)
Slide 16: NX-OS Troubleshooting
- Always gather problem-specific show techs:
    N7010B-Dist# show tech ?
- To collect a full show tech, use the TAC-PAC:
    N7010B-Dist# tac-pac
  This collects the show tech and saves a .gz file in bootflash; you can use TFTP to copy the file off the box.
- Always get a timestamp of the problem. Zip all files with the NX-OS "gzip" command before shipping them.
- Use the built-in Linux tools (e.g. grep, egrep, last, less, sed, wc, sort, diff, redirect, exclude, include, pipe) to look for specific information.
- Most useful commands:
    show version
    show module
    show log | last 100
    show running-config ?
    show system resource
    show inventory
    show interface transceiver
    show core
    show process log
    dir bootflash:
    show accounting log start-time 2011 Sep 20 00:00:00
    show proc cpu sorted
    show cli syntax | egrep "vpc"
Slide 17: NX-OS Troubleshooting Crashes
- In 2008 SAN-OS was rebranded as NX-OS. NX-OS for the Nexus products combines software components from IOS and SAN-OS.
- NX-OS is a modular OS, so a single process crash does not impact the overall operation of the switch.
- A failed process creates a core dump:
    N7010A-Dist# show core
    VDC Module Instance Process-name PID Date(Year-Month-Day Time)
    --- ------ -------- --------------- ---- -------------------------
- To fetch a core dump from the supervisor:
    N7010B-Dist# copy core:?
    core: Enter URL "core://<module-number>/<process-id>[/instance-num]"
    N7010B-Dist# copy core: tftp:
Slide 18: NX-OS Troubleshooting Crashes
    N7010A-Dist# show system internal sysmgr service name l2fm
    Service "l2fm" ("l2fm", 90):
    UUID = 0x19A, PID = 4980, SAP = 221
    State: SRV_STATE_HANDSHAKED (entered at time Sat Sep 3 17:49:00 2011).
    Restart count: 1
    Time of last restart: Sat Sep 3 17:34:05 2011.
    The service never crashed since the last reboot.
    Tag = N/A
    Plugin ID: 1

    N7010A-Dist# show system internal sysmgr service pid 4433
    Service "urib" ("urib", 173):
    UUID = 0x111, PID = 4433, SAP = 427
    State: SRV_STATE_HANDSHAKED (entered at time Sat Sep 3 17:49:00 2011).
    Restart count: 1
    Time of last restart: Sat Sep 3 17:33:25 2011.
    The service never crashed since the last reboot.
    Tag = N/A
    Plugin ID: 0
Slide 19: Troubleshooting High CPU Utilization
- The Nexus 7000 has a dual-core Intel CPU, and high CPU is not a direct indication of a problem. Identify whether it is related to traffic punted to the CPU.
- There is strict control-plane/data-plane separation in NX-OS, and CoPP restricts access to the control plane.
- If CPU is high due to traffic punted to the CPU, use the debug-filter command to limit the packet debugs (see the sketch below):
    N7010A-Dist# debug-filter pktmgr ?
    dest-mac     Pm debug-filter destination mac
    direction    Pm debug-filter direction
    driver-type  Driver type
    inband       Inband filter
    interface    Pm debug-filter interface
    priority     Pm debug-filter priority
    raw          Pm debug raw form
    source-mac   Pm debug-filter source mac
    type         Pm debug-filter type
    vlan         Vlan
- "debug pktmgr frame" is roughly the equivalent of the netdr capture on the 6500.
- Data to collect during high-CPU issues:
    show processes cpu sort
    show hardware internal cpu-mac inband counters
    show system internal processes cpu
    show proc cpu history
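A minimal capture workflow as a sketch (the VLAN value is hypothetical and the filter syntax follows the help output above; sending debug output to a logfile rather than the console is the safe pattern):

    N7010A-Dist# debug logfile pktmgr.log
    N7010A-Dist# debug-filter pktmgr vlan 100
    N7010A-Dist# debug pktmgr frame
    ! ... reproduce the high-CPU condition, then stop and review:
    N7010A-Dist# undebug all
    N7010A-Dist# show debug logfile pktmgr.log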
Slide 20: Troubleshooting NX-OS Software Upgrades
- Three types of software images: kickstart, system, and EPLD.
- During an upgrade the supervisor is upgraded to the new code first, then each line card, because line cards also run a lightweight version of NX-OS.
- A fully loaded 10-slot chassis takes about an hour to upgrade, with no traffic loss during this period. All configuration is locked on both sups.
- Before upgrading, check the software installation impact (the install sketch follows below):
    N7010B-Dist# show install all impact kickstart bootflash:n7000-s1-kickstart.5.2.1.bin system bootflash:n7000-s1-dk9.5.2.1.bin
- If a software upgrade fails:
    N7010B-Dist# show install all failure-reason
    N7010A-Dist# show process log
    VDC Process PID Normal-exit Stack Core Log-create-time
    --- --------------- ------ ----------- ----- ----- -------
    1 installer 1497 N N N Sat Jul 16 22:29:59 2011
    N7010B-Dist# show system internal log install | no-more
    N7010B-Dist# show system internal log install details | no-more
- If available, gather bootup logs from the console or from the CMP while the upgrade is in progress:
    N7010A-Dist-cmp5# attach cp
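The upgrade itself is kicked off with "install all", using the same image arguments as the impact check; a sketch with the 5.2(1) image names from this slide:

    N7010B-Dist# install all kickstart bootflash:n7000-s1-kickstart.5.2.1.bin system bootflash:n7000-s1-dk9.5.2.1.bin
    ! After the ISSU completes, confirm per-module status and the running version:
    N7010B-Dist# show install all status
    N7010B-Dist# show version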
Slide 21: Nexus 7000 Line Cards
- N7K-M148GT-11L= – 48-port 10/100/1000 module with XL option
- N7K-M148GT-11= – 48-port 10/100/1000, RJ-45
- N7K-M148GS-11L= – 48-port GE module with XL option (req. SFP)
- N7K-M148GS-11= – 48-port 1G, SFP
  (all four: 1 forwarding engine, ~1:1 oversubscription, 46G backplane bandwidth, 1 fabric module minimum, 2q4t/1p3q4t port queuing, 7.56MB/6.15MB per-port buffer, 1p3q1t fabric queuing, 16MB VOQ buffer)
- N7K-M132XP-12L= – 32-port 10GbE with XL option, 80G fabric (req. SFP+)
- N7K-M132XP-12= – 32-port 10GbE, 80G fabric (req. SFP+)
  (both: 1 forwarding engine, 4:1 oversubscription, 80G backplane bandwidth, 2 fabric modules minimum, 8q1t/1p7q4t port queuing, 1MB+65MB/80MB per-port buffer, 1p3q1t fabric queuing, 32MB VOQ buffer)
- N7K-M108X2-12L= – 8-port 10GbE with XL option (req. X2): 2* forwarding engines, 1:1 oversubscription, 80G backplane bandwidth, 2 fabric modules minimum, 8q1t/1p7q4t port queuing, 96MB/80MB per-port buffer, 1p3q1t fabric queuing, 32MB VOQ buffer
- N7K-F132XP-15= – 32-port 1G/10G Ethernet module, SFP/SFP+: 2* forwarding engines, ~1.4:1 oversubscription, 230G backplane bandwidth, 5 fabric modules minimum, 2q4t/1p3q1t fabric queuing, 48MB VOQ buffer (per-port buffer and port queuing: N/A)
Slide 22: F1 and M1 Line Card Interactions
- All 32 ports of an F1 series line card can run at 10G for local switching (switching within the line card), but the total backplane available today is only 230 Gbps.
- F1 series line cards require M1 line cards to route packets. All SVIs for VLANs on F1 line cards are hosted on M1 line cards (proxy routing); the M1 ports do not need to be up.
- F1 series line cards see the M1 series line cards as one big port-channel per VDC and use it for L3 lookups (see the sketch below).
- The F1 line card has connections between its forwarding engines, so it can switch locally without crossing the switching fabric.
- The 8-port M1 10G line card cannot locally switch from the first 4 ports to the other switch ports without crossing the switching fabric, because it does not have the connections between its forwarding engines.
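The F1-to-M1 proxy relationship can be inspected from the CLI; a hedged sketch (the command form and output vary by NX-OS release, so verify availability on the box):

    N7010A-Dist# show hardware proxy layer-3 detail
    ! Lists which M1 forwarding engines act as proxy routers for F1 traffic
    ! and how proxy-routing capacity is spread across them.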
Slide 23: Virtual Port-Channels
- vPC is a multi-chassis port-channel (MLAG) technology.
- The domain ID has to be unique.
- The peer-keepalive uses UDP port 3200 and sends a packet every second to check the health of the peer.
- It is important to remember that vPC is a Layer 2 bundling technology: the two vPC peers are independent routers, and no L3 routing information is synchronized between them.
- NX-OS uses Cisco Fabric Services (CFS) to synchronize state information (MAC address table, IGMP snooping database, etc.) between the vPC peers:
    N7010A-Dist# show cfs ?
    application  Show locally registered applications
    internal     Show internal information
    lock         Show state of application's logical/physical locks
    merge        Show cfs merge information
    peers        Show all the peers in the physical fabric
    regions      Show all the applications with peers and region information
    status       Show current status of CFS
- Role priority can be configured to manually elect the vPC role. vPC does not support role preemption. (Roles: primary, operational secondary.)
Slide 24: Configuring vPC
    N7010B-Dist# sh run vpc
    feature vpc
    vpc domain 1
      peer-switch
      peer-keepalive destination 10.23.242.220 source 10.23.242.225 vrf management
      peer-gateway
      ipv6 nd synchronize
      ip arp synchronize
    interface port-channel1
      vpc peer-link
    interface port-channel10
      vpc 10

    N7010A-Dist# sh run vpc
    feature vpc
    vpc domain 1
      peer-switch
      peer-keepalive destination 10.23.242.225 source 10.23.242.220 vrf management
      peer-gateway
      ipv6 nd synchronize
      ip arp synchronize
    interface port-channel1
      vpc peer-link
    interface port-channel10
      vpc 10
Callouts from the slide:
- peer-keepalive: use VRF management.
- peer-switch: presents both vPC peers as a single switch to the access switches.
- peer-gateway: enables local forwarding of packets destined to the peer's MAC address.
- ip arp synchronize: enables ARP synchronization on both peer switches for faster convergence.
Slide 25: Monitoring and Troubleshooting vPC
- show vpc
- show vpc orphan-ports
  - Orphan ports are L2 ports that are not part of a vPC and are attached to only one vPC peer.
- show vpc consistency-parameters global (see the per-vPC sketch below)
  - Any type-1 consistency parameter mismatch suspends the vPC.
  - Any type-2 consistency parameter mismatch keeps the vPC up but causes odd forwarding behaviour.
- vPC peer-link ports can reside on F1 series line cards, but they must be 10G ports. When using the M1 32-port line card for the peer-link, make sure the peer-link ports are in dedicated mode, otherwise the peer-link won't come up.
- show tech-support vpc
- show tech-support stp
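Consistency parameters can also be checked per vPC rather than globally; a quick sketch using this lab's port-channel 10:

    N7010B-Dist# show vpc brief
    N7010B-Dist# show vpc consistency-parameters interface port-channel 10
    ! Compare the local and peer values column by column; a type-1 mismatch
    ! here explains a suspended vPC on just this port-channel.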
Slide 26: Unsupported vPC Topologies
(Diagram: OSPF routers attached to the vPC pair; labels: L2, L3, OSPF, vPC peer-link, supported, unsupported. Forming routing adjacencies over a vPC member port-channel is the unsupported case.)
Slide 27: Supported vPC Topologies
(Diagram: the same L2/L3 OSPF topology with the devices attached via vPC 10; labels: L2, L3, OSPF, vPC peer-link, vPC 10.)
Slide 28: Virtual Device Context
- Yet another level of device virtualization, after VSANs, VLANs, and VRFs.
- There is one default VDC, the admin VDC, from which all other VDCs are created and interfaces are assigned to them.
- Physical cables are needed to pass traffic from one VDC to another.
- With newer code, reload of an individual VDC is supported using the "reload vdc <vdc-name>" command in the admin VDC.
- Similarly, a graceful shutdown of a VDC can be done using "vdc <vdc-name> suspend".
(Diagram: VDC 1 (admin), VDC 2 (type storage), VDC 3, VDC 4)
Slide 29: Configuring and Verifying VDC
    N7010A-Dist(config)# vdc N7010A-Core id 2
    N7010A-Dist(config-vdc)# allocate interface Ethernet1/9,Ethernet1/11,Ethernet1/13,Ethernet1/15
    Moving ports will cause all config associated to them in source vdc to be removed. Are you sure you want to move the ports (y/n)? [yes] y
    N7010A-Dist(config-vdc)# sh vdc N7010A-Core membership
    vdc_id: 2 vdc_name: N7010A-Core interfaces:
    Ethernet1/9 Ethernet1/11 Ethernet1/13
    Ethernet1/15
    N7010A-Dist(config-vdc)# sh run vdc | begin N7010A-Core
    vdc N7010A-Core id 2
      limit-resource module-type m1 m1xl
      allocate interface Ethernet1/9,Ethernet1/11,Ethernet1/13,Ethernet1/15,Ethernet1/18,Ethernet1/20,Ethernet1/22,Ethernet1/24
      allocate interface Ethernet2/25-48
      boot-order 1
      limit-resource vlan minimum 16 maximum 4094
      limit-resource monitor-session minimum 0 maximum 2
      limit-resource monitor-session-erspan-dst minimum 0 maximum 23
      limit-resource vrf minimum 2 maximum 1000
      limit-resource port-channel minimum 0 maximum 768
      limit-resource u4route-mem minimum 32 maximum 32
      limit-resource u6route-mem minimum 16 maximum 16
      limit-resource m4route-mem minimum 16 maximum 16
      limit-resource m6route-mem minimum 3 maximum 3
Slide 30: Configuring and Verifying VDC
    N7010A-Dist(config-vdc)# show vdc N7010A-Core detail
    vdc id: 2
    vdc name: N7010A-Core
    vdc state: active
    vdc mac address: 00:26:98:07:ea:c2
    vdc ha policy: RESTART
    vdc dual-sup ha policy: SWITCHOVER
    vdc boot Order: 1
    vdc create time: Sun Jul 31 17:39:25 2011
    vdc reload count: 0
    vdc restart count: 0
    vdc type: Ethernet
    vdc supported linecards: m1 m1xl

    N7010A-Dist(config-vdc)# show vdc resource
    vlan                        30 used  13 unused  16354 free  16341 avail  16384 total
    monitor-session              0 used   0 unused      2 free      2 avail      2 total
    monitor-session-erspan-dst   0 used   0 unused     23 free     23 avail     23 total
    vrf                          4 used   0 unused   4092 free   4092 avail   4096 total
    port-channel                 3 used   0 unused    765 free    765 avail    768 total
    u4route-mem                 40 used   0 unused    476 free    476 avail    516 total
    u6route-mem                 40 used   0 unused    168 free    168 avail    208 total
    m4route-mem                 32 used   0 unused    168 free    168 avail    200 total
    m6route-mem                 11 used   0 unused     69 free     69 avail     80 total

    N7010A-Dist# switchto vdc N7010A-Core
    N7010A-Core# switchback
    N7010A-Dist#
Slide 31: Limitations When Assigning Interfaces to a VDC
- The N7K-M132XP-12 line card requires allocation in port groups of 4 to align ASIC resources.
- The N7K-F132XP-15 line card requires allocation in port groups of 2 to align ASIC resources.
- Each port on an N7K-M108X2-12L has its own ASIC, so ports can be assigned individually. Best practice is to assign ports 1-4 and ports 5-8 to two separate VDCs, which leaves one forwarding engine per VDC.
- N7K-M148GT-11 line cards have 4 port groups of 12 ports. Best practice is to keep all members of a port group in the same VDC.
Slide 32: Storage VDC
- NX-OS 5.2 introduces a new type of VDC called the storage VDC (see the sketch below).
- It creates a virtual MDS 9000 (Cisco SAN switch) within the Nexus 7000, and allows FCoE VE_Port-to-VE_Port connections between Nexus 7000 storage VDCs as well as FCoE target support.
- Only interfaces from F1 series line cards can be assigned to a storage VDC; interfaces from M1 series line cards are not supported. "limit-resource module-type f1" is configured by default.
- Interfaces can be shared between the storage VDC and another VDC: only FCoE and FIP frames (EtherTypes 0x8906 and 0x8914) are directed to the storage VDC; all other frames are directed to the Ethernet VDC.
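A minimal creation sketch, assuming NX-OS 5.2 with the FCoE feature licensed (the VDC name and interface ranges are hypothetical):

    ! Create the storage VDC and give it dedicated F1 ports
    N7010A-Dist(config)# vdc fcoe id 3 type storage
    N7010A-Dist(config-vdc)# allocate interface Ethernet3/1-4
    ! Ports carrying both Ethernet and FCoE traffic are allocated as shared
    N7010A-Dist(config-vdc)# allocate shared interface Ethernet3/5-8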
Slide 33: Troubleshooting L2 Forwarding
- All MAC addresses are downloaded to the forwarding engines of the line cards; the CPU is not directly involved in MAC address learning.
- The Layer 2 Forwarding Manager (L2FM) process maintains the MAC address table entries and keeps them in sync across all line cards via the L2FM child (l2fmc) process.
- "show mac address-table" shows the MAC addresses learned.
    N7010A-Dist# show hardware mac address-table ?
    <1-10> Module Number
  This shows the MAC address table entries from a line card's forwarding engine (see the sketch below).
- If traffic is not forwarded between two hosts on the same VLAN, and both MAC addresses are in the MAC address table but not in the hardware MAC table, collect:
    show tech-support l2fm
- For spanning-tree related issues:
    show tech-support stp
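A quick software-versus-hardware comparison for a suspect host, as a sketch (the MAC and VLAN are the example values from slide 5; module 1 is assumed to be the ingress line card):

    N7010A-Dist# show mac address-table address 0000.aaaa.bbbb vlan 200
    N7010A-Dist# show hardware mac address-table 1 vlan 200
    ! The entry should appear in both outputs. Present in software but missing
    ! from the hardware table points at L2FM/l2fmc, so collect show tech-support l2fm.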
Slide 34: Troubleshooting L2 Forwarding
- If the outgoing interface is a port-channel, determine which member port is chosen:
    N7010A-Dist# show port-channel load-balance
    Port Channel Load-Balancing Configuration:
    System: src-dst ip
    Port Channel Load-Balancing Addresses Used Per-Protocol:
    Non-IP: src-dst mac
    IP: src-dst ip

    N7010A-Dist# show port-channel rbh-distribution interface port-channel 10
    ChanId   Member port   RBH values            Num of buckets
    -------- ------------- --------------------- ----------------
    10       Eth1/10       4,5,6,7,12,13,14,15   8
    10       Eth1/12       0,1,2,3,8,9,10,11     8

    N7010A-Dist# show port-channel load-balance forwarding-path interface port-channel 10 src-ip 172.18.100.12 dst-ip 172.18.200.12 module 1
    Missing params will be substituted by 0's.
    Module 1: Load-balance Algorithm: src-dst ip
    RBH: 0 Outgoing port id: Ethernet1/12
Slide 35: Troubleshooting L3 Forwarding
- Routing protocol processes gather routing information from their neighbors, select the best routes, and place them in the routing table (RIB). CLI: "show ip route" or "show routing ipv4 unicast".
- The Unicast Forwarding Distribution Manager (UFDM) process is the interface between the URIB on the supervisor and the IPFIB process on the line cards. The adjacency table contains the next-hop information.
- The IPFIB process finally programs the forwarding engines on the line cards.
- Show commands containing "routing" refer to the unicast RIB contents on the supervisor control plane; show commands containing "forwarding" refer to the FIB contents.
- All forwarding decisions are made on the ingress line card.
Slide 36: Troubleshooting L3 Forwarding
    N7010B-Dist# show routing 150.100.1.1
    IP Route Table for VRF "default"
    '*' denotes best ucast next-hop
    '**' denotes best mcast next-hop
    '[x/y]' denotes [preference/metric]
    150.100.1.1/32, ubest/mbest: 1/0
    *via 172.18.30.5, Eth2/4, [110/11], 1d18h, ospf-NEXUS, intra

    N7010B-Dist# show ip adjacency 172.18.30.5
    Flags: # - Adjacencies Throttled for Glean
    IP Adjacency Table for VRF default
    Total number of entries: 1
    Address      MAC Address     Pref  Source  Interface
    172.18.30.5  0026.9807.eac2  50    arp     Ethernet2/4

    N7010B-Dist# show forwarding ipv4 route 150.100.1.1 module 1
    IPv4 routes for table default/base
    ------------------+------------------+----------------------+-----------------
    Prefix            | Next-hop         | Interface            | Labels
    ------------------+------------------+----------------------+-----------------
    150.100.1.1/32      172.18.30.5        Ethernet2/4

    N7010B-Dist# show forwarding adjacency 172.18.30.5 module 1
    IPv4 adjacency information
    next-hop       rewrite info    interface
    -------------- --------------- -------------
    172.18.30.5    0026.9807.eac2  Ethernet2/4
Slide 37: L3 ECMP Forwarding
    N7010B-Dist# show routing 172.18.30.8
    IP Route Table for VRF "default"
    '*' denotes best ucast next-hop
    '**' denotes best mcast next-hop
    '[x/y]' denotes [preference/metric]
    172.18.30.8/30, ubest/mbest: 2/0
    *via 172.18.30.5, Eth2/4, [110/20], 1d19h, ospf-NEXUS, intra
    *via 172.18.30.17, Eth2/3, [110/20], 1w4d, ospf-NEXUS, intra

    N7010B-Dist# show forwarding ipv4 route 172.18.30.8 module 1
    IPv4 routes for table default/base
    ------------------+------------------+----------------------+-----------------
    Prefix            | Next-hop         | Interface            | Labels
    ------------------+------------------+----------------------+-----------------
    172.18.30.8/30      172.18.30.5        Ethernet2/4
                        172.18.30.17       Ethernet2/3

    N7010B-Dist# show routing hash 172.18.100.1 172.18.30.8 62554 23
    Load-share parameters used for software forwarding:
    load-share mode: address source-destination port source-destination
    Universal-id seed: 0x9e629bda
    Hash for VRF "default"
    Hashing to path *172.18.30.17
    For route: 172.18.30.8/30, ubest/mbest: 2/0
    *via 172.18.30.5, Eth2/4, [110/20], 1d19h, ospf-NEXUS, intra
    >*via 172.18.30.17, Eth2/3, [110/20], 1w4d, ospf-NEXUS, intra
Slide 38: Troubleshooting L3 Forwarding (Hardware Entry)
    N7010B-Dist# show system internal forwarding route 172.18.30.8 module 2 detail
    <output omitted>
    172.18.30.8/30, Ethernet2/4, No of paths: 2
    Dev: 1, Idx: 0x15006, RPF Flags: V, DGT: 0, VPN: 1
    RPF_Intf_5: Ethernet2/4 (0x4021)
    AdjIdx: 0x4303c, LIFB: 0, LIF: Ethernet2/4 (0x4021), DI: 0x475
    DMAC: 0026.9807.eac2 SMAC: 0026.9811.9c41
    AdjIdx: 0x4303d, LIFB: 0, LIF: Ethernet2/3 (0x4020), DI: 0x474
    DMAC: 0026.9811.9c42 SMAC: 0026.9811.9c41

    N7010B-Dist# show system internal forwarding adjacency module 2 entry 0x4303c detail
    Device: 1 Index: 0x4303c DMAC: 0026.9807.eac2 SMAC: 0026.9811.9c41
    LIF: 0x4021 (Ethernet2/4) DI: 0x475 ccc: 4 L2_FWD: NO RDT: YES
    packets: 0 bytes: 0 zone enforce: 0

    N7010B-Dist# show system internal forwarding adjacency module 2 entry 0x4303d detail
    Device: 1 Index: 0x4303d DMAC: 0026.9811.9c42 SMAC: 0026.9811.9c41
    LIF: 0x4020 (Ethernet2/3) DI: 0x474 ccc: 4 L2_FWD: NO RDT: YES
    packets: 0 bytes: 549755813888 zone enforce: 0
Slide 39: PMR: XXXXX,035,649
- Environment: two Cisco Nexus 7010s with 4.2(6) NX-OS code.
- Problem: multicast traffic not flowing through the vPC on two VLANs. The (S,G) entry was missing on both switches, but the (*,G) entry was present.
- "show policy-map interface control-plane" showed violations in the PIM class map: excess traffic was being dropped.
- Resolution: added an access list to the CoPP policy map to allow the multicast control-plane traffic (see the sketch below).
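The troubleshooting boils down to reading the CoPP counters; a sketch (class-map names vary with the CoPP profile applied, so verify them on the box):

    N7010B-Dist# show policy-map interface control-plane | egrep "class|violate"
    ! A climbing "violated" count under the PIM/multicast class means the policer
    ! is dropping control-plane traffic; extend the ACL feeding that class-map.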
Slide 40: PMR: XXXXX,057,649
- Environment: Nexus 7K.
- Problem: an iBGP session through a firewall was not coming up.
- Both BGP peers could ping each other, and each could telnet to TCP port 179 on the other, but the BGP session was not established.
- Checked the BGP process; it was initialized on the Nexus.
- Checked the control-plane policy; it was not dropping BGP-related traffic.
- Further down in the "show ip bgp neighbors" output: "BGP neighbor is not configured for address-family IPv4 unicast".
- Checked the config: the customer was missing "address-family ipv4 unicast" under the neighbor.
- Resolution: configured the IPv4 unicast address family for the neighbor (sketch below).
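For reference, the missing statement in NX-OS BGP syntax (the AS number and neighbor address here are hypothetical):

    router bgp 65000
      neighbor 192.0.2.1 remote-as 65000
        ! Without this, the session stays down even though TCP/179 is reachable
        address-family ipv4 unicast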
Slide 41: PMR: 18077,057,649
- Environment: Nexus 7000 with NX-OS 5.2.
- Problem: a Cisco N7K Sup1 shuts down when both ejector buttons are pressed, but does not come back up when both ejectors are closed.
    %PLATFORM-3-EJECTOR_STAT_CHANGED: Ejectors' status in slot 6 has changed, Top Ejector is OPEN, Bottom Ejector is OPEN
- The supervisor always shuts down when both ejectors are opened, and it does not power up on closing them. The supervisor must be removed from the slot and reinserted, or powered on via CLI:
    N7010A-Dist(config)# no poweroff module 6
- Line cards, however, boot up after the ejectors are closed, because they get the NX-OS image from the supervisor; as long as one active supervisor is present in the chassis, closing the ejectors powers the line card back up.
Slide 42: PMR: XXXXX,057,649
- Environment: Nexus 7000 and ASA firewall.
- Problem: traffic sent by the ASA firewall to the Nexus HSRP IP address was dropped instead of forwarded.
- All the dropped traffic was addressed to the HSRP MAC address; the active ASA was connected to the HSRP standby switch, which was dropping it.
- Resolution: enabled the peer-gateway command under the vPC domain, which enables local forwarding of traffic destined to the peer's MAC address.
Slide 43: OCPM # XXJV5
- Environment: Nexus 7000 M1 32-port and 48-port line cards.
- Problem: two modules had diagnostic failures.
- Captured:
    show tech module 2
    show diagnostic result module 2 detail
    show tech module 4
    show diagnostic result module 4 detail

    10) BootupPortLoopback:
    Error code ------------------> DIAG TEST FAIL
    Total run count -------------> 1
    Last test execution time ----> Wed Jun 8 13:47:51 2011
    First test failure time -----> Wed Jun 8 13:48:01 2011
    Last test failure time ------> Wed Jun 8 13:48:01 2011
    Last test pass time ---------> n/a
    Total failure count ---------> 1
    Consecutive failure count ---> 1
    Last failure reason ---------> Bootup Loopback test has failed
- On module 4 the PortLoopback test was failing:
    show diagnostic description module 1 test PortLoopback
    PortLoopback:
    A health monitoring test that will test the packet path from the Supervisor card to the physical port in ADMIN DOWN state on Linecards.

    6) PortLoopback:
    Error code ------------------> DIAG TEST FAIL
    Total run count -------------> 9488
    Last test execution time ----> Thu Sep 15 09:37:07 2011
    First test failure time -----> Mon Jun 20 17:35:14 2011
    Last test failure time ------> Thu Sep 15 09:38:15 2011
    Last test pass time ---------> Mon Jun 20 17:20:08 2011
    Total failure count ---------> 8321
    Consecutive failure count ---> 8321
    Last failure reason ---------> Loopback test skipped because the port failure count exceeded the threshold
- Resolution: CSCtn81109, "Diagnostic test fails N7K-M132XP-12 randomly on reload of a N7K", fixed in 5.1(4) and 5.2(1).
Slide 44: OCPM # XXJTL
- Environment: Nexus 7000 with 4.2(6) code.
- Problem: cannot configure jumbo MTU on the vPC peer-link.
    NPEctswPTEAGG01-PTE(config-if)# mtu 9216
    ERROR: port-channel1: Cannot configure port MTU on Peer-Link.
- Resolution: break the peer-link configuration with "no vpc peer-link", configure MTU 9216 on the port-channel interface, and put the vpc peer-link configuration back in (sketch below).
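The workaround as a command sequence (note the peer-link, and with it the vPCs, go down while the peer-link role is removed, so this needs a maintenance window):

    NPEctswPTEAGG01-PTE(config)# interface port-channel1
    NPEctswPTEAGG01-PTE(config-if)# no vpc peer-link
    NPEctswPTEAGG01-PTE(config-if)# mtu 9216
    NPEctswPTEAGG01-PTE(config-if)# vpc peer-link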
Slide 45: Appendix A: Lab Topology
(Diagram: N7010A-Core and N7010B-Core at L3, connected to N7010A-Dist and N7010B-Dist; the Dist pair connects at L2 to N5010A and N5010B, which host N2K FEX 101, N2K FEX 102, and a Nexus 1010. Port labels from the slide: E1/1-2, E1/3, E1/3-4, E1/4, E1/6, E1/7, E1/10, E1/12, E1/26, E1/27, E1/28.)
Slide 46: Appendix B: IP Addresses

    Hostname          Description                       IP address      Credentials
    Nexus 1000v       Cisco Nexus 1000v VSM-1           10.23.242.231   admin/Nexus1010
    Cat6500 VSS       Catalyst 6500 VSS with Sup720     10.23.242.40    cisco/cisco
    Nexus-1010        Cisco Nexus 1010                  10.23.242.230   admin/Nexus1010
    Nexus-1010 CIMC   Cisco Nexus 1010 appliance CIMC   10.23.242.229   admin/password
    Nexus7010B-Core   Cisco Nexus 7010B core VDC        10.23.242.228   admin/Nexus7010
    Nexus7010B-CMP2   Cisco Nexus 7010B CMP slot 6      10.23.242.227   admin/Nexus7010
    Nexus7010B-CMP1   Cisco Nexus 7010B CMP slot 5      10.23.242.226   admin/Nexus7010
    Nexus7010B        Cisco Nexus 7010B MGMT            10.23.242.225   admin/Nexus7010
    Nexus5010B        Cisco Nexus 5010B                 10.23.242.224   admin/admin
    Nexus5010A        Cisco Nexus 5010A                 10.23.242.223   admin/admin
    Nexus7010A-CMP2   Cisco Nexus 7010A CMP slot 6      10.23.242.222   admin/Nexus7010
    Nexus7010A-CMP1   Cisco Nexus 7010A CMP slot 5      10.23.242.221   admin/Nexus7010
    Nexus7010A        Cisco Nexus 7010A MGMT            10.23.242.220   admin/Nexus7010
    Nexus7010A-Core   Cisco Nexus 7010A core VDC        10.23.242.219   admin/Nexus7010
