Paresh Gupta (paregupt@cisco.com)
Technical Marketing Engineer
Cisco Storage Networking
for solid-state storage and next-generation virtualized applications
Designing Storage Networks
19-April-2017
How do I
seamlessly
transition to
next-generation
storage networks?
How do I get deep
visibility without
compromising on
scale and cost?
How do I architect
to support
emerging storage
technologies?
Industry-Leading Scale and Performance
768 Line-Rate 32G Ports
Deep Visibility with Built-in Analytics
1536 Gbps Line-Rate Full-Duplex Bandwidth
Investment Protection
NVMe over Fabric Support
48 x 32G FC Ports
32G Seamless Support
Broadcom /
Emulex
Cavium /
Qlogic
40Gbps FCoE module
High-speed connectivity for ISLs or from Cisco UCS B-series blade or C-series rack servers
Fibre Channel
SCSI or NVMe
2/4/8/10/16/32GFC
Fibre Channel over Ethernet
SCSI or NVMe
10/40 Gbps FCoE
FICON
2/4/8/10/16GFC
FCIP
1/10/40 Gbps
32G FC module
High-speed connectivity with
integrated analytics
16G FC module
Connectivity to legacy devices,
as low as 2G FC
10Gbps FCoE module
Connectivity to converged access
& UCS
24/10 SAN Extension Module
High-speed FCIP SAN
extension across long distances
All 5 modules co-exist in the same chassis without any restrictions
FCIP
1/10/40 Gbps
Primary DC Remote DC (DR)
16/32G FC
32G FC or
40 Gbps FCoE
2/4/8/10/16/32G FC or
10/40 Gbps FCoE
16G FC or
10/40 Gbps FCoE
FC-NVMe initiator
FC-SCSI initiator
• Non-blocking, non-oversubscribed line-rate ports
• 3 port-groups per module, each with 16 ports.
• All ports are quad-rate – 4/8/16/32G FC
• Optics are tri-rate
• 32G optics for 8/16/32G FC
• 16G optics for 4/8/16G FC
• 8G optics for 4/8G FC
Port-Group 1 Port-Group 2 Port-Group 3
Port-Group 1 Port-Group 2 Port-Group 3
Description Details
Port-group size 16 ports
Default B2B credits per port 500
B2B credits shared among a single port-group 8300
Max B2B credits per port (with enterprise license) 8191
Max B2B credits per line card 24900
Max distance using 2KB frame size with base license 31 KM @ 32G FC
Max distance using 2KB frame size with enterprise license 512 KM @ 32G FC
B2B credit requirement (credits per KM of distance) by frame size and FC speed:

Frame size    1 Gbps   2 Gbps   4 Gbps   8 Gbps   10 Gbps   16 Gbps   32 Gbps
512 bytes     2        4        8        16       24        32        64
1024 bytes    1        2        4        8        12        16        32
2112 bytes    0.5      1        2        4        6         8         16
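The per-KM figures above can be reproduced with a small sketch. This is a minimal Python approximation inferred from the table itself: the 0.5-credit/km baseline for a full-size (2112-byte) frame at 1G FC, and the speed multipliers (note 10G FC uses 12, not 10, because its usable data rate is roughly 12x that of 1G FC). It also reproduces the 512 KM enterprise-license distance figure.

```python
import math

# Speed multipliers inferred from the table (10G FC ~ 12x the 1G data rate).
SPEED_MULTIPLIER = {1: 1, 2: 2, 4: 4, 8: 8, 10: 12, 16: 16, 32: 32}

def bb_credits_per_km(frame_bytes: int, fc_speed: int) -> float:
    """Approximate B2B credits needed per km so the link stays full.

    Rule of thumb: at 1G FC a full-size (2112-byte) frame occupies
    roughly 2 km of fiber, i.e. 0.5 credits/km; scale by speed and
    frame size. Values match the table to within its rounding.
    """
    return 0.5 * SPEED_MULTIPLIER[fc_speed] * (2112 / frame_bytes)

def bb_credits_for_distance(km: float, frame_bytes: int, fc_speed: int) -> int:
    """Total credits required for a given one-way distance."""
    return math.ceil(km * bb_credits_per_km(frame_bytes, fc_speed))

if __name__ == "__main__":
    # Full-size frames at 32G FC: 16 credits/km, matching the table.
    print(bb_credits_per_km(2112, 32))             # 16.0
    # 512 km at 32G FC with full-size frames needs ~8192 credits,
    # in line with the 8191 max per port under the enterprise license.
    print(bb_credits_for_distance(512, 2112, 32))  # 8192
```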
Protocol Clocking Encoding Data Rate (Gbps) Data Rate (MB/s)
8G FC 8.500 8b/10b 6.8 850
10G FC 10.518 64b/66b 10 1275
16G FC 14.025 64b/66b 13.6 1700
32G FC 28.080 64b/66b 27.1 3400
10Gbps FCoE 10.3125 64b/66b 10.0 1250
40Gbps FCoE 41.250 64b/66b 40.0 5000
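The rows above follow directly from clocking times encoding efficiency: 8b/10b carries 8 data bits per 10 line bits, 64b/66b carries 64 per 66, and MB/s is Gbps divided by 8 bits per byte. A minimal sketch of that arithmetic:

```python
# Usable data rate from line clocking (Gbaud) and encoding overhead.

ENCODING_EFFICIENCY = {"8b/10b": 8 / 10, "64b/66b": 64 / 66}

def data_rate_gbps(clock_gbaud: float, encoding: str) -> float:
    """Usable data rate in Gbps after encoding overhead."""
    return clock_gbaud * ENCODING_EFFICIENCY[encoding]

def data_rate_mbps(clock_gbaud: float, encoding: str) -> float:
    """Usable data rate in MB/s (Gbps * 1000 / 8 bits per byte)."""
    return data_rate_gbps(clock_gbaud, encoding) * 1000 / 8

print(round(data_rate_gbps(14.025, "64b/66b"), 1))  # 13.6 (16G FC)
print(round(data_rate_mbps(14.025, "64b/66b")))     # 1700
print(round(data_rate_mbps(8.5, "8b/10b")))         # 850  (8G FC)
```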
• Fabric modules (Crossbar or XBAR) provide data switching between any two ports (inter- and intra-slot)
• XBAR modules are inserted from the rear of the chassis
Rear view of MDS 9706
Number of XBAR   Front-panel FC bandwidth/slot   Front-panel FCoE bandwidth/slot
1                256 Gbps                        220 Gbps
2                512 Gbps                        440 Gbps
3                768 Gbps                        660 Gbps
4                1024 Gbps                       880 Gbps
5                1280 Gbps                       1100 Gbps
6                1536 Gbps                       1320 Gbps
For 32G FC module
• 6 XBAR required for line rate performance on all ports
• 3 XBAR are enough if all ports have 16G or 8G optics
• 3 XBAR are enough if ports have 32G optics but operational speed is 16G or less for all ports
• Running with fewer than 6 XBAR is allowed, but 6 are highly recommended
• Insertion of XBAR is non-disruptive.
Module Type             Front-panel bandwidth/slot   XBAR required for line rate
48-port x 16G FC        768 Gbps                     3
48-port x 10Gbps FCoE   480 Gbps                     3
24-port x 40Gbps FCoE   960 Gbps                     5
24/10 SEM               464 Gbps                     3
48-port x 32G FC        1536 Gbps                    6
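The XBAR counts in this table follow from the per-slot bandwidth each fabric module adds (256 Gbps FC, 220 Gbps FCoE, per the earlier table): divide the module's front-panel bandwidth by the per-XBAR contribution and round up. A minimal sketch:

```python
import math

# Per-slot bandwidth contributed by each fabric module (XBAR).
PER_XBAR_GBPS = {"FC": 256, "FCoE": 220}

def xbars_for_line_rate(front_panel_gbps: float, fabric: str = "FC") -> int:
    """XBARs needed so a slot's front-panel bandwidth is non-blocking."""
    return math.ceil(front_panel_gbps / PER_XBAR_GBPS[fabric])

print(xbars_for_line_rate(1536))          # 6 (48-port x 32G FC)
print(xbars_for_line_rate(768))           # 3 (48-port x 16G FC)
print(xbars_for_line_rate(960, "FCoE"))   # 5 (24-port x 40Gbps FCoE)
```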
Seamless transition to 32G: SW upgrade + 32G module + (possibly) additional XBAR
Design
Best Practices with
All Flash Arrays (AFA)
16G
16G
16G
8G
AFA-1
64K read request
Response at line rate
• All Flash Arrays are extremely fast
• Frames are transmitted at line rate (of the directly connected interface)
• Backpressure is created if downstream links have less bandwidth
• Effect is multiplied if the same host makes simultaneous read requests to multiple targets at almost the same time
AFA-2  Host-1
40µs 80µs 160µs
16G
8G
4G
Time to transmit 32 full-size FC frames
Host-edge switch needs more
time to transmit the frames
64K read request
Response at line rate
Increased oversubscription
Calculations are approximated to convey the message using simple numbers
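The 40/80/160 µs figures can be reproduced from the usable data rates: a 64K read returns roughly 32 full-size FC frames, and the time to put them on the wire doubles each time the link speed halves. A rough sketch (frame count and sizes approximated, as the slide itself notes):

```python
# Rough time to move a 64KB read (~32 full-size FC frames of ~2112 bytes)
# across links of different usable data rates.

FRAME_BYTES = 2112
FRAMES = 64 * 1024 // 2048  # ~32 frames, assuming ~2KB of payload per frame

def transmit_time_us(data_rate_gbps: float) -> float:
    """Microseconds to serialize the whole 64KB response onto the wire."""
    bits = FRAMES * FRAME_BYTES * 8
    return bits / (data_rate_gbps * 1e9) * 1e6

print(round(transmit_time_us(13.6)))  # ~40 us on a 16G FC link
print(round(transmit_time_us(6.8)))   # ~80 us on an 8G FC link
print(round(transmit_time_us(3.4)))   # ~159 us on a 4G FC link (slide rounds to 160)
```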
16G
16G
16G
8G
AFA-1
Response at line rate
All flash arrays enable high speed random read of data from
underlying media
AFA-2  Host-1
16G
16G
16G
8G
Traditional
spinning-disk arrays
Response may not be at line rate
Host-1
Traditional spinning disks:
• Most have 8G FC ports
• Have large random-read delay (due to mechanical moving parts)
• Data transmit is rarely at line rate
• Larger inter-frame gap on the wire (at microsecond granularity)
All Flash Arrays Spinning Disk Arrays
Better
Best
16G
16G
16G
8G
AFA-1
AFA-2  Host-1
16G
16G
16G
16G
AFA-1
AFA-2  Host-1
16G
16G
32G
32G
AFA-1
AFA-2  Host-1
All Flash Arrays (AFA) changed everything: responses are extremely fast, and FC-NVMe will make them even faster
Physical ISLs must be the highest-speed links in a fabric
The host edge should have the same link speed as the storage edge
https://www.youtube.com/watch?v=tY7gu16ar_Q&feature=youtu.be&list=PL1F6F23C54113557F
• A single host does not return R_RDY
• Leads to Tx B2B credit starvation on switches
• Other end-devices get impacted
Explained in previous slides
16G
16G
16G
8G
AFA-1
64K read request
Response at line rate
AFA-2  Host-1
Host-edge switch needs more
time to transmit the frames
64K read request
Response at line rate
Increased oversubscription
Oversubscription compared to typical slow drain due to Tx B2B credit starvation
SAN congestion due to oversubscription
vs
SAN congestion due to Tx B2B credit starvation
Host-1 Array-1
Array-2Host-2
R_RDY
Culprit
Impacted Impacted
Impacted
Important – Understand the difference for effective resolution
Host-1 Array-1
Array-2Host-2
R_RDY
Culprit
Slow Drain device
Impacted Impacted
Impacted
1
2
R_RDY R_RDY
R_RDY
Back Pressure Back Pressure
2
1
• Fibre Channel is a lossless fabric, achieved using B2B credits and R_RDY
• A single misbehaving host, not returning R_RDY fast enough, causes slow drain
• Impact is seen on multiple end-devices sharing the same pair of switches and ISLs
• The switchport connected to a slow-drain device is starved of Tx B2B credits
• Resolution depends on the duration of Tx B2B credit unavailability on the switchport connected to the slow-drain device
R_RDY
Host-1 Array-1
Array-2Host-2
R_RDY
Culprit
Slow Drain device
Impacted Impacted
Impacted
1
R_RDY
R_RDY
R_RDY
Back Pressure Back Pressure
1
• The slow-drain device is moved to the Isolated state
• All traffic destined to the isolated port is moved to a low-priority Virtual Link
• All virtual links have dedicated B2B credits
• B2B credit starvation in the low-priority virtual link does not impact B2B credits in the normal-priority virtual link
Low Priority VL
Normal Priority VL
R_RDY
R_RDY
Back Pressure
Released
Back Pressure
Released
Moved to Isolated state
2
new
Tx B2B credit continuous unavailability duration on port (ms)
100 200 300 400 500 1000
No-credit-drop
timeout
congestion-drop
timeout
Port-flap
Port-shutdown
Port-Isolation
using
Virtual-Links
Enable all the features together – one for every duration
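The timeline above maps onto NX-OS slow-drain knobs. As a hedged sketch only (command shapes recalled from MDS NX-OS slow-drain documentation; the exact keywords, value ranges, and the specific thresholds shown here are assumptions — verify against your NX-OS release before configuring):

```shell
# Hedged sketch of MDS NX-OS slow-drain timeouts -- verify syntax and
# supported ranges against your NX-OS release; values here are examples.

# Drop frames waiting on credits beyond a short threshold on F ports:
system timeout no-credit-drop 200 mode F

# Drop frames queued past the congestion threshold on E ports (ISLs):
system timeout congestion-drop 300 mode E
```

Per the slide, these timeouts are complementary: each feature covers a different band of credit-unavailability durations, so enable them together rather than choosing one.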
Detection Troubleshooting
Slow Port
Stuck Port
Slow Port Monitoring
Credit transition to zero
Credit and remaining credit
Info of dropped frames
See frames in ingress Q
OBFL logging
History graph
TXWait period for frames
LR Rcvd B2B
DCNM
Fabric wide visibility
Automatic collection and graphical display of counters
Reduced false positives
Automatic Recovery
Virtual Output queues
Stuck Port Recovery
Port flap *
Congestion drop
No-credit-drop
Detection
1 ms
Action
Immediate
SNMP Trap *
Error disable port*
* = using Port Monitor
Automatic Recovery
Virtual Output queues
Stuck Port Recovery
Port flap *
Congestion drop
No-credit-drop
Detection
1 ms
Action
Immediate
SNMP Trap *
Error disable port*
* = using Port Monitor
Isolation to
Virtual Links*
New
• Hardware-based automatic action on ports connected to a slow-drain device
• Thresholds as low as 1 ms
• Best for short durations of credit unavailability
• Complementary to no-credit-drop
• Threshold as low as 1 second
• Best for longer durations of credit unavailability
• New isolation capability as an alternative to port flap or error disable
Analyze
Hardware
Based SAN
Analytics
• Too many components involved
• Everyone is limited by their own view
• Virtualization adds complexity
• Hybrid shared environments
• Bare-metal & virtualized servers
• Spinning disks & all-flash arrays
• Multiple speeds (4/8/16/32G FC)
Compute & Applications
Storage
Database
Server
Web
Server
Video
Streaming
Server
OLTP
All Flash
Arrays
Spinning Disk
Arrays
Application issues
Not an
App/host
issue
Not a
server
issue
Not a
SAN issue
SAN
Writes
Reads
Application
File System
Block
SCSI
FC Driver
HBA (firmware)
Drive enclosure
Backend connect
Storage Processor
FC Driver
HBA (firmware)
• Deep packet visibility into FC & SCSI headers
• Monitor every flow, every packet, at every speed, in real time
• Predictive & proactive, on-wire, vendor-neutral monitoring between initiators and targets
Compute & Applications
Storage
Database
Server
Web
Server
Video
Streaming
Server
OLTP
All Flash
Arrays
Spinning Disk
Arrays
Application issues
Monitor the wire to
find the problem
SAN
Writes
Reads
Application
File System
Block
SCSI
FC Driver
HBA (firmware)
Drive enclosure
Backend connect
Storage Processor
FC Driver
HBA (firmware)
FC SCSI Data
Pervasive | No appliance | No probes | Always on
End-to-End
Visibility for
troubleshooting
Scale with MDS 9700
Director Platform
High Performance
Onboard analytics
engine for data collection
100% Visibility
Every Packet, Every
Flow, Every Speed
Hardware is available in April 2017. Analytics functionality will be enabled in 2HCY17 by a SW-only upgrade.
SAN upgrade from MDS 9500 or competition
16G or 32G FC
32G FC
4, 8, 16, 32 G FC
32G FC module 16G FC module
Full deployment using 32G FC module
Storage connectivity
• 16 or 32G FC
Host connectivity
• 4, 8, 16 or 32G FC
ISL connectivity
• Up to 16 physical links at 32G FC
in a single port-channel (Up to 512 Gbps)
SAN analytics
• Everywhere in the fabric. No extra device
SAN Management
• Centralized using DCNM
SAN upgrade with 16G FC on MDS 9700 – Seamless adoption of 32G FC
16G or 32G FC
16G or 32G FC
2, 4, 8, 16, 32 G FC
32G FC module 16G FC module
32G FC module for ISL or storage connectivity
Storage connectivity
• 16 or 32G FC using 32G FC module
Host connectivity
• Use existing inventory of 16G FC module for
2/4/8/10/16G FC, 32G FC module for 32G
ISL connectivity
• Use existing inventory of 16G FC module or
32G FC module for higher speed
SAN analytics
• 32G FC module required in the data path: either the storage edge, the ISLs, or everywhere
SAN Management
• Centralized using DCNM
• Native switch-integrated fabric-wide analytics
• Investment protection of 16G FC module on MDS 9700
• Seamless & non-disruptive insertion of 32G FC module
• High-speed ISLs: increase performance with fewer links
SAN upgrade with 16G FC on MDS 9700 – Seamless adoption of 32G FC
32G FC module 16G FC module
Cisco
Recommended
Possible
Existing
Investment
Protection for
the Next Decade
NVMe over Fabric:
Improved Performance and Faster
Response
• Written from the ground up (incorporating years of learnings and best practices)
• Up to 64K queues for command submission and completion; each CPU core can have its own queues
• Streamlined and simple command sets, and many more…
• Non-Volatile Memory based storage (flash or solid state) has been widely adopted
• There are no more rotating motors or moving heads in the storage drives
• Reads and Writes are extremely fast
Storage
• CPUs are extremely fast, multi-core, hyper-threaded
Compute
• Interconnect between CPU and storage is very fast (PCIe 3.0 or >100 Gbps fabrics)
Network
• The traditional SW layer (SATA or SAS) is unable to take full advantage
Software
Welcome NVMe
• FC-NVMe end-devices are dual-stack, with simultaneous support of NVMe and SCSI transport
• Cisco MDS enables simultaneous switching of NVMe & SCSI transport encapsulated in Fibre Channel frames
• SCSI-only or NVMe capability of end-devices is auto-detected and advertised
• Similar to the existing plug-and-play architecture of Fibre Channel
• FC-NVMe is independent of FC speed. Possible even at 2G FC; 32G FC recommended.
Traditional FC-SCSI capable initiator
FC-NVMe capable initiator
Traditional FC-SCSI capable target
FC-NVMe capable target
FC
SCSI
FC
SCSI NVMe
HBA
HBA
FC
SCSI
FC
SCSI NVMe
Cisco C-series
Rack Servers
Cisco MDS
MDS9700# show fcns database vsan 160
VSAN 160:
--------------------------------------------------------------------------
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x590020 N 10:00:00:90:fa:e0:08:5d (Emulex) scsi-fcp:init NVMe:init
0x590140 N 21:00:00:24:ff:7f:06:39 (Qlogic) scsi-fcp:init NVMe:init
(showing entries only for dual-stack NVMe capable initiators. Other devices will look similar)
FCNS database
Traditional FC-SCSI capable initiator
FC-NVMe capable initiator
Traditional FC-SCSI capable target
FC-NVMe capable target
FC
SCSI
FC
SCSI NVMe
HBA
HBA
FC
SCSI
FC
SCSI NVMe
Cisco C-series
Rack Servers
Cisco MDS
I am a
SCSI initiator
I am a
SCSI & NVMe
initiator
I am a
SCSI target
I am a
SCSI & NVMe
target
• Fibre Channel Name Server (FCNS) is a distributed database on MDS switches
• End-devices register their Upper Layer Protocol (ULP) with the FCNS database, to be advertised to other end-devices in the same zone
FC-NVMe capable target
HBA
Cisco C-series
Rack Servers
Cisco MDS 9000
(All 16G and 32G switches)
HBA
Broadcom / Emulex
Cavium / Qlogic
HBA
HBA
Traditional FC-SCSI capable initiator
FC-NVMe capable initiator Traditional FC-SCSI capable target
FC-NVMe capable target
FC or FCoE fabric built using
Cisco MDS 9000 switches
Cisco C-series Rack Servers
Increased
Performance
Ecosystem
Support
Phased
Transition
Seamless
Insertion
Multiprotocol
Flexibility
No New Network | Established Solution | Co-exists with Current Solutions
Key
Takeaways
High speed connectivity
• Physical ISLs must be the highest-speed links in a fabric
• The host edge should have the same link speed as the storage edge
Industry’s first and only
line-rate, integrated SAN
analytics solution
• In-line analytics enabled at line rate without any penalty
Enabling technology transition
• NVMe over FC : Scale Flash
deployments
• Seamless 16G to 32G Upgrade
• Low Cost, Low Risk, Low Turnaround
SAN Design for AFA Analyze Investment Protection
High-performance 32G Fibre Channel Module on MDS 9700 Directors:


Editor's Notes

  • #37
    • Increased Performance: Unleash the power of all-flash arrays using NVMe over FC or FCoE
    • Seamless Insertion: Supported on all existing 16G FC and new 32G FC MDS switches
    • Phased Transition: SCSI & NVMe initiators & targets continue to share the same SAN
    • Multiprotocol Flexibility: SCSI or NVMe traffic over FC or FCoE transport
    • Ecosystem Support: Fully validated solution on UCS C-series servers, MDS switches, Broadcom & Avago HBAs