Compass-EOS for Linkmeup
Alex Clipper 17/1/15
Agenda
Conventional chip review
D-Chip review
Standard carrier-grade router architecture
Compass-EOS router architecture
Dev-test methodologies
Questions
D chip on the map
[Figure: die photo labeled 3600, 412 mm²]
Zooming in on the challenge
Data Transfer Between CMOS Processors
The Conventional Way
• Electrical transmission via copper
• Loss is frequency dependent
• High power consumption, as additional electronics are needed to compensate for the signal loss
• Limited interconnect length
The Compass Way
• Photons in fibers transfer the data
• Parallel optics are used to achieve high BW
• Small interconnect footprint (10% of the total chip area)
• Low power consumption
• Negligible loss; supports long distances (>200 m)
[Diagram: conventional copper serial links between CMOS packages, vs. Compass Tx/Rx matrices coupling CMOS chips directly to fibers]
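The copper-vs-fiber contrast above can be made concrete with a rough loss budget. The coefficients below are illustrative assumptions (skin-effect trace loss scaling with √f, normalized to 1 dB/m at 1 GHz; roughly 0.3 dB/km fiber attenuation), not Compass figures:

```python
import math

def copper_loss_db(length_m: float, freq_ghz: float,
                   loss_db_per_m_at_1ghz: float = 1.0) -> float:
    """Skin-effect-dominated trace loss: grows linearly with length
    and roughly with sqrt(frequency). The 1 dB/m @ 1 GHz coefficient
    is an illustrative assumption, not a measured value."""
    return loss_db_per_m_at_1ghz * length_m * math.sqrt(freq_ghz)

def fiber_loss_db(length_m: float, loss_db_per_km: float = 0.3) -> float:
    """Fiber attenuation is essentially flat across these data rates."""
    return loss_db_per_km * length_m / 1000.0

# A 25 Gb/s NRZ lane has a ~12.5 GHz fundamental.
for length in (0.5, 10, 200):
    print(f"{length:>6} m  copper: {copper_loss_db(length, 12.5):7.1f} dB"
          f"   fiber: {fiber_loss_db(length):6.3f} dB")
```

Even at half a meter the copper link needs equalization and retiming to recover the signal, while the fiber loss at 200 m is negligible — which is the slide's power and reach argument in numbers.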
Compass Multi Terabit interconnect
First Commercial Implementation – The D-chip
[Photo: D-chip package showing the Tx and Rx optical ports]
icPhotonics™ – Fastest Optical Interconnect
• 1.34 Tb/s full-duplex bandwidth
• Order-of-magnitude higher chip I/O density: 64 Gb/s per mm²
• Passive optical links that stretch to hundreds of meters, vs. centimeters with electronics
• Direct coupling to CMOS chip
• Low energy consumption: 10 pJ/bit
• Dozens of patents covering technology & processes
• Flexible form factor
• Ready for mass production
[Diagram: digital CMOS chip with standard I/O plus a direct optical interface (laser matrix and photodetectors)]
Compass Optical Interconnect
[Photo: fiber optic bundle carrying 1.34 Tb/s in, half of the bundle illuminated]
The Enabling Technology
[Diagram: 2 × 350G electrical vs. 2 × 1344G optical; BW density 30×]
• Low power – 2.7T at < 15 W
• High density – 2.7T in 40 mm²
• Future proof – protocol-agnostic technology
• Very scalable – 20T+ per slot
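A quick arithmetic check shows the headline figures on this slide hang together, under the assumption that the "< 15 W" number applies to one 1.34 Tb/s direction (the aggregate 2.7T at 10 pJ/bit would be about 27 W):

```python
# Figures quoted on the slide
aggregate_tbps = 2.7     # "2.7T in 40 mm2"
per_dir_tbps = 1.34      # "1.34 Tb/s full duplex", each direction
area_mm2 = 40
pj_per_bit = 10          # "10 pJ/bit"

density = aggregate_tbps * 1e3 / area_mm2             # Gb/s per mm^2
power_per_dir_w = pj_per_bit * 1e-12 * per_dir_tbps * 1e12

print(f"I/O density: {density:.1f} Gb/s/mm^2")        # ~67.5, matching the quoted "64 Gb/s per mm2" to rounding
print(f"Power per direction: {power_per_dir_w:.1f} W")  # 13.4 W, consistent with "< 15 W"
```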
ASR9k LC Architecture
Multicast in conventional architectures
A simple multicast scenario, fabric links normalized at 100G
[Diagram: four line cards, each with an ingress path (ingress buffer → forward → ToFab buffer) and an egress path (FromFab → forward → egress buffer), joined by a central switch fabric at 100 Gbps per card; one 30G multicast stream and three 30G unicast streams traverse the fabric]
• Congestion: a FromFab resource on one egress card is congested
• Back pressure (BP) is signaled to all ingress cards
• In response to BP, all ingress cards reduce their transmit rate towards the congested card
• In the worst case, multicast traffic to non-congested egress cards is also impacted
(Affected stream direction: streams with packet loss)
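The failure mode above can be sketched numerically. In this toy model (the rates, the bp_factor throttle value, and the card names are illustrative, not Compass or Cisco measurements), backpressure throttles the ingress card's single fabric copy of the multicast stream as a whole, so the branches bound for healthy egress cards suffer too; an output-queued model is shown alongside for contrast:

```python
def conventional_delivery(mcast_gbps, fanout, congested, bp_factor=0.5):
    """Input-queued model: backpressure from any congested egress card
    throttles the ingress card's single fabric copy of the multicast
    stream, so every branch of the fanout is reduced together.
    bp_factor = 0.5 is an illustrative throttle, not a real BP response."""
    if congested & set(fanout):
        mcast_gbps *= bp_factor
    return {egress: mcast_gbps for egress in fanout}

def output_queued_delivery(mcast_gbps, fanout, headroom_gbps):
    """Output-queued model: each egress port drops independently, so
    only the congested destination loses traffic."""
    return {eg: min(mcast_gbps, headroom_gbps[eg]) for eg in fanout}

fanout = ["B", "C", "D"]  # egress cards receiving the 30G stream
conv = conventional_delivery(30, fanout, congested={"D"})
oq = output_queued_delivery(30, fanout, {"B": 100, "C": 100, "D": 15})
print(conv)  # {'B': 15.0, 'C': 15.0, 'D': 15.0} - healthy cards hit too
print(oq)    # {'B': 30, 'C': 30, 'D': 15} - loss confined to D
```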
Operation of a distributed router system
• Line cards comprise several silicon processors performing different functions (queuing, forwarding, etc.)
• Data needs to be exchanged between these silicon chips
• The interconnect can be chip to chip, board to board, or chassis to chassis
• The data transfer rate between processors is a significant factor in router performance
r10004 architecture
A simple multicast scenario, backplane links normalized at 100G
[Diagram: four line cards, each MAC → NPU → egress buffers → D-chip, meshed at 100 Gbps per card; one 30G multicast stream and three 30G unicast streams]
• Congestion: the egress interface is congested
• Egress interface queuing has full visibility of all traffic to this egress destination and handles packets according to operator SLA priorities
• No impact on traffic destined for ports on the same or other cards
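Because every packet for a port converges in that port's own egress queue, the drop decision is made with full visibility of all competing traffic. A minimal sketch of strict-priority admission at one egress queue — the flow names, priorities, and rates are hypothetical, and a real scheduler would use WFQ or similar rather than pure strict priority:

```python
def egress_admit(flows, capacity_gbps):
    """Strict-priority admission at one egress port's queue: serve
    flows in ascending priority number (0 = highest), so any loss
    lands on the lowest-priority traffic, as the operator's SLA
    intends."""
    admitted, remaining = {}, capacity_gbps
    for name, f in sorted(flows.items(), key=lambda kv: kv[1]["prio"]):
        admitted[name] = min(f["rate"], remaining)
        remaining -= admitted[name]
    return admitted

# Hypothetical flows converging on one 100G egress port
flows = {"voice":       {"prio": 0, "rate": 10},
         "video":       {"prio": 1, "rate": 30},
         "best_effort": {"prio": 2, "rate": 80}}
print(egress_admit(flows, capacity_gbps=100))
# {'voice': 10, 'video': 30, 'best_effort': 60} - only best effort is trimmed
```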
Compass EOS – Evolving Router Architectures
[Diagram: the conventional pipeline (ingress buffer → forward → ToFab buffer → switch fabric → FromFab → forward → egress buffer) collapses, card by card, into directly meshed cards that keep only the ingress and egress buffers and the forwarding stages]
Compass EOS – Router Architecture
[Diagram: per line card, input ports feed an ingress forward engine and ingress buffer into a to-switch buffer; D-chips interconnect the cards in place of a switch fabric; a from-switch buffer feeds queuing & egress buffer, then the egress forward engine and output ports. The egress side combines queuing, egress processing, and the egress buffer]
The "switch fabric" router, the Compass way:
• Chip-to-chip optical interconnect
• True output-switched router
• Switch fabric eliminated
• No ingress-side queuing
• All of the functionality maintained
DP components – the LC
[Block diagram of one line card: ten PMC modules with two SFP+ cages each; four L2 FPGAs (Interlaken 30G toward the PMCs); iBuffers (Interlaken 60G); two iNPs, each with TCAMs, and two eNPs with eBuffers (Interlaken 100G/150G); two D-chips; a CPU module attached over 10GE]
Compass-1: Overview
• 800 Gbps system
• 6 RU, 19" shelf
• 4 × line cards
• Optical backplane
• Front-to-back airflow
• Distributed DC power supply
• 80×10GE, 8×100GE
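The front-panel counts match the system figure: either configuration fills the quoted 800 Gbps, i.e. 200G per line card (plain arithmetic on the slide's numbers):

```python
# Port math from the slide
config_10ge = 80 * 10     # 80 x 10GE  -> 800G
config_100ge = 8 * 100    # 8 x 100GE  -> 800G
per_card = 800 // 4       # 4 line cards -> 200G each
print(config_10ge, config_100ge, per_card)  # 800 800 200
```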
Summary: Full Mesh advantages (Real Output Queue)
• Ease of QoS configuration: what you configure is what you get
• Packet drop takes place where you have full visibility
• Better multicast scale
• Better system security
• More accurate SLA
• Predictable, deterministic behavior
[Diagram: four line cards (mac ports → iNP/eNP pairs with D-chips, CPU and memory on each card) fully meshed via the optical interconnect]