2. Introduction to CXL Fabrics
Vincent Haché, Director of Systems Architecture, Rambus
3. CXL Fabrics – Motivation and Overview
• Disaggregated, composable systems
  – Pooled host, device, and memory resources
• Scale-out systems – HPC/ML/analytics
• Add capabilities to expand CXL from a node to a small number of racks
  • Limited by the 12-bit ID space (4K IDs)
• Scale beyond tree-based topologies
• Do not compromise node-level properties
4. CXL Fabrics – Composability
• EP binding from across the fabric:
  • Host sees up to 2 layers of standard switches: host edge and downstream edge
  • Enables re-use of existing host SW
5. CXL Fabrics – Scale-Out
• Global Fabric Attached Memory (G-FAM):
  • Highly scalable memory pool – e.g., 2000+ hosts accessing a memory pool of 2000+ G-FAM devices (GFDs)
  • Accessible by all hosts through the Fabric Address Segment Table (FAST)
6. Transport Level Details
• Port-Based Routing (PBR)
  • Brand-new flit mode introduced in CXL 3.0
  • Transactions routed by PBR-ID:
    • Destination PBR-ID (DPID) carried by all transactions
    • Source PBR-ID (SPID) carried by select transactions as needed
  • Inter-switch links can carry traffic from multiple VHs
  • Supported only in 256B flit mode
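The PBR-ID mechanism above can be sketched in a few lines. This is an illustrative model only, not the spec's flit layout: the header dictionary, the `routing_table` mapping, and all function names are assumptions; the one spec-derived fact is the 12-bit PBR-ID space.

```python
PBR_ID_BITS = 12                   # 12-bit PBR-ID space -> 4096 IDs
PBR_ID_MASK = (1 << PBR_ID_BITS) - 1

def make_pbr_header(dpid, spid=None):
    """Build a toy PBR message header. DPID is carried by all
    transactions; SPID only by select transactions as needed."""
    assert dpid == (dpid & PBR_ID_MASK), "DPID must fit in 12 bits"
    hdr = {"dpid": dpid}
    if spid is not None:
        assert spid == (spid & PBR_ID_MASK), "SPID must fit in 12 bits"
        hdr["spid"] = spid
    return hdr

def route(hdr, routing_table):
    """An intermediate PBR switch forwards purely on the DPID."""
    return routing_table[hdr["dpid"]]

# Example: DPID 0x0A1 egresses on port 3, DPID 0x0B2 on port 7.
table = {0x0A1: 3, 0x0B2: 7}
print(route(make_pbr_header(0x0A1, spid=0x005), table))  # -> 3
```

The point of the sketch: intermediate switches never inspect the payload or the SPID, so any traffic from multiple VHs can share an inter-switch link.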
7. [Figure: 256B packing – G4/H4 slot formats carrying a PBR message]
8. Routing Model
• PBR switch converts HBR format to PBR format at the host edge
• Any intermediate switches route by DPID
• Access control mechanisms:
  • Host edge port must be configured for access to the DPID
  • Routing path must be configured
[Diagram: Host → Host Edge PBR Switch → Intermediate PBR Switches → Downstream Edge PBR Switch → HBR Switch → Device; a GFD attaches directly to the PBR fabric. Links inside the fabric are PBR; links at the host and device edges are HBR.]
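The host-edge conversion and its access check can be sketched as below. This is a hedged toy model: the `HostEdgePort` class, the allowed-DPID set, and the dictionary transaction format are all illustrative assumptions, not spec-defined structures.

```python
class HostEdgePort:
    """Toy model of a host edge PBR switch port: it wraps HBR
    transactions in PBR format and enforces per-port DPID access."""

    def __init__(self, allowed_dpids):
        # Access control: the port must be configured for each DPID
        # it is allowed to reach across the fabric.
        self.allowed_dpids = set(allowed_dpids)

    def hbr_to_pbr(self, hbr_txn, dpid):
        if dpid not in self.allowed_dpids:
            raise PermissionError(f"port not configured for DPID {dpid:#x}")
        # Wrap the HBR transaction; intermediate switches will
        # route on the DPID alone.
        return {"dpid": dpid, "payload": hbr_txn}

# This port may only reach DPIDs 0x010 and 0x011.
port = HostEdgePort(allowed_dpids=[0x010, 0x011])
pbr = port.hbr_to_pbr({"opcode": "MemRd", "addr": 0x1000}, dpid=0x010)
print(hex(pbr["dpid"]))  # -> 0x10
```

A transaction aimed at an unconfigured DPID is rejected at the edge, which is the first of the two access-control gates the slide lists (the second being the configured routing path itself).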
9. Routing Model
• HBR/PCIe EPs are connected to the downstream edge switch
• Downstream edge converts transactions back to HBR format
• Access control mechanisms:
  • Downstream edge switch binding configuration
10. Routing Model
• GFDs support PBR flit mode
• GFD translates HPA to DPA based on SPID
• Access control mechanisms:
  • GFD manages access based on SPID
  • Capacity exposed via DCD
11. GFD Access Control Mechanisms
• Capacity is assigned at DC block granularity
• DC blocks are assigned to one Host Group
• Hosts are assigned to one or more Host Groups
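The Host Group model above can be sketched as a pair of lookups. Everything here is an illustrative assumption (group names, SPID values, and the two mappings); the relationships it encodes are the slide's: each DC block belongs to exactly one Host Group, and each host may belong to several.

```python
# DC block -> its single Host Group (each block has exactly one).
block_to_group = {0: "groupA", 1: "groupA", 2: "groupB"}

# Host (identified here by SPID) -> the set of Host Groups it belongs to.
host_groups = {0x101: {"groupA"}, 0x102: {"groupA", "groupB"}}

def may_access(spid, dc_block):
    """A host may access a DC block iff it is a member of the
    block's Host Group."""
    return block_to_group.get(dc_block) in host_groups.get(spid, set())

print(may_access(0x101, 2))  # -> False: host 0x101 is not in groupB
print(may_access(0x102, 2))  # -> True
```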
12. GFD Address Decode and Access Control
[Diagram: GFD address decode flow – an HDM Decoder maps the Host Physical Address (HPA) into a FAST segment of the Fabric Address Space; an Interleave Decoder Table (PBR IDs) selects the target GFD; the GFD's GFAM Decoder Table (SPID → HPA base) decodes into a DC Region of tagged extents (Extent 0/Tag X, Extent 1/Tag Y, Extent 2/Tag X), then through GFAM, DRAM, and DDR bus (channel) interleave to the Device Media Partition starting at DPA 0. A Memory Group Table (Group IDs) gates the "Access?" checks along the Host → Switch → Host Edge Port → GFAM Device path.]
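The SPID-keyed decode step (slide 10's "GFD translates HPA to DPA based on SPID") can be reduced to a base-and-offset sketch. This is a deliberately simplified toy model: it assumes one contiguous segment per SPID and omits the FAST, interleave decode, and Group Table stages the diagram shows; all names and values are illustrative.

```python
# Per-SPID decoder state at the GFD: the HPA base of the host's
# segment and the corresponding DPA base on the device (toy model).
gfam_decoders = {
    0x101: {"hpa_base": 0x4000_0000, "dpa_base": 0x0, "size": 0x1000_0000},
}

def hpa_to_dpa(spid, hpa):
    """GFD selects decode state by the requester's SPID, then maps
    the HPA into the device physical address space."""
    dec = gfam_decoders[spid]
    offset = hpa - dec["hpa_base"]
    if not (0 <= offset < dec["size"]):
        raise ValueError("HPA outside this host's segment")
    return dec["dpa_base"] + offset

print(hex(hpa_to_dpa(0x101, 0x4000_2000)))  # -> 0x2000
```

Because the decode state is indexed by SPID, two hosts can present the same HPA and land in different device regions, which is also what lets the GFD enforce access per requester.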
16. Specification Roadmap
Items to be defined in a future specification release:
• CXL Fabric Management Specification
  • PBR Switch Management
  • GFD Management
• Host-to-Host communication
• Device-to-Device communication
• Cross-domain traffic
17. Call to Action
• Download the CXL Specification
  • Available here: https://www.computeexpresslink.org/download-the-specification
• Join us in the CXL Consortium Fabric Sub-Team to help out
  • Consortium membership: https://www.computeexpresslink.org/join
• Join the Linux kernel CXL mailing list for discussions about kernel development
  • Subscribe here: http://vger.kernel.org/vger-lists.html#linux-cxl