INTELLIGENCE TO SHAPE YOUR TOMORROW
| www.yolegroup.com | ©Yole Intelligence 2023
Memory-Processor
Interface 2023
Focus on Compute
Express Link (CXL)
YINTR23357
Memory Fabric Forum
ABOUT THE AUTHORS
Thibault GROSSI (lead author), Senior Analyst, thibault.grossi@yolegroup.com
Simone BERTOLAZZI, Principal Analyst, simone.bertolazzi@yolegroup.com
John LORENZ, Senior Analyst, john.lorenz@yolegroup.com
Emilie JOLIVET, Division Director, emilie.jolivet@yolegroup.com
A WIDE RANGE OF INFORMATION SOURCES
Our unique position allows us to obtain detailed and accurate information to meet your needs:
• 5,000 player interviews per year
• 120+ annual conferences
• 100+ analysts worldwide
• 6,800+ companies' news relayed
• 25 years in the semiconductor industry
• 1,250+ teardown tracks available
FIELDS OF EXPERTISE COVERING THE SEMICONDUCTOR INDUSTRY
• Semiconductor Packaging
• Semiconductor Manufacturing Equipment
• Memory
• Computing and Software
• Radio Frequency
• Compound Semiconductor
• Power Electronics
• Battery
• Photonics & Lighting
• Imaging
• Sensing & Actuating
• Display
• Electronic Systems
• Emerging Technologies
2022-2028 CXL MARKET REVENUE FORECAST
2022: $1.7M → 2026: $2.1B → 2028: $15.8B
CURRENT AND FUTURE KEY DATA CENTER CHALLENGES
Critical requirements:
• Zero-failure objective: high reliability and total redundancy (internet connection, power, cables, hardware capacity, cooling systems, etc.). Each failure can cost tens of thousands to millions of dollars, depending on business size and data-availability requirements; it is measured with data center tier segmentation.
• High level of security: both top-tier cybersecurity and near-military-grade physical security.
• High-bandwidth internet connection: a very reliable connection with very high bandwidth (often comparable to a submarine communication cable), needed to sustain high data flows.
• Low power consumption: a lot of energy is needed to power the infrastructure, so managing and optimizing power consumption is critical.

Technical breakthroughs:
• High cooling capacity: advanced, energy-efficient cooling solutions are needed.
• All-level redundancy: redundancy at all levels provides maximum resilience.
• High storage capacity: an ever-increasing amount of data must be stored.
• High density: store and process ever more data in a fixed volume.
• Memory bottlenecks: optimize data flow between memory and processing units by bringing them as close together as possible or by further optimizing memory and storage tiering.
• Lifespan and maintenance: easy, efficient maintenance increases resilience, and a long lifespan decreases costs.
Main rationale for CXL adoption
• Big latency and capacity gaps exist between memory and storage devices. In recent years, various emerging technologies (e.g., 3D XPoint) have been developed to fill this gap and boost computing performance at the system level.
• In terms of scaling, logic technologies have advanced faster than memory, and the memory-logic performance gap has become more evident over the last decade.
• Since 2012, the average number of cores per server has increased ~3x, reducing the memory bandwidth available per core.
• DRAM spending per server has progressively increased, from ~15% of the server ASP in 2015 to ~28% in 2022, and this trend is expected to continue over the next five years.
• Memory stranding is a common issue in server architectures: it occurs when cores are fully allocated but unrented memory remains, leading to inefficient use of memory resources. At cloud scale, memory stranding translates into costly resources sitting idle during operation (see the sketch below).
• Among the workloads driving the data center market, high-performance computing (HPC) and AI/ML models, which are memory-capacity sensitive, are rapidly increasing in complexity; in natural language processing (NLP) models, for instance, the number of parameters grows ~14x per year.
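To make the stranding mechanism concrete, here is a minimal Python sketch (illustrative numbers, not Yole data) of how DRAM becomes stranded once a server's cores are fully rented:

```python
# Toy model of memory stranding: once all cores are allocated to VMs,
# leftover DRAM can no longer be rented out.

def stranded_memory(total_cores, total_mem_gb, vm_cores, vm_mem_gb):
    """Return (vms_placed, stranded_gb) for a server filled with identical VMs."""
    vms = min(total_cores // vm_cores, total_mem_gb // vm_mem_gb)
    return vms, total_mem_gb - vms * vm_mem_gb

# Hypothetical 64-core, 512 GB server hosting 4-core / 16 GB VMs:
vms, stranded = stranded_memory(64, 512, 4, 16)
print(f"{vms} VMs placed, {stranded} GB stranded")  # 16 VMs placed, 256 GB stranded
```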
MEMORY CHALLENGES AND BOTTLENECKS IN THE DATA CENTER
An overview
Open system interconnects such as CXL (see next chapter), which is gaining broad support from leading industry players, could tackle most of these bottlenecks and revolutionize data center computing architectures.
Five key bottlenecks:
• Cost of memory per server is increasing
• Memory stranding
• Workloads are growing in complexity
• Memory hierarchy needs further optimization
• Memory bandwidth per core is decreasing
COMPUTE EXPRESS LINK (CXL) – DEVICE TYPES
[Diagram: Type 1, Type 2, and Type 3 device topologies, showing processor, cache, DIMMs, accelerator/NIC, memory buffer, and attached memory. Source: CXL Consortium]
Through a combination of the CXL protocols (CXL.io, CXL.cache, CXL.mem), three device types are possible:
• Type 1 (CXL.io + CXL.cache): accelerators with no local memory, such as network interface cards (NICs). Through CXL.io and CXL.cache, these devices can access memory attached to the host processor.
• Type 2 (CXL.io + CXL.cache + CXL.mem): accelerators such as GPUs, FPGAs, and ASICs that have their own DDR or HBM; they employ all three CXL protocols to make the processor's memory available to the accelerator.
• Type 3 (CXL.io + CXL.mem): provides memory expansion. A buffer attached to the CXL bus provides capacity and bandwidth extension to the processing unit.
Yole's area of focus: memory expansion to (x)PUs (Type 3). The protocol mix is summarized in the sketch below.
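As a quick reference, the protocol mix per device type can be captured in a small Python structure; this is a sketch for orientation only, not an actual CXL software API:

```python
# Protocol combinations per CXL device type, as described above.
CXL_DEVICE_TYPES = {
    "Type 1": (("CXL.io", "CXL.cache"), "accelerator/NIC with no local memory"),
    "Type 2": (("CXL.io", "CXL.cache", "CXL.mem"), "GPU/FPGA/ASIC with its own DDR or HBM"),
    "Type 3": (("CXL.io", "CXL.mem"), "memory expander (buffer + DRAM)"),
}

for dev_type, (protocols, example) in CXL_DEVICE_TYPES.items():
    print(f"{dev_type}: {' + '.join(protocols)} -> {example}")
```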
MEMORY EXPANSION, POOLING, AND DISAGGREGATION USING CXL INTEGRATION
• CXL 1.1 – in-server memory expansion (server level): a CXL memory expander (CXL memory controller + DRAM) attaches to the CPU over CXL on PCIe 5.0, alongside conventional DDR DIMMs.
• CXL 2.0 – memory pooling (rack level): several servers (each with its own DDR DIMM banks) share a pool of CXL memory expanders behind a CXL switch, over CXL on PCIe 5.0, with orchestration and fabric-management software allocating capacity (sketched below).
• CXL 3.1 – full disaggregation and composability of resources (rack-to-rack): CPU, DPU, GPU, memory, and storage pools are interconnected over CXL on PCIe 6.0.
Source: CXL Consortium. CXL: Compute Express Link.
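The pooling step (CXL 2.0) is essentially a capacity-lending problem. Below is a minimal Python sketch of the orchestration idea, assuming a single shared pool; class and method names are made up for illustration, not a real CXL fabric-management API:

```python
# Sketch of rack-level memory pooling: servers borrow capacity from a shared
# pool of CXL memory expanders behind a switch instead of each being
# over-provisioned locally. Names and numbers are illustrative.

class MemoryPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}                      # server_id -> borrowed GB

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, server_id, gb):
        if gb > self.free_gb():
            raise MemoryError(f"pool exhausted, only {self.free_gb()} GB left")
        self.allocations[server_id] = self.allocations.get(server_id, 0) + gb

    def release(self, server_id):
        self.allocations.pop(server_id, None)

pool = MemoryPool(capacity_gb=16 * 512)            # e.g., 16 x 512 GB expanders
pool.allocate("server-1", 1024)                    # burst workload borrows 1 TB
pool.allocate("server-2", 2048)
print(pool.free_gb())                              # 5120 GB still available
```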
MEMORY AND STORAGE TECHNOLOGIES LEVERAGING CXL
Memory/storage hierarchy by latency:
• Register: <1 ns
• Cache: <10 ns
• HBM: <100 ns
• DDR: <100 ns
• (~1,000x latency gap)
• SSD: 10s of µs to a few ms
• HDD: ~100 ms

Where CXL-attached technologies fit:
1) CXL memory expanders: direct-attached CXL memory expansion
2) Persistent memory: memory pooling (DRAM-based)
3) Storage-class memory (SCM) and CXL SSDs: memory pooling (ENVM and NAND flash-based)
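The ~1,000x gap called out above follows directly from the representative latencies; a quick Python check using order-of-magnitude values from the chart:

```python
# Representative access latencies in nanoseconds, per the hierarchy above.
latency_ns = {
    "register": 1, "cache": 10, "HBM": 100, "DDR": 100,
    "SSD": 100_000,        # ~100 us (10s of us to a few ms)
    "HDD": 100_000_000,    # ~100 ms, per the chart
}

print(latency_ns["SSD"] / latency_ns["DDR"])   # 1000.0 -> the "~1000x gap"
```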
CXL MEMORY EXPANDER FORM FACTOR
Two main form factors:
• EDSFF E3.S drive (source: Samsung): an existing and robust form-factor standard, making it more convenient for future server chassis architectures.
• Add-in card + DRAM DIMMs (source: Astera Labs): a CXL memory controller with DIMM slots; capacity can be tuned to what is needed, possibly reusing DDR4 DIMMs.
[Chart: CXL memory expander volumes (Munits) and breakdown by form factor (%), add-in cards vs. drives, 2022-2028.]
SERVER CPUs, CXL MEMORY EXPANDER TYPES, AND USE CASES
A roadmap overview
Server CPU roadmap:
• CXL 1.1: EPYC Genoa (Nov 2022), Sapphire Rapids (Jan 2023)
• CXL 2.0: Emerald Rapids (Q4 2023), EPYC Turin (2024)
• CXL 3.0: EPYC Venice (2025?), Diamond Rapids (2025?)
Memory expander types: CXL drives (x4 and x8 PCIe lanes), CXL add-in cards (x16 PCIe lanes).
Use cases (2022-2023 → 2024-2025 → 2025-2026): direct-attached memory expanders → memory pooling behind a switch → multi-headed memory pooling.
DRAM EXPANDER MARKET FORECAST 2022-2028
Breakdown by use case and form factor
Breakdown by form factor:
• 2022 ($1.7M): add-in cards 70%, drives 30%
• 2028 ($15B): drives 87%, add-in cards 13%
Breakdown by use case:
• 2022 ($1.7M): direct-attached memory expansion 100%, memory pooling 0%
• 2028 ($15B): memory pooling 75%, direct-attached memory expansion 25%
CXL ADOPTION MILESTONES
Estimated timeline

CXL could revolutionize the way memory is utilized and accessed, possibly enabling the rise of an entire industry.
• CXL 1.1: server memory expansion.
• CXL 2.0: memory pooling (memory-utilization optimization, removal of memory capacity limits).
• CXL 3.1: memory pooling (ability to extend accelerator memory; ability to cascade switches and extend the CXL memory fabric) and new memory technologies (rise of CXL SSDs and ENVM with fine data granularity).
Adoption spreads over time from systems (AI servers, HPC servers, cloud and hyperscale servers) to software (in-memory databases and analytics) to services (memory as a service).
2022-2028 CXL MARKET REVENUE FORECAST
[Chart: CXL market revenue by segment (CXL DRAM, CXL memory expander controller, CXL memory expander*, CXL switch), 2022-2028. Segment values shown include $15B, $12.5B, $1.9B, and $1.5B in 2028, and $741M, $583M, $143M, and $112M in 2026.]
Totals: 2022: $1.7M → 2026: $2.1B → 2028: $15.8B
*Inclusive of DRAM and CXL memory expander controller.
CXL: Compute Express Link
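For context, the growth rate implied by this forecast is easy to derive with the standard CAGR formula (nothing Yole-specific in the math):

```python
# Implied CAGR of the CXL market: $1.7M (2022) -> $15.8B (2028).
start, end, years = 1.7e6, 15.8e9, 2028 - 2022
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")   # ~359% per year, off a tiny 2022 base
```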
COMPUTE EXPRESS LINK (CXL) – KEY TAKEAWAYS
Opportunities and threats of CXL deployment
Tailwinds:
• Broad support from the industry
• Data center memory bottlenecks and challenges
• Enables full server disaggregation and composability

Headwinds:
• Cost?
• Delays in the CXL-capable CPU roadmap
• Macroeconomic conditions
• Security concerns
• Reliability
LET’S TALK ABOUT COST
Three rack configurations (8 servers each), with the arithmetic re-derived in the sketch after this list:
1. Conventional: 2 x 16 x 128GB DRAM DIMMs per server. Total DRAM: 32TB; DRAM accessible per server: 4TB (cost reference).
2. Capacity-optimized: 2 x 8 x 128GB DRAM DIMMs per server, plus 2 shared pools of 16 x 512GB DRAM CXL expanders. Total DRAM: 32TB; DRAM accessible per server: 18TB; estimated cost difference: +10%.
3. Cost-optimized: 2 x 8 x 128GB DRAM DIMMs per server, plus 1 pool of 16 x 128GB DRAM CXL expanders per server. Total DRAM: 32TB; DRAM accessible per server: 4TB; estimated cost difference: -35%.
*Considering RDIMM + JBOM + CXL drives
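The capacity figures above can be re-derived with simple arithmetic. A Python sketch, assuming the 8-server rack reading of the slide; the server count and pool sharing are my interpretation of the diagram, and the +10%/-35% cost deltas are Yole's estimates, not recomputed here:

```python
# Re-deriving total and per-server-accessible DRAM for the three configurations.
SERVERS = 8
TB = 1024  # GB per TB

def config(local_gb, shared_pool_gb=0, per_server_pool_gb=0):
    """Return (total rack DRAM in TB, DRAM accessible per server in TB)."""
    total = SERVERS * (local_gb + per_server_pool_gb) + shared_pool_gb
    accessible = local_gb + per_server_pool_gb + shared_pool_gb
    return total / TB, accessible / TB

print(config(2 * 16 * 128))                               # 1) Conventional: (32.0, 4.0)
print(config(2 * 8 * 128, shared_pool_gb=2 * 16 * 512))   # 2) Capacity-optimized: (32.0, 18.0)
print(config(2 * 8 * 128, per_server_pool_gb=16 * 128))   # 3) Cost-optimized: (32.0, 4.0)
```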
Thank You!
Report available here
General terms and conditions of sales
REPORTS, MONITORS & TRACKS
NORTH AMERICA
sales.us@yolegroup.com
+1 833 338 4999
EMEA
sales.emea@yolegroup.com
+49 69 9621 7675
JAPAN, KOREA, REST OF ASIA
sales.japan@yolegroup.com
sales.korea@yolegroup.com
sales.restofasia@yolegroup.com
+81 3 4405 9204
GREATER CHINA
sales.gc@yolegroup.com
+886 979 336 809 +86 136 6156 6824
GLOBAL OPERATIONS
Custom Services
custom@yolegroup.com
Marketing
marketing@yolegroup.com
Public Relations & External Communications
publicrelations@yolegroup.com |
communication@yolegroup.com
General Inquiries
contact@yolegroup.com | +33 4 72 83 01 80
CONTACTS
Report available here