The document discusses Compute Express Link (CXL) and industry efforts to enable and validate CXL technology. It provides details on:
1) The growing momentum and adoption of CXL, as seen in demonstrations at industry events and in the transfer of the OpenCAPI and Gen-Z specifications to the CXL Consortium.
2) The need for contributions and collaboration across the industry to successfully enable new technologies like CXL.
3) Intel's focus on validating CXL memory configurations and features through engagement with vendors and consortiums.
The CXL Consortium will archive the Gen-Z specification for five years; the specification can be found on the CXL Consortium website.
On this slide I will talk about what we anticipate the growth path of CXL memory to be.
We expect a crawl, walk, run approach, where early products will allow CXL-attached DDR memory to be added to a system, essentially providing main memory expansion with bandwidth and features similar to natively attached DDR. We expect these products to be in the PCIe CEM form factor.
Next, we anticipate products that offer two-tier memory solutions. Here we expect configurations where the CXL-attached memory has different performance characteristics than DDR. The second tier is lower-performing memory: lower bandwidth and higher latency than direct-attached DDR, but at much higher capacity. Capacity is the key value add, and this is where we expect persistent memory to come into play.
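To make the tiering idea concrete: on Linux, CXL-attached memory is typically exposed to software as a CPU-less NUMA node, so a tier-aware application can steer large, latency-tolerant data there explicitly. Below is a minimal sketch using libnuma; the node number 2 is hypothetical, and the real topology should be read from something like numactl --hardware.

/* Minimal sketch of tier-aware allocation, assuming the CXL-attached
 * memory shows up as a CPU-less NUMA node (node 2 here is hypothetical).
 * Build with: gcc tier_alloc.c -lnuma -o tier_alloc
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 2                     /* hypothetical CXL memory node */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t sz = 1UL << 30;             /* 1 GiB capacity-tier buffer */

    /* Bind this large, latency-tolerant buffer to the CXL node; hot
     * data stays in natively attached DDR via the default policy. */
    void *buf = numa_alloc_onnode(sz, CXL_NODE);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, sz);                /* fault the pages in on the CXL node */
    printf("placed 1 GiB on node %d\n", CXL_NODE);

    numa_free(buf, sz);
    return 0;
}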
Then we expect to reach the "run" phase with memory pooling. Here memory is no longer local; it is attached via a switch solution or multi-port controllers. It differs from local memory because it is not contained within the node itself and can span multiple nodes.
This is when we expect the optimal benefits: flexible memory allocation, lower total memory cost, a reduction in platform SKUs, and improved operating expenses. At this stage the bandwidth and features are expected to be similar to direct-attached DDR, and the latency similar to remote socket access; a simple way to measure that follows below.
By this phase, products are expected to be in EDSFF form factors.
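Because much of the pooling pitch rests on that latency claim, a quick way to sanity-check it helps. The following is a rough, dependency-free pointer-chase sketch, not an Intel tool, just an illustration: run it bound to a DDR node and then bound to a CXL node (node numbers are illustrative) and compare the two averages.

/* Rough pointer-chase sketch for comparing load-to-use latency of
 * local DDR vs. CXL-attached memory. Run once per NUMA node, e.g.
 *   numactl --membind=0 ./chase   (DDR)
 *   numactl --membind=2 ./chase   (CXL, node number illustrative)
 * Build with: gcc -O2 chase.c -o chase
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64UL * 1024 * 1024 / sizeof(size_t))  /* 64 MiB, larger than most LLCs */
#define STEPS 10000000L

int main(void)
{
    size_t *ring = malloc(N * sizeof(size_t));
    if (!ring) return 1;

    /* Sattolo's algorithm builds a single random cycle, so every load
     * depends on the previous one and the prefetcher cannot hide latency. */
    for (size_t i = 0; i < N; i++) ring[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = ring[i]; ring[i] = ring[j]; ring[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile size_t idx = 0;
    for (long s = 0; s < STEPS; s++)
        idx = ring[idx];                         /* serialized dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg load-to-use latency: %.1f ns\n", ns / STEPS);
    free(ring);
    return 0;
}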
**********************************************************************
Initial systems will use PCIe CEM-based form factors, eventually moving to EDSFF form factors.
Early systems provide bandwidth or capacity expansion; in later phases we expect the industry to adopt memory pooling usages.
As for usage models: CXL-attached memory can be used to expand main memory capacity or bandwidth.
With natively attached DDR plus CXL-attached memory, you get a boost in memory capacity as well as the benefit of the additional CXL bandwidth.
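For the bandwidth-expansion side, one hedged sketch, again assuming the CXL memory appears as its own NUMA node: interleave an allocation across the DDR node and the CXL node so that streaming traffic draws on both sets of channels. numa_parse_nodestring and numa_alloc_interleaved_subset are standard libnuma calls; the "0,2" node string is illustrative.

/* Sketch: interleave a streaming buffer across a DDR node and a CXL
 * node to aggregate bandwidth (the node string "0,2" is illustrative).
 * Build with: gcc interleave.c -lnuma -o interleave
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available\n");
        return 1;
    }

    /* DDR node 0 plus a hypothetical CXL node 2 */
    struct bitmask *nodes = numa_parse_nodestring("0,2");
    if (!nodes) {
        fprintf(stderr, "bad node string\n");
        return 1;
    }

    size_t sz = 1UL << 30;             /* 1 GiB streaming buffer */
    void *buf = numa_alloc_interleaved_subset(sz, nodes);
    if (!buf) {
        perror("numa_alloc_interleaved_subset");
        return 1;
    }

    memset(buf, 0, sz);                /* pages round-robin across both nodes */
    printf("interleaved 1 GiB across nodes 0 and 2\n");

    numa_free(buf, sz);
    numa_bitmask_free(nodes);
    return 0;
}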
Local means the CXL link is attached directly to the CPU.
In a memory pooling scenario, the memory is not local anymore. It is reached via a switch solution, and the pool can be a separate node by itself in a rack-based server: a separate module of composable memory. It differs from local memory because it is not contained within the node itself and can span multiple nodes.
That is the differentiation of memory pooling: it requires switches or multi-port controllers.
Open question on switch availability: are we going to validate these memory pooling configurations with GNR? We can support pooling from the CPU perspective in GNR, but will switches be available in that timeframe?
With CXL-attached memory we can expand both capacity and bandwidth. In two-tier memory configurations, the CXL-attached memory can have different performance characteristics than the first-tier, natively attached DDR.
The second tier is lower performing but offers much higher capacity; capacity is the key value add.
Memory pooling is where we want the industry to head: various nodes can share or access the same memory behind the CXL buffer. For that you need switches or multi-port controllers; right now we have single-port controller POCs.
Crawl, walk, and run.
Memory is where we expect CXL to take off.
Growth path: CEM now, EDSFF in the GNR timeframe.
Early systems provide bandwidth or capacity expansion, later moving to memory pooling.
Two-tier is Optane/storage-class memory.
As for what we at Intel are doing to enable and validate CXL memory:
Our desire is to evolve our approach over generations to be more like what we do for PCIe.
Today our engagement model for DDR is closed and for PCIe it is open; for CXL we cannot start open, but that is the direction we will evolve towards.
We are starting with plans to validate focused POR configurations of CXL memory per platform, with several vendors and modules.
It is by no means an exhaustive approach, but the hope is to get closer in the future to the "open socket" approach we have with PCIe.
In terms of our industry engagement model, we are starting out by engaging with numerous CXL memory device and module IHVs as well as key customers, again with the hope that long term we can get closer to the PCIe model, where there will be CXL Consortium-based engagements with CXL vendors.
Finally, for CXL memory validation our current plan is to treat it as part of the platform’s memory subsystem with the long-term plan to participate in consortium-led compliance testing.
With this slide I want to emphasize that Intel has a role to play in CXL memory enabling and validation, but to be successful as an industry, this must be a coordinated effort with CXL vendors, OEMs, and system providers.
We at Intel provide only the CPU.
We work closely with device and module vendors to enable key features, and early on we provided CXL vendors an open bridge-architecture reference document as an initial guide.
On the HW side, we plan to validate focused configurations and available CXL memory modules as part of the platform's memory subsystem. For initial AIC CEM modules, Intel will do limited validation of the media interface.
On the SW side, we provide reference system FW and BIOS, are actively participating in the industry effort to develop an open-source driver, and have developed and shared a software guide for type 3 devices for OS and module vendors to use.
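As one small illustration of what that software plumbing exposes: with an upstream Linux kernel that carries the CXL driver, type 3 memory expanders are enumerable under /sys/bus/cxl/devices (the cxl-cli tool from the ndctl project is the more complete option in practice). A minimal sketch, assuming that sysfs layout:

/* Minimal sketch: enumerate CXL type 3 memory devices via the Linux
 * CXL driver's sysfs interface. Assumes a kernel with the CXL driver
 * enabled; path and naming follow the upstream driver conventions.
 * Build with: gcc list_cxl.c -o list_cxl
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path = "/sys/bus/cxl/devices";
    DIR *d = opendir(path);
    if (!d) {
        perror(path);                  /* no CXL driver loaded or no devices */
        return 1;
    }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        /* Type 3 memory expanders show up as memN; ports, decoders and
         * root objects use other name prefixes. */
        if (strncmp(e->d_name, "mem", 3) == 0)
            printf("found CXL memory device: %s/%s\n", path, e->d_name);
    }

    closedir(d);
    return 0;
}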
When you start connecting devices on the CXL bus, we rely on CXL vendors to test those downstream devices.
We support standardization of the overall CXL memory module form factors, including EDSFF and PCIe CEM, but we look to CXL vendors to work at the buffer and module level: to define and validate the buffer as well as the memory media interface, channel electricals, and media training, and to perform CXL compliance and interoperability testing on their devices.
The OEMs and system providers are in the best position to integrate the CXL memory devices, platforms, and SW, and to test a larger, more diverse set of configurations. They can do in-rack testing, focus on testing and debugging a variety of usage models, and as a result generate personalized integrator lists.
There are a lot of pieces to developing a healthy CXL memory ecosystem and it is a coordinated industry effort that we are happy to be part of!
***************************************************
Bridge/module operation/features – also part of Intel platform validation
Intel is supportive of the EDSFF form factor as the standard for CXL memory. Standardization at the memory-module level is what Intel believes in. Going down to the memory buffer, there are too many variables; it could hinder innovation, and it is too late anyway, since we already missed rev 1 of these specifications.
There is a plethora of devices and configs.
Things that we can control, we will test, but we are not making the devices; we will help enable the ecosystem.
When you look at EDSFF: the form factor is what we want standardized. We do not want to standardize the memory controller.
There are consortia looking at other form factors.
Maybe something on CXL memory and industry expectations:
Some way to educate on closed slot vs. open slot, and that we are not going to do everything.
What we are expecting memory vendors to do.
We expect the industry to test things.
Intel provides only the CPU; when you start connecting devices on the PCIe/CXL bus, we do not test those downstream devices.
Expectation setting: we can only do so much of this; you, the vendor, need to do your piece.
Add a block for OEMs/system providers to validate the full configuration.