This document is Proprietary and Confidential. No part of this document may be disclosed in any
manner to a third party without the prior written consent of ZeroPoint Technologies AB.
Remove the waste.
Release the power.
Presenter: Nilesh Shah, VP Biz Dev
nilesh.shah@zptcorp.com
A portable hardware IP portfolio for
lossless memory compression and compaction
Data centers: 20-25% TCO reduction
Consumers: lower cost, better battery life and UX
Boosts memory capacity 2-4x and bandwidth and
performance/watt by 50%, and is 1,000x faster than
competing solutions
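As a back-of-the-envelope illustration of the capacity claim (a minimal sketch; the DRAM size below is an assumed figure, not a product spec), the snippet converts a lossless compression ratio in the claimed 2-4x range into the effective capacity software would see:

```python
# Back-of-the-envelope: effective capacity from lossless compression.
# Inputs are illustrative assumptions, not measured DenseMem figures;
# the 2-4x ratios come from the capacity claim above.

def effective_capacity_gb(physical_gb: float, compression_ratio: float) -> float:
    """Capacity software sees when resident data is stored compressed."""
    return physical_gb * compression_ratio

physical_gb = 256.0  # assumed DRAM behind the compression engine
for ratio in (2.0, 3.0, 4.0):
    print(f"{ratio:.0f}x compression -> "
          f"{effective_capacity_gb(physical_gb, ratio):.0f} GB effective")
```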
Product | Hardware-accelerated memory compression IP across the memory hierarchy

ZeroPoint applications and their product names, mapped to the memory hierarchy (on-chip memory, off-chip memory, storage):
● Cache expansion (CacheMX): on-chip memory (registers, caches)
● Memory expansion (SuperRAM): main memory
● Bandwidth acceleration (ZiptilionBW): main memory
● CXL expansion (DenseMem): CXL memory
● Flash expansion (FlashMX): storage (NVMe, SSD)
● Security (SphinX)
Challenge

Hyperscalers are spending significant $$ on software-based compression: 3% to 4.6% of CPU cycles go to compression alone.

Hyperscaler requirement: hardware compression is a MUST-have (OCP Hyperscale CXL Tiered Memory Expander Spec).
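That tax is easy to reproduce in miniature. The sketch below is a rough single-core illustration, not a hyperscaler benchmark; the synthetic payload and the `lz4` Python package are our assumptions:

```python
# Rough illustration of the software "compression tax": time LZ4 on one
# core. Needs the `lz4` package (pip install lz4). The synthetic payload
# is an arbitrary assumption; real ratios and rates are workload-dependent.
import time
import lz4.frame

data = b"GET /index.html 200 1234 10.0.0.7\n" * (1 << 20)  # ~34 MiB, compressible

t0 = time.process_time()                  # CPU time, not wall-clock
compressed = lz4.frame.compress(data)
cpu_s = time.process_time() - t0

ratio = len(data) / len(compressed)
print(f"ratio {ratio:.1f}x; compressing burned {cpu_s:.3f} s of CPU "
      f"({len(data) / cpu_s / 1e9:.2f} GB/s per core)")
```

Scaled across a fleet, per-core costs like this are what the 3-4.6% figures above capture, and what a hardware engine removes from the CPU.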
Opportunity | Add Compressed CXL Memory Tier

Deploy new tiers: ordinary + compressed DRAM memory on CXL. Removes barriers and enables a diversity of hyperscaler + enterprise customers.

Benefit → Spec component:
1. Reduction in total cost of ownership → standardization; a hardware-accelerated, lossless compressed memory tier
2. Energy efficiency, sustainability → transparent hardware-accelerated compression
3. Preserve software investments → support for legacy compression algorithms

OCP Spec:
https://www.opencompute.org/documents/hyperscale-cxl-tiered-memory-expander-for-ocp-base-specification-1-pdf
Requirement: Hyperscale CXL Tiered Memory Expander Spec

Preserving software investment without compromising performance:
● Support for legacy compression algorithms
● Future-proof: room for algorithm innovation
● Stringent latency & bandwidth specifications

Parameter → Specification:
● Latency, uncompressed access: 90 to 150 ns
● Latency, compressed access: 250 ns to <1 µs
● Bandwidth efficiency, read-only / write-only: 80% / 75%

https://www.opencompute.org/documents/hyperscale-cxl-tiered-memory-expander-for-ocp-base-specification-1-pdf
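A hedged sketch of how the table can be used in practice: the check below encodes the spec bounds and validates a device's figures against them. The `measured` values are invented placeholders, not DenseMem results.

```python
# Encode the OCP spec table above and check sample device measurements
# against it. Spec bounds come from the table; note the compressed-access
# upper bound is exclusive in the spec (<1 us) but checked inclusively
# here for simplicity. The `measured` values are invented placeholders.

SPEC = {
    "latency_uncompressed_ns": (90, 150),    # 90 to 150 ns
    "latency_compressed_ns":   (250, 1000),  # 250 ns to <1 us
    "bw_efficiency_read":      (0.80, 1.0),  # >= 80% read-only
    "bw_efficiency_write":     (0.75, 1.0),  # >= 75% write-only
}

measured = {  # placeholder numbers for illustration only
    "latency_uncompressed_ns": 120,
    "latency_compressed_ns":   600,
    "bw_efficiency_read":      0.83,
    "bw_efficiency_write":     0.78,
}

for param, (lo, hi) in SPEC.items():
    value = measured[param]
    verdict = "PASS" if lo <= value <= hi else "FAIL"
    print(f"{param}: {value} -> {verdict} (spec {lo}-{hi})")
```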
Proposed Solution: Hyperscale CXL Tiered Memory Expander Spec

[Architecture diagram] The host (application, OS/HW, CPU, CXL interface) talks CXL.io and CXL.mem to a CXL Type 3 device. Inside the device, the CXL controller integrates the compression/decompression engine, with firmware handling configuration, in front of the memory subsystem and media; the device address space is split into a compressed tier and an uncompressed tier.

CXL controller with integrated hardware acceleration:
(de)compression + compaction + transparent memory management (a toy model of this follows below)
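To make "transparent memory management" concrete, here is a deliberately toy model, not ZeroPoint's design: a translation table maps fixed-size host blocks to variable-size compressed extents, and compaction packs those extents back-to-back in physical memory (software LZ4 stands in for the on-chip engine).

```python
# Toy model of a compressed tier: the host reads/writes fixed 4 KiB
# blocks, the device stores variable-size compressed extents and packs
# them back-to-back (compaction). Purely illustrative; not ZeroPoint's
# design. Requires the `lz4` package.
import lz4.frame

BLOCK = 4096

class CompressedTier:
    def __init__(self):
        self.store = bytearray()  # compacted physical backing store
        self.table = {}           # host block number -> (offset, length)

    def write(self, block_no: int, data: bytes) -> None:
        assert len(data) == BLOCK
        extent = lz4.frame.compress(data)
        self.table[block_no] = (len(self.store), len(extent))
        self.store += extent      # append-only compaction for simplicity

    def read(self, block_no: int) -> bytes:
        off, length = self.table[block_no]
        return lz4.frame.decompress(bytes(self.store[off:off + length]))

tier = CompressedTier()
tier.write(0, b"A" * BLOCK)
assert tier.read(0) == b"A" * BLOCK
print(f"stored {BLOCK} B logical in {tier.table[0][1]} B physical")
```

A real device must also handle overwrites, fragmentation, and changing compression ratios; the append-only store here sidesteps all of that for clarity.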
The ZeroPoint Solution | DenseMem IP

OCP-spec-compliant, portable IP: a spec-compliant, plug-and-play (de)compression IP block alongside the CXL controller.

● OCP-spec-compliant, hardware-accelerated CXL memory (de)compression + compaction + transparent memory management IP block
● 2-4x transparent (de)compression on major data-center workloads (see the ratio sketch below)
● Protocols: CXL 2.0, 3.0, 3.1
● Compression algorithms: LZ4, ZID (proprietary)
● Portable: AXI4, CHI, leading process node support
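Whether a given workload lands in that 2-4x band can be sanity-checked offline. The sketch below is our illustration, not a ZeroPoint tool: it measures the software-LZ4 ratio of a data sample block by block (LZ4 is one of the supported algorithms above; ZID is proprietary and not modeled here).

```python
# Offline sanity check: measure the LZ4 ratio of a data sample, block by
# block, to gauge whether a workload is in the 2-4x band claimed above.
# Uses software LZ4 (pip install lz4); the 4 KiB block size and the
# choice of input file are illustrative assumptions.
import sys
import lz4.frame

BLOCK = 4096

def survey(path: str) -> None:
    raw = comp = 0
    with open(path, "rb") as f:
        while block := f.read(BLOCK):
            raw += len(block)
            comp += len(lz4.frame.compress(block))
    print(f"{path}: {raw / comp:.2f}x LZ4 ratio over {raw // BLOCK} blocks")

if __name__ == "__main__":
    survey(sys.argv[1])  # e.g. python lz4_survey.py heap.dump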
ZeroPoint Solution | Reduce Data Center TCO 20-25%

Total cost of ownership (TCO) for a 40-server rack over a 3-year lifetime [kUSD]:
● Today: ~850
● With CXL only: ~780
● With our compression only: ~700
● With CXL & our compression: ~650 (-20-25% vs. today)

Cost components per scenario: CAPEX (CPU, memory, other), OPEX (power, cooling, other), and waste (the compression tax).
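The headline follows directly from the chart's endpoints; the quick arithmetic below reproduces the 20-25% band from the ~850 and ~650 kUSD bars (the per-component split is not broken out here):

```python
# Reproduce the headline TCO reduction from the chart's endpoints
# (kUSD for a 40-server rack over a 3-year lifetime).
scenarios = {
    "Today": 850,
    "With CXL only": 780,
    "With our compression only": 700,
    "With CXL & our compression": 650,
}
baseline = scenarios["Today"]
for name, tco in scenarios.items():
    saving = (baseline - tco) / baseline
    print(f"{name}: ~{tco} kUSD ({saving:.0%} vs. today)")
# CXL + compression: ~650 kUSD, a ~24% saving, i.e. in the 20-25% band
```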
DenseMem IP Demo | QEMU + FPGA
Link to Video
DenseMem | Performance across data-center workloads
Summary | Call To Action
● Summary
○ DenseMem IP: OCP Spec Compliant
○ Portable across process nodes, AXI/CHI interface
○ Production-ready mid ’24
○ Performance verified
● Call To Action
○ Controller manufacturers: Collaboratively address
Hyperscale OCP requirements
○ ISVs: Host software integration, target workloads
