The State of CXL-related Activities within OCP
CXL™ Forum at OCP Summit

Siamak Tavallaei
Chief Systems Architect
CXL Advisor to the Board, CXL Consortium

Dharmesh Jani
Infrastructure Ecosystem and Partnership Lead, Meta
Steering Committee co-Chair, OCP
The State of CXL-related Activities within OCP
• CXL as the standard protocol for data movement through coherent memory and for the associated management and system composability
• OCP as a community to realize integrated systems around CXL
• Proposal for an OCP reference system for CXL-enabled compute disaggregation
Abstract
CXL has demonstrated its adoption within the computer industry: over 250 companies have joined the consortium. Many use cases have found CXL beneficial, and it has become the de facto standard protocol for efficient data movement. CXL over the PCIe and UCIe physical layers is gaining momentum. OCP is the place to integrate useful technologies such as CXL into practical, integrated systems. The Server Project and its subprojects are actively engaged in developing modular systems and will benefit from this interplay.
Outline
• 5 min: A quick reminder of the CXL specification
• 5 min: System-level use case examples and challenges
• 5 min: CXL-related activities within the OCP Server Project
CXL Technical Work Groups
CXL Consortium
https://www.computeexpresslink.org
CXL Board of Directors (BoD)
https://www.computeexpresslink.org/meettheboard
Marketing Work Group (MWG)
Technical Task Force (TTF)
Active CXL WGs run under the Technical Task Force (TTF):
PWG (Protocol WG)
SSWG (System and Software WG)
PHY (Physical and Link Work Group)
MSWG (Memory System WG)
Compliance WG
OCP Server Project
Active OCP Server Project subprojects and workstreams
Drive contributions (Base Spec, Design Spec, Products, …)
https://www.opencompute.org/wiki/Server
CMS (Composable Memory System)
https://www.opencompute.org/projects/composable-memory-system
DC-MHS (Datacenter-ready Modular Hardware System)
https://www.opencompute.org/projects/dc-mhs
Extended Connectivity Workstream (for PCIe and CXL)
https://www.opencompute.org/wiki/Server/PCIe_Extended_Connectivity_Requirements_Workstream
ODSA (Open Domain-Specific Architecture)
https://www.opencompute.org/wiki/Server/ODSA
OAI (Open Accelerator Infrastructure)
https://www.opencompute.org/wiki/Server/OAI
HPC (High-performance Computing)
https://www.opencompute.org/wiki/HPC
OCP NIC
https://www.opencompute.org/wiki/Server/NIC
Siamak Tavallaei
CXL Advisor to the Board of Directors, CXL™ Consortium
Oct 18, 2023
A converged memory environment is the key to ease of software programming and efficient data movement!
Industry trends
• Use cases keep driving the need for higher I/O and memory bandwidth: e.g., high-performance accelerators, system memory, SmartNICs
• CPU capability requiring more memory capacity and bandwidth per core
• Efficient peer-to-peer resource sharing & messaging across multiple domains
• Need to overcome memory bottlenecks due to CPU pin and thermal constraints
Increase connectivity & heterogeneity; provide new capabilities
The CXL 3.0 spec allows for a large interconnected fabric.
CXL 3.0 Fabrics: Composable Systems with Spine/Leaf Architecture
CXL 3.0 Fabric Architecture
• Interconnected Spine Switch System
• Leaf Switch NIC Enclosure
• Leaf Switch CPU Enclosure
• Leaf Switch Accelerator Enclosure
• Leaf Switch Memory Enclosure
[Diagram: spine/leaf CXL fabric. Spine switches interconnect leaf switches; leaf switches attach the end devices (CPUs, accelerators, GFAM memory, NICs); a Fabric Manager controls the fabric; an example traffic flow is highlighted.]
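To make the spine/leaf topology concrete, here is a minimal, purely illustrative Python sketch; the topology, names, and routing function are invented for illustration and come from neither the CXL specification nor any Fabric Manager API. Leaf switches attach end devices, spine switches interconnect the leaves, and a breadth-first search finds an example traffic path from a CPU enclosure to fabric-attached memory.

from collections import deque

# Hypothetical model of a CXL 3.0 spine/leaf fabric (illustration only).
# Nodes are switches or end devices; edges are CXL links.
fabric = {
    "cpu0":    ["leaf-cpu"],            # CPU enclosure
    "gfam0":   ["leaf-mem"],            # Global Fabric-Attached Memory device
    "accel0":  ["leaf-accel"],          # accelerator enclosure
    "nic0":    ["leaf-nic"],            # NIC enclosure
    "leaf-cpu":   ["cpu0", "spine0", "spine1"],
    "leaf-mem":   ["gfam0", "spine0", "spine1"],
    "leaf-accel": ["accel0", "spine0", "spine1"],
    "leaf-nic":   ["nic0", "spine0", "spine1"],
    "spine0": ["leaf-cpu", "leaf-mem", "leaf-accel", "leaf-nic"],
    "spine1": ["leaf-cpu", "leaf-mem", "leaf-accel", "leaf-nic"],
}

def route(src: str, dst: str) -> list[str]:
    """Breadth-first search for a shortest switch path from src to dst."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in fabric[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    raise ValueError(f"no path from {src} to {dst}")

# Example traffic flow: a CPU reads from fabric-attached memory.
print(route("cpu0", "gfam0"))  # ['cpu0', 'leaf-cpu', 'spine0', 'leaf-mem', 'gfam0']

With every leaf dual-homed to two spines, any single spine failure leaves all leaf pairs reachable, which foreshadows the fault-zone and redundancy considerations under Disaggregation Challenges below.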
Composable Architecture
A new paradigm for disaggregated computing!
HPC, AI/ML, in-memory databases
Challenges: Complex Interconnect?
Disaggregation Challenges
Switches provide value while adding system considerations
• Space, Power, Latency, Cost, Complexity, Firmware
Copper interconnect provides connectivity with added complexity
• Space, bend radius, EMI/EMC, Cost, Signal Integrity, BER
High Availability
• Fault Zones, Power Zones, Cooling Zones
• New fault modes requiring redundancy
Authenticity, Reliability, Integrity, Security, Privacy, and Management
Successful Disaggregation Approach
First do no harm
• The OS running on a server: the platform and the CXL Fabric Manager provide the same experience as a static server system
• Remedy every new fault mode
• Ride on PCIe, UEFI, and traditional RAS and Security
• Reduce the problem to that which has been solved before!
Put things where they belong
• Partition the system efficiently: ease of use, serviceability, maintenance
While pushing the envelope, if it hurts, don't do it!
• Retreat from the extremes and avoid too many variables for the first generation
• Fail fast, learn, and grow the solution through PoCs
Based on the Modular Building Block Architecture (MBA), build a reference hardware system to allow the software architecture to be ready for the full set of features!
Enablers (Software and Firmware Ingredients)
CXL Fabric Manager
• Secure composability, allocation, on-lining/off-lining
Pre-boot Environment
• Discovery, enumeration, setup, …
CXL Bus/Class Driver
• Configuration, Resource Allocation
CXL Memory Device Driver
• Interactions with Bus/Class Driver, Fabric Manager, VMM, …
• RAS, Security, Fault-isolation, On-lining, Off-lining, …
• Error Isolation, Telemetry, Performance Monitoring
OS-specific Software
• VMM, Hypervisor
• VM Allocation, Orchestration, Fault-isolation & Recovery
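The on-lining flow above can be illustrated with the Linux cxl and daxctl utilities from the ndctl project. The sketch below is illustrative only: it assumes a single direct-attached CXL Type-3 device that the kernel enumerates as mem0 behind root decoder decoder0.0 (device and decoder names vary per platform), and a production fabric manager would drive the equivalent steps through its own management path.

import json
import subprocess

def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Discover the CXL memory devices (Type-3) the kernel knows about.
#    'cxl list' emits JSON; some versions print a bare object when there
#    is only one result, so normalize to a list.
out = json.loads(run(["cxl", "list", "-M"]) or "[]")
memdevs = out if isinstance(out, list) else [out]
print("memdevs:", [m["memdev"] for m in memdevs])

# 2. Create a region over one device behind a root decoder.
#    'decoder0.0' and 'mem0' are placeholders; query your platform first.
run(["cxl", "create-region", "-m", "-d", "decoder0.0", "-w", "1", "mem0"])

# 3. The region surfaces as a dax device; reconfigure it as hotplugged
#    system RAM so the OS can on-line it (typically as a CPU-less NUMA node).
run(["daxctl", "reconfigure-device", "--mode=system-ram", "dax0.0"])

Off-lining roughly reverses these steps (daxctl reconfigure-device --mode=devdax, then cxl destroy-region), which is the on-lining/off-lining responsibility the list above assigns to the Fabric Manager and the memory device driver.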
A common software architecture built on a reference hardware system will drive hardware interoperability forward!
OCP is the place where we Realize Technologies into Integrated Systems
Activities within OCP Server Project
in support of CXL-enabled Systems
CMS (Composable Memory System)
Software infrastructure for managing tiered, composable, disaggregated systems (see the tiering sketch after this list)
https://www.opencompute.org/projects/composable-memory-system
DC-MHS (Datacenter-ready Modular Hardware System)
M-SIF (modular shared infrastructure)
Partition the system efficiently: ease of use, serviceability, maintenance
https://www.opencompute.org/projects/dc-mhs
https://www.opencompute.org/wiki/Server/DC-MHS
Extended Connectivity Workstream (for PCIe and CXL)
https://www.opencompute.org/wiki/Server/PCIe_Extended_Connectivity_Requirements_Workstream
Interconnect for disaggregated computing
Local disaggregation within a chassis
With the option to extend connectivity to an expansion chassis
Considerations for copper and photonic interconnect
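As a toy illustration of the tiered-memory management that CMS targets, the Python sketch below assigns pages to a near (local DRAM) or far (CXL-attached) tier from a hypothetical access-count sample. Real CMS software derives hotness from hardware telemetry or kernel page-access scanning and migrates pages through OS interfaces such as NUMA page migration; all names and thresholds here are invented for illustration.

# A toy two-tier placement policy (illustration only).
DRAM_TIER, CXL_TIER = "local-dram", "cxl-memory"
HOT_THRESHOLD = 8  # accesses per sampling window (arbitrary)

def place(pages: dict[int, int]) -> dict[int, str]:
    """Map page address -> tier, given an access-count sample."""
    return {
        page: DRAM_TIER if count >= HOT_THRESHOLD else CXL_TIER
        for page, count in pages.items()
    }

sample = {0x1000: 42, 0x2000: 3, 0x3000: 17, 0x4000: 0}
for page, tier in place(sample).items():
    print(f"page {page:#x} -> {tier}")

The design point this toy captures is that hot pages stay in the low-latency near tier while cold pages tolerate the extra latency of CXL-attached memory, trading a small performance cost for much larger, composable capacity.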
Based on OCP Open Accelerator Infrastructure (OAI)
Summary: Successful Disaggregation Approach (at OCP)
First do no harm (software compatibility, security, and management: CMS)
• The OS running on a server: the platform and the CXL Fabric Manager provide the same experience as a static server system
• Remedy every new fault mode
• Ride on PCIe, UEFI, and traditional RAS and Security
• Reduce the problem to that which has been solved before!
Put things where they belong (modular hardware system: DC-MHS/M-SIF)
• Partition the system efficiently: ease of use, serviceability, maintenance
While pushing the envelope, if it hurts, don't do it! (robust Extended Connectivity)
• Retreat from the extremes; avoid too many variables for the first generation
• Fail fast, learn, and grow the solution through PoCs
Call to Action!
Join the CXL Consortium: drive new use cases, propose solutions, and help draft specifications
Join the OCP Server Project and actively participate in the subprojects
Drive initiatives and make contributions (Base Spec, Design Spec, Products, …)
https://www.opencompute.org/wiki/Server
CMS (Composable Memory System)
https://www.opencompute.org/projects/composable-memory-system
DC-MHS (Datacenter-ready Modular Hardware System)
https://www.opencompute.org/projects/dc-mhs
Extended Connectivity Workstream (for PCIe and CXL)
https://www.opencompute.org/wiki/Server/PCIe_Extended_Connectivity_Requirements_Workstream
ODSA (Open Domain-Specific Architecture for die-to-die interconnect)
https://www.opencompute.org/wiki/Server/ODSA
OAI (Open Accelerator Infrastructure: interoperable compute module for common infrastructure)
https://www.opencompute.org/wiki/Server/OAI
HPC (High-performance Computing: specialized computing)
https://www.opencompute.org/wiki/HPC
OCP NIC (standard front-end networking)
https://www.opencompute.org/wiki/Server/NIC
Thank you!
Bio
Siamak Tavallaei has recently served as CXL Consortium President, Chief Systems Architect at Google Cloud, and the Incubation Committee (IC) Representative for the Server Project. He is currently the CXL Advisor to the Board at the CXL Consortium and actively participates in the OCP Steering Committee. His current focus is system optimization of large-scale mega-datacenters for general-purpose and tightly connected, accelerated machines built on co-designed hardware, software, security, and management. He continues to drive the architecture and productization of CXL-enabled solutions for AI/ML, HPC, and large-memory-footprint databases. In 2016, he joined OCP as a co-lead of the Server Project, where he drove open-source modular design concepts for integrated hardware/software solutions (OAI, DC-SCM, CMS, DC-MHS, and DC-Stack). His experience as Chief Systems Architect at Google, Principal Architect at Microsoft Azure, Distinguished Technologist at HP, and Principal Member of Technical Staff at Compaq, along with his contributions to industry collaborations such as EISA, PCI, InfiniBand, and CXL, gives Siamak a broad understanding of requirements and solutions for Enterprise, Hyperscale, and Edge datacenters and industry-wide initiatives.
Bio
Dharmesh Jani ('DJ') has been an active contributor to the Open Compute Project (OCP) since 2012. At Meta, DJ leads infrastructure ecosystem and partnerships; for the past five years, he has been responsible for leading OCP and other open-technology engagements, working with stakeholders inside and outside the company. DJ is currently Chair of the OCP Incubation Committee and serves on the Board of Directors of Universal Chiplet Interconnect Express (UCIe).
DJ led the Sustainability Initiative at OCP in 2019, conceiving, championing, and launching the effort. He led the initiative through its first two years, helping it grow into a full OCP project in 2022. DJ also launched the chiplet initiative in OCP as the Open Domain-Specific Architecture (ODSA), working with a set of companies in 2018. ODSA is now also a full OCP project.
Previously at Flex, he led business transformation for its biggest business unit, was instrumental in bringing Flex into OCP, and, by founding the CloudLabs team, built the core competencies to launch a cloud business unit. Over his 20+ year career, DJ has held leadership roles in engineering, product management, and business strategy at four startups and three Fortune 500 companies, including Semtech, Corvis Systems, Infinera, Meta, and Intel. He was the product manager for the world's first coherent 100G MUX for long-haul transport systems at Semtech, and the lead data-path designer of the first terrestrial FEC-based optical transmission system at Corvis Systems.
Based in Menlo Park, CA, DJ looks forward to continuing to drive open data center infrastructure in the OCP Community with his passion and work.