Oct. 2022
Solution Arch.
Wooram (Alex) Kim, alex.kim@intel.com
CCIE#9351/Cisco Mobile Packet Core R2.0/VCP5-DCV/RHCSA/Google PCA
Intel Confidential – CNDA Required
Data Platforms Group


Insufficient timing accuracy can lead to service disruptions: dropped calls, less accurate location services, and loss of advanced network capabilities. Highly accurate time synchronization is required.
| Feature | GNSS (GPS) | SyncE (Layer 1, SONET/SDH) | PTP 1588v2 (L2/L3 dependent: L2 Ethernet / L3 IP packet) |
|---|---|---|---|
| Physical layer | RF | Ethernet | Ethernet |
| Scalability | Scalable to large networks | Limited scalability | Scalable to large networks |
| Performance (under network load) | No dependency | No dependency | Dependencies due to timestamping and synchronization errors |
| Reliability | Higher clock accuracy | Higher clock accuracy | Less accurate for large networks |
| Device support | Intermediate nodes may or may not support GNSS | All devices in a network must support SyncE | Intermediate nodes may or may not support PTPv2 |
| Time (phase) sync and ToD (time of day) | Supported | N/A | Supported |
| Clock (frequency) sync | Supported | Supported | Supported |
| Time/phase accuracy | <15 ns | Not supported | <15 ns |
| Holdover (time to repair fault) | N/A | Supported | Supported |
| Use cases* | 3G/LTE/5G RAN | 5G RAN | Telecom, data center, financial and high-frequency trading, media, and industrial applications |
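The PTP column's "dependencies due to timestamping errors" comes from how 1588v2 estimates clock offset: it assumes a symmetric network path, so queuing delay under load skews the result. A minimal sketch of the standard two-step calculation (the timestamp values below are illustrative, not from any real capture):

```python
# Two-step IEEE 1588 exchange: master sends Sync at t1 (master clock),
# slave receives it at t2 (slave clock); slave sends Delay_Req at t3,
# master receives it at t4. All values in nanoseconds here.
def ptp_offset_delay(t1, t2, t3, t4):
    """Return (offset, mean_path_delay) per the classic 1588 formulas."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way delay, assumes symmetric path
    return offset, delay

# Example: slave clock runs 500 ns ahead, true one-way delay is 1000 ns.
t1 = 0
t2 = t1 + 1000 + 500   # propagation plus slave offset
t3 = 2000
t4 = t3 - 500 + 1000   # slave offset removed, plus propagation
print(ptp_offset_delay(t1, t2, t3, t4))  # (500.0, 1000.0)
```

If the forward and reverse paths have different queuing delays, the asymmetry shows up directly as an offset error, which is why PTP accuracy degrades under network load while GNSS and SyncE do not.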
https://orandownloadsweb.azurewebsites.net/specifications
Per O-RAN.WG4.CUS.0-v09.00: frequency and time synchronization of O-DUs and O-RUs via Ethernet uses Synchronous Ethernet (SyncE) and IEEE 1588-2008 Precision Time Protocol (PTP). Transport of PTP directly over L2 Ethernet (ITU-T G.8275.1, full timing on-path support) is assumed in this version of the specification, whilst transport of PTP over UDP/IP (ITU-T G.8275.2, partial timing support from the network) is also possible, albeit with unassured synchronization performance.
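The G.8275.1 profile assumed by the O-RAN specification maps onto the open-source linuxptp stack. A minimal sketch of a `ptp4l` configuration (option names follow linuxptp's shipped G.8275.1.cfg; the exact priority values are illustrative defaults, not a validated deployment):

```ini
[global]
# ITU-T G.8275.1: PTP over L2 multicast with full on-path timing support
dataset_comparison              G.8275.x
G.8275.defaultDS.localPriority  128
domainNumber                    24
network_transport               L2
ptp_dst_mac                     01:80:C2:00:00:0E
logAnnounceInterval             -3
logSyncInterval                 -4
logMinDelayReqInterval          -4
```

This would typically be run as `ptp4l -f G.8275.1.cfg -i <interface>`, with `phc2sys` steering the system clock from the NIC's PTP hardware clock.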
Protocol structure between O-RU and O-DU:
• C-Plane (Control Plane)
• U-Plane (User Plane)
• S-Plane (Synchronization Plane)
Hardware Enhancements
Oven-controlled crystal oscillator (OCXO)
• Maintains adapter timing precision
• Up to four hours of holdover time
SyncE enabled by Intel® Ethernet Connection C827
GNSS mezzanine (optional)
• Integrated support for most GNSS satellite systems
Dual SMA connectors connected to SDPs
• Connect to external timing resources, receive input
• Connect to performance-auditing equipment
SMB connector
• Connectivity for optional external GNSS antenna
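The OCXO matters because holdover error grows linearly with the oscillator's frequency error once the sync source is lost. A rough back-of-the-envelope sketch (the stability figures are illustrative assumptions for oscillator classes, not C827 or adapter datasheet values):

```python
# Accumulated time error during holdover for a constant frequency error.
def holdover_error_us(stability_ppb, seconds):
    """Worst-case time error in microseconds after `seconds` of holdover."""
    return stability_ppb * 1e-9 * seconds * 1e6

four_hours = 4 * 3600
for name, ppb in [("OCXO-class (assumed 1 ppb)", 1.0),
                  ("TCXO-class (assumed 100 ppb)", 100.0)]:
    print(f"{name}: {holdover_error_us(ppb, four_hours):.1f} us after 4 h")
```

This is why an oven-controlled oscillator, rather than a cheaper TCXO, is needed to keep phase error within telecom budgets over a multi-hour repair window.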
Software Enhancements
Intel® Ethernet 800 Series driver and Open
Source support for 1588 PTP and SyncE
Intel Ethernet 810 Series Features
• Quad SFP28 ports (25/10/1Gbps)
• Dual QSFP28 ports (100/50/25/10Gbps)
• PCIe 4.0 x16
• Advanced features: Application Device
Queues, Dynamic Device Personalization,
RDMA (iWARP and RoCEv2)
Product Ordering Info

25Gbps:
| Product Order Code | MM# |
|---|---|
| E810XXVDA4TG1 (without GNSS) | 99AD9D |
| E810XXVDA4TGG (with GNSS) | 99ADGH |

100Gbps:
| Product Order Code | MM# |
|---|---|
| E810CQDA2TG1 (without GNSS) | 99ARL5 |
| E810CQDA2TGG1 (with GNSS) | 99ARL6 |
SPR-EE (Q2 '23) 5G ISA: 3x E810-XXVDA4T (12x 25GbE), PCIe 4.0 Ethernet connectivity, vendor production S/W, 23-24 platform, FEC acceleration

Ethernet Connectivity: Intel® Ethernet 810 Series
Enhanced timing (Q2 '22):
• E810-XXVDA4T: 4x 25GbE
• E810-CQDA2T: 2x 100GbE (100Gb total BW)
Existing adapters:
• E810-XXVDA4: 4x 25GbE
• E810-CQDA2: 2x 100GbE (100Gb total BW)
• E810-2CQDA2: 2x 100GbE (200Gb total BW)

4th Gen Xeon CPU: "Sapphire Rapids – Edge Enhanced (SPR-EE)"
• Integrated RAN SW
• Higher capacity (cells / MIMO) with AVX-512 FP16
• Integrated RAN HW IP (FEC, ...)
• Monolithic chip, up to 32C
*The product formerly codenamed "Sapphire Rapids – Edge Enhanced Compute (SPR-EEC)" has been renamed to "Sapphire Rapids – Edge Enhanced (SPR-EE)".
Edge continuum: Cloud Data Center → Core Network → Network Edge or Regional Data Center → On-premises Edge → IoT and Devices

Opportunity @ the Edge by 2025
• Multi-access Edge and Private Wireless hardware, software, and services: $29B 1
• 75% of data created outside central data centers 2

Key Technology Inflections
• Cloud-native software (K8S)
• Connectivity (5G, multi-access)
• GPU-intensive applications: AI/IoT (video inference), media streaming, ...

Edge of the Future
• Real-time/deterministic
• On-demand/dynamic
• Energy efficient/sustainable
• Massively geo-distributed at scale

Latency expectation (varies by tier): <1 ms, <5 ms, <10-40 ms, <60 ms, ~100 ms
Bandwidth expectation (varies by tier): 1000+ Mb/s, 100 Mb/s, 10 Mb/s

Infrastructure challenges to overcome
• Deliver common edge platform consistency and scalability across diverse edge location requirements
• Lower TCO with a consistent cloud-native platform approach across edge locations

1 MEC definition here refers to MEC 2.0 hyperconverged edge. Source: IDC, Omdia, Intel judgment.
2 What Edge Computing Means for Infrastructure and Operations Leaders, Gartner, Oct 3, 2018.
Software Stack (Intel and 3rd-party components)
Target workloads: Cloud Graphics (VDI), Cloud Gaming, Media Delivery, AI Inference & Media Analytics
• Middleware, frameworks, and runtimes: TensorFlow, PyTorch, OpenVINO, DL Streamer, FFmpeg, GStreamer, Open Visual Cloud, Open WebRTC, XMPT, Capture and Stream SDK, Intel Bridge Technology, Horizon 8.2206+, Xen App & Desktop 7.2206+, AIC
• Low-level libraries: oneDNN, oneVPL, clDNN, Intel QSV, Level Zero, Media UMD, OpenCL, Vulkan / OpenGL, DirectX & Indirect Display
• Virtualization/Orchestration: K8S & KVM, VMware and KVM
• OS: Linux, Windows
• Drivers, firmware, and BIOS
• CPU & GPU
| Specification | Intel® Data Center GPU Flex Series 140 | Intel® Data Center GPU Flex Series 170 |
|---|---|---|
| Card design | Half height, half length, single-wide; passive cooling | Full height, ¾ length, single-wide; passive cooling |
| GPU | Flex Series 140 GPU | Flex Series 170 GPU |
| GPUs per card | 2 | 1 |
| Memory w/ ECC | Capacity: 12GB (6GB/GPU); transfer rate: 1750 GT/s; bus width: 96 bits/GPU | Capacity: 16GB; transfer rate: 2250 GT/s; bus width: 256 bits |
| Fixed-function media units (per card) | 4 (2 per GPU); 28 transcode streams, H.265 1080p60 1:1 | 2 (2 per GPU); 14 transcode streams, H.265 1080p60 1:1 |
| Target workloads | Media, visual inference, media analytics, mobile cloud gaming, PC cloud gaming, VDI | Media, visual inference, media analytics, mobile cloud gaming, PC cloud gaming, VDI |
| Systolic arrays (AI inference, relative) | 1x systolic array | 2.5x systolic arrays |
| Long-life support | 5 years, 80% active at base frequency, 20% idle | 5 years, 80% active at base frequency, 20% idle |
| Operating system | Linux (Ubuntu, CentOS, Debian, RHEL), Windows Server 2019 & 2022, Windows Client | Linux (Ubuntu, CentOS, Debian, RHEL), Windows Server 2019 & 2022, Windows Client |
| Host CPU support | Whitley (Ice Lake, ICX) & Eagle Stream (Sapphire Rapids, SPR) | Whitley (Ice Lake, ICX) & Eagle Stream (Sapphire Rapids, SPR) |
Card topology:
• Flex Series 170 card: one Flex 170 GPU with 16GB GDDR6 on a x16 Gen4 PCIe interface
• Flex Series 140 card: two Flex 140 GPUs, each with 6GB GDDR6 on a x8 Gen4 PCIe link behind a PCIe switch; x8 Gen4 electrical in a x16 Gen4 mechanical slot
https://docs.o-ran-sc.org/projects/o-ran-sc-o-du-phy/en/latest/Setup-Configuration_fh.html?highlight=e810#a-2-prerequisites
https://docs.o-ran-sc.org/projects/o-ran-sc-o-du-phy/en/latest/overview1.html?highlight=e810#reference-documents
Time synchronization, also called phase synchronization, means that both the frequency of and the time between signals remain constant. In this case, the time offset between signals is always 0.
Frequency synchronization, also called clock synchronization, refers to a constant frequency offset or phase offset. In this case, signals are transmitted at a constant average rate during any given time period so that all the devices on the network can work at the same rate.
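The distinction above can be sketched numerically: two frequency-synchronized clocks tick at the same rate but may keep a fixed time offset, while phase-synchronized clocks also agree on the time itself (the clock model and values here are illustrative):

```python
# Idealized linear clock: reading = rate * true_time + offset.
def clock(t, rate=1.0, offset=0.0):
    return rate * t + offset

ref = [clock(t) for t in range(5)]
freq_synced = [clock(t, rate=1.0, offset=3.0) for t in range(5)]   # same rate, fixed offset
phase_synced = [clock(t, rate=1.0, offset=0.0) for t in range(5)]  # offset driven to zero

offsets = [a - b for a, b in zip(freq_synced, ref)]
print(offsets)               # constant nonzero offset -> frequency-synchronized only
print(phase_synced == ref)   # True -> time/phase-synchronized
```

SyncE alone delivers the first case (frequency); PTP or GNSS is needed on top to deliver the second (phase/time), which is what 5G TDD radios require.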


Editor's Notes

  • #3 Our mission is to help our customers architect the future of data-centric infrastructure. To that end, Intel has been investing in a new approach to infrastructure design, one that is built to move data faster, to store more data, and to process everything from the cloud, to the network, to the edge. Move faster: in the last few years we've seen an explosion in network traffic in the DC. As this traffic grows, connectivity is becoming the bottleneck to completely utilizing and unleashing high-performance compute, and so Intel has increased its investments to help move data faster, from Ethernet, to silicon photonics, to switches. Store more: in addition to moving data, data-centric infrastructure must also store massive amounts of data with the ability to quickly access that data to deliver rapid, real-time insights. We have been innovating across storage and memory with our investments in 3D NAND and Optane. Process everything: Intel has been investing for decades in a broad portfolio of CPU and XPU products, from Xeon, the foundation of today's data center, to Atom, extending our processing range into power-constrained use cases, as well as our XPU offerings of FPGAs, GPUs, Movidius, and Habana, all designed to accelerate workloads even further. Software and system-level optimized: underlying everything is our SW and system-level approach to remove performance bottlenecks wherever they exist. We are finding more and more ways to optimize system performance and TCO when using our ingredients together.
  • #4 Time synchronization, also called phase synchronization, means that both the frequency of and the time between signals remain constant. In this case, the time offset between signals is always 0. Frequency synchronization, also called clock synchronization, refers to a constant frequency offset or phase offset. In this case, signals are transmitted at a constant average rate during any given time period so that all the devices on the network can work at the same rate. FDD is considered better for coverage and TDD better for capacity
  • #5 Global Navigation Satellite System. Holdover plays an important role in the network: it provides a temporary source of synchronization when the primary source is unavailable. Loss of primary synchronization can occur for several reasons, one of which is equipment failure. Holdover allows a technician time to repair faulty equipment or to reconfigure the network and restore synchronization, and holdover protection allows equipment to continue operating with minimal disruption until the problem is resolved. It is common to expect equipment to maintain frequency synchronization for up to 24 hours in holdover mode using a local oscillator, such as an oven-controlled crystal oscillator (OCXO) or a temperature-compensated crystal oscillator (TCXO), since the stability of these oscillators is sufficient for achieving the required holdover performance.
  • #6 G.8275.1 is used to transport PTP directly over L2 Ethernet, with full timing on-path support; it requires boundary clocks at every node in the network. G.8275.2 is also a PTP profile, but it does not require each device in the network to participate in the PTP protocol. G.8275.2 uses PTP over IPv4 and IPv6 in unicast mode and is based on partial timing support from the network, so nodes using G.8275.2 are not required to be directly connected; it is aimed at operation over existing networks. The challenge: 5G performance requirements drive precise timing needs, from cloud to edge. Legacy proprietary solutions rely on high-cost, purpose-built appliances and specialized NICs. Solutions based on Westport Channel and Logan Beach deliver HW-enhanced Precision Time Protocol and SyncE Ethernet NICs in standard servers, offering precision timing across the entire network at a cost-effective price point for 5G infrastructure scale-out, and reducing solution cost vs. the legacy hardware-appliance and specialized-NIC approach. This approach can also be used in other markets such as industrial, financial, energy, and more. G.8261: defines the architecture of synchronous Ethernet networks and Ethernet performance. G.8262: specifies the timing characteristics of the Synchronous Ethernet Equipment Clock (EEC). G.8264: describes the Ethernet Synchronization Message Channel (ESMC).
  • #7 OCXO: enables precise timing accuracy, with up to four hours of holdover time if the source timing signal is lost. SyncE: Synchronous Ethernet; this is the first Intel Ethernet NIC with SyncE. For customers who don't want to fully disaggregate, or who are building on legacy HW, this enables partial disaggregation on legacy HW. GNSS mezzanine: optional; supports most GNSS satellite systems (NavIC not currently supported, but coming). Dual SMAs enable connecting to external timing sources, such as external GNSS receivers, or to performance-testing equipment such as oscillators. SMB connector: connects to the antenna for the internal GNSS unit.
  • #9 There are a large number of vertical segments and use cases, like industrial IoT, transportation, healthcare, and retail, driving the demand for edge computing. The common requirement across all of them is the need to bring all the benefits of cloud computing (open architectures, flexibility, rapid response, cost effectiveness, innovation) to locations much nearer to where the customer needs it. This location proximity may be driven by requirements for low latency, offline operation, operation even with low network bandwidth, or data protection/sovereignty. All these use cases also require a common set of underlying services (the software stack) for security, network functions, performance optimization, telemetry, and so on. However, the rapid growth in demand is leading to a proliferation of architectures and solutions specific to each use case, each with its own set of integrated software stacks, which will pose its own set of "silo" problems in the long run. Intel is addressing this through a consistent cloud-native platform comprising pre-integrated and optimized software stacks, with flavors specific to each type of edge deployment. Key challenges to overcome: deliver platform consistency and scalability across diverse edge location requirements; optimize cloud-native frameworks to meet stringent edge KPIs and network complexity; leverage a broad ecosystem and evolving standards for edge computing.
  • #12 We are redefining the model of compute to enable new levels of efficiency and scale for our customers so that the industry can continue to deliver exponential growth in computing to keep up with the societal demands
  • #14 https://support.huawei.com/enterprise/en/doc/EDOC1100055049/530c5fc5/overview-of-1588v2and-g82751