During the CXL Forum at OCP Global Summit, Eddy Hwang of Nvidia and Wai Kong Poon of Molex presented a next-gen architecture for enabling copper for AI computing.
During the CXL Forum at OCP Global Summit, MemVerge CEO Charles Fan presented accomplishments of the CXL industry since 2019, the development of concept cars occurring today, and his predictions for the future of CXL.
Q1 Memory Fabric Forum: Compute Express Link (CXL) 3.1 Update (Memory Fabric Forum)
OCP Steering Committee member and ex-President of the CXL Consortium, Siamak Tavallaei, provides an update on the CXL specifications with a focus on the recently released 3.1 specification.
Lightelligence: Optical CXL Interconnect for Large Scale Memory Pooling (Memory Fabric Forum)
During the CXL Forum at OCP Global Summit, Lightelligence Director of Engineering Ron Swartzentruber provides an overview of the company's optical port expander products and test results.
MemVerge CEO Charles Fan describes why memory-hungry generative AI is a driver for CXL technology, the new computing model for AI, and MemVerge software for CXL and AI.
This document provides an overview of NAND flash memory technology and reliability issues. It begins with introductions to memory technologies and flash applications. The document then discusses NAND flash cell structure and operation, including programming, erasing, and reading. Key reliability issues covered include endurance, data retention, and program interference. The document provides references and outlines potential failure mechanisms and mitigation techniques like error correction codes and wear leveling.
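Wear leveling, one of the mitigation techniques the NAND overview mentions, can be sketched in a few lines. This is a minimal illustration, not an actual flash translation layer: the class name, block count, and policy (steer every new write to the least-erased block) are assumptions for the example.

```python
# Minimal sketch of wear leveling: a flash translation layer tracks per-block
# erase counts and steers new writes to the least-worn block, evening out
# erase cycles across the device. Block counts here are illustrative.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block_for_write(self):
        # Choose the block that has been erased the fewest times so far.
        return min(range(len(self.erase_counts)),
                   key=lambda b: self.erase_counts[b])

    def erase(self, block):
        self.erase_counts[block] += 1

wl = WearLeveler(4)
for _ in range(8):
    wl.erase(wl.pick_block_for_write())
# Eight erases spread over four blocks: each block is erased exactly twice.
```

Real controllers combine this with ECC and bad-block management, but the core idea is the same: never let a few blocks absorb all the erase cycles.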
The document discusses AMD's 3D V-Cache technology which vertically stacks an additional last-level cache die on top of CPU cores using through-silicon vias (TSVs). It provides up to a 15% improvement in gaming performance for desktop CPUs and up to 66% faster RTL verification for server CPUs. The technology uses a face-to-back die stacking approach and copper-copper hybrid bonding to efficiently interconnect the cache die with the CPU die below while maintaining high yield and reliability.
DesignCon 2019: 112-Gbps Electrical Interfaces: An OIF Update on CEI-112G (Leah Wilkinson)
Panelists:
Brian Holden, Kandou Bus
Cathy Liu, Broadcom
Steve Sekel, Keysight
Nathan Tracy, TE Connectivity
Seminar report on third generation solid state drive (Atishay Jain)
The document summarizes a seminar report on third generation solid state drives (SSDs). It discusses SSD architecture, including flash memory chips and controllers, and compares SSDs to hard disk drives (HDDs). With no moving parts, SSDs offer higher performance, lower power consumption, and greater reliability than HDDs, but they cost more and currently offer less capacity. The document concludes that SSD performance will continue to improve while prices decline, leading to the eventual replacement of HDDs.
CXL Memory Expansion, Pooling, Sharing, FAM Enablement, and Switching (Memory Fabric Forum)
The document discusses CXL, a new open standard protocol for efficient CPU and memory connectivity. CXL allows for memory disaggregation and pooling across devices by enabling high-bandwidth, low-latency connections between CPUs, GPUs, accelerators, and memory. This helps address the growing CPU-memory bottleneck by allowing expansion of memory capacity beyond what can physically connect to the CPU. CXL also enables memory tiering by providing different performance and cost options for "near" directly attached memory versus "far" switched or fabric attached memory.
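The near/far tiering idea described above can be sketched as a simple placement policy. This is an illustration only: the threshold value, tier names, and function are hypothetical, not part of the CXL specification or any vendor's software.

```python
# Illustrative sketch of CXL-style memory tiering: hot pages are placed in
# "near" direct-attached DRAM, cold pages in "far" switched or fabric-attached
# CXL memory. The hotness threshold is an assumption for the example.

NEAR_THRESHOLD = 100  # accesses per sampling window (hypothetical)

def place_page(access_count):
    """Choose a memory tier for a page based on its measured hotness."""
    return "near-DRAM" if access_count >= NEAR_THRESHOLD else "far-CXL"

print(place_page(500))  # hot page stays close to the CPU: near-DRAM
print(place_page(3))    # cold page is demoted to pooled memory: far-CXL
```

Production tiering software tracks access recency and migrates pages dynamically, but the cost/performance trade-off it manages is exactly this near-versus-far choice.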
Electrical interfaces at 112 Gbps are a critical enabler of faster, more efficient and cost effective networks and data centers. A panel of OIF contributors will discuss the ongoing CEI-112G electrical interface development projects, and the new architectures they will enable including chiplet packaging, co-packaged optics and internal cable based solutions. The panel will provide an update on the multiple interfaces being defined by the OIF including CEI-112G MCM, XSR, VSR, MR and LR for 112 Gbps applications of die-to-die, chip-to-module, chip-to-chip and long reach over backplane and cables. Listen to thought leaders in the electrical interface industry debate the issues surrounding the CEI-112G projects and the architectures they will enable.
AMD is an American semiconductor company and the second largest supplier of microprocessors based on the x86 architecture after Intel. It develops computer processors and graphics cards. AMD was founded in 1969 and initially produced logic chips. In the 1980s, AMD began producing clones of Intel CPUs like the 286 and 386 under an agreement but this was later cancelled by Intel. AMD went on to produce its own CPUs like the K5, K6 and Athlon series to compete directly with Intel's offerings. It acquired ATI in 2006 to strengthen its graphics card business against Nvidia.
Mobile Processors in Today's Phones and Tablets. Today's smartphone and mobile processors are powerful enough to rival desktop computers. Core counts have grown from single core to dual, quad, hexa, and even octa core, and most processors are now 64-bit rather than the 32-bit designs they started with. Clock speeds have reached 3.0-3.5 GHz. Integrating a GPU (graphics processing unit) into the mobile processor lets devices deliver high-end graphics, 3D and virtual reality capability, and 4K recording, and improved processor technology has also made modern mobile devices more power efficient. This article surveys the different processors used in phones, tablets, and laptops.
CXL is an open standard for connecting CPUs, GPUs, and accelerators that maintains memory coherency. It aims to provide high-speed, low-latency connections while enabling these devices to directly access each other's memory. CXL builds on PCIe physically but introduces new protocols for memory coherency and acceleration that make it well-suited for AI, machine learning, and high performance computing workloads. CXL devices come in three types - Type 1 devices have caches, Type 2 devices have local memory accessible by the CPU, and Type 3 devices are memory expanders.
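The three-way device taxonomy above can be captured in a small model based on which CXL protocols each type runs (CXL.cache for device-side caching of host memory, CXL.mem for host access to device memory). The class and variable names are illustrative, not an API from any CXL library.

```python
# A small model of the three CXL device types: Type 1 runs CXL.cache only,
# Type 2 runs both CXL.cache and CXL.mem, and Type 3 (memory expanders)
# runs CXL.mem only. All three also use CXL.io for discovery and I/O.

from dataclasses import dataclass

@dataclass(frozen=True)
class CXLDevice:
    name: str
    uses_cache: bool  # CXL.cache: device coherently caches host memory
    uses_mem: bool    # CXL.mem: host accesses device-attached memory

TYPE1 = CXLDevice("Type 1 (e.g. caching SmartNIC)", uses_cache=True, uses_mem=False)
TYPE2 = CXLDevice("Type 2 (accelerator with local memory)", uses_cache=True, uses_mem=True)
TYPE3 = CXLDevice("Type 3 (memory expander)", uses_cache=False, uses_mem=True)
```

Seen this way, a Type 3 memory expander is simply a device that exposes capacity over CXL.mem without ever caching host memory itself.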
This document provides information about the XPS 16550 UART, XPS Serial Peripheral Interface (SPI), XPS Timer/Counter, and associated tools. It describes the features and modules of each peripheral component, including diagrams of their top-level and detailed block designs. Key aspects like supported device families, register modules, and operating modes are summarized for each component.
Q1 Memory Fabric Forum: Building Fast and Secure Chips with CXL IP (Memory Fabric Forum)
Gary Ruggles, Sr. Product Manager for PCIe and CXL Controller IP, presents example use cases for CXL adoption, an introduction to Synopsys CXL IP solutions, and interop proof points.
The document discusses neurosynaptic chips and their advantages over conventional chips. It provides an introduction to neurosynaptic systems and artificial neural networks. It then compares neurosynaptic chips to conventional chips in terms of architecture, complexity, power efficiency, density and speed. Neurosynaptic chips are more efficient and dense as they mimic the brain's architecture by integrating processing and storage. The document also analyzes the performance of neurosynaptic systems from IBM, Stanford and other research organizations compared to the human brain.
This document provides an overview of system on chip (SoC) interconnect architectures and standard bus protocols. It discusses key considerations for choosing an interconnect architecture such as bandwidth, latency, and clock domains. Common SoC bus standards including AMBA, CoreConnect, and Wishbone are described along with their bus architectures and components. The document also provides details on specific buses within standards, such as AMBA's AHB, ASB, and APB buses and CoreConnect's PLB, OPB, and DCR buses.
During the CXL Forum at OCP Global Summit, Michael Ocampo of Astera Labs explained the problem of the memory wall, and how CXL memory powered by Astera Labs can break through
This document discusses NVIDIA's technologies for artificial intelligence and accelerated computing. It highlights NVIDIA's GPUs, systems, SDKs, and frameworks that power AI workloads at scale. These include the H100 GPU, DGX systems, Triton inference server, RAPIDS libraries, and Omniverse platform for simulation and digital twins. The document also outlines key applications and industries that are being accelerated by NVIDIA's technologies like autonomous vehicles, healthcare, robotics, and more.
This document provides information about Intel processors, specifically the i3, i5, and i7 models. It discusses the basic features of each line including core counts, clock speeds, cache sizes, and the inclusion of technologies like hyperthreading. The i3 is positioned as the entry-level option with dual cores, lower speeds and smaller caches. The i5 is mid-range with quad cores on some models and larger caches. The i7 is the high-end option with support for quad, hex, and octa-core configurations along with the largest caches and inclusion of hyperthreading across all models.
STT MRAM for Artificial Intelligence Applications (Danny Sabour)
Yiming Huai of Avalanche Technology presented on STT-MRAM and its applications for artificial intelligence at Semicon Taiwan 2020. STT-MRAM offers benefits over other memory types like flash and SRAM: unlimited endurance, high-speed performance comparable to DRAM, non-volatility, and the ability to scale to smaller nodes. Avalanche has developed pMTJ technology that achieves fast write speeds of 20 ns or less. Measurement results showed STT-MRAM macros achieving endurance of over 10^14 cycles and 10-year data retention at 125 °C, manufactured on a 22 nm process through foundry partnerships. STT-MRAM is well suited for edge AI applications.
The use of multi-camera applications is exploding. We're not just talking about 2 cameras for 3D or depth sensing, but 3-12 cameras for applications like drones, robotics, and automotive. The increasing use of multiple cameras, combined with the broad market's growing use of mobile components such as application processors, image sensors, and displays, requires logic to connect these devices. In this presentation, Ted Marena of Microsemi explains how FPGAs can be used to leverage mobile components and aggregate a large number of MIPI CSI-2 camera interfaces.
This document discusses AMD processors and their history. It provides details about AMD's first in-house x86 processor (K5), the introduction of the Athlon processor in 1999, and AMD's development of 64-bit processors including the Opteron and Sempron. Pros of AMD processors include competitive gaming performance and integrated security features, while cons include limited memory compatibility and potential overheating issues in older models. The document recommends AMD for their competitive pricing and power efficiency.
The document discusses Ovonic Unified Memory (OUM), a promising emerging non-volatile memory technology that uses a reversible phase change in chalcogenide materials to store data. OUM offers advantages over existing memory technologies like DRAM, SRAM, and flash memory by allowing higher density stacking and greater endurance. The document provides background on the history and development of OUM, which was originally explored in the 1960s and offers potential to address scaling limitations of other memory technologies.
As generative AI adoption grows at record-setting speeds and computing demands increase, hybrid processing is more important than ever. But just like traditional computing evolved from mainframes and thin clients to today’s mix of cloud and edge devices, AI processing must be distributed between the cloud and devices for AI to scale and reach its full potential. In this talk you’ll learn:
• Why on-device AI is key
• Which generative AI models can run on device
• Why the future of AI is hybrid
• Qualcomm Technologies’ role in making hybrid AI a reality
Pure Storage Company Presentation (Ruben Wu)
Pure Storage is an enterprise data storage company that provides all-flash arrays and software subscriptions to businesses. Founded in 2009, Pure Storage released its first product, FlashArray, in 2011 and went public in 2014. With over 2500 employees currently, Pure Storage has become a market leader through its cutting-edge technology, integrated hardware and software solutions, and focus on customer support. The company is predicted to continue succeeding against competitors through its startup culture work environment and leadership in the all-flash storage market.
Delivering Carrier Grade OCP for Virtualized Data Centers (Radisys Corporation)
This webinar explores the requirements for carrier grade Open Compute Project (OCP) infrastructure for virtualized telecom data centers delivering SDN and NFV for digital services.
Lenovo's Cloud Network Operating System (CNOS) enables enterprise networks to scale in cloud environments. CNOS provides programmability, cloud scale, and resilience through features like event-driven multi-process architecture, fault isolation, high availability, state of the art routing protocols, and 32-way multipath scale out. CNOS works with software-defined data center tools to automate provisioning, configuration, and orchestration at large scale.
The document outlines Madge Perspective's strategy for advancing Token Ring networking into the 21st century by addressing scalability, affordability, new applications, and integration with Ethernet. It proposes using high-speed Token Ring, micro-segmentation switches, affordable workgroup switches, and standards-based integration of Ethernet and Token Ring via 802.1q to meet changing network demands and support new applications.
Platforms for Accelerating the Software Defined and Virtual Infrastructure (6WIND)
As network infrastructures evolve and selected elements shift from physical systems to virtual functions a new class of network appliance is required that provides high performance processing, balanced I/O and hardware or software acceleration. Such a platform must combine standard server technology and modular systems that can be configured to support line rate performance with network interfaces up to 100Gbit/s.
This webinar will discuss a class of network appliance that offers performance levels previously requiring more complex and costly architectures while integrating seamlessly with standard software frameworks such as Linux, Open vSwitch (OVS) and Intel® Data Plane Development Kit (DPDK).
Broad Sky explores two of the latest wireless technologies delivering the fastest LTE speeds on the market. Carrier aggregation, QAM, MIMO, and bonding technologies make the most of LTE carrier networks on the path to 5G. Find out how the need for speed has driven these technologies and what they can do for your customers.
In this deck, Gilad Shainer from Mellanox provides an overview of new products for the SC15 conference.
Learn more: http://mellanox.com
Watch the video presentation: http://wp.me/p3RLHQ-eJu
This document discusses disruptive technologies, specifically how Moore's Law has impacted the technology industry and networking. It provides three key points:
1. Moore's Law, which predicted the doubling of transistors on integrated circuits every two years, has been the guiding principle for new product development. However, for networking, transistor count has doubled but speed has increased slowly.
2. Networking performance has not kept up with Moore's Law like CPU performance has. Network ASICs have increased 10x over 12 years while CPUs increased 64x.
3. Merchant silicon using full custom chip designs has allowed networking to scale at Moore's Law growth rates, providing higher port density, lower price per port, and lower power consumption.
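The growth figures in the points above can be checked with a little arithmetic: doubling every two years over 12 years gives 2^6 = 64x, matching the quoted CPU number, while the 10x network-ASIC figure implies a much slower doubling period.

```python
# Verifying the Moore's Law arithmetic quoted above: 64x CPU growth over
# 12 years is exactly a doubling every 2 years; 10x ASIC growth over the
# same span corresponds to a doubling period of about 3.6 years.

import math

years = 12
cpu_growth = 2 ** (years / 2)                      # double every 2 years
asic_growth = 10                                   # quoted network ASIC gain
asic_doubling_period = years / math.log2(asic_growth)

print(cpu_growth)                       # 64.0
print(round(asic_doubling_period, 1))   # 3.6
```

In other words, network silicon was improving at barely more than half the Moore's Law pace until merchant silicon closed the gap.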
White Box Hardware Challenges in the 5G & IoT Hyperconnected Era (Charo Sanchez)
The development of an agile mobile network that supports a massive number of connected devices, low latencies, broadband speeds, network slicing, and edge intelligence is the result of a number of technologies that form the 5G vision. Advantech 5G Edge Servers and Universal Edge Appliances have been designed for the network edge to meet high availability network needs providing an open virtual infrastructure for seamless network transformation toward cloud native 5G architectures. From SD-WAN and private networks to virtual RAN, Central Office and Edge Cloud, Advantech is enabling the co-creation of products and services that will form the backbone of the new 5G & IoT economy.
www.advantech.com/nc/spotlight/5G
Molex and Nvidia - Partnership to enable copper for the next generation artificial intelligence computing
1. Partnership to enable copper for the next
generation artificial intelligence computing
2. Partnership to enable copper for the next
generation artificial intelligence computing
[Seunghyun Eddy Hwang, Principal SI Lead, NVIDIA]
[Wai Kiong Poon, Global Product Manager, MOLEX]
3. Enabling Copper for Artificial Intelligence Computing
Next Generation Architecture
4. Voice of Customer
High Speed Board-To-Board (112G PAM4 and beyond)
Low Profile (5mm mated height as initial target)
Surface Mount Termination (Solder Ball Attachment)
Minimum PCB Real-Estate
Mechanically Robust
Simplicity in Design
Ease of manufacturing; Cost Effective
High Speed Mezzanine Connector
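The "112G PAM4 and beyond" requirement above translates into concrete analog bandwidth targets. A quick sketch of the arithmetic (my own illustration, not from the deck): PAM4 carries two bits per symbol, so the symbol rate is half the data rate, and the Nyquist frequency is half the symbol rate.

```python
import math

def pam4_link_params(data_rate_gbps: float) -> dict:
    """Back-of-the-envelope signaling numbers for a PAM4 serial link."""
    bits_per_symbol = math.log2(4)          # PAM4 encodes 2 bits per symbol
    baud_gbd = data_rate_gbps / bits_per_symbol
    nyquist_ghz = baud_gbd / 2              # fundamental of the fastest alternating pattern
    return {"baud_gbd": baud_gbd, "nyquist_ghz": nyquist_ghz}

for rate in (112, 224):
    p = pam4_link_params(rate)
    print(f"{rate}G PAM4: {p['baud_gbd']:.0f} GBd, Nyquist ~{p['nyquist_ghz']:.0f} GHz")
```

So moving from 112G to 224G pushes the Nyquist frequency from roughly 28 GHz to roughly 56 GHz, which is why the connector geometry tolerances discussed in the following slides become so much tighter.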
5. Evolution of High-Speed Mezzanine
• MM — speed up to 112Gbps
• MM Pro — speed up to 112Gbps (improved performance)
• MMe — speed up to 224Gbps
6. High Speed Mezzanine Connector
What have we learnt from the current Mezzanine design?
— What are the current issues?
— What are the Signal Integrity limitations?
Moving from 112G to 224G:
— Understanding issues with the current design
— Understanding limitations of the current assembly process
These understandings will enable us to design the next generation of High-Speed Board-to-Board Connector
7. SI Performance Influenced by Terminal & Assembly Process
Good impedance control is critical for 224G applications, but impedance optimization is limited by the following:
— Connector design
— Assembly process
Due to the current stitched terminal design, terminal-width tuning has little flexibility and is highly sensitive
Dimension constraints arise from the assembly process
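Why terminal-width tuning matters so much: for a lossless line, Z0 = sqrt(L/C), so any geometry change that shifts per-unit-length capacitance (a wider terminal, or a terminal sitting closer to the plastic housing) pulls the impedance away from target. A minimal numeric sketch, with hypothetical L and C values chosen only to land near an 86 Ω reference (not measured connector data):

```python
import math

def z0_ohm(L_nH_per_m: float, C_pF_per_m: float) -> float:
    """Lossless transmission-line impedance Z0 = sqrt(L/C)."""
    return math.sqrt((L_nH_per_m * 1e-9) / (C_pF_per_m * 1e-12))

# Hypothetical nominal per-unit-length values chosen to give ~86 ohm
L_nom, C_nom = 444.0, 60.0   # nH/m, pF/m
print(f"nominal Z0 = {z0_ohm(L_nom, C_nom):.1f} ohm")

# Z0 scales as 1/sqrt(C), so dZ/Z ~= -0.5 * dC/C:
# a few percent of extra capacitance costs a few percent of impedance
for dC_pct in (2, 5, 10):
    z = z0_ohm(L_nom, C_nom * (1 + dC_pct / 100))
    print(f"+{dC_pct}% C -> Z0 = {z:.1f} ohm")
```

The point of the sketch: at 224G even a few-percent capacitance shift, well within normal assembly variation, produces an impedance excursion that the design has no room to absorb.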
8. Limitations of the Current Design
Minor variations in the dimensions result in resonance
This type of variation is very difficult to control during the assembly process
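A rough sense of why millimeter-scale dimensional variation produces in-band resonance: a structure of length l embedded in a dielectric of relative permittivity eps_r has its first half-wave resonance at f = c / (2 · l · sqrt(eps_r)). The feature sizes and eps_r below are my own illustrative assumptions, not Molex data:

```python
C0 = 299_792_458.0  # speed of light, m/s

def halfwave_resonance_ghz(length_mm: float, eps_r: float) -> float:
    """First half-wave resonance of a structure embedded in dielectric eps_r."""
    return C0 / (2 * length_mm * 1e-3 * eps_r ** 0.5) / 1e9

# Hypothetical connector feature sizes (mm) in a plastic of eps_r ~ 3.0
for length in (1.0, 1.5, 2.0):
    f = halfwave_resonance_ghz(length, 3.0)
    print(f"{length} mm feature -> first resonance ~ {f:.0f} GHz")
```

With a 224G PAM4 Nyquist around 56 GHz, features in the 1.5–2 mm range resonate right in the signal band, and a small dimensional shift moves the resonance across the band — consistent with the slide's point that this variation is hard to control in assembly.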
9. Sensitivity of SI Performance due to Terminal Deflection
[Chart: TDR, connector only]
Aside from dimension variation, the impedance is also sensitive to the deflection condition, which affects the distance between the terminal and the nearby plastic
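What the TDR plot is showing can be sketched with the reflection-coefficient relation: a TDR step sees rho = (Z − Zref) / (Z + Zref) at each local impedance discontinuity, so a deflection-induced impedance excursion reads directly as a bump on the trace. The excursion values below are illustrative, not measured; the 86 Ω reference matches the reference impedance quoted later in the deck:

```python
def tdr_reflection(z_local: float, z_ref: float = 86.0) -> float:
    """Reflection coefficient seen by a TDR step at a local impedance discontinuity."""
    return (z_local - z_ref) / (z_local + z_ref)

# Terminal deflection changes the terminal-to-plastic spacing, shifting local impedance.
# Illustrative excursions around the 86-ohm reference:
for z in (78.0, 86.0, 94.0):
    rho = tdr_reflection(z)
    print(f"Z = {z:.0f} ohm -> rho = {rho:+.3f}")
```

A negative rho (capacitive dip, terminal pressed toward the plastic) and a positive rho (inductive bump) are the two signatures a connector TDR sweep is trying to keep inside the mask.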
11. Evolution of High-Speed Mezzanine
• MM — speed up to 112Gbps
• MM Pro — speed up to 112Gbps (improved performance)
• MMe — speed up to 224Gbps
12. Comparing MM, MM Pro vs. MMe (Signal Integrity)
[Chart: victim: Diff 4; aggressors: others; reference impedance 86 Ω]
13. Comparing MM, MM Pro Vs. MMe (Signal Integrity)
14. What is NVLink?
• It’s no secret that GPU accelerators now power many of the world’s fastest supercomputers and AI systems
• NVLink is Nvidia’s proprietary high-speed system interconnect that allows multiple GPUs to communicate directly with one another
• NVLink connects the machines’ processors (CPUs and GPUs) so they can exchange data much faster than they could over traditional PCIe
15. NVIDIA DGX
• The Nvidia Tesla V100 is the world’s most advanced data center GPU, supporting AI, deep learning, HPC, and autonomous driving
• The DGX H100, which uses the 4th generation of NVLink, is the world’s most advanced GPU system for large generative AI and other transformer-based workloads
• The H100 contains 8 GPU modules that communicate through NVLink, which needs a high-speed connector for baseboard attachment
16. Improved H100 Performance Brings SI Challenges
• Significant performance boost from the A100
• Fourth-generation NVIDIA NVLink provides a 3x bandwidth increase on all-reduce operations, and multi-GPU IO operates at 7x the bandwidth of PCIe Gen 5
• Yet there is no significant form-factor change to accommodate the improved performance, which introduces significant SI challenges
[Figure: A100 vs. H100]
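The "7x the bandwidth of PCIe Gen 5" claim can be sanity-checked from publicly quoted figures (my own arithmetic, not from the slides): PCIe Gen5 runs 32 GT/s per lane with 128b/130b encoding, and NVIDIA quotes 900 GB/s of total bidirectional NVLink bandwidth per H100 GPU.

```python
# PCIe Gen5: 32 GT/s per lane, 128b/130b encoding, x16 link
pcie5_per_dir = 32e9 * (128 / 130) * 16 / 8 / 1e9   # ~63 GB/s each direction
pcie5_x16_bidir = 2 * pcie5_per_dir                  # ~126 GB/s bidirectional

# NVLink 4 (H100): NVIDIA's quoted aggregate figure per GPU
nvlink4_bidir = 900.0                                # GB/s bidirectional

print(f"PCIe Gen5 x16: ~{pcie5_x16_bidir:.0f} GB/s bidirectional")
print(f"NVLink 4     : {nvlink4_bidir:.0f} GB/s bidirectional")
print(f"ratio        : ~{nvlink4_bidir / pcie5_x16_bidir:.1f}x")
```

The ratio lands at roughly 7.1x, matching the slide — and all of that bandwidth has to cross the board-to-board mezzanine connector, which is why the SI margins above are so tight.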
17. Mezzanine Connector Selection Criteria
• SI performance: as shown in the crosstalk comparison plot, the other vendor’s crosstalk is much worse than Molex’s
• Form factor: the Molex pinout can route all NVLink high-speed signals in fewer layers than the other vendor’s
• Mechanical stability
[Plot legend: Molex vs. other vendor]