OCP Steering Committee member and ex-President of the CXL Consortium, Siamak Tavallaei, provides an update on the CXL specifications with a focus on the recently released 3.1 specification.
Torry Steed, Sr. Product Marketing Manager at SMART Modular, provides an overview of CXL PCIe Add-in Cards (AICs) and memory modules that can be used to expand capacity in servers or in external memory pooling systems.
Q1 Memory Fabric Forum: Memory Processor Interface 2023, Focus on CXL | Memory Fabric Forum
Thibault Grossi, Sr. Technology & Market Analyst, shares excerpts from the recently published report, Memory Processor Interface, Focus on CXL. The report provides a taxonomy of CXL market segments and revenue forecasts through 2028.
Q1 Memory Fabric Forum: Intel Enabling Compute Express Link (CXL) | Memory Fabric Forum
- Memory-intensive workloads are dominating computing, and increasing memory capacity with CPU-attached DRAM alone is getting expensive.
- CXL allows augmenting system memory footprint at lower cost by running over existing PCIe links to add memory outside of the CPU package.
- Intel Xeon roadmap fully supports CXL starting with 5th Gen Xeons, and Intel CPUs offer unique hardware-based tiering modes between native DRAM and CXL memory without depending on the operating system.
- CXL has full industry support as the standard for coherent input/output.
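On Linux, CXL-attached memory commonly surfaces as a CPU-less NUMA node, which is what tiering software (hardware- or OS-based) uses to tell near memory from far memory. The sketch below is illustrative, not Intel's tiering implementation: it classifies NUMA nodes by whether they have local CPUs, with a sysfs helper that assumes a Linux system.

```python
from pathlib import Path

def cpuless_nodes(node_cpulists: dict[int, str]) -> list[int]:
    """Return NUMA node IDs with no local CPUs.

    CXL memory expanders typically appear as CPU-less NUMA nodes,
    so these are candidate far-memory tiers.
    """
    return sorted(n for n, cpus in node_cpulists.items() if not cpus.strip())

def read_sysfs_nodes() -> dict[int, str]:
    """Read node -> cpulist from sysfs (Linux only; path is standard)."""
    nodes = {}
    for d in Path("/sys/devices/system/node").glob("node[0-9]*"):
        nodes[int(d.name[4:])] = (d / "cpulist").read_text().strip()
    return nodes

# Example topology: nodes 0 and 1 have CPUs, node 2 is a CXL expander.
example = {0: "0-31", 1: "32-63", 2: ""}
print(cpuless_nodes(example))  # [2]
```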
During the CXL Forum at OCP Global Summit, SMART Modular Director of Product Marketing Arthur Sainio provides an overview of the company's CXL memory cards and modules.
During the CXL Forum at OCP Global Summit 23, Rick Kutcipal and Sreeni Bagalkote of Broadcom presented their PCIe/CXL Roadmap and announced their Atlas 4 CXL switch.
During the CXL Forum at OCP Global Summit, memory system architect Jungmin Choi of SK hynix talks about the need for memory bandwidth and capacity, and the SK hynix Niagara solution.
During the CXL Forum at OCP Global Summit, Enfabrica CEO Rochan Sankar described how to bridge the network and memory worlds with their accelerated compute fabric switch.
During the CXL Forum at OCP Global Summit, Dharmesh Jani of Meta and Siamak Tavallaei of the CXL Consortium describe the extensive work being done by the Open Compute Project related to CXL.
During the CXL Forum at OCP Global Summit, Mahesh Wagh, CXL Consortium TTF Co-chair and Senior Fellow at AMD, presented an update on the CXL Consortium mission and roadmap.
In the CXL Forum Theater at SC23 hosted by MemVerge, the Open Compute Project provided an overview of CXL, as well as CXL-related hardware and software projects at OCP.
Arm: Enabling CXL devices within the Data Center with Arm Solutions | Memory Fabric Forum
During the CXL Forum at OCP Summit, Arm Director of Segment Marketing Parag Beeraka provides an overview of the Arm portfolio of CXL products for the data center.
Linux Memory Management with CMA (Contiguous Memory Allocator) | Pankaj Suryawanshi
Fundamentals of Linux memory management and CMA (Contiguous Memory Allocator) in Linux.
Topics include virtual memory, physical memory, swap space, DMA, IOMMU, paging, segmentation, TLB, hugepages, and ION (Google's memory manager for Android).
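The TLB and hugepages topics above can be made concrete with a little arithmetic: each mapped page consumes a TLB entry, so larger pages let the TLB cover far more memory. A minimal sketch, using standard x86-64 page sizes (4 KiB base pages, 2 MiB hugepages):

```python
def pages_needed(buf_bytes: int, page_bytes: int) -> int:
    """Number of pages (and thus page-table/TLB entries) to map a buffer."""
    return -(-buf_bytes // page_bytes)  # ceiling division

GiB = 1 << 30
buf = 1 * GiB
small = pages_needed(buf, 4 * 1024)         # 4 KiB base pages
huge = pages_needed(buf, 2 * 1024 * 1024)   # 2 MiB hugepages

# Mapping 1 GiB takes 262144 base pages but only 512 hugepages,
# which is why hugepages cut TLB misses for large working sets.
print(small, huge)  # 262144 512
```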
Shared Memory Centric Computing with CXL & OMI | Allan Cantle
Discusses how CXL can be better utilized as a fabric cache domain separate from a processor's own local cache domain. This is done by leveraging a shared-memory-centric architecture that uses both the Open Memory Interface (OMI) and Compute Express Link (CXL) for the memory ports.
All Presentations during CXL Forum at Flash Memory Summit 22 | Memory Fabric Forum
The document summarizes a full-day forum hosted by the CXL Consortium and MemVerge on CXL. The morning agenda includes presentations on CXL from representatives of Google, Intel, PCI-SIG, Marvell, Samsung, and Micron. The afternoon agenda includes panels on CXL usage models from Meta, OCP, Anthropic, and MemVerge. A keynote presentation provides an update on the CXL Consortium and the recently released CXL 3.0 specification, including its expanded fabric capabilities and management features. The specification is aimed at enabling new usage models for memory sharing and expansion to address industry trends toward increased data processing demands.
• Simplifies the virtualization environment for IBM Power systems
• Increases system administrator productivity
• Enables rapid replication and creation of new VMs
• New functions and recovery capabilities are being released
1) cuDNN is a library of deep learning primitives for GPUs that provides highly tuned implementations of routines such as convolutions, pooling, and activation layers.
2) Version 2 of cuDNN focuses on improved performance and new features for deep learning practitioners. It supports 3D datasets and new GPUs like Tegra X1.
3) cuDNN can be enabled in frameworks like Caffe and Torch with minor code changes, and it exposes a straightforward API for deep learning routines.
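To ground what a "convolution primitive" computes, here is a minimal, unoptimized valid-mode 2D cross-correlation in pure Python. This is only the reference semantics of the operation cuDNN implements with highly tuned GPU kernels; the inputs are toy values.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the core op cuDNN accelerates)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for y in range(ih - kh + 1):          # slide the kernel over the image
        for x in range(iw - kw + 1):
            out[y][x] = sum(
                image[y + i][x + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            )
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
k = [[1, 0],
     [0, 1]]  # 2x2 diagonal kernel
print(conv2d(img, k))  # [[6, 8], [12, 14]]
```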
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Oracle Cloud Infrastructure (OCI) is a secure, scalable, and highly available cloud computing service provided by Oracle. It offers infrastructure services like compute, storage, and networking, and features built-in security, high performance, and hybrid integration capabilities. Customers can use OCI to run enterprise workloads, develop applications, process big data, and more, with flexible pricing and 24/7 technical support.
This document summarizes a presentation on the Xen hypervisor. It begins with an introduction explaining that Xen allows multiple virtual machines to run simultaneously on one physical computer. It was developed by XenSource and released in 2003 as open source software. Xen uses para-virtualization, requiring guest operating systems to be modified for its environment. The document then describes Xen's features, such as being a type-1 hypervisor that controls hardware directly, and notes that it supports various guest operating systems. The architecture of Xen, including its components like the hypervisor, Domain-0, and guest virtual machines, is explained. Advantages like consolidation and cost savings are contrasted with disadvantages like service reliability. A comparison of Citrix XenServer and VMware shows XenServer is open source while VMware is proprietary.
Hardware for deep learning includes CPUs, GPUs, FPGAs, and ASICs. CPUs are general purpose but support deep learning through instructions like AVX-512 and libraries. GPUs like NVIDIA and AMD models are commonly used due to high parallelism and memory bandwidth. FPGAs offer high efficiency but require specialized programming. ASICs like Google's TPU are customized for deep learning and provide high performance but limited flexibility. Emerging hardware aims to improve efficiency and better match neural network computations.
During the CXL Forum at OCP Global Summit, MemVerge software architect Steve Scargall defines the CXL software stack and where the development is being done.
During the CXL Forum at OCP Global Summit, Jeff Hilland of HPE explained what the CXL Consortium, PCI-SIG, DMTF, OFA, OCP, and SNIA are doing to make CXL fabric, memory, and device management interoperable.
Synopsys: Achieve First Pass Silicon Success with Synopsys CXL IP Solutions | Memory Fabric Forum
This document discusses Synopsys' CXL IP solutions for enabling first pass silicon success. It provides an overview of:
- How large data sets are driving the need for CXL and larger, more efficient cache coherent storage.
- How CXL allows memory expansion by enabling one interface to connect to various memory types like DDR, LPDDR, and persistent memory.
- Synopsys' complete CXL IP solution which uses proven PCIe IP to provide a highly efficient 512-bit controller and 32GT/s PHY for maximum bandwidth and low latency.
- Synopsys' work with XConn to achieve first-pass silicon success on a 256-lane CXL 2.0 switch SoC.
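The "32 GT/s PHY" figure above translates into link bandwidth with simple arithmetic: 32 GT/s is PCIe 5.0 signaling (which CXL 2.0 runs over) with 128b/130b line encoding, and the x16 width below is an illustrative choice, not a figure from the talk.

```python
def raw_link_gbps(gts: float, lanes: int,
                  enc_payload: int = 128, enc_total: int = 130) -> float:
    """Raw one-direction link bandwidth in GB/s.

    gts: per-lane transfer rate in GT/s. 128b/130b is the line
    encoding used by PCIe 3.0 and later (including 32 GT/s PCIe 5.0).
    """
    bits_per_s = gts * 1e9 * lanes * enc_payload / enc_total
    return bits_per_s / 8 / 1e9

bw = raw_link_gbps(32, 16)  # hypothetical x16 CXL/PCIe 5.0 link
print(round(bw, 1))  # ~63.0 GB/s per direction, before protocol overhead
```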
Compute Express Link (CXL) is an open industry-standard memory interconnect. It offers fast connectivity between the many forms of memory utilized in modern data centers and the processors that use them, including CPUs, TPUs, GPUs, and other processor types.
To know more about “Compute Express Link (CXL) – Everything You Ought To Know,” click on the link https://www.logic-fruit.com/blog/cxl/compute-express-link-cxl/
About Logic Fruit Technologies
Logic Fruit Technologies is a product engineering R&D and consulting services provider for embedded systems and application development. We provide end-to-end solutions, from the conception of the idea and design to the finished product. We have been serving customers globally for over a decade.
The company has specific experience in various fields, such as
-FPGA Design & hardware design
-RTL IP Design
-A variety of digital protocols
-Communication buses such as 1G and 10G Ethernet
-PCIe
-DIGRF
-STM16/64
-HDMI.
Logic Fruit Technologies is also an expert in developing:
-software-defined radio (SDR) IPs
-Encryption
-Signal generation
-Data analysis, and
-Multiple Image Processing Techniques.
Recently, Logic Fruit Technologies has also been exploring FPGA acceleration in data centers for real-time data processing.
**Our Social Media Channels**
Facebook: https://www.facebook.com/LogicFruit/
Twitter: https://twitter.com/logicfruit
LinkedIn: https://www.linkedin.com/company/logi…
Website: https://www.logic-fruit.com/
#LFT #LogicFruitTechnologies #LogicFruit
Interested to view more Slide shares, Click on the below links,
https://www.slideshare.net/LogicFruit/a-designers-practical-guide-to-arinc-429-standard-3pptx
https://www.slideshare.net/LogicFruit/a-swift-introduction-to-milstd
https://www.slideshare.net/LogicFruit/arinc-the-ultimate-guide-to-modern-avionics-protocol
https://www.slideshare.net/LogicFruit/arinc-629-digital-data-bus-specifications
https://www.slideshare.net/LogicFruit/afdx
https://www.slideshare.net/LogicFruit/end-system-design-parameters-of-the-arinc-664-part-7
The researchers redesigned the software architecture for the sample handling robots at the Australian Synchrotron to improve reliability and enable new capabilities. The new architecture moves all robot control code to the robot's native SPEL language, exposes robot state through EPICS, and implements a client-server model with the robot controller interfacing through Python. This will allow upgrades like integrating a new web interface and enabling non-stop sample mounting while collecting data.
Evaluating UCIe based multi-die SoC to meet timing and power | Deepak Shankar
This document discusses evaluating a UCIe-based multi-die system-on-chip (SoC) using system modeling to meet timing and power constraints. It provides an overview of UCIe and how it can be used to connect multiple dies. It then describes assembling a system model in VisualSim Architect using UCIe components to analyze configurations and optimize latency, bandwidth, and power. Examples of multi-media and automotive applications using UCIe-based chiplet designs are also presented.
Architecting for Hyper-Scale Datacenter Efficiency | Intel IT Center
Diane Bryant, SVP & GM of Intel's Datacenter & Connected Systems Group, discusses "Architecting for Hyper-Scale Datacenter Efficiency." She reviews the newly announced Intel Atom Processor C2000, now in production and the first "Silvermont"-based SoC (System on a Chip), and the more than 50 new system designs enabled: microserver, cold storage, and entry networking solutions. Diane also outlines the new Rack Scale Architecture technologies enabled by next-generation interconnects using Intel silicon photonics.
Q1 Memory Fabric Forum: Building Fast and Secure Chips with CXL IP | Memory Fabric Forum
Gary Ruggles, Sr. Product Manager for PCIe and CXL Controller IP, provides example use cases for adoption of CXL, an introduction to Synopsys CXL IP solutions, and interop proof points.
Accelerating system verilog uvm based vip to improve methodology for verifica... | VLSICS Design
In this paper we present the development of Acceleratable UVCs from standard UVCs in SystemVerilog and their usage in a UVM-based verification environment for image signal processing designs to increase run-time performance. The paper covers the development of Acceleratable UVCs from standard UVCs for the internal control and data buses of the ST imaging group, by partitioning transaction-level components and cycle-accurate signal-level components between the software simulator and the hardware accelerator respectively. A Standard Co-Emulation API: Modeling Interface (SCE-MI) compliant, transaction-level communication link is established between testbenches running on a host system and the emulation machine. Accelerated verification IPs are used in the UVM-based verification environment of image signal processing designs with both simulator and emulator, since UVM acceleration is an extension of standard simulation-only UVM and is fully backward compatible with it. Acceleratable UVCs significantly reduce development schedule risk while leveraging the transaction models used during simulation.
We also discuss our experiences adopting a UVM-based methodology on TestBench-Xpress (TBX) technology step by step, and compare run-time performance between the earlier simulator-only environment and the new, hardware-accelerated environment. Although this paper focuses on the development of Acceleratable UVCs and their usage for image signal processing designs, the same concept can be extended to non-image-signal-processing designs.
Performance of State-of-the-Art Cryptography on ARM-based Microprocessors | Hannes Tschofenig
Position paper for the NIST Lightweight Cryptography Workshop, 20th and 21st July 2015, Gaithersburg, US.
The link to the workshop is available at: http://www.nist.gov/itl/csd/ct/lwc_workshop2015.cfm
Ecosystem Alliance Manager Michael Ocampo talks about the CXL industry's effort to break through the memory wall, memory bound use cases, CXL for modular shared infrastructure, and critical CXL collaboration that's happening now.
Intel open stack-summit-session-nov13-final | Deepak Mane
- Intel is a major contributor to OpenStack and open source projects, contributing across every layer of the OpenStack stack. As the #2 Linux kernel contributor, Intel helps improve performance, stability, and efficiency.
- Intel enables OpenStack cloud deployments through contributions to OpenStack projects, open source tools, and optimizations. Intel IT also uses OpenStack in their own private cloud.
- Intel is working on technologies to address challenges in datacenters, including security and compliance, cost reduction, and business uptime. Technologies include trusted compute pools, erasure coding, and enhanced platform awareness.
Test system architectures using advanced standardized test languages | Miguel Conde-Ferreira
This document discusses using advanced standardized test languages like TTCN-3, UML testing profile, and TDL to test system architectures. It provides an overview of TTCN-3, including its design principles, domains of use in telecom and other industries, and examples of test system architectures using TTCN-3. Key benefits of TTCN-3 include its ability to test distributed systems, support various test types, and enable graphical test development, documentation and analysis.
A computer cluster is a group of tightly coupled computers that work together as a single computer. Clusters provide increased processing power at lower costs compared to single computers. They improve availability by eliminating single points of failure. Additional nodes can be added to a cluster to increase its overall capacity as processing demands grow. Key components of clusters include processors, memory, fast networking components, and specialized cluster software.
The document discusses the verification of the QorIQ Communication Platform containing the CoreNet Fabric using SystemVerilog. It describes the QorIQ platform as an SoC processor containing single, dual, and many cores that offers high performance, power efficiency, and programmability. Specifically, it highlights the QorIQ P4080 processor, which integrates eight Power Architecture cores, a tri-level cache hierarchy, and an innovative CoreNet fabric and data path acceleration. The presentation will focus on the verification challenges and solutions in verifying the CoreNet platform using SystemVerilog.
Machbase Neo is an innovative IoT data processing solution that integrates various features into an all-in-one time-series database.
In the past, development organizations had to invest a lot of time and resources to build a single service or solution. Moreover, they had to navigate complex and challenging processes for data collection and processing. But now, with the introduction of Machbase Neo, these problems have been solved. You can now set up everything using just one Machbase Neo server, allowing developers to focus on their core tasks. This product can save developers over 90% of their time by eliminating unnecessary tasks.
Similar to Q1 Memory Fabric Forum: Compute Express Link (CXL) 3.1 Update
Q1 Memory Fabric Forum: ZeroPoint. Remove the waste. Release the power. | Memory Fabric Forum
Nilesh Shah provides an overview of the ZeroPoint portable hardware IP portfolio for lossless memory compression and compaction. The IP boosts memory capacity 2-4x, improves bandwidth and performance/watt by 50%, and is 1,000x faster than competitors.
Q1 Memory Fabric Forum: Using CXL with AI Applications - Steve Scargall | Memory Fabric Forum
MemVerge product manager and software architect Steve Scargall discusses key factors related to the use of CXL with AI apps, including memory expansion form factors, latency- and bandwidth-aware memory placement strategies, RDBMS investigation and results, vector database investigation and results, and understanding your application's behavior.
Q1 Memory Fabric Forum: Memory expansion with CXL-Ready Systems and Devices | Memory Fabric Forum
Ravi Gummaluri, Director, CXL System Architecture at Micron describes use cases for memory expansion with tiered DRAM and CXL memory, along with performance data.
Q1 Memory Fabric Forum: CXL-Related Activities within OCP | Memory Fabric Forum
OCP steering committee member, and former President of the CXL Consortium, Siamak Tavallaei, provides an overview of CXL-related activities happening within the Open Compute Project.
Q1 Memory Fabric Forum: CXL Controller by Montage Technology | Memory Fabric Forum
For CXL AIC and memory module designers, Nilesh Shah of Montage provides an overview of their CXL memory controller product, technology, and performance.
Nick Kriczsky and Gorden Getty provide an overview of Teledyne LeCroy's Austin Labs portfolio of products and services, including: 1) testing for protocol and electrical compliance, interoperability, data integrity, and performance; 2) in-depth protocol training (PCIe, USB, NVMe, NVMe-oF, Fibre Channel); and 3) automation (solutions for analysis, jamming, and generation).
Torry Steed, Sr. Staff Product Manager at SMART Modular, covers the changing shape of memory leading to new categories of CXL form factors. He dives deeper to address EDSFF and AIC variations, mechanical sizes, installation locations, capacity considerations, and power ratings.
Q1 Memory Fabric Forum: Memory Fabric in a Composable System | Memory Fabric Forum
Eddie McMorrow, Sr. Product Manager at GigaIO, defines composable infrastructure and memory fabrics, then provides an overview of the FabreX memory fabric.
MemVerge CEO Charles Fan describes why memory-hungry generative AI is a driver for CXL technology, the new computing model for AI, and MemVerge software for CXL and AI.
Q1 Memory Fabric Forum: Micron CXL-Compatible Memory Modules | Memory Fabric Forum
Michael Abraham, Director of Product Management at Micron, discusses data center challenges, the memory and storage hierarchy, Micron CZ120 memory modules, database (TPC-H) improvements, AI inferencing improvements, and how to enable them in your company.
Q1 Memory Fabric Forum: Advantages of Optical CXL for Disaggregated Compute ... | Memory Fabric Forum
Ron Swartzentruber, Director of Engineering at Lightelligence, explains why optical connectivity is needed for CXL fabrics, and provides an overview of the Photowave line of port expander PCIe cards and active optical cables.
Arvind Jagannath of VMware makes the case for bridging the CPU-memory imbalance with memory tiering, describes their vision for memory disaggregation, and explains that VMware will support CXL expanders in specific configurations, memory tiering to reduce overall TCO, and memory accelerators to enable CXL-based use cases.
MemVerge Field CTO Yong Tian shows what memory expansion costs with an analysis of various server configurations with up to 8TB of tiered DRAM and CXL memory.
In the CXL Forum Theater at SC23 hosted by MemVerge, Lightelligence describes CXL's need for optical connectivity and their portfolio of CXL optical expander cards and cables.
In the CXL Forum Theater at SC23 hosted by MemVerge, Samsung described the architecture and use cases of their hybrid drive that includes DRAM and flash memory.
GridMate - End to end testing is a critical piece to ensure quality and avoid... | ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs | Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Epistemic Interaction - tuning interfaces to provide information for AI support | Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 | Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
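The implementation-guide topic above can be sketched as an Atlas `$vectorSearch` aggregation stage. The index name, field names, and the embedding below are hypothetical placeholders; a real query would use an embedding produced by your model against a collection with a vector search index:

```python
# Sketch of a MongoDB Atlas $vectorSearch pipeline (hypothetical names).
query_embedding = [0.12, -0.07, 0.33, 0.91]  # placeholder embedding vector

pipeline = [
    {
        "$vectorSearch": {
            "index": "plot_vector_index",   # hypothetical Atlas Search index
            "path": "plot_embedding",       # field holding stored vectors
            "queryVector": query_embedding,
            "numCandidates": 100,           # ANN candidates to consider
            "limit": 5,                     # top-k results returned
        }
    },
    # Surface the relevance score alongside selected fields.
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# Against a live cluster this would run as:
# results = client["sample_db"]["movies"].aggregate(pipeline)
```

Filtering and pre-computed embeddings are where most of the tuning happens; `numCandidates` trades recall for latency.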
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
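One concrete hardening step of the kind such a guide typically covers is locking down the Pod security context. The manifest below is an illustrative sketch (the names and image are placeholders, not from the talk): it drops root, all Linux capabilities, and privilege escalation, which closes off a large share of container-escape paths.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app          # illustrative name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    seccompProfile:
      type: RuntimeDefault    # apply the container runtime's default seccomp filter
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]       # start from zero capabilities
```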
4. CXL 3.1 Feature Enhancements
The CXL specification continues to evolve to meet new usage models.
• New features introduced in the CXL 3.1 specification:
• CXL Fabric Improvements/Extensions
• Scale-out of CXL fabrics using PBR (Port Based Routing)
• Trusted-Execution-Environment Security Protocol (TSP)
• Allows Virtualization-based Trusted Execution Environments (TEEs) to host Confidential Computing Workloads
• Memory Expander Improvements
• Up to 32-bit of metadata and RAS capability enhancements
Compute Express Link™ and CXL™ are trademarks of the Compute Express Link Consortium.
6. CXL Fabric Improvements/Extensions
• Fabric Decode/Routing requirements
• Host-to-Host communication with Global Integrated Memory (GIM) concept (with .UIO)
• Direct P2P .mem support through PBR Switches
• Adds a form of symmetric Link Layer definition
• Enables direct caching of CXL.mem for an accelerator (caching is not possible with .UIO)
• Fabric Manager (FM) API definition for PBR Switch
11. Trusted Execution Environment (TEE) & TEE Security Protocol (TSP)
12. RECAP: CXL 2.0 Security Benefits
CXL 2.0 provides Integrity and Data Encryption (IDE) of traffic across all entities (Root Complex, Switch, Device) at the Link Layer.
[Diagram: area of protection spanning the CPU/SoC Root Complex (Home Agent, Coherent Bridge, IO Bridge/IOMMU, memory controller, host memory), a CXL 2.0 Switch, and a CXL Device (optional coherent cache, DTLB, memory controller, device memory), covering CXL.io, CXL.cache, and CXL.memory traffic]
13. CXL 3.1 Trusted Security Protocol (TSP)
Allows Virtualization-based Trusted Execution Environments (TEEs) to host Confidential Computing Workloads (CC WL)
Benefits:
• Freedom to migrate sensitive WLs to TSP-enabled Clouds
• Collaboration with multiple parties for sharing data
• Conform to Compliance & Data-sovereignty programs
• Strengthen Application security & Software IP protection
Key Capabilities:
• Separation between TVM* and the CSP's infrastructure (VMM)
• Configuration of the CXL device
• Encryption of sensitive data in both Host and Device memory
• Cryptographic verification of the correct configuration of the trusted computing environment
*TVM = Trusted VM
[Diagram: a TEE-capable host running a TVM alongside conventional VMs, connected over a CXL link protected by CXL IDE (defined in CXL 2.0) to a CXL device; host and device each have a memory controller with attached memory, and CXL.io and CXL.mem traffic crosses the link]
14. Elements of TSP / TSP Overview
TSP Components for Confidential Computing:
• Trusted Execution State & Access Control: how access to memory is controlled
• Configuration: ability to determine the supported security features on the device, enable required features, and lock the configuration
• Attestation & Authentication: trusting who you are talking to
• Memory Encryption: encrypting data-at-rest
• Transport Security: encrypting the link to protect data-in-flight and detect/prevent physical attacks
[Diagram: a Confidential Computing initiator and a CXL memory expander target (HDM-H in CXL 3.1; HDM-DB in a CXL 3.1 ECN) linked through the CXL Trusted Execution Environment Security Protocol (TSP) components: Attestation/Authentication, Trusted Execution State & Access Control, Memory Encryption (data-at-rest), Transport Security (data-in-flight), and Configuration]
17. CXL Specification Feature Summary
Feature | CXL 1.0/1.1 | CXL 2.0 | CXL 3.0 | CXL 3.1
Release date | 2019 | 2020 | August 2022 | November 2023
Max link rate | 32 GT/s | 32 GT/s | 64 GT/s | 64 GT/s
68-byte Flit (up to 32 GT/s) | Yes | Yes | Yes | Yes
Type 1, Type 2 and Type 3 Devices | Yes | Yes | Yes | Yes
Memory Pooling w/ MLDs | No | Yes | Yes | Yes
Global Persistent Flush | No | Yes | Yes | Yes
CXL IDE | No | Yes | Yes | Yes
Switching (Single-level) | No | Yes | Yes | Yes
Switching (Multi-level) | No | No | Yes | Yes
Multiple Type 1/Type 2 devices per root port | No | No | Yes | Yes
Direct memory access for peer-to-peer | No | No | Yes | Yes
256-byte Flit (up to 64 GT/s PAM4) | No | No | Yes | Yes
256-byte Flit (Enhanced coherency) | No | No | Yes | Yes
256-byte Flit (Memory sharing) | No | No | Yes | Yes
256-byte Flit (Fabric capabilities) | No | No | Yes | Yes
Fabric Manager API definition for PBR Switch | No | No | No | Yes
Host-to-Host communication with Global Integrated Memory (GIM) concept | No | No | No | Yes
Trusted-Execution-Environment (TEE) Security Protocol | No | No | No | Yes
Memory expander enhancements (up to 34-bit of meta-data, RAS capability enhancements) | No | No | No | Yes
18. CXL 3.1 Summary
• The CXL specification continues to evolve to meet new usage models
• New features introduced in the CXL 3.1 specification:
• CXL Fabric Improvements/Extensions
• Scale-out of CXL fabrics using PBR (Port Based Routing)
• Trusted-Execution-Environment Security Protocol (TSP)
• Allows Virtualization-based Trusted Execution Environments (TEEs) to host Confidential Computing Workloads
• Memory Expander Improvements
• Up to 34-bit of meta-data and RAS capability enhancements
www.ComputeExpressLink.org
Call to Action:
• Support future specification development by joining the CXL Consortium
• Download the CXL 3.1 Specification
• Follow us on social media for updates!
20. Bio
Siamak Tavallaei has recently served as the CXL Consortium President, Chief Systems Architect at Google Cloud, and the Incubation Committee (IC) Representative for the Server Project. He is currently the CXL Advisor to the Board at the CXL Consortium and actively participates in the OCP Steering Committee. His current focus is system optimization for large-scale mega-datacenters, for general-purpose and tightly-connected, accelerated machines built on co-designed hardware, software, security, and management. He continues to drive the architecture and productization of CXL-enabled solutions for AI/ML, HPC, and large-memory-footprint databases. In 2016, he joined OCP as a co-lead of the Server Project, where he drove open-sourced modular design concepts for integrated hardware/software solutions (OAI, DC-SCM, CMS, DC-MHS, and DC-Stack). His experience as Chief Systems Architect at Google, Principal Architect at Microsoft Azure, Distinguished Technologist at HP, and Principal Member of Technical Staff at Compaq, along with his contributions to industry collaborations such as EISA, PCI, InfiniBand, and CXL, gives Siamak a broad understanding of requirements and solutions for Enterprise, Hyperscale, and Edge datacenters and industry-wide initiatives.
Editor's Notes
CXL 2.0 = scale out
CXL 3.0 = scale up
HBR is classic switch routing, where the host(s) communicate with all the devices underneath the tree. However, the hosts cannot talk to each other, and neither can the devices.
PBR, in contrast, allows communication between the ports (hosts or end-points).
PBR (Port Based Routing) switches allow for switch topologies beyond the single-level HBR switches of CXL 2.0.
Global Fabric Device (G-FAM device): Global Fabric-Attached Memory.
GIM Global Integrated Memory address space allows for multiple Host and Multiple Fabric Attached Memory to share address spaces.
CXL 3.1: Fabric Manager API for PBR switches
Multi-host CXL Cluster with Memory on Host and Device Exposed as GIM
CXL System components configured, allocated and managed by CXL System Fabric Manager
Enabled by CXL 3.1 management API
All CXL devices, Hosts, Accelerators, Switches and Memory are managed by CXL Fabric Manager
DMTF Redfish based / SNIA Swordfish and OFA Sunfish compliant for industry ecosystem interoperability
The Fabric Manager API is now able to support PBR switches, whereas previous generations supported only HBR switches.
Speaker notes:
Integrity and Data Encryption (CXL IDE): data-in-flight encryption
Security extends to the CXL Switch as well
Protects platforms against physical attacks and sophisticated hardware attacks on platform interconnects
Provides Confidentiality, Integrity and Replay protection for data transiting the CXL link
The cryptographic schemes align with current industry best practices
Supports a variety of use models while providing for broad interoperability
CXL IDE can be used to secure traffic within a Trusted Execution Environment (TEE) that is composed of multiple components
Maintains high performance while maintaining flexibility for security