Abstract: Although new non-volatile media inherently offers very low latency, remote access
using protocols such as NVMe-oF, and presenting the data to VMs via virtualized interfaces such as virtio,
adds considerable software overhead. One way to reduce this overhead is to use the Storage
Performance Development Kit (SPDK), an open-source software project that provides building blocks for
scalable and efficient storage applications with breakthrough performance. Comparing the software
paths for virtualizing block storage I/O illustrates the advantages of the SPDK-based approach. Empirical
data shows that using SPDK can improve CPU efficiency by up to 10x and reduce latency by up to 50% compared
with existing methods. Future enhancements to SPDK will make its advantages even greater.
Speaker Bio: Anu Rao is a product line manager for storage software in Intel's Data Center Group. She helps
customers ease into and adopt open-source storage software such as the Storage Performance Development Kit
(SPDK) and the Intel Intelligent Storage Acceleration Library (ISA-L).
What are the latest features that DPDK brings in 2018? - Michelle Holley
We will provide an overview of the new features of the latest DPDK release, including source-code browsing and API listings for its top two new features. On top of that, there will be a hands-on lab on Intel® architecture servers, showing how getting started with DPDK has become much simpler and more powerful.
In this presentation, Yasunori Goto and Qi Fuli will talk about the basics of NVDIMM, the RAS issues of Non-Volatile DIMMs (NVDIMM), and the features that have been built, or are being developed, to address them.
NVDIMM is expected to be a new class of device: a CPU can read and write an NVDIMM directly like RAM, yet its data survives power-down and reboot. An in-memory database is therefore one good example of an NVDIMM use case.
Thanks to sustained community effort on Linux, NVDIMM drivers, filesystems, management commands, and many libraries have been well developed over the past few years.
However, Yasunori Goto found some issues with the RAS (Reliability, Availability, and Serviceability) characteristics of NVDIMM, because the device behaves like a mixture of storage and RAM. For example, NVDIMM has no hotplug capability because it sits in a DIMM slot like RAM, yet its data must be backed up and restored like storage.
This presentation gives an overview and the basics of the Data Plane Development Kit (DPDK). It is part of a Network Programming Series.
First, the presentation focuses on the network performance challenges of modern systems by comparing modern CPUs with modern 10 Gbps Ethernet links. Then it touches on the memory hierarchy and kernel bottlenecks.
The following part explains the main DPDK techniques, like polling, bursts, hugepages, and multicore processing.
The DPDK overview explains how a DPDK application is initialized and run, and touches on lockless queues (rte_ring), memory pools (rte_mempool), memory buffers (rte_mbuf), hashes (rte_hash), cuckoo hashing, the longest-prefix-match library (rte_lpm), poll mode drivers (PMDs), and the kernel NIC interface (KNI).
At the end, there are a few DPDK performance tips.
Tags: access time, burst, cache, dpdk, driver, ethernet, hub, hugepage, ip, kernel, lcore, linux, memory, pmd, polling, rss, softswitch, switch, userspace, xeon
LinuxCon 2015: Linux Kernel Networking Walkthrough - Thomas Graf
This presentation features a walk through the Linux kernel networking stack for users and developers. It covers both existing essential networking features and recent developments, and shows how to use them properly. Our starting point is the network card driver as it feeds a packet into the stack. We follow the packet as it traverses subsystems such as packet filtering, routing, protocol stacks, and the socket layer, pausing here and there to look into concepts such as network namespaces, segmentation offloading, TCP small queues, and low-latency polling, and discussing how to configure them.
FOSDEM15 SDN developer room talk
DPDK performance
How to not just do a demo with DPDK
The Intel DPDK provides a platform for building high performance Network Function Virtualization applications. But it is hard to get high performance unless certain design tradeoffs are made. This talk focuses on the lessons learned in creating the Brocade vRouter using DPDK. It covers some of the architecture, locking and low level issues that all have to be dealt with to achieve 80 Million packets per second forwarding.
During the CXL Forum at OCP Global Summit, Jeff Hilland of HPE explained what CXL, PCI SIG, DMTF, OFA, OCP, and SNIA are doing to make CXL fabric, memory and device management interoperable.
SFO15-TR9: PSCI, ACPI (and UEFI to boot)
Speaker: Bill Fletcher
Date: September 24, 2015
★ Session Description ★
An introductory session providing a system-level overview of Power State Coordination
- Focus on ARMv8
- Goes top-down from ACPI
- A demo based on the current code in qemu
- The specifications are very dynamic - what’s ongoing for ACPI and PSCI
★ Resources ★
Video: https://www.youtube.com/watch?v=vXzPdpaZVto
Presentation: http://www.slideshare.net/linaroorg/sfo15tr9-psci-acpi-and-uefi-to-boot
Etherpad: pad.linaro.org/p/sfo15-tr9
Pathable: https://sfo15.pathable.com/meetings/303087
★ Event Details ★
Linaro Connect San Francisco 2015 - #SFO15
September 21-25, 2015
Hyatt Regency Hotel
http://www.linaro.org
http://connect.linaro.org
Kernel Recipes 2015: Linux Kernel IO subsystem - How it works and how can I s... - Anne Nicolas
Understanding how the Linux kernel IO subsystem works is key to analyzing a wide variety of issues that occur when running a Linux system. This talk is aimed at helping Linux users understand what is going on and how to get more insight into what is happening.
First we present an overview of the Linux kernel block layer, including the different IO schedulers. We also talk about the new block multiqueue implementation that is being used for more and more devices.
After surveying the basic architecture, we will be prepared to talk about tools to peek into it. We start with lightweight monitoring like iostat and continue with the heavier blktrace and the variety of tools based on it. We demonstrate the use of these tools on the analysis of real-world issues.
Jan Kara, SUSE
Ariel Waizel discusses the Data Plane Development Kit (DPDK), an API for developing fast packet processing code in user space.
* Who needs this library? Why bypass the kernel?
* How does it work?
* How good is it? What are the benchmarks?
* Pros and cons
Ariel worked on kernel development at the IDF, Ben Gurion University, and several companies. He is interested in networking, security, machine learning, and basically everything except UI development. He is currently a Solution Architect at ConteXtream (an HPE company), which specializes in SDN solutions for the telecom industry.
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
Abstract: Explore the packet I/O data path from a NIC across PCI-Express to cache/memory and understand how to build efficient CPU code for networked applications.
Speaker: Venky Venkatesan, Intel Fellow, Chief Architect – Packet Processing and Networking Applications
Intel® Xeon® Scalable Processors Enabled Applications Marketing Guide - Intel IT Center
The Future-Ready Data Center platform is here. Whether you navigate in the High Performance Computing, Enterprise, Cloud, or Communications spheres, you will find an Intel® Xeon® processor that is ready to power your data center now and well into the future. An innovative approach to platform design in the Intel® Xeon® Scalable processor platform unlocks the power of scalable performance for today’s data centers—from the smallest workloads to your most mission-critical applications. Powerful convergence and capabilities across compute, storage, memory, network and security deliver unprecedented scale and highly optimized performance across a broad range of workloads—from high performance computing (HPC) and network functions virtualization, to advanced analytics and artificial intelligence (AI). Many examples here show how our software partner ecosystem has optimized their applications and/or taken advantage of inherent platform enhancements to deliver dramatic performance gains that can translate into tangible business benefits.
Intel Xeon Processor E5 Family: Making the Business Case - Intel IT Center
This presentation highlights cloud computing advantages of the Intel® Xeon® processor E5 family and helps you make the business case for investing. Includes access to an ROI calculator.
HPC DAY 2017 | Accelerating tomorrow's HPC and AI workflows with Intel Archit... - HPC DAY
HPC DAY 2017 - http://www.hpcday.eu/
Accelerating tomorrow's HPC and AI workflows with Intel Architecture
Atanas Atanasov | HPC solution architect, EMEA region at Intel
Accelerate Your Apache Spark with Intel Optane DC Persistent Memory - Databricks
The capacity of data is growing rapidly in the big data area, and more and more memory is consumed either by computation or by holding intermediate data for analytic jobs. For memory-intensive workloads, end users have to scale out the compute cluster or extend memory with storage such as HDDs or SSDs to meet the requirements of their computing tasks. When scaling out the cluster, the extra cost of cluster management, operation, and maintenance increases the total cost if the extra CPU resources are not fully utilized. To address this shortcoming, Intel Optane DC persistent memory (Optane DCPM) breaks the traditional memory/storage hierarchy and scales up the computing server with higher-capacity persistent memory, while bringing higher bandwidth and lower latency than storage such as SSDs or HDDs. Apache Spark is widely used for analytics such as SQL and machine learning in cloud environments, where the low performance of remote data access is typically a roadblock, especially for I/O-intensive queries; for ML workloads, which are iterative, I/O bandwidth is key to end-to-end performance. In this talk, we will introduce how to accelerate Spark SQL with OAP (https://github.com/Intel-bigdata/OAP) to achieve an 8X performance gain for SQL on the cloud, and how to use RDD cache to improve K-means performance by 2.5X, both leveraging Intel Optane DCPM. We will also take a deep dive into how Optane DCPM delivers these performance gains.
Speakers: Cheng Xu, Piotr Balcer
Describes how Clear Linux OS is designed, highlighting core features, operating models, and foundational tools that are key to understanding how the distro operates.
Intel® Select Solutions for the Network provide a faster means to address the challenges of transitioning to 5G, with pre-validated, optimized building blocks to help drive scale. Hear the what, why, when and where around Intel® Select Solutions for the Network.
Spring Hill (NNP-I 1000): Intel's Data Center Inference Chip - inside-BigData.com
Today at Hot Chips 2019, Intel revealed new details of upcoming high-performance AI accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O.
"To get to a future state of 'AI everywhere,' we'll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it's collected when it makes sense and making smarter use of their upstream resources," said Naveen Rao, Intel vice president and GM, Artificial Intelligence Products Group. "Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications."
Learn more: https://www.intel.ai/accelerating-for-ai/?elq_cid=1192980
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence - inside-BigData.com
In this deck, Johann Lombardi from Intel presents: DAOS - Scale-Out Software-Defined Storage for HPC/Big Data/AI Convergence.
"Intel has been building an entirely open source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel Optane DC persistent memory and Intel Optane DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel exascale storage stack. DAOS is an open source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. It enables next-generation data-centric workflows that combine simulation, data analytics, and AI."
Unlike traditional storage stacks that were primarily designed for rotating media, DAOS is architected from the ground up to make use of new NVM technologies, and it is extremely lightweight because it operates end-to-end in user space with full operating system bypass. DAOS offers a shift away from an I/O model designed for block-based, high-latency storage to one that inherently supports fine- grained data access and unlocks the performance of next- generation storage technologies.
Watch the video: https://youtu.be/wnGBW31yhLM
Learn more: https://www.intel.com/content/www/us/en/high-performance-computing/daos-high-performance-storage-brief.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Optimizing Apache Spark Throughput Using Intel Optane and Intel Memory Drive... - Databricks
Apache Spark is a popular data processing engine designed to execute advanced analytics on very large data sets which are common in today’s enterprise use cases. To enable Spark’s high performance for different workloads (e.g. machine-learning applications), in-memory data storage capabilities are built right in.
However, Spark’s in-memory capabilities are limited by the memory available in the server; it is common for computing resources to be idle during the execution of a Spark job, even though the system’s memory is saturated. To mitigate this limitation, Spark’s distributed architecture can run on a cluster of nodes, thus taking advantage of the memory available across all nodes. While employing additional nodes would solve the server DRAM capacity problem, it does so at an increased cost. Intel(R) Memory Drive Technology is a software-defined memory (SDM) technology which, combined with an Intel(R) Optane(TM) SSD, expands the system’s memory.
This combination of Intel(R) Optane(TM) SSD with Intel Memory Drive Technology alleviates those memory limitations that are inherent to Spark, by making more memory available to the operating system and to Spark jobs, transparently.
Extend HPC Workloads to Amazon EC2 Instances with Intel and Rescale (CMP373-S... - Amazon Web Services
Cloud services built on compute-optimized EC2 instances can serve as your next-generation HPC platform. Learn how to utilize the Rescale platform on AWS to meet the ever-increasing demands on compute resources while avoiding costly capex investments. Custom Intel Xeon processors enable you to meet your HPC needs by taking advantage of the newest technologies in the cloud. This session is brought to you by AWS partner, Intel.
NFF-GO (YANFF) - Yet Another Network Function Framework - Michelle Holley
NFF-Go is a framework that allows developers to deploy performant cloud-native network functions much faster. NFF-Go internally implements low-level optimizations and can auto-scale across multiple cores using built-in capabilities that take advantage of Intel® architecture. NFF-Go uses the Data Plane Development Kit (DPDK) for efficient input/output (I/O) and the Go programming language as a high-level, safe, productive language.
Edge and 5G: What is in it for the developers? - Michelle Holley
5G is not just the next generation of networks; it is also an innovation platform for services, applications, and connected devices. Moving services and applications to the edge is accelerating services today, without having to wait for 5G to happen. But what does it take to develop an application that is ready for the edge and 5G? What sort of hardware, software, and ecosystem can enable an application that is future-ready? In this talk we will discuss what Intel is doing in this space, not only in terms of products and solutions but also by acting as a vendor-neutral ecosystem enabler. We will also discuss the opportunities available to developers today, no matter where they sit in the ecosystem.
Speaker: Chandresh Ruparel, Director, Ecosystem Strategy and Intel Network Builders
This presentation covers an industry perspective and a roadmap towards 5G with open and democratized interfaces. It covers examples of open reference platforms and how open source communities can complement standard bodies such as 3GPP and IEEE. It characterizes RAN and user and control plane core micro services and discusses opportunities for embedded network telemetry for emerging machine learning applications.
Speaker: Tom Tofigh, Principal Member of Technical Staff (Architect) at AT&T
De-fogging Edge Computing: Ecosystem, Use-cases, and Opportunities - Michelle Holley
This presentation is intended to provide clarity around edge computing by giving an overview of the edge computing ecosystem and putting its possibilities in context through a discussion of use cases, highlighting opportunities for developers, enterprises, and large companies. We will focus more on the practical implications of edge computing for business and consumer ecosystems than on implementations.
Speaker: Faraz Hoodbhoy, Director Outreach Ecosystem & Innovation, AT&T
With uCPE/SD-WAN taking center stage in enabling software-defined Cloud services to enterprise branch offices globally, this session will provide a uCPE review from a solution, deployment and reference design standpoint.
Speaker: Sab Gosal, Segment Manager
Network Platforms Group (NPG), September 2018
Application developers are key to the success of an edge compute strategy. They are the backbone for any digital ecosystem and their requirements drive the platform architecture. Edge computing is no different. In this talk, we will focus on some key requirements, challenges and possible solutions for a developer centric architecture for multi-access edge computing including abstraction of the service provider’s network complexity, low footprint cloud native builder models, micro-services, hardware abstractions, intelligence layers and massive monitoring of application instances.
About the speaker: Shamik Mishra is currently Assistant Vice President (AVP), Technology and Innovation at Aricent. He is a practice leader for new product architectures. He has extensive experience and contributions in software development in cloud, wireless technologies, edge computing and platform software. His research interests are Network Function Virtualization (NFV), Cloud and edge computing and Machine Learning (ML). He has spoken in several conferences and his work is regularly covered in the media. Shamik has a bachelor’s and a master’s degree from Indian Institute of Technology (IIT) Kharagpur, India.
Install FD.IO VPP On Intel(r) Architecture & Test with TRex* - Michelle Holley
This demo/lab will guide you through installing and configuring FD.io Vector Packet Processing (VPP) on an Intel® Architecture (IA) server. You will also learn to install TRex* on another IA server to send packets to the VPP instance, and use some VPP commands to forward packets back to TRex*.
Speaker: Loc Nguyen. Loc is a Software Application Engineer in the Data Center Scale Engineering Team. Loc joined Intel in 2005 and has worked on various projects. Before joining the network group, Loc worked in the High-Performance Computing area and supported the Intel® Xeon Phi™ Product Family. His interests include computer graphics, parallel computing, and computer networking.
Cloud-native architecture is emerging for telecom workloads. To support these emerging trends, Intel is targeting enhancements to the Data Plane Development Kit (DPDK). The enhancements would target network service meshes with dedicated sidecar accelerators and a mechanism to build the mesh dynamically.
Speaker: Gerald Rogers. Gerald Rogers is a Principal Engineer in the Network Products Group focused on virtual switching, network function virtualization and Data Plane Development Kit (DPDK). After joining Intel in 2005, Gerald has worked as a software engineer and architect in the embedded and networking groups. For the past 7 years Gerald has led the network virtual switching software and hardware acceleration effort to drive Intel architecture into the networking and telecommunications industry. Gerald holds a Bachelor’s degree in Electrical Engineering and a Master’s degree in Computer Science, and has 20 years of experience in the networking and telecommunications industry.
The presentation will cover recent changes in the project lifecycle and release model, as well as the latest additions and technical trends in OpenDaylight.
Speaker: Luis Gomez - Luis Gomez is a Software Test Engineer at Lumina Networks. He is a member of the OpenDaylight Technical Steering Committee (TSC) and a committer for the integration and releng projects. Previously he was a Principal Software Test Engineer in the Open Source Software group at Brocade, where he spent 4 years integrating, testing, and supporting OpenDaylight in customer solutions; before that he was a Solution Integration Engineer at Ericsson, where he spent more than a decade integrating and testing service provider networks.
The presentation will provide a brief overview of Tungsten Fabric, and the new features in the recent 5.0 release. A demo of Tungsten Fabric will follow, with an overview of core functionality, and newly released features.
Speaker: Nick Davey, Cloud - SDN Product Manager
Orchestrating NFV Workloads in Multiple Clouds - Michelle Holley
The mission of the Open Network Automation Platform (ONAP) is to deploy and manage VNFs on multiple infrastructure environments, including virtualized infrastructure and cloud native environments. Workload deployment and orchestration in multiple clouds is expected to play an essential role in ONAP's operational success. This talk introduces the overall ONAP architecture and orchestration workflow, and related supporting functions such as homing and optimization.
Speaker: Bin Hu. Bin is an innovation thought-leader in NFV, SDN, and Cloud. He is the Convener of OPNFV's Technical Community, PTL of IPv6, and PTL of Gluon in OpenStack for the next generation of NFV networking services. He was the winner of the OPNFV 2015 Annual Award.
Convergence of device and data at the Edge Cloud - Michelle Holley
The ever-growing need for intelligent systems is evolving analytics and decision making into AI, with machine learning as the tool for knowledge assimilation. What is essential for ML is data containing inherent information that can be translated into useful intelligence for decision making. IoT is key to intelligent systems because IoT devices collect data at every end point; they are like the nerve endings of the human body, and the data they collect is refined for decision making as it traverses up to the brain (the AI cloud), with Edge Clouds acting like lymph nodes along the way. In this short talk we will explore two aspects of such IoT infrastructure: lossy networks for IoT devices, and gateway options for device data and how it can seamlessly integrate with Edge Cloud networks. We will review protocols such as wireless mesh, programmable gateways, and the extension of overlays into the cloud.
Speaker: Murali Rangachari, Futurewei Technologies
The rapid growth of data requires advanced intelligence closer to the endpoints that are both generating and consuming data. To capture and accelerate this opportunity, the powerful data processing and analytics capabilities that have traditionally lived in the heart of the data center must be strategically placed closer and closer to the data-generating and data-consuming endpoints, at the "edge." This presentation will look at the opportunities facing the edge ecosystem and show how Intel, via the Intel Network Builders' Network Edge Ecosystem program, is helping the community capitalize on this opportunity and accelerate the deployment of edge solutions.
Speaker: Orla Mooney, Team Lead, Network Edge Ecosystem program
Design Implications, Challenges and Principles of Zero-Touch Management Envir... – Michelle Holley
Use of zero-touch management environments requires a paradigm shift in terms of how core management capabilities are delivered, deployed and utilized for the purpose of network service and infrastructure management. In this talk we will examine several key implications and challenges presented by use of zero-touch management practices. We will also propose a set of core architectural principles for design and operation of zero-touch management systems.
Speaker: Alexander Vul, Intel. Alexander is currently working as a Cloud Solutions Architect in the Datacenter Solutions Group at Intel. In his current position, Alexander is responsible for defining and driving Intel’s SDN/NFV MANO solutions and for leading Intel’s participation in the ONAP open source communities.
Using Microservices Architecture and Patterns to Address Applications Require... – Michelle Holley
Edge computing infrastructure needs to be close to the end user, yet able to offload compute from end-user devices so that it can serve both real-time and lossless applications. MEC architecture is inherently complex and poses several challenges; state management of applications is key. This talk focuses on microservices patterns, container workloads, and persistent stores to address and improve application latency, to match SLAs for use cases like AR, to extend the home gateway to the pole gateway for IoT, and to address the optimization techniques needed for the same.
Speakers:
Prem Sankar Gopannan, Ericsson Opensource Ecosystem team and Opendaylight team
Prakash Ramchandran, Openstack 2018 Board Member
In this talk, Tong will start with the current landscape and typical use cases of Artificial Intelligence applications in the Telco domain. Then, she will introduce Intel’s strategy and products for Network AI, including our focus areas, our hardware portfolio, software stacks, roadmaps and some case studies.
Speaker: Tong Zhang, Principal Engineer and Chief Architect for AI and Analytics of the Network Platforms Group, Intel
Learn how artificial intelligence impacts performance, security, compute, and resources within the network.
Speakers:
“Ali” Osamah Mohammed Ali and Wes Jensen, Netrolix
The concept of service mesh is one of the new technologies that have grown up around the container and micro-service model over the last couple of years, and Istio is the latest entry into this space. As Istio was recently included as an incubated project in the CNCF, many companies are now looking to it to provide a set of key functions to accelerate their micro-service application management model. Istio enables bi-directional authentication and security of service communication via TLS based authentication and encryption, and at the same time is able to capture application level communication statistics, improving the application development team's visibility into the otherwise difficult to track communication patterns. In this way, Istio acts like an application level network, riding across the underlying capabilities of Kubernetes CNI based networks and network policy. We will implement Istio on a GKE kubernetes cluster, and instrument a simple application to get better insight into how Istio provides its capabilities.
Speaker Bio:
With over 20 years of experience as a systems reliability engineer, and a focus on automating not only application deployments but the underlying infrastructure as well, Robert Starmer brings a wealth of knowledge to the full application enablement stack. He has applied this knowledge in fields from high-performance computing to high-frequency trading environments, and everything in between. Robert also holds patents in network, data center, and application performance and scale enhancements. He is a Founder and the CTO at Kumulus Technologies, a DevOps, Systems Reliability Engineering and cloud computing consultancy. Additionally, Robert is an incurable photography nerd and has been known to stay up until dawn in remote locations to capture celestial time-lapses.
Intel® QuickAssist Technology Introduction, Applications, and Lab, Including ... – Michelle Holley
Abstract: Intel® QuickAssist Technology improves performance and efficiency across the data center and other computing platforms by handling the compute-intensive operations of bulk cryptography, public key cryptography, and data compression. In this course, we will give an overview of the technology along with the summary of resources to get started with integrating Intel® QAT into your platform solutions. We will also demonstrate using Intel® QAT with applications such as OpenSSL, NGINX, and HAProxy, with a hands-on lab.
Speaker Bios:
Joel Auernheimer, a Platform Application Engineer at Intel, has been focused on enabling customers to integrate Intel® QuickAssist Technology in their platform solutions. Joel is a native of Phoenix, Arizona and enjoys hiking, basketball, soccer, singing, and spending time with friends and family.
Joel Schuetze has been with Intel since 1996. For the last 9+ years he has worked as Platform Application Engineer supporting customers with Intel QuickAssist Technology.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... – Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet's largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives and transferring only the resultant analysis products (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Accelerate Enterprise Software Engineering with Platformless – WSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR – Tier1 app
Even though at surface level 'java.lang.OutOfMemoryError' appears to be one single error, under the hood there are 9 distinct types of OutOfMemoryError. Each type has different causes, diagnosis approaches, and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
First Steps with Globus Compute Multi-User Endpoints – Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Advanced Flow Concepts Every Developer Should Know – Peter Caitens
Tim Combridge from Sensible Giraffe and Salesforce Ben presents some important tips that all developers should know when dealing with Flows in Salesforce.
Your Digital Assistant.
Making a complex approach simple: a straightforward process saves time. No more waiting to connect with the people who matter to you. Safety first is not a cliché: information is securely protected in cloud storage to prevent any third party from accessing your data.
Would you rather make your visitors feel burdened by making them wait? Or choose VizMan for a stress-free experience? VizMan is an automated visitor management system that works for any industry, including but not limited to factories, societies, government institutes, and warehouses. It is a new-age, contactless way of logging information about visitors, employees, packages, and vehicles. As a digital logbook, VizMan eliminates the need for paper and storage space: no more bundles of registers left to collect dust in a corner of a room. It records a visitor's essential details, helps schedule meetings between visitors and employees, and assists in supervising employee attendance. With VizMan, visitors don't need to wait for hours in long queues. VizMan handles visitors with the value they deserve, because we know time is important to you.
Feasible Features
One Subscription, Four Modules – Admin, Employee, Receptionist, and Gatekeeper ensures confidentiality and prevents data from being manipulated
User Friendly – can be easily used on Android, iOS, and Web Interface
Multiple Accessibility – Log in through any device from any place at any time
One app for all industries – a Visitor Management System that works for any organisation.
Stress-free Sign-up
Visitor is registered and checked-in by the Receptionist
Host gets a notification, where they opt to Approve the meeting
Host notifies the Receptionist of the end of the meeting
Visitor is checked-out by the Receptionist
Host enters notes and remarks of the meeting
Customizable Components
Scheduling Meetings – Host can invite visitors for meetings and also approve, reject and reschedule meetings
Single/Bulk invites – Invitations can be sent individually to a visitor or collectively to many visitors
VIP Visitors – Additional security of data for VIP visitors to avoid misuse of information
Courier Management – Keeps a check on deliveries like commodities being delivered in and out of establishments
Alerts & Notifications – Get notified on SMS, email, and application
Parking Management – Manage availability of parking space
Individual log-in – Every user has their own log-in id
Visitor/Meeting Analytics – Evaluate notes and remarks of the meeting stored in the system
Visitor Management System is a secure and user-friendly database manager that records, filters, and tracks the visitors to your organization.
"Secure Your Premises with VizMan (VMS) – Get It Now"
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Understanding Globus Data Transfers with NetSage – Globus
NetSage is an open, privacy-aware network measurement, analysis, and visualization service designed to help end users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Providing Globus Services to Users of JASMIN for Environmental Data Analysis – Globus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Globus Connect Server Deep Dive - GlobusWorld 2024 – Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Quarkus Hidden and Forbidden Extensions – Max Andersen
Quarkus has a vast extension ecosystem and is known for its supersonic, subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting; quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... – Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Prosigns: Transforming Business with Tailored Technology Solutions – Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Cyaniclab: Software Development Agency Portfolio.pdf – Cyanic lab
CyanicLab, an offshore custom software development company with offices in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
4. The Challenge: Media Latency
(Chart: latency in µs, broken down into drive latency, controller latency, and driver latency for HDD + SAS/SATA (~10,000 µs), NAND SSD + SAS/SATA, NAND SSD + NVMe™, and Optane™ SSD + NVMe™ on a 0-200 µs scale. As media latency shrinks, kernel driver overhead grows from <0.01% for HDDs, to 1-8% for NAND SSDs, to 30%-50% for Optane SSDs.)
Technology claims are based on comparisons of latency, density and write cycling metrics amongst memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.
5. Storage Performance Development Kit
Scalable and Efficient Software Ingredients
• User space, lockless, polled-mode components
• Up to millions of IOPS per core
• Designed to extract maximum performance from non-volatile media
Storage Reference Architecture
• Optimized for latest generation CPUs and SSDs
• Open source composable building blocks (BSD licensed)
• Available via SPDK.io
• Follow @SPDKProject on twitter for latest events and activities
6. Benefits of using SPDK
SPDK: more performance from CPUs, non-volatile media, and networking
• Up to 10X more IOPS/core for NVMe-oF* vs. Linux kernel
• Up to 8X more IOPS/core for NVMe vs. Linux kernel
• Up to 50% better tail latency for RocksDB workload
• Up to 3X better IOPS/core & latency for virtualized storage
• Up to 10.8 million IOPS with Intel® Xeon® Scalable Family and 24 Intel® Optane™ SSD DC P4800X
• Up to 1.5X better latency for NVMe-oF vs. kernel for Optane SSD
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance
7. SPDK Community
http://SPDK.IO – main web presence
• Real-time chat with the development community
• Backlog and ideas for things to do
• Email discussions
• Weekly calls
• Multiple annual meetups
• Code reviews & repo
• Continuous integration
20. Sharing SSDs in userspace
• Typically not 1:1 VM to locally attached NVMe SSD; otherwise just use PCI direct assignment
• What about SR-IOV? SR-IOV SSDs are not prevalent yet, and SR-IOV precludes features such as snapshots
• What about LVM? LVM depends on the Linux kernel block layer and storage drivers (i.e. nvme), while SPDK wants to use userspace polled-mode drivers
• The answer: SPDK Blobstore and Logical Volumes!
22. SPDK vhost Performance
(Chart: QD=1 latency in µs, 0-50 µs scale, comparing Linux, QEMU, and SPDK; SPDK shows the lowest latency.)
Legend: Linux: kernel vhost-scsi; QEMU: virtio-blk dataplane; SPDK: userspace vhost-scsi
SPDK delivers up to 3x better efficiency and latency.
System Configuration: 2S Intel® Xeon® Platinum 8180: 28C, E5-2699v3: 18C, 2.5GHz (HT off), Intel® Turbo Boost Technology enabled, 12x16GB DDR4 2133 MT/s, 1 DIMM per channel, Ubuntu* Server 16.04.2 LTS, 4.11 kernel, 23x Intel® P4800x Optane SSD – 375GB, 1 SPDK lvolstore or LVM lvgroup per SSD, SPDK commit ID c5d8b108f22ab, 46 VMs (CentOS 3.10, 1vCPU, 2GB DRAM, 100GB logical volume), vhost dedicated to 10 cores
As measured by: fio 2.10.1 – Direct=Yes, 4KB random read I/O, Ramp Time=30s, Run Time=180s, Norandommap=1, I/O Engine = libaio, Numjobs=1
23. 48 VMs: vhost-scsi performance (SPDK vs. Kernel)
Intel Xeon Platinum 8180 Processor, 24x Intel P4800x 375GB
2 partitions per VM, 10 vhost I/O processing cores
(Chart: aggregate IOPS in millions, vhost-kernel vs. vhost-spdk)
• 4K 100% Read: 2.86 (kernel) vs. 9.23 (SPDK) – 3.2x
• 4K 100% Write: 2.77 (kernel) vs. 8.98 (SPDK) – 3.2x
• 4K 70% Read / 30% Write: 3.4 (kernel) vs. 9.49 (SPDK) – 2.7x
• Aggregate IOPS across all 48 VMs reported; all VMs on separate cores from the vhost-scsi cores
• 10 vhost-scsi cores for I/O processing
• SPDK vhost-scsi up to 3.2x better with 4K 100% random read I/Os
• Used cgroups to restrict kernel vhost-scsi processes to 10 cores
System Configuration: Intel Xeon Platinum 8180 @ 2.5GHz, 56 physical cores, 6x 16GB 2667 DDR4, 6 memory channels, SSD: Intel P4800x 375GB x24 drives, BIOS: HT disabled, p-states enabled, turbo enabled, Ubuntu 16.04.1 LTS, 4.11.0 x86_64 kernel, 48 VMs, number of partitions: 2, VM config: 1 core, 1GB memory, VM OS: Fedora 25, blk-mq enabled, Software packages: QEMU-2.9, libvirt-3.0.0, SPDK (3bfecec994), I/O distribution: 10 vhost-cores for SPDK / kernel, remaining 46 cores for QEMU using cgroups, FIO-2.1.10 with SPDK plugin, io depth=1, 8, 32, numjobs=1, direct=1, block size 4k
24. VM Density: Rate Limiting 20K IOPS per VM
Intel Xeon Platinum 8180 Processor, 24x Intel P4800x 375GB
10 vhost-scsi cores
(Chart: IOPS (higher is better) and % CPU utilization (lower is better) at 24, 48, and 96 VMs, kernel vs. SPDK.)
• % CPU utilized shown from the VM side
• Each VM was running a queue depth=1, 4KB random read workload
• Hyper-threading enabled to allow 112 cores
• Each VM rate limited to 20K IOPS using cgroups
• SPDK able to scale to 96 VMs, supporting 20K IOPS per VM; the kernel scales only to 48 VMs. Beyond 48 VMs, the 10 vhost-cores appear to be the bottleneck
System Configuration: Intel Xeon Platinum 8180 @ 2.5GHz, 56 physical cores, 6x 16GB 2667 DDR4, 6 memory channels, SSD: Intel P4800x 375GB x24 drives, BIOS: HT disabled, p-states enabled, turbo enabled, Ubuntu 16.04.1 LTS, 4.11.0 x86_64 kernel, 48 VMs, number of partitions: 2, VM config: 1 core, 1GB memory, VM OS: Fedora 25, blk-mq enabled, Software packages: QEMU-2.9, libvirt-3.0.0, SPDK (3bfecec994), I/O distribution: 10 vhost-cores for SPDK / kernel, remaining 46 cores for QEMU using cgroups, FIO-2.1.10 with SPDK plugin, io depth=1, 8, 32, numjobs=1, direct=1, block size 4k
29. For More Information on SPDK
• Visit SPDK.io for tutorials and links to github, mailing list, IRC channel and other resources
• Follow @SPDKProject on twitter for latest events, blogs and other SPDK community information and activities
33. Basic Architecture
(Diagram: each VM's virtqueues (VQ) connect to a vhost-scsi controller at /spdk/vhost.0, backed by a scsi dev → scsi lun → bdev → nvme driver → NVMe SSD queue pair (QP). vhost-scsi and bdev-nvme pollers are spread across the available logical cores (Logical Core 0, Logical Core 1); repeat for additional VMs.)
35. Blobstore Design – Design Goals
• Minimalistic, for targeted storage use cases like Logical Volumes and RocksDB
• Deliver only the basics to enable another class of application
• Design for fast storage media
36. Blobstore Design – High Level
• Application interacts with chunks of data called blobs: mutable arrays of pages of data, accessible via ID
• Asynchronous: no blocking, queuing or waiting
• Fully parallel: no locks in the I/O path
• Atomic metadata operations: depends on SSD atomicity (i.e. NVMe); 1+ 4KB metadata pages per blob
37. Logical Volumes
Blobstore plus:
• UUID xattr for lvolstore and lvols
• Friendly names: lvol name unique within its lvolstore; lvolstore name unique within the application
• Future: snapshots (requires blobstore support)
(Diagram: lvol bdevs stacked on an lvolstore, which is a blobstore on an nvme bdev backed by an NVMe SSD.)
38. Asynchronous Polling
Poller execution:
• A reactor on each core iterates through its pollers round-robin
• vhost-scsi poller: polls for new I/O requests and submits them to the NVMe SSD
• bdev-nvme poller: polls for I/O completions and completes them back to the guest VM
(Diagram: the basic architecture of slide 33, with alternating vhost-scsi and bdev-nvme pollers running on each logical core.)