A very good presentation introducing the NVM Express technology, which will surely be the interface of the (near) future for SSD "drives". Goodbye SAS and SATA, welcome PCI Express in servers (and client machines).
Linux is usually at the leading edge of implementing new storage standards, and NVMe over Fabrics is no different in this regard. This presentation gives an overview of the Linux NVMe over Fabrics implementation on the host and target sides, highlighting how early prototyping feedback influenced the design of the protocol. It also covers the lessons learned while developing NVMe over Fabrics and how they helped reshape parts of the Linux kernel to better support NVMe over Fabrics and other storage protocols.
This presentation was delivered at LinuxCon Japan 2016 by Christoph Hellwig
PCI Express* based Storage: Data Center NVM Express* Platform Topologies, by Odinot Stanislas
PCI Express is becoming more and more common in servers. Present for years as the bus for expansion cards, it is now appearing on server front panels to serve 2.5-inch flash drives (SFF-8639 connector) and in the form of cables called OCuLink.
Authors:
Michael Hall
Director of Technology Solutions Enabling, Data Center Group, Intel Corporation
Jonmichael Hands
Technical Program Manager, Non-Volatile Memory Solutions Group, Intel Corporation
LCU13: Deep Dive into ARM Trusted Firmware
Resource: LCU13
Name: Deep Dive into ARM Trusted Firmware
Date: 31-10-2013
Speaker: Dan Handley / Charles Garcia-Tobin
eMMC 5.0 is the latest generation of embedded NAND Flash IP. Arasan provides a complete solution including digital controllers for host and device, the mixed PHY I/O and pads, software drivers, hardware validation and support.
PCI Express (Peripheral Component Interconnect Express), abbreviated PCIe or PCI-E, is designed to replace the older PCI, PCI-X and AGP standards. We present a data communication system developed to transfer data between the host and peripheral devices via PCIe. PCIe is a serial expansion bus used for high-speed communication, and it currently represents the fastest (and most expensive) way to connect peripheral devices to a general-purpose CPU, providing the highest-bandwidth connection in the PC platform. In this paper we review the different types of bus architecture and describe how the PCIe architecture transfers data from the CPU to its destination.
DPDK (Data Plane Development Kit) Overview by Rami Rosen
* Background and short history
* Advantages and disadvantages
- Very High speed networking acceleration in L2
- How this acceleration is achieved (hugepages, optimizations)
- rte_kni (and KCP)
- VPP (and FD.io project), providing routing and switching.
- TLDK (Transport Layer Development Kit, TCP/UDP)
* Anatomy of a simple DPDK application.
* Development and governance model
* Testpmd: DPDK CLI tool
* DDP - Dynamic Device Profiles
Rami Rosen is a Linux Kernel expert, the author of "Linux Kernel Networking", Apress, 2014.
Rami has published two articles about DPDK in the past year:
"Network acceleration with DPDK"
https://lwn.net/Articles/725254/
"Userspace Networking with DPDK"
https://www.linuxjournal.com/content/userspace-networking-dpdk
Method of NUMA-Aware Resource Management for Kubernetes 5G NFV Cluster, by Byonggon Chun
Introduces the container runtime environment set up with Kubernetes and various CRI runtimes (Docker, containerd, CRI-O), the method of NUMA-aware resource management (CPU Manager, Topology Manager, etc.) for CNFs (Containerized Network Functions) within Kubernetes, and related issues.
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B..., by Odinot Stanislas
An excellent document that explains step by step how to install, monitor and, above all, correctly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key point: how do you analyze the I/O load of real applications? How many read and write IOPS, what block size and bandwidth, and above all what is the impact on SSD endurance and lifetime? A must-read, and a huge thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
44CON 2014 - Stupid PCIe Tricks, Joe Fitzpatrick (44CON)
Hardware hacks tend to focus on low-speed (JTAG, UART) and external (network, USB) interfaces, and PCI Express is typically neither. After a crash course in PCIe architecture, we'll demonstrate a handful of hacks showing how to pull PCIe outside of your system case and add PCIe slots to systems without them, including embedded platforms. We'll top it off with a demonstration of SLOTSCREAMER, an inexpensive device that's part of the NSA Playset, which we've configured to access memory and I/O, cross-platform and transparent to the OS - all by design with no 0-day needed. The open hardware and software framework that we will release will expand your Playset with the ability to tinker with DMA attacks to read memory, bypass software and hardware security measures, and directly attack other hardware devices in the system.
This course gets you started with writing device drivers in Linux by providing real hardware exposure. It equips you with real-time tools, debugging techniques and industry practices in a hands-on manner, using dedicated hardware from Emertxe's device-driver learning kit, with a special focus on character and USB device drivers.
"Session ID: BUD17-400
Session Name: Secure Data Path with OPTEE - BUD17-400
Speaker: Mark Gregotski
Track: LHG
★ Session Summary ★
LHG is using the ION-based secure memory allocator integrated with OPTEE as the basis for secure data path processing pipeline. LHG is following the W3C EME protocol and supporting Content Decryption Modules (CDMs) from Widevine and PlayReady.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/bud17/bud17-400/
Presentation: https://www.slideshare.net/linaroorg/bud17400-secure-data-path-with-optee
Video: https://youtu.be/6JdzsWZq4Ls
---------------------------------------------------
★ Event Details ★
Linaro Connect Budapest 2017 (BUD17)
6-10 March 2017
Corinthia Hotel, Budapest,
Erzsébet krt. 43-49,
1073 Hungary
---------------------------------------------------
Keyword: LHG, secure-data, OPTEE
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://twitter.com/linaroorg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
Tutorial: Using GoBGP as an IXP connecting router, by Shu Sugimoto
- Show you how GoBGP can be used as a software router in conjunction with quagga
- (Tutorial) Walk through the setup of IXP connecting router using GoBGP
Intel and DataStax: 3D XPoint and NVMe Technology Cassandra Storage Comparison, by DataStax Academy
Does your choice of storage really matter in a Cassandra deployment? Intel and DataStax engineers will discuss the results of recent performance testing on a variety of storage devices, including classic spinning media, SATA SSDs and NVMe SSDs. The session will include an overview of the various storage types and technology trends. Next we will discuss our recent testing and look at some preliminary results. Even if you are only at the early stages of considering a Cassandra deployment, fully understanding the impact storage choices have on your results can be critical to your project's success.
NVMe, PCIe and TLC V-NAND: It's about Time, by Dell World
With an explosion in data and the relentless growth in demand for information, identifying a much more efficient means of storage has become extremely important. In this session, we will cover the key drivers behind the need for faster and more efficient storage. NVMe, a standardized protocol for PCIe-based storage, is giving users the huge leap in bandwidth required for demanding applications. Samsung, who makes the fastest NVMe SSDs on the market, will cover the benefits enabled by such technology, in areas such as fraud prevention and surgical procedures.
The technology behind flash drives – NAND memory – will be spotlighted in this presentation. Memory manufacturers have improved NAND’s value by migrating from single-level-cell to multi-level-cell designs, but the most significant evolution will be a marriage of triple-level-cell and V-NAND flash manufacturing technologies. Samsung will also provide an overview of the prospects for TLC V-NAND with mobile device manufacturers, while examining the strong potential for a much wider TLC V-NAND market in data centers.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C..., by Odinot Stanislas
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks in this presentation: sequential tests, random tests and, above all, a comparison of results before and after optimization. The configuration parameters and optimizations applied (large page numbers, Omap data on a separate disk, ...) bring at least a 2x performance gain.
The bottleneck in flash storage is often the interface. SAS/SATA interfaces were designed specifically for hard disk drives, not for flash media. For example, flash storage can support many more simultaneous I/O operations. The resolution to the problem is to use a different interface, one with higher throughput that is more directly accessible from the CPU. Leveraging one of these interfaces and extracting optimal performance from the flash media means leaving the confines of the SCSI protocol with customized proprietary drivers. The result is complexity and slow innovation.
This IT Brand Pulse mini-report includes 2016 market leader data from the independent, non-sponsored survey covering six categories of brand leadership–Market, Price, Performance, Reliability, Service & Support and Innovation–for thirteen classes (plus two special achievement) of Flash Storage/NVMe.
Complete survey data for each product category is available. Please contact us at info@itbrandpulse.com for information and pricing.
Read the 2016 Flash Storage-NVMe Brand Leader Survey Press Release: http://www.itbrandpulse.com/press-release/it-pros-choose-2016-flash-storagenvme-brand-leaders/
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and ..., by Odinot Stanislas
From Intel's developer-oriented conference (IDF), here is a rather nice presentation on so-called "scale-out" storage, with an overview of the various solution providers (slide 6), covering file, block and object storage, followed by benchmarks of some of them, including Swift, Ceph and GlusterFS.
Keynote presentation from Flash Memory Summit 2016 by Dr. Siva Sivaram.
Learn his perspective on opportunities and challenges in developing a memory cell solution for the Storage Class Memory market, and lessons learned from 3D NAND.
PCIe Gen 3.0 Presentation @ 4th FPGA Camp, by FPGA Central
PCIe Gen3 presentation by PLDA at 4th FPGA Camp in Santa Clara, CA. For more details visit http://www.fpgacentral.com/fpgacamp or http://www.fpgacentral.com
High-Density Top-Loading Storage for Cloud Scale Applications, by Rebekah Rodriguez
In this webinar, we will discuss how high-capacity Top-Loading Storage systems are being used for enterprise and cloud scale applications and will identify the key features of the modular architecture for use in today’s software defined storage (SDS) environments. - https://www.brighttalk.com/webcast/17278/527798
Heterogeneous Computing: The Future of Systems, by Anand Haridass
Charts from NITK-IBM Computer Systems Research Group (NCSRG)
- Dennard Scaling,Moore's Law, OpenPOWER, Storage Class Memory, FPGA, GPU, CAPI, OpenCAPI, nVidia nvlink, Google Microsoft Heterogeneous system usage
Stephen Bates, Technical Director in the Chief Strategy and Technology Office of PMC-Sierra presented a poster on recent developments in Donard projects at the recent UCSD Non-Volatile Memories Workshop 2015 March 1-3.
Watch the replay: http://bit.ly/2wbz3Cd
The fifth generation of Cisco Unified Computing System (UCS) offers faster CPUs, and more cores, GPUs, memory and modularity than any other UCS server. We introduced these new M5 Series Servers in a recent episode of TechWiseTV.
Explore all the customer-inspired innovations that can help you scale up or out, and deliver greater insights with data-intensive analytics where you need them most.
Resources:
Watch the related TechWiseTV episode: http://bit.ly/2wQ6fMp
With the HPE ProLiant DL325 Gen10 server, Hewlett Packard Enterprise is extending the world's most secure industry-standard server families. This secure and versatile single-socket (1P) 1U AMD EPYC™ based platform offers an exceptional balance of processor, memory and I/O for virtualization and data-intensive workloads. With up to 32 cores, up to 16 DIMMs, 2 TB of memory capacity and support for up to 10 NVMe drives, this server delivers 2P performance with 1P economics. This datasheet includes the features, port description, configuration guide and specifications of this series.
Delivering Supermicro Software Defined Storage Solutions with OSNexus QuantaStor, by Rebekah Rodriguez
With security and cost concerns at an all-time high, organizations are searching for solutions to lower labor costs and keep their data safe from threats. Supermicro and OSNexus partner to bring a solution to you with a single point of management for file, block, and object storage, along with cutting-edge security features and certifications for regulated industries.
Yesterday's thinking may still believe NVMe (NVM Express) is in transition to a production ready solution. In this session, we will discuss how the evolution of NVMe is ready for production, the history and evolution of NVMe and the Linux stack to address where NVMe has progressed today to become the low latency, highly reliable database key value store mechanism that will drive the future of cloud expansion. Examples of protocol efficiencies and types of storage engines that are optimizing for NVMe will be discussed. Please join us for an exciting session where in-memory computing and persistence have evolved.
Optimized HPC/AI cloud with OpenStack acceleration service and composable har..., by Shuquan Huang
Today data scientists are turning to the cloud for AI and HPC workloads. However, AI/HPC applications require high computational throughput, where generic cloud resources do not suffice. There is a strong demand for OpenStack to support hardware-accelerated devices in a dynamic model.
In this session, we will introduce OpenStack Acceleration Service – Cyborg, which provides a management framework for accelerator devices (e.g. FPGA, GPU, NVMe SSD). We will also discuss Rack Scale Design (RSD) technology and explain how physical hardware resources can be dynamically aggregated to meet the AI/HPC requirements. The ability to “compose on the fly” with workload-optimized hardware and accelerator devices through an API allow data center managers to manage these resources in an efficient automated manner.
We will also introduce an enhanced telemetry solution with Gnocchi, bandwidth discovery and smart scheduling, leveraging RSD technology, for efficient workload management in the HPC/AI cloud.
The Power of HPC with Next Generation Supermicro Systems, by Rebekah Rodriguez
Witness the astonishing improvement in performance and security with the next new generation of Supermicro platforms. New Supermicro systems deliver unprecedented levels of compute power for the most challenging high-performance workloads. In this Supercomputing roundtable, learn how the new Supermicro products provide a differentiated advantage for early adopters of the most advanced accelerated computing infrastructure in the world.
Similar to Moving to PCI Express based SSD with NVM Express (20)
Using a Field Programmable Gate Array to Accelerate Application Performance, by Odinot Stanislas
Intel is particularly interested in FPGAs and in the potential they bring when ISVs and developers have very specific needs in genomics, image processing, database processing and even in the cloud. In this document you will have the opportunity to learn more about our strategy and about a research program launched by Intel and Altera involving Xeon E5 processors with... an FPGA inside.
Author(s):
P. K. Gupta, Director of Cloud Platform Technology, Intel Corporation
SDN and NFV are very fashionable at the moment: moving from physical appliances to largely software-based network equipment should give companies (and telcos in particular) great flexibility and agility. Nevertheless, chaining network services is still a very complex exercise, and this document explains what can already be done on OpenStack by combining, for example, a load balancer (BigIP), a firewall (BigIP), a virtual WAN network (RiverBed) or a virtual router (Brocade).
SNIA: Swift Object Storage adding EC (Erasure Code), by Odinot Stanislas
In-depth presentation on EC integration in Swift object storage. Content delivered by Paul Luse, Sr. Staff Engineer @ Intel, and Kevin Greenan, Staff Software Engineer @ Box, during the fall SNIA event.
Bare-metal, Docker Containers, and Virtualization: The Growing Choices for Cl..., by Odinot Stanislas
A very friendly introduction to Cloud environments, with a particular focus on virtualization and containers (Docker).
Author: Nicholas Weaver – Principal Architect, Intel Corporation
Software Defined Storage - Open Framework and Intel® Architecture Technologies, by Odinot Stanislas
This presentation covers in some detail the notion of an "SDS Controller", which is, in short, the software layer intended to eventually control all storage technologies (SAN, NAS, distributed disk storage, flash...) and to expose them to Cloud orchestrators and therefore to applications. Lots of good content.
Virtualizing the Network to enable a Software Defined Infrastructure (SDI), by Odinot Stanislas
A very interesting presentation on network virtualization, with detailed explanations of VLAN and VXLAN, but also NVGRE and especially GENEVE (Generic Network Virtualization Encapsulation), supported for the first time on Intel's latest 40 GbE adapter (XL710).
Intel is developing an "ONP" (Open Network Platform), in other words an open switch providing the basic functions needed for SDN. If you want to know the hardware used, the software stacks involved and the compatibility with the main orchestrators, this document is for you.
Intel and Siveo wrote this content, which explains how their Cloud orchestrator works. You will learn how to configure it, benefit from its automatic workload placement feature and manage multiple hypervisors transparently.
Intel IT Open Cloud - What's under the Hood and How do we Drive it?, by Odinot Stanislas
Intel's IT organization is reinventing itself and setting out to act like a "Cloud Service Provider". The transformation is under way, with a federated, interoperable and open Cloud on the agenda, as well as a maturity framework, DevOps and some risk-taking. In short, really interesting.
Configuration and Deployment Guide For Memcached on Intel® Architecture, by Odinot Stanislas
This Configuration and Deployment Guide explores designing and building a Memcached infrastructure that is scalable, reliable, manageable and secure. The guide uses experience with real-world deployments as well as data from benchmark tests. Configuration guidelines on clusters of Intel® Xeon®- and Atom™-based servers take into account differing business scenarios and inform the various tradeoffs to accommodate different Service Level Agreement (SLA) requirements and Total Cost of Ownership (TCO) objectives.
In this document you will find the latest improvements made to OpenStack and how certain Intel technologies boost the performance and security of the Cloud environment. A few examples:
How to create pools of secured VMs with geo-tagging (Intel technologies present in HP, DELL and IBM servers... + Folsom, Nova, Horizon, Open Attestation)
How to strengthen the security of OpenStack's new key management module (Intel technologies + Barbican)
How to benchmark Swift object storage with COSBench (which now supports Ceph, S3 and Amplidata)
Authors:
Girish Gopal - Strategic Planning, Intel Corporation
Malini Bhandaru - Security Architect, Intel Corporation
Big Data and Intel® Intelligent Systems Solution for Intelligent Transportation, by Odinot Stanislas
An explanation of how the power of Hadoop can be used to analyze video from cameras deployed on road networks, with the goal of identifying traffic conditions, the types of vehicles on the move and even license-plate spoofing.
Big Data Beyond Hadoop*: Research Directions for the Future, by Odinot Stanislas
Michael Wrinn
Research Program Director, University Research Office,
Intel Corporation
Jason Dai
Engineering Director and Principal Engineer,
Intel Corporation
9. 9
Why PCI Express* for SSDs?
Added PCI Express* SSD Benefits
• Even better performance
• Increased Data Center CPU I/O: 40 PCI Express lanes per CPU
• Even lower latency
• No external IOC means lower power (~10W) & cost (~$15)
10. 10
Agenda
• Why PCI Express* (PCIe) for SSDs?
– PCIe SSD in Client
– PCIe SSD in Data Center
• Why NVM Express (NVMe) for PCIe SSDs?
– Overview NVMe
– Driver ecosystem update
– NVMe technology developments
• Deploying PCIe SSD with NVMe
11. 11
Client PCI Express* SSD Considerations
• Form Factors?
• Attach to CPU or PCH?
• PCI Express* x2 or x4?
• Path to NVM Express?
• What about battery life?
• Thermal concerns?
Trending well, but hurdles remain
12. 12
Card-based PCI Express* SSD Options

                      M.2 Socket 2   M.2 Socket 3
  SATA                Yes, Shared    Yes, Shared
  PCIe x2
  PCIe x4             No             Yes
  Comms Support?      Yes            No
  Ref Clock           Required       Required
  Max "Up to" Perf.   2 GB/s         4 GB/s
  Bottom Line         Flexibility    Performance

M.2 defines: single or double sided SSDs in 5 lengths, and 2 SSD host sockets (Host Socket 2 and Host Socket 3; device w/ B&M slots)
22x80mm DS recommended for capacity
22x42mm SS recommended for size & weight
13. 13
Card-based PCI Express* SSD Options (same comparison table as slide 12)
Industry alignment for M.2 length will lower costs and accelerate transitions
14. 14
PCI Express* SSD Connector Options

                      SATA Express*        SFF-8639
  SATA*               Yes                  Yes
  PCIe                x2                   x2 or x4
  Host Mux            Yes                  No
  Ref Clock           Optional             Required
  EMI                 SRIS                 Shielding
  Height              7mm                  15mm
  Max "Up to" Perf.   2 GB/s               4 GB/s
  Bottom Line         Flexibility & Cost   Performance

SATA Express*: flexibility for HDD
SFF-8639: Best performance
Separate Refclk Independent SSC (SRIS) removes clocks from cables, reducing emissions & costs of shielding
Alignments on connectors for PCI Express* SSDs will lower costs and accelerate transitions
15. 15
PCI Express* SSD Connector Options (same comparison table as slide 14)
Use an M.2 interface without cables for x4 PCI Express* performance, and lower cost
18. 18
Many Options to Connect PCI Express* SSDs
• SSD can attach to Processor (Gen 3.0) or Chipset (Gen 2.0 today, Gen 3.0 in future)
• SSD uses PCIe x1, x2 or x4
• Driver interface can be AHCI or NVM Express
19. 19
Many Options to Connect PCI Express* SSDs (same content as slide 18)
Chipset attached PCI Express* Gen 2.0 x2 SSDs provide ~2x SATA 6Gbps performance today
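As a quick back-of-the-envelope check of the ~2x claim (assuming the usual 8b/10b encoding overhead on both links), per direction:
\[
\text{PCIe Gen 2.0 x2} \approx 2 \times 5\,\text{GT/s} \times \tfrac{8}{10} \times \tfrac{1}{8\ \text{bits/byte}} = 1\,\text{GB/s},
\qquad
\text{SATA 6\,Gb/s} \approx 6\,\text{Gb/s} \times \tfrac{8}{10} \times \tfrac{1}{8\ \text{bits/byte}} \approx 0.6\,\text{GB/s},
\]
so a chipset-attached Gen 2.0 x2 link offers roughly 1.7x, i.e. on the order of 2x, the SATA 6Gbps ceiling.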
20. 20
Many Options to Connect PCI Express* SSDs (same content as slide 18)
PCI Express* Gen 3.0, x4 SSDs with NVM Express provide even better SSD performance tomorrow
21. 21
Intel® Rapid Storage Technology 13.x
Intel® RST driver support for PCI Express Storage coming in 2014
PCI Express* Storage + Intel® RST driver delivers power, performance and responsiveness across innovative form-factors in 2014 platforms (Detachables, Convertibles, All-in-Ones; Mainstream & Performance)
Intel® Rapid Storage Technology (Intel® RST)
22. 22
Client SATA* vs. PCI Express* SSD Power Management

  Activity                 Device state     SATA/AHCI state  SATA I/O ready  Power example  PCIe link state  Time to register read  PCIe I/O ready
  Active                   D0/D1/D2         Active           NA              ~500mW         L0               NA                      ~60 µs
  Light Active                              Partial          10 µs           ~450mW         L1.2 (~5mW)      < 150 µs                ~5 ms
  Idle                                      Slumber          10 ms           ~350mW
  Pervasive Idle/Lid down  D3_hot           DevSlp           50 - 200 ms     ~15mW                           < 500 µs                ~100 ms
                           D3_cold / RTD3   off              < 1 s           0W             L3               ~100 ms                 ~300 ms

Autonomous transition
D3_cold/off, L1.2, autonomous transitions & two-step resume improve PCI Express* SSD battery life
23. 23
Client PCI Express* (PCIe) SSD Peak Power Challenges
• Max Power: 100% Sequential Writes
• SATA*: ~3.5W @ ~400MB/s
• x2 PCIe 2.0: up to 2x (7W)
• x4 PCIe 3.0: up to ~15W (2)
[Chart: SATA 128K sequential write power, compressible data, QD=32 (1); power in Watts for drives 1-5 and average, with max markers]
[Diagram: M.2 SSD on motherboard with thermal interface material. Source: Intel]
Attention needed for power supply, thermals, and benchmarking
1. Data collected using Agilent* DC Power Analyzer N6705B. System configuration: Intel® Core™ i7-3960X (15MB L3 Cache, 3.3GHz) on Intel Desktop Board DX79SI, AMD* Radeon HD 6990 and driver 8.881.0.0, BIOS SIX791OJ.86A.0193.2011.0809.1137, Intel INF 9.1.2.1007, Memory 16GB (4X4GB) Triple-channel Samsung DDR3-1600, Microsoft* Windows* 7 MSAHCI storage driver, Microsoft Windows 7 Ultimate 64-bit Build 7600 with SP1, Various SSDs. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. For more information go to http://www.intel.com/performance
2. M.2 Socket 3 has nine 3.3V supply pins, each capable of 0.5A, for a total power capability of 14.85W
24. 24
Client PCI Express* SSD Accelerators
• The client ecosystem is ready: Implement PCI Express* SSDs now!
• Use 42mm & 80mm length M.2 for client PCIe SSD
• Implement L1.2 and extend RTD3 software support for optimal battery life
• Use careful power supply & thermal design
• High performance desktops and workstations can consider SFF-8639 data center SSDs for PCI Express* x4 performance today
Drive PCI Express* client adoption with specification alignment and careful design
25. 25
Agenda
• Why PCI Express* (PCIe) for SSDs?
– PCIe SSD in Client
– PCIe SSD in Data Center
• Why NVM Express (NVMe) for PCIe SSDs?
– Overview NVMe
– Driver ecosystem update
– NVMe technology developments
• Deploying PCIe SSD with NVMe
26. 26
2.5” Enterprise SFF-8639 PCI Express* SSDs
The path to mainstream: innovators begin shipping 2.5” enterprise PCI Express* SSDs!
Image sources: Samsung*, Micron*, and Dell*
27. 27
Datacenter PCI Express* SSD Considerations
• Form Factor?
• Implementation options?
• Hot plug or remove?
• Traditional RAID?
• Thermal/peak power?
• Management?
Developments are on the way
28. 28
PCI Express* Enterprise SSD Form Factor
• SFF-8639 supports 4 pluggable device types
• Host slots can be designed to accept more than one type of device
• Use PRSNT#, IfDet#, and DualPortEn# pins for device Presence Detect and device type decoding
SFF-8639 enables multi-capable hosts
29. 29
SFF-8639 Connection Topologies
• Interconnect standards currently in process
• 2 & 3 connector designs
• "beyond the scope of this specification" is a common phrase for standards currently in development
Meeting PCI Express 3.0* jitter budgets for 3 connector designs is non-trivial. Consider active signal conditioning to accelerate adoption.
Source: "PCI Express SFF-8639 Module Specification", Rev. 0.3
30. 30
Solution Example – 5 Connectors
PCI Express* (PCIe) signal retimers & switches are available from multiple sources
[Diagram: Dell* Poweredge* R720* PCIe drive interconnect, showing 3, 4 and 5 connector paths through a retimer or switch]
Active signal conditioning enables SFF-8639 solutions with more connectors
Contact PLX* or IDT* for more information on retimers or switches
31. 31
Hot-Plug Use Cases
• Hot Add & Remove are software managed events
• During boot, the system must prepare for hot-plug:
– Configure PCI Express* Slot Capability registers
– Enable and register for hot plug events to higher level storage software (e.g., RAID or tiering software)
– Pre-allocate slot resources (Bus IDs, interrupts, memory regions) using ACPI* tables
Existing BIOS and Windows*/Linux* OS are prepared to support PCI Express* Hot-Plug today
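To make the first preparation step concrete, here is a small hypothetical user-space sketch (not from the slides) that reads a port's PCI Express Slot Capabilities register via sysfs and reports whether hot-plug support is advertised. The device path is an assumption for illustration; reading config space beyond the first 64 bytes typically requires root.

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t cfg[4096];
static size_t  cfg_len;

static uint32_t rd32(size_t off)
{
    return cfg[off] | cfg[off + 1] << 8 | cfg[off + 2] << 16 |
           (uint32_t)cfg[off + 3] << 24;
}

int main(int argc, char **argv)
{
    /* Hypothetical root-port BDF; pass your own path as argv[1]. */
    const char *path = argc > 1 ? argv[1]
                                : "/sys/bus/pci/devices/0000:00:1c.0/config";
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }
    cfg_len = fread(cfg, 1, sizeof(cfg), f);
    fclose(f);

    /* Walk the capability list; the first capability pointer is at 0x34. */
    uint8_t off = cfg[0x34];
    for (int i = 0; i < 48 && off; i++) {
        uint8_t id = cfg[off], next = cfg[off + 1];
        if (id == 0x10 && off + 0x18 <= cfg_len) {      /* PCI Express capability  */
            uint32_t slotcap = rd32(off + 0x14);        /* Slot Capabilities       */
            printf("Hot-Plug Capable : %s\n", slotcap & (1u << 6) ? "yes" : "no");
            printf("Hot-Plug Surprise: %s\n", slotcap & (1u << 5) ? "yes" : "no");
            return 0;
        }
        off = next;
    }
    fprintf(stderr, "No PCI Express capability found (try running as root)\n");
    return 1;
}
```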
32. 32
Surprise Hot-Remove
• Random device failure or operator error can result in surprise removal during I/O
• The storage controller driver and the software stack are required to be robust for such cases
• The storage controller driver must check for Master Abort
– On all reads to the device, the driver checks the register for FFFF_FFFFh
– If the data is FFFF_FFFFh, the driver reads another register expected to have a value that includes zeroes to verify the device is still present
• The time order of removal notification is unknown (e.g. storage controller driver via Master Abort, PCI bus driver via Presence Change interrupt, or RAID software may signal removal first)
Surprise Hot-Remove requires careful software design
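A minimal sketch of the all-ones check described on this slide, assuming a hypothetical controller whose memory-mapped register block contains a register (called REG_VERSION here) that can never legitimately read back as all ones. The offsets and names are illustrative only, not taken from a real driver.

```c
#include <stdbool.h>
#include <stdint.h>

#define REG_STATUS   0x1C   /* hypothetical register offsets                  */
#define REG_VERSION  0x08   /* register guaranteed to contain some zero bits  */

static inline uint32_t mmio_read32(volatile void *regs, uint32_t off)
{
    return *(volatile uint32_t *)((volatile uint8_t *)regs + off);
}

/* Returns true if the value is valid, false if the device appears removed. */
bool read_reg_checked(volatile void *regs, uint32_t off, uint32_t *val)
{
    *val = mmio_read32(regs, off);
    if (*val != 0xFFFFFFFFu)
        return true;                      /* normal completion                */

    /* All-ones may be a Master Abort: confirm against a register that can
     * never legitimately be all ones. If it also reads back all ones, the
     * SSD has been surprise-removed and the driver must start teardown and
     * notify higher-level storage software. */
    return mmio_read32(regs, REG_VERSION) != 0xFFFFFFFFu;
}
```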
33. 33
RAID for PCI Express* SSDs?
• Software RAID is a hardware-redundant solution to enable Highly Available (HA) systems today with PCI Express* (PCIe) SSDs
• Multiple copies of application images (redundant resources)
• Open cloud infrastructure that supports data redundancy with software implementations, such as Ceph* object storage
[Diagram: storage pool with data striped across rows and replicated]
Hardware RAID for PCIe SSD is under development
34. 34
Data Center PCI Express* (PCIe) SSD Peak Power Challenges
• Max Power: 100% Sequential Writes
• Larger capacities have high concurrency and consume the most power (up to 25W! (2))
• Power varies >40% depending on capacity and workload
• Consider UL touch safety standards when planning airflow designs or slot power limits (3)
[Chart: modeled PCI Express* SSD power (W) vs. capacity (large/small) for 100% sequential write, 50/50 and 70/30 sequential read/write, and 100% sequential read workloads (1). Source: Intel]
Attention needed for power supply, thermals, and SAFETY
1. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. For more information go to http://www.intel.com/performance
2. The PCI Express* "Enterprise SSD Form Factor" specification requires 2.5" SSD maximum continuous power of <25W
3. See PCI Express* Base Specification, Revision 3.0, Section 6.9 for more details on Slot Power Limit Control
35. 35
PCI Express* SSDs Enclosure Management
• The SSD Form Factor Specification (www.ssdformfactor.org) defines hot plug indicator uses and Out-of-Band management
• The PCI Express* Base Specification Rev. 3.0 defines enclosure indicators and registers intended for Hot-Plug management support (registers: Device Capabilities, Slot Capabilities, Slot Control, Slot Status)
• The SFF-8485 standard defines the SGPIO enclosure management interface
Standardize PCI Express* SSD enclosure management
36. 36
Data Center PCI Express* (PCIe) SSD Accelerators
• The data center ecosystem is capable: Implement PCI Express* SSDs now!
• Proven system implementations of design-in 2.5" PCIe SSDs
• Understand the Hot-Plug capabilities of your device, system and OS
• Design thermal solutions with safety in mind
• Collaborate on PCI Express SSD enclosure management standards
Drive PCI Express* data center adoption through education, collaboration, and careful software design
37. 37
Agenda
• Why PCI Express* (PCIe) for SSDs?
– PCIe SSD in Client
– PCIe SSD in Data Center
• Why NVM Express (NVMe) for PCIe SSDs?
– Overview NVMe
– Driver ecosystem update
– NVMe technology developments
• Deploying PCIe SSD with NVMe
38. 38
PCI Express* for Data Center/Enterprise SSDs
• PCI Express* (PCIe) is a great interface for SSDs
– Stunning performance: 1 GB/s per lane (PCIe Gen3 x1)
– With PCIe scalability: 8 GB/s per device (PCIe Gen3 x8) or more
– Lower latency: Platform+Adapter, 10 µsec down to 3 µsec
– Lower power: no external SAS IOC saves 7-10 W
– Lower cost: no external SAS IOC saves ~$15
– PCIe lanes off the CPU: 40 Gen3 (80 in dual socket)
• HOWEVER, there is NO standard driver
[Logos: Fusion-io*, Micron*, LSI*, Virident*, Marvell*, Intel, OCZ*]
PCIe SSDs are emerging in the Data Center/Enterprise, co-existing with SAS & SATA depending on application
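For reference, the 1 GB/s per lane figure follows from the Gen3 signalling rate and its 128b/130b encoding:
\[
8\,\text{GT/s} \times \tfrac{128}{130} \times \tfrac{1}{8\ \text{bits/byte}} \approx 0.98\,\text{GB/s per lane per direction},
\]
so an x8 Gen3 device approaches 8 GB/s in each direction.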
39. 39
Next Generation NVM Technology
• Phase Change Memory: energy (heat) converts material between crystalline (conductive) and amorphous (resistive) phases
• Magnetic Tunnel Junction (MTJ): switching of magnetic resistive layer by spin-polarized electrons
• Electrochemical Cells (ECM): formation / dissolution of "nano-bridge" by electrochemistry
• Binary Oxide Filament Cells: reversible filament formation by Oxidation-Reduction
• Interfacial Switching: oxygen vacancy drift diffusion induced barrier modulation
[Diagram: scalable resistive memory element; Resistive RAM NVM options; cross point array in backend layers (~4l2 cell) with wordlines, memory element and selector device]
Many candidate next generation NVM technologies. Offer ~1000x speed-up over NAND.
40. 40
Fully Exploiting Next Generation NVM
• With Next Generation NVM, the NVM is no longer the bottleneck
– Need optimized platform storage interconnect
– Need optimized software storage access methods
NVM Express is the interface architected for NAND today and next generation NVM
41. 41
Agenda
• Why PCI Express* (PCIe) for SSDs?
– PCIe SSD in Client
– PCIe SSD in Data Center
• Why NVM Express (NVMe) for PCIe SSDs?
– Overview NVMe
– Driver ecosystem update
– NVMe technology developments
• Deploying PCIe SSD with NVMe
42. 42
Technical Basics
• All parameters for 4KB command in single 64B command
• Supports deep queues (64K commands per queue, up to 64K queues)
• Supports MSI-X and interrupt steering
• Streamlined & simple command set (13 required commands)
• Optional features to address target segment (Client, Enterprise, etc.)
– Enterprise: End-to-end data protection, reservations, etc.
– Client: Autonomous power state transitions, etc.
• Designed to scale for next generation NVM, agnostic to NVM type used
http://www.nvmexpress.org/
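As a rough illustration of the "single 64B command" bullet, the sketch below lays out a 64-byte submission queue entry in C. The field grouping follows the NVMe 1.x layout (command dword 0, NSID, metadata and data pointers, command-specific dwords); consult the specification for the exact bit fields inside each dword.

```c
#include <stdint.h>

/* 64-byte NVMe submission queue entry (field grouping per the NVMe spec). */
struct nvme_sq_entry {
    uint32_t cdw0;   /* opcode, fused bits, PRP/SGL select, command identifier */
    uint32_t nsid;   /* namespace identifier                                   */
    uint64_t rsvd;   /* command dwords 2-3 (reserved for most commands)        */
    uint64_t mptr;   /* metadata pointer                                       */
    uint64_t prp1;   /* data pointer: PRP entry 1 (or SGL, optional in 1.1)    */
    uint64_t prp2;   /* data pointer: PRP entry 2                              */
    uint32_t cdw10;  /* command-specific dwords 10-15, e.g. starting LBA and   */
    uint32_t cdw11;  /* block count for Read/Write                             */
    uint32_t cdw12;
    uint32_t cdw13;
    uint32_t cdw14;
    uint32_t cdw15;
};

_Static_assert(sizeof(struct nvme_sq_entry) == 64, "SQE must be 64 bytes");
```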
43. 43
Queuing Interface
Command Submission & Processing
[Diagram: host memory holds the Submission Queue and Completion Queue; the NVMe Controller exposes the Submission Queue Tail Doorbell and Completion Queue Head Doorbell, with head/tail pointers on each queue]
Command Submission
1. Host writes command to Submission Queue
2. Host writes updated Submission Queue tail pointer to doorbell
Command Processing
3. Controller fetches command
4. Controller processes command
44. 44
Queuing Interface
Command Completion
[Diagram: same queue pair as the previous slide]
Command Completion
5. Controller writes completion to Completion Queue
6. Controller generates MSI-X interrupt
7. Host processes completion
8. Host writes updated Completion Queue head pointer to doorbell
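A host-side sketch of the eight steps on the two slides above, for one illustrative queue pair. The structure and register pointers are simplified assumptions (doorbell stride, the completion entry's phase tag and interrupt wiring are omitted); this is not a working driver.

```c
#include <stdint.h>
#include <string.h>

struct nvme_queue_pair {
    uint8_t  (*sq)[64];            /* submission queue: 64-byte entries in host memory */
    uint8_t  (*cq)[16];            /* completion queue: 16-byte entries in host memory */
    volatile uint32_t *sq_tail_db; /* submission queue tail doorbell (controller BAR)  */
    volatile uint32_t *cq_head_db; /* completion queue head doorbell (controller BAR)  */
    uint16_t sq_tail, cq_head, depth;
};

/* Steps 1-2: host writes the command, then rings the submission doorbell.
 * Steps 3-4 (fetch and process) happen inside the controller. */
void nvme_submit(struct nvme_queue_pair *q, const void *cmd64)
{
    memcpy(q->sq[q->sq_tail], cmd64, 64);               /* 1: write command to SQ       */
    q->sq_tail = (uint16_t)((q->sq_tail + 1) % q->depth);
    *q->sq_tail_db = q->sq_tail;                         /* 2: new tail -> tail doorbell */
}

/* Steps 5-8: the controller posts a completion entry (5) and raises an MSI-X
 * interrupt (6); the host consumes the entry (7) and rings the head doorbell (8). */
void nvme_reap_one(struct nvme_queue_pair *q)
{
    const uint8_t *cqe = q->cq[q->cq_head];              /* 7: process completion entry */
    (void)cqe;                                           /*    (status checking omitted) */
    q->cq_head = (uint16_t)((q->cq_head + 1) % q->depth);
    *q->cq_head_db = q->cq_head;                         /* 8: new head -> head doorbell */
}
```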
45. 45
Simple Command Set – Optimized for NVM
Admin Commands
Create I/O Submission Queue
Delete I/O Submission Queue
Create I/O Completion Queue
Delete I/O Completion Queue
Get Log Page
Identify
Abort
Set Features
Get Features
Asynchronous Event Request
Firmware Activate (optional)
Firmware Image Download (opt)
Format NVM (optional)
Security Send (optional)
Security Receive (optional)
NVM I/O Commands
Read
Write
Flush
Write Uncorrectable (optional)
Compare (optional)
Dataset Management (optional)
Write Zeros (optional)
Reservation Register (optional)
Reservation Report (optional)
Reservation Acquire (optional)
Reservation Release (optional)
Only 10 Admin and 3 I/O commands required
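For orientation, the three required NVM I/O commands above map to the following opcodes, placed in bits 7:0 of command dword 0 (values per the NVMe specification; opcodes of the optional commands are omitted here).

```c
/* Opcodes of the three mandatory NVM I/O commands (submission entry, cdw0 bits 7:0). */
enum nvme_nvm_opcode {
    NVME_CMD_FLUSH = 0x00,
    NVME_CMD_WRITE = 0x01,
    NVME_CMD_READ  = 0x02,
};
```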
46. 46
Agenda
• Why PCI Express* (PCIe) for SSDs?
– PCIe SSD in Client
– PCIe SSD in Data Center
• Why NVM Express (NVMe) for PCIe SSDs?
– Overview NVMe
– Driver ecosystem update
– NVMe technology developments
• Deploying PCIe SSD with NVMe
47. 47
Driver Development on Major OSes
• Windows*: Windows* 8.1 and Windows* Server 2012 R2 include a native driver; open source driver in collaboration with OFA
• Linux*: stable OS driver since Linux* kernel 3.10
• Unix: FreeBSD driver upstream
• Solaris*: Solaris driver will ship in S12
• VMware*: vmklinux driver certified release in 1H, 2014
• UEFI: open source driver available on SourceForge
Native OS drivers already available, with more coming!
48. 48
Windows* Open Source Driver Update
Release 1 (Q2 2012)
• 64-bit support on Windows* 7 and Windows Server 2008 R2
• Mandatory features
Release 1.1 (Q4 2012)
• Added 64-bit support on Windows 8
• Public IOCTLs and Windows 8 Storport updates
Release 1.2 (Aug 2013)
• Added 64-bit support on Windows Server 2012
• Signed executable drivers
Release 1.3 (March 2014)
• Hibernation on boot drive
• NUMA group support in core enumeration
Release 1.4 (Oct 2014)
• WHQL certification
• Drive Trace feature, WVI command processing
• Migrate to VS2013, WDK8.1
Four major open source releases since 2012.
Contributors include Huawei*, PMC-Sierra*, Intel, LSI* & SanDisk*
https://www.openfabrics.org/resources/developer-tools/nvme-windows-development.html
49. 49
Linux* Driver Update
Recent Features
• Stable since Linux* 3.10, latest driver in 3.14
• Surprise hot plug/remove
• Dynamic partitioning
• Deallocate (i.e., Trim support)
• 4KB sector support (in addition to 512B)
• MSI support (in addition to MSI-X)
• Disk I/O statistics
Linux OS distributors' support
• RHEL 6.5 and Ubuntu 13.10 have native drivers
• RHEL 7.0, Ubuntu 14.04 LTS and SLES 12 will have the latest native drivers
• SuSE is testing an external driver package for SLES11 SP3
Works in progress: power management, end-to-end data protection, sysfs manageability & NUMA
Device node: /dev/nvme0n1
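One way to see the Deallocate (Trim) support from user space is the generic block-layer discard ioctl, which the NVMe driver services with Dataset Management / Deallocate. This is a sketch only: the device path and the 1 MiB range are examples, and BLKDISCARD destroys data in that range, so run it only against a scratch device.

```c
#include <fcntl.h>
#include <linux/fs.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/nvme0n1", O_WRONLY);   /* example device; data-destructive! */
    if (fd < 0) { perror("open"); return 1; }

    uint64_t range[2] = { 0, 1 << 20 };        /* byte offset 0, length 1 MiB       */
    if (ioctl(fd, BLKDISCARD, &range) < 0)
        perror("BLKDISCARD");                  /* serviced via Dataset Management   */
    else
        puts("discard submitted");

    close(fd);
    return 0;
}
```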
50. 50
FreeBSD Driver Update
• NVM Express* (NVMe) support is upstream in the head and stable/9 branches
• FreeBSD 9.2, released in September, is the first official release with NVMe support
FreeBSD NVMe modules:
• nvme - core NVMe driver
• nvd - NVMe/block layer shim
• nvmecontrol - user space utility, including firmware update
51. 51
Solaris* Driver Update
• Current Status from Oracle* team
- Fully compliant with 1.0e spec
- Direct block interfaces bypassing complex SCSI code path
- NUMA optimized queue/interrupt allocation
- Support x86 and SPARC platform
- A command line tool to monitor and configure the controller
- Delivered to S12 and S11 Update 2
• Future Development Plans
- Boot & install on SPARC and X86
- Surprise removal support
- Shared hosts and multi-pathing
52. 52
VMware Driver Update
• Vmklinux based driver development is completed
– First release in mid-Oct, 2013
– Public release will be 1H, 2014
• A native VMware* NVMe driver is available for end
user evaluations
• VMware’s I/O Vendor Partner Program (IOVP) offers
members a comprehensive set of tools, resources
and processes needed to develop, certify and release
software modules, including device drivers and
utility libraries for VMware ESXi
53. 53
UEFI Driver Update
• The UEFI 2.4 specification available at www.UEFI.org contains
updates for NVM Express* (NVMe)
• An open source version of an NVMe driver for UEFI is available
at nvmexpress.org/resources
“AMI is working with vendors
of NVMe devices and plans for
full BIOS support of NVMe in
2014.”
Sandip Datta Roy
VP BIOS R&D, AMI
NVMe boot support with UEFI will start appearing in releases from independent BIOS vendors in 2014
54. 54
Agenda
• Why PCI Express* (PCIe) for SSDs?
– PCIe SSD in Client
– PCIe SSD in Data Center
• Why NVM Express (NVMe) for PCIe SSDs?
– Overview NVMe
– Driver ecosystem update
– NVMe technology developments
• Deploying PCIe SSD with NVMe
55. 55
NVM Express Organization
• NVMe Promoters (“Board of Directors”)
• Technical Workgroup: queueing interface, Admin command set, NVMe I/O command set, driver-based management. Current spec version: NVMe 1.1
• Management Interface Workgroup: in-band (PCIe) and out-of-band (SMBus) PCIe SSD management. First specification expected Q3 2014
Architected for Performance
59. 59
NVM Express 1.1 Overview
• The NVM Express 1.1 specification, published in October 2012, adds optional client and Enterprise features
Power Optimizations
• Autonomous Power State
Transitions
Command Enhancements
• Scatter Gather List support
• Active Namespace Reporting
• Persistent Features Across
Power States
• Write Zeroes Command
Multi-path Support
• Reservations
• Unique Identifier per Namespace
• Subsystem Reset
61. 61
Reservations
• In some multi-host environments, like Windows* clusters, reservations
may be used to coordinate host access
• NVMe 1.1 includes a simplified reservations mechanism that is
compatible with implementations that use SCSI reservations
• What is a reservation? Enables two or more hosts to coordinate
access to a shared namespace.
– A reservation may allow Host A and Host B access, but disallow Host C
[Diagram: an NVM Subsystem exposing a shared namespace (NSID 1) through four NVM Express controllers; Controllers 1 and 2 have Host ID = A, Controller 3 has Host ID = B, and Controller 4 has Host ID = C, connecting Hosts A, B and C to the same namespace.]
62. 62
Power Optimizations
• NVMe 1.1 added the Autonomous Power State Transition feature for
client power focused implementations
• Without software intervention, the NVMe controller transitions to a
lower power state after a certain idle period
– Idle period prior to transition programmed by software
Example power states:
Power State | Operational? | Max Power | Entrance Latency | Exit Latency
0           | Yes          | 4 W       | 10 µs            | 10 µs
1           | No           | 10 mW     | 10 ms            | 5 ms
2           | No           | 1 mW      | 15 ms            | 30 ms
Example transitions: Power State 0 → Power State 1 after 50 ms idle → Power State 2 after 500 ms idle
63. 63
Continuing to Advance NVM Express
• NVM Express continues to add features to meet the needs of
client and Enterprise market segments as they evolve
• The Workgroup is defining features for the next revision of the
specification, expected ~ middle of 2014
Features for Next Revision
Namespace Management
Management Interface
Live Firmware Update
Power Optimizations
Enhanced Status Reporting
Events for Namespace Changes
…
Get involved – join the NVMe Workgroup
nvmexpress.org
64. 64
Agenda
• Why PCI Express* (PCIe) for SSDs?
– PCIe SSD in Client
– PCIe SSD in Data Center
• Why NVM Express (NVMe) for PCIe SSDs?
– Overview NVMe
– Driver ecosystem update
– NVMe technology developments
• Deploying PCIe SSD with NVMe
65. 65
Considerations of PCI Express* SSD with
NVM Express, NVMe SSD
• NVMe driver assistant?
• S.M.A.R.T/Management?
• Performance scalability?
• PCIe SSD vs SATA SSDs?
• PCIe SSD grades?
• Software optimizations?
NVMe SSDs are on their way to the data center
66. 66
PCI Express* SSD vs Multi SATA* SSDs
SATA SSD advantages
• Mature hardware RAID/adapters for managing SSDs
• Mature technology/ecosystem for SSDs
• Cost & performance balance
Quick performance comparison
• Random write IOPS: 6x S3700 = one 1.6TB PCIe SSD (4 lanes, Gen3)
• Random read IOPS: ~8x S3700 = 1x PCIe SSD
Mixed use of PCIe and SATA SSDs
• A hot-pluggable 2.5” PCIe SSD has the same maintenance advantages as a SATA SSD
• TCO: balance of performance and cost
Performance of 6~8 Intel S3700 SSDs is close to 1x PCIe SSD
[Chart: 4K random workload IOPS at 100% read, 50% read and 0% read, comparing 6x 800GB Intel S3700 against 1x NVMe 1600GB SSD.]
Measurements made on a Hanlan Creek (Intel S5520HC) system with two Intel Xeon X5560 @ 2.93GHz and 12GB (per CPU) memory running RHEL6.4; Intel S3700 SATA Gen3 SSDs connected to an LSI* HBA 9211; NVMe SSD under development; data collected with the FIO* tool.
68. 68
Selections of PCI Express* SSD with NVM
Express, NVMe SSD
• High Endurance Technology (HET) PCIe SSD
Applications with intensive random write workloads, typically a high percentage of small-block random writes, such as critical databases and OLTP.
• Middle Tier PCIe SSD
Applications that need random write performance and endurance, but much less than a HET PCIe SSD; typical workloads are <70% random writes.
• Low cost PCIe SSD
Same read performance as the above, but about 1/10th of the HET write performance and endurance; for applications with highly read-intensive workloads, such as search engines.
Application determines cost and performance
75. 75
Optimizations of PCI Express* SSD with
NVM Express, NVMe SSD
NVMe administration
• Controller capability/identify
• NVMe features
• Asynchronous events
• NVMe logs
Optional I/O command
• Dataset Management (Trim)
NVMe I/O threaded structure
• Understand the number of logical CPU cores in your system
• Write multi-threaded application programs
• No need to handle rq_affinity
Write NVMe-friendly applications (a Get Features sketch follows below)
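One concrete way an application can learn about the controller before sizing its thread pool is to read the Number of Queues feature (FID 07h) with a Get Features admin command. The sketch below is an illustration under assumptions, not part of the original deck: /dev/nvme0 is an assumed device node, root privileges are needed, and the header location varies by kernel version.

```c
/* Minimal sketch: read the "Number of Queues" feature (FID 07h) with Get
 * Features so an application knows how many I/O queue pairs the controller
 * has allocated. Assumptions: /dev/nvme0, run as root. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>   /* older kernels: <linux/nvme.h> */

int main(void)
{
    struct nvme_admin_cmd cmd;

    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) { perror("open /dev/nvme0"); return 1; }

    memset(&cmd, 0, sizeof(cmd));
    cmd.opcode = 0x0a;          /* Get Features */
    cmd.cdw10  = 0x07;          /* FID 07h: Number of Queues */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        return 1;
    }

    /* Completion dword 0 (returned in cmd.result): bits 15:0 = submission
     * queues, bits 31:16 = completion queues allocated, both zero-based. */
    printf("I/O submission queues: %u, completion queues: %u\n",
           (cmd.result & 0xffff) + 1, (cmd.result >> 16) + 1);
    close(fd);
    return 0;
}
```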
76. 76
Optimizations of PCI Express* SSD with
NVM Express (cont.)
IOPS performance
• Choose a higher number of threads (< min(number of system CPU cores, SSD controller maximum allocated queues))
• Choose a low queue depth (QD) per thread (asynchronous I/O)
• Avoid using a single thread with a much higher QD, especially for small transfer blocks
• Example: for 4K random reads on one drive in an 8-core system, use 8 threads with QD = 16 per thread instead of a single thread with QD = 128
Latency
• Lower QD gives better latency
• For intensive random writes there is a sweet spot of threads & QD that balances performance and latency
• Example: for 4K random writes in an 8-core system with 8 threads, the sweet-spot QD is 4 to 6
Sequential vs random workloads
• Multi-threaded sequential workloads may become random workloads at the SSD side
Use multiple threads with a low queue depth (a threading sketch follows below)
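A minimal sketch of the multi-thread pattern above, under stated assumptions: the deck's numbers were measured with FIO using asynchronous I/O (libaio), whereas this pthread example only issues synchronous 4K O_DIRECT reads from several threads against an assumed /dev/nvme0n1, so it illustrates the threading structure rather than reproducing the benchmark. Build with `cc -O2 -pthread`.

```c
/* Sketch: spread 4K random reads across several threads. The device path,
 * thread count and I/O count are illustrative assumptions only. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>

#define NTHREADS        8       /* <= min(CPU cores, controller queue count) */
#define BLOCK           4096
#define IOS_PER_THREAD  10000

static void *reader(void *arg)
{
    long id = (long)arg;
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return NULL; }

    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK)) { close(fd); return NULL; }

    unsigned int seed = (unsigned int)id;
    off_t span = 1024LL * 1024 * 1024;      /* read within the first 1 GiB */
    for (int i = 0; i < IOS_PER_THREAD; i++) {
        off_t off = ((off_t)rand_r(&seed) % (span / BLOCK)) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK)
            break;                          /* stop on short read or error */
    }
    free(buf);
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, reader, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```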
77. 77
NVM Express (NVMe) Driver beyond NVMe
Specification
NVMe Linux driver is open source
[Diagram: LBA ranges striped across two controller cores; Core 0 handles LBA 0-255, LBA 512-767, ...; Core 1 handles LBA 256-511, LBA 768-1023, ...]
• Driver Assisted Striping
– Dual-core NVMe controller: each core maintains a separate NAND array and striped LBA ranges (like RAID 0)
– The driver can ensure all commands fall within one stripe, ensuring maximum performance (a boundary-check sketch follows below)
• Contribute to the NVMe driver
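The boundary check such driver-assisted striping relies on is simple arithmetic. The sketch below assumes a 128 KiB stripe (256 LBAs x 512 B, consistent with the diagram) purely for illustration; the presentation does not state the actual stripe size.

```c
/* Sketch: check whether an I/O stays inside a single stripe and therefore
 * lands on a single controller core. STRIPE_BYTES is an assumed value. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define STRIPE_BYTES (128u * 1024u)

/* Returns true when [offset, offset + len) does not cross a stripe boundary. */
static bool io_within_stripe(uint64_t offset, uint32_t len)
{
    return (offset % STRIPE_BYTES) + len <= STRIPE_BYTES;
}

int main(void)
{
    printf("%d\n", io_within_stripe(0, 4096));            /* 1: inside stripe 0 */
    printf("%d\n", io_within_stripe(126u * 1024, 4096));  /* 0: crosses a stripe */
    return 0;
}
```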
78. 78
S.M.A.R.T and Management
• Use PCIe in-band commands to get the SSD SMART log (NVMe log page): statistical data, status, warnings, temperature, endurance indicator (a sketch follows below)
• Use out-of-band SMBus to access the VPD EEPROM and vendor information
• Use the out-of-band SMBus temperature sensor for closed-loop thermal control (fan speed)
NVMe standardizes S.M.A.R.T. on PCIe SSDs
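A hedged sketch of the in-band path described above: a Get Log Page admin command for the SMART / Health Information log (LID 02h), decoding a few fields. /dev/nvme0 is an assumed device node, root privileges are needed, the header location varies by kernel version, and the field offsets follow the NVMe 1.1 SMART log layout.

```c
/* Sketch: fetch the SMART / Health Information log (LID 02h) in-band with a
 * Get Log Page admin command. Assumptions: /dev/nvme0, run as root. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>   /* older kernels: <linux/nvme.h> */

int main(void)
{
    unsigned char log[512];
    struct nvme_admin_cmd cmd;

    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) { perror("open /dev/nvme0"); return 1; }

    memset(&cmd, 0, sizeof(cmd));
    memset(log, 0, sizeof(log));
    cmd.opcode   = 0x02;                             /* Get Log Page */
    cmd.nsid     = 0xffffffff;                       /* controller-wide log */
    cmd.addr     = (unsigned long long)(uintptr_t)log;
    cmd.data_len = sizeof(log);
    /* cdw10: log identifier 02h, NUMD = 512/4 - 1 = 127 dwords (zero-based) */
    cmd.cdw10    = 0x02 | (127u << 16);

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        return 1;
    }

    /* Byte 0: critical warning; bytes 1-2: composite temperature (Kelvin);
     * byte 5: percentage used (endurance indicator). */
    uint16_t temp_k = (uint16_t)(log[1] | (log[2] << 8));
    printf("critical warning: 0x%02x\n", log[0]);
    printf("composite temperature: %d C\n", temp_k - 273);
    printf("percentage used: %u%%\n", log[5]);
    close(fd);
    return 0;
}
```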
79. 79
Scalability of Multi-PCI Express* SSDs with
NVM Express
Performance on 4 PCIe SSDs = Performance on 1 PCIe SSD X 4
Advantage of NVM Express threaded and MSI-X structure!
[Charts: 100% random read and 100% random write bandwidth (GB/s) at 4K, 8K, 16K and 64K transfer sizes for 1x, 2x and 4x NVMe 1600GB SSDs.]
Measurements made on Intel system with two Intel Xeon™ CPU E5-2680 v2@ 2.80GHz and 32GB Mem running RHEL6.5 O/S, NVMe SSD is
under development, data collected by FIO* tool, numJob=30, queue depth (QD)=4 (read), QD=1 (write), libaio.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance
tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions.
Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist
you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
80. 80
PCI Express* SSD with NVM Express (NVMe
SSD) deployments
Source: Geoffrey Moore, Crossing the Chasm
SSDs are a disruptive technology, approaching “The Chasm”
Adoption success relies on clear benefit, simplification, and ease of use
81. 81
Summary
• PCI Express* SSD enables lower latency and further
alleviates the IO bottleneck
• NVM Express is the interface architected for PCI
Express* SSD, NAND Flash of today and next
generation NVM of tomorrow
• Promote and adopt PCIe SSDs with NVMe as a mainstream technology, and get ready for the next generation of NVM
83. 83
Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the
future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,”
“intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should” and their variations identify forward-looking statements.
Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many
factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors could cause actual
results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the
important factors that could cause actual results to differ materially from the company’s expectations. Demand could be different from
Intel's expectations due to factors including changes in business and economic conditions; customer acceptance of Intel’s and
competitors’ products; supply constraints and other disruptions affecting customers; changes in customer order patterns including
order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial conditions poses a
risk that consumers and businesses may defer purchases in response to negative financial events, which could negatively affect
product demand and other related matters. Intel operates in intensely competitive industries that are characterized by a high
percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to
forecast. Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and
market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing
programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological
developments and to incorporate new features into its products. The gross margin percentage could vary significantly from
expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying
products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and
associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials
or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and
intangible assets. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in
countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters,
infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and
compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's
products and the level of revenue and profits. Intel’s results could be affected by the timing of closing of acquisitions and divestitures.
Intel's results could be affected by adverse effects associated with product defects and errata (deviations from published
specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and
other issues, such as the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include
monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business
practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual
property. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the
company’s most recent reports on Form 10-Q, Form 10-K and earnings release.
Rev. 1/16/14