Intel is taking a close look at FPGAs, and in particular at what they bring to ISVs and developers with very specific needs in genomics, image processing, database processing, and even in the cloud. In this document you will learn more about our strategy, and about a research program launched by Intel and Altera involving Xeon E5 processors equipped with... FPGAs.
Author(s):
P. K. Gupta, Director of Cloud Platform Technology, Intel Corporation
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B... — Odinot Stanislas
An excellent document that explains, step by step, how to install, monitor, and above all properly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key topic: how do you analyze the I/O load of real applications? How many read and write IOPS, at what block sizes and throughput, and above all, what is the impact on SSD endurance and lifetime? A must-read, and a huge thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
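The "measure real I/O on a real workload" question from the lab can be sketched outside the document with a small Linux helper (a hypothetical illustration, not taken from the lab guide) that diffs two `/proc/diskstats` snapshots taken around the workload:

```python
# Field layout of /proc/diskstats (Documentation/admin-guide/iostats.rst):
# after the device name: reads completed, reads merged, sectors read, ms reading,
# writes completed, writes merged, sectors written, ms writing, ...
SECTOR_BYTES = 512  # diskstats sector counts are always in 512-byte units

def parse_diskstats(text, device):
    """Return (reads, sectors_read, writes, sectors_written) for one device."""
    for line in text.splitlines():
        f = line.split()
        if len(f) >= 10 and f[2] == device:
            return int(f[3]), int(f[5]), int(f[7]), int(f[9])
    raise ValueError(f"device {device!r} not found")

def io_rates(snap0, snap1, interval_s):
    """Compute IOPS, throughput, and average I/O size between two snapshots."""
    dr, dsr = snap1[0] - snap0[0], snap1[1] - snap0[1]
    dw, dsw = snap1[2] - snap0[2], snap1[3] - snap0[3]
    return {
        "read_iops": dr / interval_s,
        "write_iops": dw / interval_s,
        "read_MBps": dsr * SECTOR_BYTES / interval_s / 1e6,
        "write_MBps": dsw * SECTOR_BYTES / interval_s / 1e6,
        "avg_read_kB": dsr * SECTOR_BYTES / dr / 1e3 if dr else 0.0,
        "avg_write_kB": dsw * SECTOR_BYTES / dw / 1e3 if dw else 0.0,
    }
```

On a live system, call `parse_diskstats(open("/proc/diskstats").read(), "nvme0n1")` before and after the workload; the derived average I/O size and write volume are exactly the inputs needed for SSD endurance estimates.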
The document describes how the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions and Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI) enabled in the 3rd Generation Intel® Xeon® Scalable processor are used to achieve 1 Tb/s of IPsec throughput.
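Whether a given host actually exposes these extensions can be checked from `/proc/cpuinfo`; a minimal sketch (the flag names `aes`, `avx512f`, and `vaes` are the ones the Linux kernel reports for x86 CPU features):

```python
def crypto_flags(cpuinfo_text):
    """Report which ISA extensions relevant to high-throughput IPsec are present.

    Flag names as the Linux kernel exposes them in /proc/cpuinfo:
    'aes' (AES-NI), 'avx512f' (AVX-512 foundation), 'vaes' (vectorized AES).
    """
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            flags.update(line.split(":", 1)[1].split())
            break
    return {f: f in flags for f in ("aes", "avx512f", "vaes")}

# On a live system:
# print(crypto_flags(open("/proc/cpuinfo").read()))
```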
Moving to PCI Express based SSD with NVM Express — Odinot Stanislas
A very good presentation introducing the NVM Express technology, which will surely be the interface of the (near) future for SSD "disks". Farewell SAS and SATA; welcome PCI Express in servers (and client machines).
In this deck from the Argonne Training Program on Extreme-Scale Computing 2019, Howard Pritchard from LANL and Simon Hammond from Sandia present: NNSA Explorations: ARM for Supercomputing.
"The Arm-based Astra system at Sandia will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science."
"By introducing Arm processors with the HPE Apollo 70, a purpose-built HPC architecture, we are bringing powerful elements, like optimal memory performance and greater density, to supercomputers that existing technologies in the market cannot match,” said Mike Vildibill, vice president, Advanced Technologies Group, HPE. “Sandia National Laboratories has been an active partner in leveraging our Arm-based platform since its early design, and featuring it in the deployment of the world’s largest Arm-based supercomputer, is a strategic investment for the DOE and the industry as a whole as we race toward achieving exascale computing.”
Watch the video: https://wp.me/p3RLHQ-l29
Learn more: https://insidehpc.com/2018/06/arm-goes-big-hpe-builds-petaflop-supercomputer-sandia/
and
https://extremecomputingtraining.anl.gov/agenda-2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPC — inside-BigData.com
In this deck from the Perth HPC Conference, Werner Scholz from XENON Systems presents: Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPC.
"A decade ago, 100 watts per CPU was devastating to thermal design. Today, Intel’s highest performing CPUs (e.g. Intel Cascade Lake-AP 9282 processor) have a thermal design envelope of 400 watts. There really is no end in sight, and accommodating more power is critical to advancing performance. The ability to dissipate the resulting heat is the hard ceiling that systems face in terms of performance – giving greater importance to liquid cooling breakthroughs. With liquid cooling, less energy is expended to cool systems – a significant savings in HPC deployments with arrays of servers drawing energy and generating heat. Electrical current drives the CPU and enables it to function. This electrical power is converted into thermal energy (heat). To maintain a stable temperature, the CPU needs to be cooled by efficiently removing this heat and releasing it. Liquid cooling is the best way to cool a system because liquid transfers heat much more efficiently than air. From an environmental perspective, liquid cooling reduces both those characteristics to create a smarter and more ecological approach on a grand scale. The cascade of value continues, as ambient heat removed from systems can then be used to heat buildings and augment or replace traditional heating systems. It’s an intelligent approach to thermal management, distributing the economic value of reduced energy use and transforming heat into an enterprise asset."
Watch the video: https://wp.me/p3RLHQ-kZa
Learn more: https://www.xenon.com.au/
and
http://hpcadvisorycouncil.com/events/2019/australia-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Learning from ZFS to Scale Storage on and under Containers — inside-BigData.com
Evan Powell presented this deck at the MSST 2017 Mass Storage Conference.
"What is so new about the container environment that a new class of storage software is emerging to address these use cases? And can container orchestration systems themselves be part of the solution? As is often the case in storage, metadata matters here. We are implementing in the open source OpenEBS.io some approaches that are in some regards inspired by ZFS to enable much more efficient scale out block storage for containers that itself is containerized. The goal is to enable storage to be treated in many regards as just another application while, of course, also providing storage services to stateful applications in the environment."
Watch the video: http://wp.me/p3RLHQ-gPs
Learn more: blog.openebs.io
and
http://storageconference.us
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The IBM POWER10 processor represents the 10th generation of the POWER family of enterprise computing engines. Its performance is a result of both powerful processing cores and high-bandwidth intra- and inter-chip interconnect. POWER10 systems can be configured with up to 16 processor chips and 1920 simultaneous threads of execution. Cross-system memory sharing, through the new Memory Inception technology, and 2 Petabytes of addressing space support an expansive memory system. The POWER10 processing core has been significantly enhanced over its POWER9 predecessor, including a doubling of vector units and the addition of an all-new matrix math engine. Throughput gains from POWER9 to POWER10 average 30% at the core level and three-fold at the socket level. Those gains can reach ten- or twenty-fold at the socket level for matrix-intensive computations.
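The 1,920-thread figure follows directly from the per-chip configuration; assuming 15 active SMT8 cores per POWER10 chip (an assumption about the shipping configuration, not stated in the text above), the arithmetic works out:

```python
chips = 16           # maximum processor chips per system (from the text)
cores_per_chip = 15  # assumption: 15 active cores per POWER10 chip
smt = 8              # assumption: SMT8, eight hardware threads per core

threads = chips * cores_per_chip * smt
print(threads)  # matches the 1,920 simultaneous threads quoted above
```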
This session covers the engineering strategies and lessons learned at IBM creating industry leading in-memory data warehousing technology for use with both cloud and on-premises software. Along with rich in-memory SQL support for OLAP, data mining, and data warehousing leveraging memory optimized parallel vector processing, we’ll showcase the in-database analytics for R, spatial, and the built-in synchronization with Cloudant JSON NoSQL. We'll take a closer look at the architectural strategy for treating RAM as the new disk (and worth avoiding access to), while dramatically constraining the potential cost pressures of in-memory technology. We’ll describe how we designed for super-simplicity with load-and-go no-tuning technology for any size system, and of course… a demo. Ridiculously easy to use and freakishly fast. Not your grandmother’s IBM database.
In this deck from the HPC User Forum in Tucson, Jeff Stuecheli from IBM presents: POWER9 for AI & HPC.
"Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI."
Watch the video: https://wp.me/p3RLHQ-isJ
Learn more: https://www.ibm.com/it-infrastructure/power/power9
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Macromolecular crystallography is an experimental technique for exploring the 3D atomic structure of proteins, used by academics for research in biology and by pharmaceutical companies in rational drug design. While development of the technique was until recently limited by the performance of scientific instruments, computing performance has now become a key limitation. In my presentation I will describe the computing challenge of handling the 18 GB/s data stream coming from the new X-ray detector. I will show PSI's experience applying conventional hardware to the task and why this attempt failed. I will then present how the IC 922 server with OpenCAPI-enabled FPGA boards allowed us to build a sustainable and scalable solution for high-speed data acquisition. Finally, I will give a perspective on how advances in hardware development will enable better science for users of the Swiss Light Source.
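To put an 18 GB/s detector stream in perspective, a quick back-of-the-envelope calculation (decimal units assumed) shows the sustained data volume involved:

```python
rate_GBps = 18                        # detector output rate, from the abstract
per_hour_TB = rate_GBps * 3600 / 1e3  # 18 GB/s * 3600 s, expressed in TB
per_day_PB = per_hour_TB * 24 / 1e3   # hourly volume scaled to a day, in PB

print(f"{per_hour_TB:.1f} TB/hour, {per_day_PB:.2f} PB/day")
```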
In this deck from the UK HPC Conference, Gunter Roeth from NVIDIA presents: Hardware & Software Platforms for HPC, AI and ML.
"Data is driving the transformation of industries around the world and a new generation of AI applications are effectively becoming programs that write software, powered by data, vs by computer programmers. Today, NVIDIA’s tensor core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application, from CUDA and libraries like cuDNN and NCCL embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud to reference architectures designed to streamline the deployment of large scale infrastructures."
Watch the video: https://wp.me/p3RLHQ-l2Y
Learn more: http://nvidia.com
and
http://hpcadvisorycouncil.com/events/2019/uk-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This deck was presented by Yong LU at the OpenPOWER Summit EU 2019. The original is available at:
https://static.sched.com/hosted_files/opeu19/16/OpenCAPI%20Acceleration%20Framework_YongLu_ver2.pdf
Yesterday's thinking may still hold that NVMe (NVM Express) is in transition toward a production-ready solution. In this session, we will discuss how NVMe has evolved and is ready for production, tracing the history and evolution of NVMe and the Linux stack to show where NVMe stands today: the low-latency, highly reliable database key-value store mechanism that will drive the future of cloud expansion. Examples of protocol efficiencies and the types of storage engines being optimized for NVMe will be discussed. Please join us for an exciting session on how in-memory computing and persistence have evolved.
Compare Performance-power of Arm Cortex vs RISC-V for AI applications_oct_2021 — Deepak Shankar
Abstract: In this webinar, we will show you how to construct, simulate, analyze, validate, and optimize an architecture model using pre-built components. We will compare micro- and application benchmarks on system SoC models containing clusters of ARM Cortex A53, SiFive u74, ARM Cortex A77, and other vendor cores. The system will be built around custom switches, Ingress/Egress buffers, credit flow control, AI accelerators, NoC and AMBA AXI buses with multi-level caches, DDR4 DRAM and DMA. The evaluation and optimization criteria will be task latency, dCache hit ratio, power consumed per task, and memory bandwidth. The parameters to be modified are bus topology, cache size, processor clock speed, custom arbiters, task thread allocation, and the processor pipeline.
Selection of cores is a combination of financial and technical considerations. Technical comparison of processor cores requires an understanding of the workload, task partitioning, and cache-memory structure. A core must be evaluated in the context of the target application. To evaluate these selections, architecture simulation software must be fortified with a library of intellectual property for power- and timing-accurate processor cores, a simulator running at 100 million events per second, peripherals, and all possible traffic distributions.
Key Takeaways:
1. Validate architecture models using mathematical calculations and hardware traces
2. Construct custom policies and arbitrations, and configure processor cores
3. Select the right combination of statistics to detect bottlenecks and optimize the architecture
4. Identify the right mix of stochastic, transaction, cycle-accurate, and trace-based modeling to construct the model
Speaker Bio:
Alex Su is an FPGA solution architect at E-Elements Technology, Hsinchu, Taiwan. He has been an FPGA solution architect and Xilinx FPGA trainer for a number of years, supporting companies, research centers, and universities in China and Taiwan. Prior to that, Mr. Su worked at ARM Ltd for 5 years in technical support for Arm CPUs and System IP. He has also worked on a variety of FPGA-based hardware emulation systems and has over ten years of experience as an ASIC/SoC design and verification engineer.
Deepak Shankar is the Founder of Mirabilis Design and has been involved in the architecture exploration of over 250 SoCs and processors. Mr. Shankar started Mirabilis Design because of a vacuum in the systems engineering and modeling space as the focus shifted to network design and early software development. Deepak has published over 50 articles and presented at over 30 conferences in EDA, semiconductors, and embedded computing. Mr. Shankar has an MBA from UC Berkeley, an MS from Clemson University, and a BS from Coimbatore Institute of Technology, both in Electronics and Communication.
Accelerate Big Data Processing with High-Performance Computing Technologies — Intel® Software
Learn about opportunities and challenges for accelerating big data middleware on modern high-performance computing (HPC) clusters by exploiting HPC technologies.
Field Programmable Gate Array (FPGA) Application In Instrument Landing System... — Mal Mai
A field-programmable gate array (FPGA) application in an instrument landing system, the system by which an aircraft lands automatically on the selected runway. The system captures the glideslope and localizer signals.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... — Odinot Stanislas
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks in this deck: sequential tests, random tests, and above all a comparison of results before and after optimization. The configuration parameters tuned and the optimizations applied (large page numbers, Omap data on a separate disk, ...) deliver at least a 2x performance gain.
Software Defined Storage - Open Framework and Intel® Architecture Technologies — Odinot Stanislas
This presentation gives a fairly detailed introduction to the notion of an "SDS controller", which is, in short, the software layer intended to eventually control all storage technologies (SAN, NAS, distributed storage on disk, flash...) and to expose them to cloud orchestrators and thus to applications. Lots of good content.
Krammer P. et al.: Electrical Impedance Tomography Simulator — Hauke Sann
Swisstom Scientific Library; 16th International Conference on Biomedical Applications of Electrical Impedance Tomography, Neuchâtel Switzerland, June 2-5, 2015
Three-phase ac motors have been the workhorse of industry since the earliest days of electrical engineering. They are reliable, efficient, cost-effective and need little or no maintenance. In addition, ac motors such as induction and reluctance motors need no electrical connection to the rotor, so can easily be made flameproof for use in hazardous environments such as in mines.
In order to provide proper speed control of an ac motor, it is necessary to supply the motor with a three-phase supply of which both the voltage and the frequency can be varied. Such a supply will create a variable speed rotating field in the stator that will allow the rotor to rotate at the required speed with low slip. This ac motor drive can efficiently provide full torque from zero speed to full speed, can overspeed if necessary, and can, by changing phase rotation, easily provide bi-directional operation of the motor. A drive with these characteristics is known as a PWM (Pulse Width Modulated) motor drive.
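The variable-voltage, variable-frequency idea can be illustrated with the duty-cycle math of a sinusoidal PWM modulator. A simplified sketch (ignoring dead-time, overmodulation, and space-vector refinements; the function names and the 50 Hz rated frequency are illustrative assumptions):

```python
import math

def sine_pwm_duties(t, f_out, m):
    """Duty cycles (0..1) for the three inverter legs at time t.

    f_out : desired output frequency in Hz (sets the speed of the rotating field)
    m     : modulation index 0..1 (sets the output voltage amplitude)
    Legs are spaced 120 degrees apart; swapping any two phases reverses rotation.
    """
    duties = []
    for k in range(3):  # legs U, V, W
        phase = 2 * math.pi * (f_out * t - k / 3)
        duties.append(0.5 + 0.5 * m * math.sin(phase))
    return duties

def vf_modulation_index(f_out, f_rated=50.0):
    """A V/f (volts-per-hertz) drive scales voltage with frequency to keep flux constant."""
    return min(1.0, f_out / f_rated)
```

For a balanced three-phase set, the three duty cycles always sum to 1.5 (the sinusoidal terms cancel), which is one quick sanity check on the modulator.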
Drives and motors are an integral part of industrial equipment, from packaging, robotics, computer numerical control (CNC), and machine tools to industrial pumps and fans. Designing next-generation drive systems to lower operating costs requires complex control algorithms at very low latencies, as well as a flexible platform to support changing needs and the ability to design multiple-axis systems.
Traditional drive systems based on ASICs, digital signal processors (DSPs), and microcontroller units lack the performance and flexibility to address these needs. Altera’s family of FPGAs provides a scalable platform that can be used to offload control algorithm elements in hardware. You may also integrate the whole drive system with industry-proven processor architectures while supporting multiple types of encoders and industrial Ethernet protocols. This “drive on a chip” system reduces cost and simplifies development.
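The variable-voltage, variable-frequency supply described above is typically synthesized by comparing three sinusoidal references, 120 degrees apart, against a high-frequency carrier. A minimal sketch of the reference generation (the function names and the 50 Hz / 0.8 figures are illustrative, not from any particular drive):

```python
import math

def three_phase_refs(t, freq_hz, amplitude):
    """Return the three sinusoidal reference signals (120 degrees apart)
    that a PWM modulator compares against its carrier. amplitude in [0, 1]
    sets the output voltage; freq_hz sets the output frequency."""
    w = 2 * math.pi * freq_hz
    return tuple(amplitude * math.sin(w * t - k * 2 * math.pi / 3)
                 for k in range(3))

def pwm_duty(ref):
    """Map a reference in [-1, 1] to a switch duty cycle in [0, 1]."""
    return (ref + 1) / 2

# At t = 0 the phase-A reference crosses zero, so its duty cycle is 50%.
a, b, c = three_phase_refs(0.0, freq_hz=50.0, amplitude=0.8)
```

Varying `freq_hz` changes motor speed; varying `amplitude` scales voltage, which is how a PWM drive keeps the volts-per-hertz ratio (and thus torque) under control.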
Single phase grid connected fuel system based on boost inverterRohithasangaraju
Abstract—
In this project, the boost-inverter topology is used as a building block for a single-phase grid-connected fuel cell (FC) system offering low cost and compactness. In addition, the proposed system incorporates battery-based energy storage and a dc–dc bidirectional converter to support the slow dynamics of the FC. The single-phase boost inverter is voltage-mode controlled and the dc–dc bidirectional converter is current-mode controlled. The low-frequency current ripple is supplied by the battery, which minimizes the effects of such ripple being drawn directly from the FC itself. Moreover, this system can operate either in a grid-connected or stand-alone mode. In the grid-connected mode, the boost inverter is able to control the active (P) and reactive (Q) powers using an algorithm based on a second-order generalized integrator, which provides fast signal conditioning for single-phase systems. Design guidelines, simulation, and experimental results taken from a laboratory prototype are presented to confirm the performance of the proposed system.
Big Data Beyond Hadoop*: Research Directions for the FutureOdinot Stanislas
Michael Wrinn
Research Program Director, University Research Office,
Intel Corporation
Jason Dai
Engineering Director and Principal Engineer,
Intel Corporation
Efficient Motor Control Solutions: High Performance Servo Control (Design Con...Analog Devices, Inc.
This session provides insight into the operation of electric motor drive systems. Topics include electric motor operation and construction, motor control strategies, feedback sensors and circuits, power and isolation, and challenges of designing highly efficient motor control systems. A new high performance servo control FMC board will be presented, which provides an efficient motor control solution for different types of electric motors, addresses power and isolation challenges, and provides accurate measurement of motor feedback signals and increased control flexibility due to FPGA interfacing capabilities. The motor control hardware platform will be used to demonstrate rapid prototyping of motor control algorithms using Xilinx base platforms and the MathWorks development and simulation tools.
SNIA : Swift Object Storage adding EC (Erasure Code)Odinot Stanislas
An in-depth presentation on EC integration in Swift object storage. Content delivered by Paul Luse, Sr. Staff Engineer at Intel, and Kevin Greenan, Staff Software Engineer at Box, during the fall SNIA event.
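The core idea behind erasure coding in object storage can be sketched with a toy single-parity code (Swift's actual EC support uses pluggable, Reed–Solomon-class schemes via PyECLib; this XOR example only illustrates the fragment-and-reconstruct principle):

```python
# Toy single-parity erasure code: split an object into data fragments,
# add one XOR parity fragment, and rebuild any single lost fragment.

def encode(data_fragments):
    """Return the data fragments plus one XOR parity fragment."""
    parity = bytearray(len(data_fragments[0]))
    for frag in data_fragments:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return data_fragments + [bytes(parity)]

def reconstruct(fragments, lost_index):
    """Rebuild one lost fragment by XOR-ing all the surviving ones."""
    survivors = [f for i, f in enumerate(fragments) if i != lost_index]
    rebuilt = bytearray(len(survivors[0]))
    for frag in survivors:
        for i, byte in enumerate(frag):
            rebuilt[i] ^= byte
    return bytes(rebuilt)

obj = [b"abcd", b"efgh", b"ijkl"]   # object split into 3 data fragments
stored = encode(obj)                # 4 fragments stored on 4 different disks
```

Compared with full replication, the storage overhead here is 4/3 instead of 3x, which is the main appeal of EC for large, cold objects.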
Virtualizing the Network to enable a Software Defined Infrastructure (SDI)Odinot Stanislas
A very interesting presentation on network virtualization, with detailed explanations of VLAN and VXLAN, but also NVGRE and above all GENEVE (Generic Network Virtualization Encapsulation), supported for the first time on Intel's latest 40 GbE card (XL710).
FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang Spark Summit
In this session we will present a configurable FPGA-based Spark SQL acceleration architecture. It targets the FPGA's highly parallel computing capability to accelerate Spark SQL queries, and, thanks to the FPGA's higher power efficiency compared to the CPU, lowers power consumption at the same time. The architecture consists of SQL query decomposition algorithms and fine-grained FPGA-based Engine Units which perform basic computations: substring, arithmetic, and logic operations. Using the SQL query decomposition algorithm, we are able to decompose a complex SQL query into basic operations, and according to their patterns each is fed into an Engine Unit. SQL Engine Units are highly configurable and can be chained together to perform complex Spark SQL queries; finally, one SQL query is transformed into a hardware pipeline. We will present performance benchmark results comparing queries run with the FPGA-based Spark SQL acceleration architecture on Xeon E5 plus FPGA against Spark SQL queries on Xeon E5 alone, with 10x–100x improvement, and we will demonstrate one SQL query workload from a real customer.
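The decomposition idea can be sketched in plain Python (the operator set, names, and chaining below are illustrative, not the presenters' actual engine): a query becomes a chain of single-purpose units, each mirroring one FPGA Engine Unit, and rows stream through the resulting pipeline.

```python
# Illustrative sketch: decompose a query into chained "engine units",
# each performing one basic operation (substring, arithmetic, logic).

def substring_unit(field, start, length):
    return lambda row: row[field][start:start + length]

def arithmetic_unit(field, op, operand):
    ops = {"+": lambda x: x + operand, "*": lambda x: x * operand}
    return lambda row: ops[op](row[field])

def logic_unit(pred):
    return lambda row: pred(row)

def run_pipeline(rows, stages):
    """Feed each row through the chained units; drop rows a logic stage rejects."""
    out = []
    for row in rows:
        keep = True
        for name, unit, is_filter in stages:
            result = unit(row)
            if is_filter and not result:
                keep = False
                break
            row = {**row, name: result}   # pass the result to the next unit
        if keep:
            out.append(row)
    return out

rows = [{"sku": "AB-1234", "price": 10}, {"sku": "CD-5678", "price": 3}]
stages = [
    ("prefix", substring_unit("sku", 0, 2), False),       # substring unit
    ("taxed", arithmetic_unit("price", "*", 2), False),   # arithmetic unit
    ("ok", logic_unit(lambda r: r["taxed"] > 10), True),  # logic unit (filter)
]
```

On the FPGA, each stage would be a hardware Engine Unit and the chain a physical pipeline, which is where the parallelism and power efficiency come from.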
OpenPOWER Acceleration of HPCC SystemsHPCC Systems
JT Kellington, IBM and Allan Cantle, Nallatech present at the 2015 HPCC Systems Engineering Summit Community Day about porting HPCC Systems to the POWER8-based ppc64el architecture.
Mirabilis_Design AMD Versal System-Level IP LibraryDeepak Shankar
Mirabilis Design provides the VisualSim Versal Library, which enables system architects and algorithm designers to quickly map signal-processing algorithms onto the Versal FPGA and define the fabric based on the performance. The Versal IP library supports all the heterogeneous resources.
Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)Ontico
HighLoad++ 2017
"Moscow" hall, November 7, 13:00
Abstract:
http://www.highload.ru/2017/abstracts/2909.html
OpenDataPlane (ODP, https://www.opendataplane.org) is an open-source API for network data-plane applications, providing an abstraction layer between the network chip and the application. Vendors such as TI, Freescale, and Cavium now ship SDKs with ODP support for their SoC chips. By analogy with the graphics stack, ODP can be compared to the OpenGL API, but for network programming.
...
Heterogeneous Computing : The Future of SystemsAnand Haridass
Charts from NITK-IBM Computer Systems Research Group (NCSRG)
- Dennard Scaling, Moore's Law, OpenPOWER, Storage Class Memory, FPGA, GPU, CAPI, OpenCAPI, NVIDIA NVLink, Google/Microsoft heterogeneous system usage
Introduction to Programmable Networks by Clarence Anslem, IntelMyNOG
Network devices like switches or routers are most commonly designed bottom-up. The switch vendors that offer products to their clients usually rely on external chips from 3rd-party silicon vendors. The chip is the heart of the system and in practice determines how the device OS is realized and what functionality it can offer. Since the chip is a fixed-function unit and its internal packet-processing pipeline cannot be easily reconfigured at runtime, adding a new feature set is a complex process that may take months, because a chip redesign is usually required. P4 and programmable ASICs aim to break these barriers and enable innovation on networking devices similar to CPUs, GPUs, and DSPs in the computing ecosystem.
Google and Intel speak on NFV and SFC service delivery
The slides are as presented at the meet up "Out of Box Network Developers" sponsored by Intel Networking Developer Zone
Here is the Agenda of the slides:
How DPDK, RDT and gRPC fit into SDI/SDN, NFV and OpenStack
Key Platform Requirements for SDI
SDI Platform Ingredients: DPDK, Intel® RDT
gRPC Service Framework
Intel® RDT and gRPC service framework
VEDLIoT at FPL'23_Accelerators for Heterogenous Computing in AIoTVEDLIoT Project
VEDLIoT took part in the 33rd International Conference on Field-Programmable Logic and Applications (FPL 2023), in Gothenburg, Sweden. René Griessl (UNIBI) presented VEDLIoT and our latest achievements in the Research Projects Event session, giving a presentation entitled "Accelerators for Heterogenous Computing in AIoT".
NFV and SDN: 4G LTE and 5G Wireless Networks on Intel(r) ArchitectureMichelle Holley
The Presentation will outline the KPIs and key optimizations at the platform, NFVi and Stack level in implementing wireless base station stack and Telco Edge cloud on Intel Architecture. The presentation will use the FlexRAN LTE Reference PHY and NEV SDK for MEC to outline the NFV and 5G use cases like network slicing.
Similar to Using a Field Programmable Gate Array to Accelerate Application Performance
SDN and NFV are very fashionable at the moment because moving from physical appliances to massively software-based network equipment should offer great flexibility and agility to enterprises (and telcos in particular). Nevertheless, chaining network services is still a very complex exercise, and this document explains what is already possible on OpenStack by coupling, for example, a load balancer (BigIP), a firewall (BigIP), a virtual WAN (RiverBed), or a virtual router (Brocade).
PCI Express* based Storage: Data Center NVM Express* Platform TopologiesOdinot Stanislas
PCI Express is becoming more and more common in servers. Present for years as the bus for expansion cards, it will now appear on server front panels to serve 2.5-inch flash drives (SFF-8639 connector) and in the form of cables called OCuLink.
Authors:
Michael Hall
Director of Technology Solutions Enabling, Data Center Group, Intel Corporation
Jonmichael Hands
Technical Program Manager, Non-Volatile Memory Solutions Group, Intel Corporation
Bare-metal, Docker Containers, and Virtualization: The Growing Choices for Cl...Odinot Stanislas
A very friendly introduction to Cloud environments, with a particular focus on virtualization and containers (Docker).
Author: Nicholas Weaver – Principal Architect, Intel Corporation
Intel is developing an "ONP" (Open Network Platform), in other words an open switch offering the basic functions needed for SDN. If you want to know which hardware is used, which software stacks are involved, and the compatibility with orchestrators in particular, this doc is for you.
Intel and Siveo wrote this content, which explains how their Cloud Orchestrator works. You will learn how to configure it, benefit from its automatic workload placement feature, and manage multiple hypervisors transparently.
Intel IT Open Cloud - What's under the Hood and How do we Drive it?Odinot Stanislas
Intel IT is undertaking its own revolution and has resolved to act as a "Cloud Service Provider". The transformation is underway, with on the agenda a federated, interoperable, and open cloud, but also a maturity framework, DevOps, and risk-taking. Really interesting.
Configuration and Deployment Guide For Memcached on Intel® ArchitectureOdinot Stanislas
This Configuration and Deployment Guide explores designing and building a Memcached infrastructure that is scalable, reliable, manageable and secure. The guide uses experience with real-world deployments as well as data from benchmark tests. Configuration guidelines on clusters of Intel® Xeon®- and Atom™-based servers take into account differing business scenarios and inform the various tradeoffs to accommodate different Service Level Agreement (SLA) requirements and Total Cost of Ownership (TCO) objectives.
In this document you will find the latest improvements made to OpenStack and how certain Intel technologies boost the performance and security of the Cloud environment. A few examples:
How to create pools of trusted VMs with geo-tagging support (Intel technologies present in HP, DELL, IBM servers… + Folsom, Nova, Horizon, Open Attestation)
How to strengthen the security of OpenStack's new key management module (Intel technologies + Barbican)
How to benchmark Swift object storage with COSBench (which now supports Ceph, S3, and Amplidata)
Authors:
Girish Gopal - Strategic Planning, Intel Corporation
Malini Bhandaru - Security Architect, Intel Corporation
Scale-out Storage on Intel® Architecture Based Platforms: Characterizing and ...Odinot Stanislas
From Intel's developer-focused event (IDF), here is a rather nice presentation on so-called "scale-out" storage, with an overview of the various solution providers (slide 6), covering those offering file, block, and object modes, followed by benchmarks of some of them, including Swift, Ceph, and GlusterFS.
Big Data and Intel® Intelligent Systems Solution for Intelligent transportationOdinot Stanislas
An explanation of how the power of Hadoop can be used to analyze video from the cameras on road networks, with the goal of identifying traffic conditions, the types of vehicles on the move, and even license-plate spoofing.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis at the 30.5.2024 DASA Connect conference. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Using a Field Programmable Gate Array to Accelerate Application Performance
1. 1
Using a Field Programmable Gate Array to
Accelerate Application Performance
P. K. Gupta
Director of Cloud Platform Technology, Intel Corporation
DCWS008
2. 2
Agenda
• Accelerators: Motivation and Use Cases
• Using Field Programmable Gate Array (FPGA) as an Accelerator
• Intel® Xeon® Processor + FPGA Accelerator Platform
• Hardware and Software Programming Interfaces
• Example Applications
3. 3
Agenda
• Accelerators: Motivation and Use Cases
• Using Field Programmable Gate Array (FPGA) as an Accelerator
• Intel® Xeon® Processor + FPGA Accelerator Platform
• Hardware and Software Programming Interfaces
• Example Applications
4. 4
Digital Services Economy
50 Billion DEVICES¹
New SERVICES $450B²
Build out of the CLOUD $120B³
1: Sources: AMS Research, Gartner, IDC, McKinsey Global Institute, and various other industry analysts and commentators
2: Source: IDC, 2013. 2016 calculated based on reported CAGR ‘13-’17
3: Source: iDATA /Digiworld, 2013
Digital Services Economy…
6. 6
Cloud Economics
Amazon’s TCO Analysis¹
Workload Performance Metrics: Hadoop Queries, Storage Capacity, Web Transactions/Sec, VMs per System
1: Source: James Hamilton, Amazon* http://perspectives.mvdirona.com/2010/09/overall-data-center-costs/
Performance / TCO is the key metric
7. 7
Diverse Data Center Demands
Intel estimates; bubble size is relative CPU intensity
Accelerators can increase Performance at lower TCO for targeted workloads
8. 8
Agenda
• Accelerators: Motivation and Use Cases
• Using Field Programmable Gate Array (FPGA) as an Accelerator
• Intel® Xeon® Processor + FPGA Accelerator Platform
• Hardware and Software Programming Interfaces
• Example Applications
10. 10
Benefits of Reconfigurable Accelerators:
Savings in Area /Power
• Can be configured to implement different functions efficiently
- Meeting performance goals for segment
- Saving area and power compared to multiple Fixed Functions
[Chart: performance vs. cost, spanning software, programmable accelerators, and fixed functions]
11. 11
Benefits of Reconfigurable Accelerators:
Meeting Customer Needs for Differentiation
Workload-Optimized Silicon · Pervasive Analytics & Insights · Intelligent Resource Orchestration · Dynamic Resource Pooling
Driving the Digital Service Economy
12. 12
What is a Field Programmable Gate Array (FPGA)?
FPGAs (Field Programmable Gate Arrays) are
semiconductor devices that can be programmed
• Desired functionality of the FPGA can be (re-) programmed
by downloading a configuration into the device
FPGAs offer several advantages over potential
alternatives:
• Lower one-time development cost, and faster time to market
compared to custom designed chips (ASICs)
• Ability to implement customer-specific functionality beyond
what is available from standard products (ASSPs)
• Customizable and reprogrammable after the device has
been deployed to the field compared to both ASIC and ASSP
[FPGA fabric diagram: logic blocks, interconnect resources, I/O cells]
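Under the hood, "programming" an FPGA logic block amounts to loading a truth table into a small lookup table (LUT). A toy Python model makes the point (a 4-input LUT is typical; details vary by device family, and this class is purely illustrative):

```python
class LUT4:
    """A 4-input lookup table: 16 configuration bits define any boolean
    function of 4 inputs. Reprogramming the FPGA = loading new bits."""

    def __init__(self, config_bits):
        assert len(config_bits) == 16
        self.bits = config_bits

    def eval(self, a, b, c, d):
        # The four inputs select one of the 16 stored configuration bits.
        index = (d << 3) | (c << 2) | (b << 1) | a
        return self.bits[index]

# Configure as a 4-input AND: only index 15 (all inputs high) outputs 1.
and4 = LUT4([0] * 15 + [1])
# The same hardware becomes a 4-input XOR just by loading different bits.
xor4 = LUT4([bin(i).count("1") % 2 for i in range(16)])
```

This is why the same silicon can implement completely different functions after deployment: only the configuration bitstream changes, not the chip.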
13. 13
Agenda
• Accelerators: Motivation and Use Cases
• Using Field Programmable Gate Array (FPGA) as an Accelerator
• Intel® Xeon® Processor + FPGA Accelerator Platform
• Hardware and Software Programming Interfaces
• Example Applications
14. 14
Intel® Xeon® E5 + Field Programmable Gate Array Software
Development Platform (SDP) Shipping Today
[Platform block diagram: Intel Xeon Processor E5 product family linked to the FPGA module over Intel QPI; DDR3 channels attached to both, plus PCIe 3.0 x8 and DMI2 links off the processor]
Processor: Intel Xeon Processor E5
FPGA Module: Altera* Stratix* V
QPI Speed: 6.4 GT/s full width (target 8.0 GT/s at full width)
Memory to FPGA Module: 2 channels of DDR3 (up to 64 GB)
Expansion connector to FPGA Module: PCI Express® (PCIe) 3.0 x8 lanes; may be used for direct I/O, e.g. Ethernet
Features: Configuration Agent, Caching Agent, (optional) Memory Controller
Software: Accelerator Abstraction Layer (AAL) runtime, drivers, sample applications
Software Development for Accelerating Workloads using Intel® Xeon® processors and coherently attached FPGA in-socket
Intel® QuickPath Interconnect (Intel® QPI)
15. 15
System Logical View
• AFUs can access coherent cache on FPGA
• AFUs cannot implement a second-level cache
• Intel® Quick Path Interconnect (Intel® QPI) IP participates in cache coherency
with Processors
[Logical view diagram: processor (cores + LLC) and FPGA (cache, Intel QPI IP, AFUs behind CCI) connected over QPI; DRAM attached to both via DDR; the processor and the FPGA cache sit in the multi-processor coherence domain, the AFUs in the cache access domain]
16. 16
Intel® Xeon® + Field Programmable Gate Array SDP: Intel®
Quick Path Interconnect 1.1 RTL Microarchitecture
• PHY – Implements the Intel QPI PHY 1.1
(Analog/Digital)
• Intel QPI Link layer- provides flow control
and reliable communication
• Intel QPI Protocol – implements Intel QPI
Cache Agent + Configuration Agent
• Cache Controller – Cache hit/miss
determination and generates Intel QPI
protocol requests.
• Cache Tag – Tracks state of cacheline (MESI +
internal states for tracking outstanding
requests)
• Coherency Table – Programmable table that
implements coherency protocol rules
• System Protocol Layer (SPL2) – Implements
Address translation functionality. Can
provide up to 2GB device virtual address
space to AFU. SPL2 cannot handle page
faults.
• AFU – User designed Accelerator Function
Unit
[Microarchitecture diagram: QPI PHY (Rx/Tx align and control) at the pins; QPI link/protocol control; cache controller with Cache Tag, Cache Data, and Coherency Table; SPL2 address translation; 640-bit Rx/Tx datapaths to the CCI-S and CCI-E interfaces of the user Accelerator Function Unit (AFU)]
Intel® QuickPath Interconnect (Intel® QPI)
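Since SPL2 provides up to 2 GB of device virtual address space but cannot handle page faults, every page the AFU will touch must be mapped up front. A toy Python model of that constraint (the page size, class, and method names are illustrative, not Intel's API):

```python
PAGE_SHIFT = 12                      # 4 KiB pages (illustrative)
MAX_VA = 2 * 1024**3                 # 2 GB device virtual address space

class SPL2Model:
    """Toy model of SPL2-style address translation: the translation table
    is filled before the AFU runs; a miss is an error, never a page fault."""

    def __init__(self):
        self.table = {}              # virtual page number -> physical page number

    def pin(self, vaddr, paddr):
        assert vaddr < MAX_VA, "outside the 2 GB device VA window"
        self.table[vaddr >> PAGE_SHIFT] = paddr >> PAGE_SHIFT

    def translate(self, vaddr):
        vpn, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
        if vpn not in self.table:    # SPL2 cannot fault a page in on demand
            raise RuntimeError("unpinned page: AFU access would fail")
        return (self.table[vpn] << PAGE_SHIFT) | offset

spl2 = SPL2Model()
spl2.pin(0x1000, 0x7f000)            # map device page 1 to a host physical page
```

The design choice this models: keeping fault handling out of the FPGA path keeps the translation hardware simple and the access latency predictable, at the cost of requiring software to pin buffers ahead of time.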
17. 17
Agenda
• Accelerators: Motivation and Use Cases
• Using Field Programmable Gate Array (FPGA) as an Accelerator
• Intel® Xeon® Processor + FPGA Accelerator Platform
• Hardware and Software Programming Interfaces
• Example Applications
19. 19
Programming Interfaces
Host Application
Virtual Memory
API
Addr Translation
Interfaces
Intel QPI/KTI Link,
Protocol, & PHY
CPU
Intel QPI
CCI1
standard
Accelerator Function
Units (AFU)
CCI1
extended
Service API
Physical Memory API
Accelerator
Abstraction
Layer
Standard Programming Interfaces: AAL and CCI¹
Programming interfaces will be forward compatible from the SDP² to future MCP³ solutions
Simulation Environment available for development of SW and RTL⁴
1. CCI: Coherent Cache Interface
2. SDP: Software Development Platform
3. MCP: Multi-chip package
4. RTL: Register Transfer Level
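The split between the virtual-memory API seen by the host application and the physical-memory API seen by the AFU can be sketched as follows. All names here are hypothetical stand-ins, not the actual AAL API, and the "AFU" is stubbed in software so the flow runs without hardware:

```python
# Hypothetical sketch of the AAL-style flow: the host allocates a shared
# buffer through the abstraction layer, and the (stubbed) AFU operates on
# the device-visible memory. Not the real AAL API.
class AcceleratorAbstractionLayer:
    def __init__(self):
        self._physical = {}          # fake device-visible memory: addr -> buffer
        self._next_addr = 0x1000

    def alloc_shared(self, data: bytes) -> int:
        """Allocate a shared buffer; return a device-visible address."""
        addr = self._next_addr
        self._physical[addr] = bytearray(data)
        self._next_addr += 0x1000
        return addr

    def read(self, addr: int) -> bytes:
        return bytes(self._physical[addr])

class StubAFU:
    """Software stand-in for an AFU that uppercases a buffer in place."""
    def run(self, aal: AcceleratorAbstractionLayer, addr: int) -> None:
        buf = aal._physical[addr]
        buf[:] = bytes(buf).upper()

def offload(payload: bytes) -> bytes:
    aal = AcceleratorAbstractionLayer()
    addr = aal.alloc_shared(payload)   # host side: virtual-memory-style API
    StubAFU().run(aal, addr)           # device side: physical-memory-style API
    return aal.read(addr)
```

The point of the abstraction is the one shown here: the host never manipulates device addresses directly, which is what lets the interfaces stay forward compatible from the SDP to future MCP solutions.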
Programming Interfaces: OpenCL™
[Figure: OpenCL™ programming stack. OpenCL host code runs on the CPU over the OpenCL runtime and the Accelerator Abstraction Layer (virtual and physical memory APIs, system memory); OpenCL kernel code runs on the FPGA behind the CCI standard and CCI extended interfaces, connected over Intel QPI/PCI Express®.]
Unified application code abstracted from the hardware environment
Portable across generations and families of CPUs and FPGAs
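The portability claim rests on the OpenCL execution model: the host enqueues a kernel over an index space, and the kernel body is identical regardless of which device runs it. A minimal pure-Python emulation of that model (no real OpenCL runtime; the sequential loop stands in for parallel hardware):

```python
# Pure-Python emulation of the OpenCL host/kernel split. Illustrative
# only: the "device" names are placeholders and execution is sequential.
def vec_scale_kernel(gid: int, src: list, dst: list, factor: float) -> None:
    """Kernel body: each work-item (gid) handles exactly one element."""
    dst[gid] = src[gid] * factor

class Device:
    def __init__(self, name: str):
        self.name = name                    # e.g. "CPU" or "FPGA"

    def enqueue(self, kernel, global_size: int, *args) -> None:
        for gid in range(global_size):      # stand-in for parallel work-items
            kernel(gid, *args)

def host_code(device: Device) -> list:
    src = [1.0, 2.0, 3.0, 4.0]
    dst = [0.0] * len(src)
    device.enqueue(vec_scale_kernel, len(src), src, dst, 2.0)
    return dst
```

Because the host code is written against the device abstraction rather than the hardware, the same application source targets a CPU or an FPGA; in the real flow, only the kernel compilation backend changes.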
Agenda
• Accelerators: Motivation and Use Cases
• Using Field Programmable Gate Array (FPGA) as an Accelerator
• Intel® Xeon® Processor + FPGA Accelerator Platform
• Hardware and Software Programming Interfaces
• Example Applications
Example Usage:
Deep Learning Framework for Visual Understanding
[Figure: CNN accelerator hierarchy (cluster → node → device → primitives). Each processing tile contains processing elements (PEs) with weights, inputs, and outputs buffers; a DMA engine, SRAM controller, control state machine, and IP registers sit behind the CCI interface.]
CNN (Convolutional Neural Network) function accelerated on FPGA:
Power-performance of CNN classification boosted up to 2.2X†
†Source: Intel measured (Intel® Xeon® processor E5-2699v3 results); Altera estimated (4x Arria-10 results).
2S Intel® Xeon® E5-2699v3 + 4x GX1150 PCI Express® cards. Most computations executed on the Arria-10 FPGAs; the 2S Intel Xeon E5-2699v3 host is assumed to be near idle, performing miscellaneous networking/housekeeping functions. Arria-10 results estimated by Altera with an Altera custom classification network. 2x Intel Xeon E5-2699v3 power estimated at 139 W while doing "housekeeping" for the GX1150 cards, based on an Intel-measured microbenchmark. Sustaining ~2400 img/s requires ~500 MB/s of I/O bandwidth, which a 10GigE link and software stack can support.
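The footnote's I/O bandwidth claim can be checked with back-of-the-envelope arithmetic; the ~208 KB/image figure below is derived here, not stated on the slide:

```python
# Sanity-check the slide's I/O claim: ~2400 img/s within a ~500 MB/s budget.
IMG_PER_S = 2400
IO_BUDGET_MB_S = 500
TEN_GIGE_MB_S = 10_000 / 8            # 10 Gb/s line rate = 1250 MB/s

# Implied per-image transfer size (~208 KB/image).
bytes_per_image_kb = IO_BUDGET_MB_S * 1000 / IMG_PER_S

# The budget is well under 10GigE line rate, as the footnote asserts.
fits_in_10gige = IO_BUDGET_MB_S < TEN_GIGE_MB_S
```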
Example Usage:
Genomics Analysis Toolkit
HaplotypeCaller (PairHMM); BWA mem (Smith-Waterman)
PairHMM function accelerated on FPGA:
Power-performance of pHMM boosted up to 3.8X†
†pHMM algorithm performance is measured in millions of cell updates per second (MCUP/s).
Performance projections. CPU performance: 1 core of an Intel® Xeon® processor E5-2680v2 @ 2.8 GHz delivers 2101.1 MCUP/s measured; the estimated value assumes linear scaling to 10 cores on the Xeon E5-2680v2 @ 2.8 GHz and 115 W TDP. FPGA performance: 1 FPGA PE (processing engine) delivers 408.9 MCUP/s @ 200 MHz measured; the estimated value assumes linear scaling to 32 PEs and 90% frequency scaling on a Stratix-V A7 at 400 MHz, based on RTL synthesis results (35 W TDP). Intel estimate based on 1S Xeon E5-2680v2 + 1 Stratix-V A7 with QPI 1.1 @ 6.4 GT/s full width using Intel® QuickAssist FPGA System Release 3.3, ICC (the CPU is essentially idle when the workload is offloaded to the FPGA).
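The 3.8X power-performance figure can be approximately reproduced from the footnote's numbers; the scaling assumptions are taken directly from the footnote, and this rounding lands slightly below the quoted "up to 3.8X":

```python
# Reproduce the pHMM power-performance comparison from the footnote.
# CPU: 2101.1 MCUP/s per core, scaled linearly to 10 cores, 115 W TDP.
cpu_mcups = 2101.1 * 10
cpu_perf_per_watt = cpu_mcups / 115              # ~183 MCUP/s per watt

# FPGA: 408.9 MCUP/s per PE measured @ 200 MHz, scaled to 32 PEs and
# 90% frequency scaling at 400 MHz (1.8x the measured clock), 35 W TDP.
fpga_mcups = 408.9 * 32 * (400 / 200) * 0.90
fpga_perf_per_watt = fpga_mcups / 35             # ~673 MCUP/s per watt

boost = fpga_perf_per_watt / cpu_perf_per_watt   # ~3.7x, vs "up to 3.8X"
```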
Example Usage:
Database Query Processing
[Figure: Database query flow. A DB application issues a query (e.g. "SELECT * FROM table WHERE a < 100") through a network router to NAS storage; compressed data returns from disk, and decompression plus query execution are offloaded to the FPGA.]
Decompression function accelerated on FPGA:
Power-performance of LZO Decompression boosted up to 1.9X†
†LZO decompression performance is measured in bytes decompressed per second.
Performance projections for stream files of size 111 kB where the decompression matches fall within the FPGA buffer, requiring no system memory R/W requests. FPGA performance (estimated): 0.48 clocks/byte per LZOD PE (processing engine), resulting in 727 MB/s throughput @ 350 MHz, based on cycle-accurate RTL simulation measurements; assuming linear scaling to 20 LZOD PEs on an Arria-10 1150 @ 350 MHz (60 W TDP) (the CPU is essentially idle when the workload is offloaded to the FPGA). CPU performance: 4.5 clocks/byte measured on one thread of an E5-2699v3 using IPP 9.0.0, resulting in 511 MB/s throughput @ 2.3 GHz; assuming linear scaling to 36 threads on a 1S E5-2699v3 @ 2.3 GHz (145 W TDP).
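As with the pHMM slide, the 1.9X figure follows directly from the footnote's clocks-per-byte numbers and TDPs:

```python
# Reproduce the LZO decompression power-performance comparison.
# FPGA: 0.48 clocks/byte per PE @ 350 MHz, 20 PEs, 60 W TDP.
fpga_bps = 350e6 / 0.48              # ~729 MB/s per PE (slide quotes 727)
fpga_perf_per_watt = fpga_bps * 20 / 60

# CPU: 4.5 clocks/byte per thread @ 2.3 GHz, 36 threads, 145 W TDP.
cpu_bps = 2.3e9 / 4.5                # ~511 MB/s per thread
cpu_perf_per_watt = cpu_bps * 36 / 145

boost = fpga_perf_per_watt / cpu_perf_per_watt   # ~1.9x
```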
Academic Research in FPGA Usages
Intel and Altera have jointly launched the Hardware Accelerator Research Program
• Q1’15: Call for proposals “which will provide faculty with computer systems
containing Intel microprocessors and an Altera* Stratix* V FPGA module that
incorporates Intel® QuickAssist Technology in order to spur research in
programming tools, operating systems, and innovative applications for
accelerator-based computing systems”
• Q2’15: Proposals reviewed and selected
• Q3’15: Systems being shipped to universities
Intel® Xeon® + FPGA1 in the Cloud
Vision
[Figure: Orchestration software places workloads on an Intel® Xeon® + FPGA compute resource pool within a software-defined infrastructure (compute, storage, network). When cloud users launch a workload, accelerators are drawn from an IP library of Intel-developed, third-party-developed, FPGA-vendor-developed, and end-user-developed IP for static or dynamic FPGA programming.]
1: Field Programmable Gate Array (FPGA)
Summary and Next Steps
• Intel® Xeon® Processor + FPGA platform is targeted for acceleration of
various workloads in the data center
• Intel has launched the Hardware Accelerator Research Program for
research in FPGA programming and applications
A PDF of this presentation is available from our Technical Session
Catalog: www.intel.com/idfsessionsSF. This URL is also printed on
the top of Session Agenda Pages in the Pocket Guide.
Risk Factors
The above statements and any others in this document that refer to plans and expectations for the second quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "plans," "believes," "seeks,"
"estimates," "may," "will," "should" and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain
events or assumptions also identify forward-looking statements. Many factors could affect Intel's actual results, and variances from Intel's current
expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel
presently considers the following to be important factors that could cause actual results to differ materially from the company's expectations. Demand for
Intel's products is highly variable and could differ from expectations due to factors including changes in business and economic conditions; consumer
confidence or income levels; the introduction, availability and market acceptance of Intel's products, products used together with Intel products and
competitors' products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting
customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Intel's gross margin
percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the
timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated
costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; and product manufacturing
quality/yields. Variations in gross margin may also be caused by the timing of Intel product introductions and related expenses, including marketing
expenses, and Intel's ability to respond quickly to technological developments and to introduce new products or incorporate new features into existing
products, which may result in restructuring and asset impairment charges. Intel's results could be affected by adverse economic, social, political and
physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural
disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Results may also be affected by the formal or informal
imposition by countries of new or revised export and/or import and doing-business regulations, which could be changed without prior notice. Intel
operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in the short term. The amount, timing
and execution of Intel's stock repurchase program could be affected by changes in Intel's priorities for the use of cash, such as operational spending,
capital spending, acquisitions, and as a result of changes to Intel's cash flows or changes in tax laws. Product defects or errata (deviations from published
specifications) may adversely impact our expenses, revenues and reputation. Intel's results could be affected by litigation or regulatory matters involving
intellectual property, stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an
injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel's ability to design
its products, or requiring other remedies such as compulsory licensing of intellectual property. Intel's results may be affected by the timing of closing of
acquisitions, divestitures and other significant transactions. A detailed discussion of these and other factors that could affect Intel's results is included in
Intel's SEC filings, including the company's most recent reports on Form 10-Q, Form 10-K and earnings release.
Rev. 4/14/15