PCI stands for Peripheral Component Interconnect; it is a common connection interface for attaching computer components to a motherboard. It was introduced in the early 1990s by a group of engineers at Intel, AMD, and other companies.
This document discusses using functional programming languages for embedded systems and low-level code. It mentions that Haskell can be used to write NetBSD kernel drivers, ATS2 can write code for 8-bit systems like an Arduino, and the goal of the Metasepi project is to re-write Linux kernel drivers in ATS2 to benefit from dependent and linear types. The document also notes that people at PPL 2015 wearing a specific T-shirt can explain how to install the ATS2 language.
This document is a resume for Qiang Yu, an embedded/software system architect and consultant based in Bristol, UK. It outlines his objective of seeking consulting opportunities utilizing his experience in product development, architecture design, and problem-solving skills. It then details his work experience, including developing low power wireless applications, network stacks, safety critical systems, firmware, software architectures, data recording systems, and more for clients like Atkins, Expro, and Schlumberger as an independent consultant. It also outlines his experience as a lead software engineer and senior software engineer for companies in access control and rail systems. His skills, education, and qualifications are also summarized.
This document highlights the top 5 stories in high performance computing and artificial intelligence from the past week. The stories include: 1) the announcement of the new CUDA Toolkit 10, 2) Docker compatibility with Singularity containers for HPC, 3) the benefits of combining HPC and AI, 4) how containers can be easily created for HPC applications using HPC Container Maker, and 5) how NVIDIA powers the Dell EMC AI stack with ubiquitous GPU acceleration.
Building a Remote Control Robot with Automotive Grade Linux, by Leon Anavi
Automotive Grade Linux (AGL) is a leading embedded Linux distribution for the automotive industry, and it will soon debut on the 2018 Toyota Camry. Out of the box, AGL offers reliable open source solutions for graphics, connectivity, security, and software over-the-air updates. Could other industries benefit from these features?
In a quest to discover whether AGL is suitable for the Internet of Things (IoT) outside the automotive industry, this presentation will reveal a practical experiment of using AGL in robotics. Attendees will learn the exact steps for building a do-it-yourself (DIY) robot based on a Raspberry Pi 3 with off-the-shelf components. The talk will provide guidelines for integrating additional software, sensors, and other peripheral hardware devices in a headless AGL profile.
This document provides a summary of Qiang Yu's experience as an Embedded/Software System Architect and Consultant based in Bristol, UK. It includes details of his role and responsibilities in previous positions at Eastinco Ltd, Gunnebo Entrance Control, and Knorr-Bremse Rail Systems (UK) Ltd where he developed firmware, software, and embedded systems using languages like C, C++, and C# on platforms such as ARM, PIC, and multicore systems. It also lists his skills in areas like architecture, systems, languages, tools, and technologies.
06 EPI: the European approach for the Exascale age, by RCCSRENKEI
The document discusses the European approach for achieving exascale performance in high performance computing. It notes the trend towards specialization through the use of accelerators rather than CPU-only systems. The European approach involves developing the RHEA processor, which acts as a common platform to unify different accelerators and handle complex workflows. This approach aims to provide a common open platform and ecosystem based on ARM architecture to cover applications from IoT to supercomputing.
Orchestrate Your AI Workload with Cisco Hyperflex, Powered by NVIDIA GPUs, by Renee Yao
Deep learning, a collection of statistical machine learning techniques, is transforming every digital business. As data grows, businesses need to find new ways of capitalizing on the volume of information to drive their competitive advantage. GPUs are becoming mainstream in the datacenter for accelerating containerized AI workloads. Kubernetes is a popular management framework for orchestrating containers at scale. However, managing GPUs in Kubernetes is still nascent, and setting up a Kubernetes cluster with GPUs can be challenging for customers. Join this session to learn more about how to use Kubernetes to orchestrate your AI workloads on Cisco Hyperflex, powered by the NVIDIA V100, the world's most powerful GPU.
This paper describes lessons learned from using Linux technologies in industrial software development. It gives feedback on embedded and real-time uses of Linux.
End-to-End Big Data AI with Analytics Zoo, by Jason Dai
The document discusses Analytics Zoo, an open-source software platform for building end-to-end big data AI applications. It provides distributed deep learning frameworks like TensorFlow and PyTorch on Apache Spark. Analytics Zoo allows seamless scaling of AI models from laptop to distributed big data and includes features like automated machine learning, time series forecasting, and serving models in production. It aims to simplify development of end-to-end big data AI solutions.
CINECA for HPC and e-infrastructures, by Cineca
Sanzio Bassini, Head of the HPC Department at Cineca. Cineca is the technological partner of the Ministry of Education and takes part in the Italian commitment to developing e-infrastructure in Italy and in Europe for HPC and HPC technologies: scientific data repository and management, cloud computing for industry and public administration, and the development of compute-intensive and data-intensive methods for science and engineering.
Cineca offers: open access to the integrated Tier-0 and Tier-1 national HPC infrastructure; education and training activities under the umbrella of the PRACE Advanced Training Centre action; and an integrated help desk and scale-up process for HPC user support.
In this deck from the HPC User Forum in Detroit, Bob Sorensen from Hyperion Research presents: Exascale Update. As a research firm, Hyperion is tracking the development of Exascale supercomputers worldwide.
"The four geographies actively developing Exascale machines are: USA, China, Europe, and Japan. While it is important to emphasize that this is not a race, the first machine to achieve Exascale in terms of sustained LINPACK should be the A21 Aurora system at Argonne in 2021. It will be followed soon after by machines from all the other active projects."
Watch the video: https://wp.me/p3RLHQ-j1U
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This slideshow gives feedback about using Linux in industrial projects. It is part of a conference talk given by our company, CIO Informatique Industrielle, at ERTS 2008, the European Embedded Real-Time Software Congress in Toulouse.
In this deck, Jean-Pierre Panziera from Atos presents: BXI - Bull eXascale Interconnect.
"Exascale entails an explosion of performance, of the number of nodes/cores, of data volume and data movement. At such a scale, optimizing the network that is the backbone of the system becomes a major contributor to global performance. The interconnect is going to be a key enabling technology for exascale systems. This is why one of the cornerstones of Bull’s exascale program is the development of our own new-generation interconnect. The Bull eXascale Interconnect or BXI introduces a paradigm shift in terms of performance, scalability, efficiency, reliability and quality of service for extreme workloads."
Watch the video: http://wp.me/p3RLHQ-gJa
Learn more: https://bull.com/bull-exascale-interconnect/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Software-based video, audio, web conferencing - can standard servers deliver? by Anders Løkke
Video and audio conferencing infrastructure has traditionally been hardware-based.
Pexip Infinity is a virtualized, software-based platform that aims to deliver better and more flexible performance from standard off-the-shelf servers.
This white paper discusses the findings and measured performance of this approach.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-trevett
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group and Vice President at NVIDIA, presents the "APIs for Accelerating Vision and Inferencing: Options and Trade-offs" tutorial at the May 2018 Embedded Vision Summit.
The landscape of SDKs, APIs and file formats for accelerating inferencing and vision applications continues to rapidly evolve. Low-level compute APIs, such as OpenCL, Vulkan and CUDA are being used to accelerate inferencing engines such as OpenVX, CoreML, NNAPI and TensorRT. Inferencing engines are being fed via neural network file formats such as NNEF and ONNX. Some of these APIs, like OpenCV, are vision-specific, while others, like OpenCL, are general-purpose. Some engines, like CoreML and TensorRT, are supplier-specific, while others, such as OpenVX, are open standards that any supplier can adopt. Which ones should you use for your project?
In this presentation, Trevett presents the current landscape of APIs, file formats and SDKs for inferencing and vision acceleration, explaining where each one fits in the development flow. Trevett also highlights where these APIs overlap and where they complement each other, and previews some of the latest developments in these APIs.
The Civil Infrastructure Platform (CIP) is creating a super long-term supported (SLTS) open source "base layer" for industrial-grade software. We have been working on security fixes and some backported features since the moment we decided that Linux kernel v4.4 would be the first SLTS version. In this talk, we will describe the current development status of the SLTS kernel and testing environment. First, we'll explain our kernel development policy and describe the functionality that has been backported. Second, we'll talk about testing before using our base layer on real products. We have been developing a test framework to collect and share test results. To build it, we don't want to duplicate existing work such as KernelCI, Fuego, and others; for that reason, we are trying to collaborate with and contribute to such projects.
Fundamentals of EtherNet/IP Technology, by softconsystem
This document provides an overview of the agenda and topics for a Rockwell Automation TechED presentation on EtherNet/IP networking technology. The presentation will cover standard industrial network technology including EtherNet/IP, the OSI reference model, and industrial automation network architectures. It will also discuss topics like converged plantwide Ethernet, EtherNet/IP capabilities, industrial network trends, and the physical layer of networking.
IBM Power System Presentation, Venaria Event, 14 October, by PRAGMA PROGETTI
The document discusses IBM's POWER8 processor and Linux on Power platform. It provides an overview of the OpenPOWER Consortium which aims to drive innovation through an open development model. Key highlights of POWER8 include 12 cores per socket, improved caches and memory bandwidth. Linux is highlighted as a growing enterprise workload with over 90% of supercomputers using it. Linux on Power is positioned as a strategic platform for new workloads like big data and analytics by combining Linux with the performance of POWER8.
AI Bridging Cloud Infrastructure (ABCI) and its communication performance, by inside-BigData.com
In this deck from the MVAPICH User Group, Shinichiro Takizawa from AIST presents: AI Bridging Cloud Infrastructure (ABCI) and its communication performance.
"AI Bridging Cloud Infrastructure (ABCI) is the world's first large-scale Open AI Computing Infrastructure, constructed and operated by the National Institute of Advanced Industrial Science and Technology (AIST), Japan. It delivers 19.9 petaflops of HPL performance and the world's fastest training time of 1.17 minutes in ResNet-50 training on ImageNet datasets as of July 2019. ABCI consists of 1,088 compute nodes, each of which is equipped with two Intel Xeon Gold Scalable Processors, four NVIDIA Tesla V100 GPUs, two InfiniBand EDR HCAs, and an NVMe SSD. ABCI offers a sophisticated high performance AI development environment realized by CUDA, Linux containers, an on-demand parallel filesystem, and MPI, including MVAPICH. In this talk, we focus on ABCI's network architecture and the communication libraries available on ABCI, and show their performance and recent research achievements."
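As a quick sanity check on the figures quoted above, a short Python sketch (the derived numbers below are back-of-the-envelope arithmetic, not official specifications):

```python
# Back-of-the-envelope figures derived from the ABCI numbers above.
nodes = 1088
gpus_per_node = 4
hpl_pflops = 19.9  # sustained HPL performance, in petaflops

total_gpus = nodes * gpus_per_node           # GPUs in the whole system
tflops_per_node = hpl_pflops * 1000 / nodes  # average HPL per node

print(total_gpus)                # 4352
print(round(tflops_per_node, 1)) # 18.3 teraflops per node, on average
```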
Watch the video: https://wp.me/p3RLHQ-kLz
Learn more: https://abci.ai/
and
http://mug.mvapich.cse.ohio-state.edu/program/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document discusses EtherNet/IP networking technology. It provides an overview of the OSI reference model and how EtherNet/IP uses standard Ethernet and IP networking. It describes how EtherNet/IP supports the convergence of industrial applications onto a single network using common Ethernet infrastructure and the CIP application layer protocol.
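To make the layering concrete, here is a minimal Python sketch (not from the document; the field layout follows the ODVA EtherNet/IP encapsulation format as commonly described) of the 24-byte encapsulation header that carries CIP traffic over standard TCP/IP:

```python
import struct

# EtherNet/IP encapsulation header (24 bytes, little-endian):
# command, length of the data that follows, session handle,
# status, 8-byte sender context, and options.
def encap_header(command, data=b"", session=0):
    return struct.pack("<HHII8sI", command, len(data), session,
                       0, b"\x00" * 8, 0) + data

# RegisterSession (command 0x0065) opens a session; it carries a
# 4-byte payload: requested protocol version (1) and options (0).
register = encap_header(0x0065, struct.pack("<HH", 1, 0))
assert len(register) == 24 + 4
```

A device answers RegisterSession with the session handle to use in subsequent CIP requests; everything above the header is where the CIP application layer takes over.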
HPC in the cloud provides benefits like faster time to market, global accessibility, and unlocking new capabilities through access to unlimited hardware. However, there are also challenges to address like one size not fitting all needs, the need for expertise in HPC, cloud, and software, and software licensing issues. HPC in the cloud requires more than just remote servers - it involves ensuring a cloud-ready stack, performance management, security and compliance, change management, remote access, appropriate CAE applications, data management, and the right HPC stack. Case studies show how HPC in the cloud can enable breakthroughs like 27x faster living heart simulations and winning innovation awards.
An overview of the ease of developing a very complex system.
Have you ever stood in front of a vending machine and wished it could be smarter: one that remembers you like one sugar with your coffee, that is easier to interact with, and easier to pay?
In this talk, SECO lays out its Oniro Blueprint for building a vending machine for the truly digital age: one that knows and remembers your preferences, that is easy to interact with even if you have no wallet or coins, and that is truly easy for businesses to manage and maintain while boosting machine use and revenue.
Finally, the future roadmap for new and existing services and reference boards from SECO will be announced, including the data orchestration platform Astarte.
Computing Platforms for the XXIc - DSD/SEAA Keynote, by Ian Phillips
Wikipedia defines Platform as "A raised level surface on which people or things can stand". A more familiar technical interpretation applies to the hardware and OS configuration applicable to the execution of software; most frequently applicable to highly stable PC or Mainframe architectures. But the world has changed a lot since serious computing power moved into the embedded consumer arena. Now, with runs of many millions for single products, the argument for customisation is much more justifiable; so the traditional view of platforms is struggling against a tide of individuality. Can the ARM architecture bring stability back into this chaos, or is something else needed? Isaac Newton realised the reality of platforms when he talked of standing on the shoulders of giants. A platform is a stable place where engineers and scientists can stand to achieve more than they would otherwise have done. So our XXI Century Platforms are the shape to deliver improved Productivity, Reuse, Quality, TTM, Cost, etc. for the System Products we are now charged to deliver. It's business, stupid!
Journal Seminar: Is Singularity-based Container Technology Ready for Running ..., by Kento Aoyama
1. The study evaluates the performance of running MPI applications in Singularity containers on HPC clouds and local clusters.
2. Experiments were conducted on Chameleon Cloud using Intel Xeon Haswell nodes and a local cluster using Intel Knights Landing nodes.
3. Results show that Singularity incurs less than 8% overhead for MPI point-to-point and collective communications and for HPC applications, demonstrating its potential for efficiently running MPI workloads on HPC clouds and systems.
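The "less than 8% overhead" figure is a simple relative slowdown; a minimal sketch of the calculation (the timings below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Relative container overhead: how much slower the containerized run is
# than the native run, expressed as a percentage of the native time.
def overhead_pct(native_time_s, container_time_s):
    return (container_time_s - native_time_s) / native_time_s * 100.0

# Hypothetical example: a 100 s native MPI run that takes 106 s inside
# a Singularity container corresponds to a 6% overhead, within the
# <8% bound reported in the study.
print(round(overhead_pct(100.0, 106.0), 2))  # 6.0
```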
06 EPI: the European approach for Exascale agesRCCSRENKEI
The document discusses the European approach for achieving exascale performance in high performance computing. It notes the trend towards specialization through the use of accelerators rather than CPU-only systems. The European approach involves developing the RHEA processor, which acts as a common platform to unify different accelerators and handle complex workflows. This approach aims to provide a common open platform and ecosystem based on ARM architecture to cover applications from IoT to supercomputing.
Orchestrate Your AI Workload with Cisco Hyperflex, Powered by NVIDIA GPUs Renee Yao
Deep learning, a collection of statistical machine learning techniques, is transforming every digital business. As data grows, businesses need to find new ways of capitalizing on the volume of information to drive their competitive advantage. GPUs are becoming mainstream in the datacenter for accelerating containerized AI workloads. Kubernetes is a popular management framework for orchestrating containers at scale. However, managing GPUs in Kubernetes is still nascent, and setting up a Kubernetes cluster with GPUs can be challenging for customers. Join this session to learn more about how to use Kubernetes to orchestrate your AI workloads on Cisco Hyperflex, powered by NVIDIA V100, world’s most powerful GPU.
This paper describes return of experiences about using Linux technologies for industrial software developments. It gives feedback about embedded and real time usages of Linux.
End-to-End Big Data AI with Analytics ZooJason Dai
The document discusses Analytics Zoo, an open-source software platform for building end-to-end big data AI applications. It provides distributed deep learning frameworks like TensorFlow and PyTorch on Apache Spark. Analytics Zoo allows seamless scaling of AI models from laptop to distributed big data and includes features like automated machine learning, time series forecasting, and serving models in production. It aims to simplify development of end-to-end big data AI solutions.
CINECA for HCP and e-infrastructures infrastructuresCineca
Sanzio Bassini. Head of the HPC Department of Cineca. Cineca is the technological partner of the Ministry of Education, and takes part in the Italian commitment for the development of e-infrastrcuture in Italy and in Europe for HCP and HCP technologies; scientific data repository and management, cloud computing for industries and Public administration, for the development of computing intensive and data intensive methods for science and engineering
Cineca offers a unique offer for: open access of integrated tier0 and tier1 HCP national infrastructure; of education and training activities under the umbrella of PRACE Training
advanced center action; integrated help desk and scale up process for HCP users support
In this deck from the HPC User Forum in Detroit, Bob Sorensen from Hyperion Research presents: Exascale Update. As a research firm, Hyperion is tracking the development of Exascale supercomputers worldwide.
"The four geographies actively developing Exascale machines are: USA, China, Europe, and Japan. While it is important to emphasize that this is not a race, the first machine to achieve Exascale in terms of sustained LINPACK should be the A21 Aurora system at Argonne in 2021. It will be followed soon after by machines from all the other active projects."
Watch the video: https://wp.me/p3RLHQ-j1U
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This slideshow gives feedback about using Linux in industrial projects. It is part of a conference held by our company CIO Informatique Industrielle at ERTS 2008, the European Embedded Real Time software Congress in Toulouse
In this deck, Jean-Pierre Panziera from Atos presents: BXI - Bull eXascale Interconnect.
"Exascale entails an explosion of performance, of the number of nodes/cores, of data volume and data movement. At such a scale, optimizing the network that is the backbone of the system becomes a major contributor to global performance. The interconnect is going to be a key enabling technology for exascale systems. This is why one of the cornerstones of Bull’s exascale program is the development of our own new-generation interconnect. The Bull eXascale Interconnect or BXI introduces a paradigm shift in terms of performance, scalability, efficiency, reliability and quality of service for extreme workloads."
Watch the video: http://wp.me/p3RLHQ-gJa
Learn more: https://bull.com/bull-exascale-interconnect/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Software based video, audio, web conferencing - can standard servers deliver?Anders Løkke
Video and audio conferencing infrastructure technology has traditionally been hardware based.
Pexip Infinity is a virtualized software based platform that aims to deliver better and more flexible performance from standard off-the-shelf servers.
This white paper discusses findings and performance of such.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-trevett
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Neil Trevett, President of the Khronos Group and Vice President at NVIDIA, presents the "APIs for Accelerating Vision and Inferencing: Options and Trade-offs" tutorial at the May 2018 Embedded Vision Summit.
The landscape of SDKs, APIs and file formats for accelerating inferencing and vision applications continues to rapidly evolve. Low-level compute APIs, such as OpenCL, Vulkan and CUDA are being used to accelerate inferencing engines such as OpenVX, CoreML, NNAPI and TensorRT. Inferencing engines are being fed via neural network file formats such as NNEF and ONNX. Some of these APIs, like OpenCV, are vision-specific, while others, like OpenCL, are general-purpose. Some engines, like CoreML and TensorRT, are supplier-specific, while others, such as OpenVX, are open standards that any supplier can adopt. Which ones should you use for your project?
In this presentation, Trevett presents the current landscape of APIs, file formats and SDKs for inferencing and vision acceleration, explaining where each one fits in the development flow. Trevett also highlights where these APIs overlap and where they complement each other, and previews some of the latest developments in these APIs.
The Civil Infrastructure Platform (CIP) is creating a super long-term supported (SLTS) open source "base layer" for industrial grade software. We have been working on security fixes and some backported features since the moment we decided that Linux kernel v4.4 would be the first SLTS version. In this talk, we will describe the current development
status of the SLTS kernel and testing environment. First, we'll explain our kernel development policy. Then, we'll describe the functionality that has been backported. Second, we'll talk about testing before using our base-layer on real products. We have been developing a test framework to collect and share test results. To build it, we don't want to duplicate existing work such as KernelCI, Fuego and others. For that reason, we are trying to collaborate and contribute to such projects.
Fundamentals of Ethernet /IP Technologysoftconsystem
This document provides an overview of the agenda and topics for a Rockwell Automation TechED presentation on EtherNet/IP networking technology. The presentation will cover standard industrial network technology including EtherNet/IP, the OSI reference model, and industrial automation network architectures. It will also discuss topics like converged plantwide Ethernet, EtherNet/IP capabilities, industrial network trends, and the physical layer of networking.
Presentazione IBM Power System Evento Venaria 14 ottobrePRAGMA PROGETTI
The document discusses IBM's POWER8 processor and Linux on Power platform. It provides an overview of the OpenPOWER Consortium which aims to drive innovation through an open development model. Key highlights of POWER8 include 12 cores per socket, improved caches and memory bandwidth. Linux is highlighted as a growing enterprise workload with over 90% of supercomputers using it. Linux on Power is positioned as a strategic platform for new workloads like big data and analytics by combining Linux with the performance of POWER8.
AI Bridging Cloud Infrastructure (ABCI) and its communication performanceinside-BigData.com
In this deck from the MVAPICH User Group, Shinichiro Takizawa from AIST presents: AI Bridging Cloud Infrastructure (ABCI) and its communication performance.
"AI Bridging Cloud Infrastructure (ABCI) is the world's first large-scale Open AI Computing Infrastructure, constructed and operated by National Institute of Advanced Industrial Science and Technology (AIST), Japan. It delivers 19.9 petaflops of HPL performance and world' fastest training time of 1.17 minutes in ResNet-50 training on ImageNet datasets as of July 2019. ABCI consists of 1,088 compute nodes each of which equipped with two Intel Xeon Gold Scalable Processors, four NVIDIA Tesla V100 GPUs, two InfiniBand EDR HCAs and an NVMe SSD. ABCI offers a sophisticated high performance AI development environment realized by CUDA, Linux containers, on-demand parallel filesystem, MPI, including MVAPICH, etc. In this talk, we focus on ABCI’s network architecture and communication libraries available on ABCI and shows their performance and recent research achievements."
Watch the video: https://wp.me/p3RLHQ-kLz
Learn more: https://abci.ai/
and
http://mug.mvapich.cse.ohio-state.edu/program/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The document discusses EtherNet/IP networking technology. It provides an overview of the OSI reference model and how EtherNet/IP uses standard Ethernet and IP networking. It describes how EtherNet/IP supports the convergence of industrial applications onto a single network using common Ethernet infrastructure and the CIP application layer protocol.
HPC in the cloud provides benefits like faster time to market, global accessibility, and unlocking new capabilities through access to unlimited hardware. However, there are also challenges to address like one size not fitting all needs, the need for expertise in HPC, cloud, and software, and software licensing issues. HPC in the cloud requires more than just remote servers - it involves ensuring a cloud-ready stack, performance management, security and compliance, change management, remote access, appropriate CAE applications, data management, and the right HPC stack. Case studies show how HPC in the cloud can enable breakthroughs like 27x faster living heart simulations and winning innovation awards.
An overview on the ease of development of a very complex system
Have you ever stood infront of a Vending machine and wished it could be smarter, remember that you like one sugar with your coffee, that is easier to interact with, easier to pay?
In this talk SECO lays the out our Oniro Blueprint, to build a vending machine for the truly digital age, one that knows and remembers your preferences, one that is easy to interact with even if you have no wallet or coins and one that is truly easy for businesses to manage and maintain while boosting the machine use and revenue.
Finally will be announced the future roadmap for the new and existing services and reference boards from SECO, including the data orchestration platform Astarte.
Computing Platforms for the XXIc - DSD/SEAA KeynoteIan Phillips
Wikipedia defines Platform as "A raised level surface on which people or things can stand". A more familiar technical interpretation applies to the hardware and OS configuration applicable to the execution of software; most frequently applicable to highly stable PC or Mainframe architectures. But the world has changed a lot since serious computing power moved into the embedded consumer arena. Now, with runs of many millions for single products, the argument for customisation is much more justifiable; so the traditional view of platforms is struggling against a tide of individuality. Can the ARM architecture bring stability back into this chaos, or is something else needed? Isaac Newton realised the reality of platforms when he talked of standing on the shoulders of giants. A platform is a stable place where engineers and scientists can stand to achieve more than they would otherwise have done. So our XXI Century Platforms are the shape to deliver improved Productivity, Reuse, Quality, TTM, Cost, etc. for the System Products we are now charged to deliver. Its business, stupid!
Journal Seminar: Is Singularity-based Container Technology Ready for Running ... - Kento Aoyama
1. The study evaluates the performance of running MPI applications in Singularity containers on HPC clouds and local clusters.
2. Experiments were conducted on Chameleon Cloud using Intel Xeon Haswell nodes and a local cluster using Intel Knights Landing nodes.
3. Results show that Singularity incurs less than 8% overhead for MPI point-to-point and collective communications and for HPC applications, demonstrating its potential for efficiently running MPI workloads on HPC clouds and systems.
Similar to Data Driven Innovation Open Summit: HPC@ENI - Marco Bianchi, ENI
CityOpenSource as a civic tech tool (Ilaria Vitellio, CityOpenSource) - Data Driven Innovation
City{OpenSource} is a civic tech tool that uses collaborative mapping to create an alternative narrative of cities through user-generated cultural contributions. Users can geotag photos, videos, sounds and texts to tell stories about their urban experiences. The platform also aims to promote the reuse of unused buildings and spaces through ideas and projects proposed by citizens, cultural organizations, and events. It allows inhabitants to become "urban performers" by sharing their local knowledge and expertise to build a collective storytelling of their city.
Analysis insight about a Flyball dog competition team's performance - roli9797
Insights from my analysis of a Flyball dog competition team's performance over the last year. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You... - Aggregage
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
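One well-known limitation of standard A/B testing that sessions like this address is the "peeking problem": repeatedly checking a running test for significance and stopping early inflates the false-positive rate well beyond the nominal 5%. A minimal simulation, with made-up sample sizes and peek counts, illustrates the effect:

```python
import math
import random

def z_stat(successes_a, successes_b, n):
    """Two-proportion z-statistic for two equal-sized groups of size n."""
    p_a, p_b = successes_a / n, successes_b / n
    p = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(2 * p * (1 - p) / n)
    return 0.0 if se == 0 else (p_a - p_b) / se

def run_experiment(rng, n=2000, peeks=10, crit=1.96):
    """Simulate an A/A test (no true effect); return (fixed_sig, peek_sig)."""
    a = [rng.random() < 0.5 for _ in range(n)]
    b = [rng.random() < 0.5 for _ in range(n)]
    peek_sig = False
    step = n // peeks
    for i in range(step, n + 1, step):
        if abs(z_stat(sum(a[:i]), sum(b[:i]), i)) > crit:
            peek_sig = True  # would have stopped early on this "significant" peek
    fixed_sig = abs(z_stat(sum(a), sum(b), n)) > crit
    return fixed_sig, peek_sig

rng = random.Random(42)
trials = 400
fixed_rate = peek_rate = 0
for _ in range(trials):
    f, p = run_experiment(rng)
    fixed_rate += f
    peek_rate += p
print("fixed-horizon false-positive rate:", fixed_rate / trials)
print("peeking false-positive rate:      ", peek_rate / trials)
```

Because there is no real effect, the fixed-horizon rate hovers near the nominal 5%, while the peeking rate is several times higher; methodologies like sequential testing exist precisely to restore valid error control under continuous monitoring.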
Learn SQL from basic queries to advanced queries - manishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
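The progression the deck describes, from basic filtering and aggregation to more advanced queries, can be sketched with Python's built-in sqlite3 module; the table and data here are invented for illustration:

```python
import sqlite3

# In-memory database with a small, made-up sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("North", "widget", 120.0), ("North", "gadget", 80.0),
     ("South", "widget", 200.0), ("South", "gadget", 50.0),
     ("South", "widget", 90.0)],
)

# Basic: retrieval, filtering, and aggregation (WHERE + GROUP BY).
basic = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "WHERE amount > 60 GROUP BY region ORDER BY region"
).fetchall()
print(basic)  # [('North', 200.0), ('South', 290.0)]

# Advanced: a correlated subquery finding each region's top-selling row.
advanced = conn.execute(
    "SELECT region, product, amount FROM sales s "
    "WHERE amount = (SELECT MAX(amount) FROM sales WHERE region = s.region) "
    "ORDER BY region"
).fetchall()
print(advanced)  # [('North', 'widget', 120.0), ('South', 'widget', 200.0)]
```

The same SQL runs against most engines; sqlite3 just makes the examples self-contained.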
The Building Blocks of QuestDB, a Time Series Database - javier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
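The "rich time semantics" the talk refers to includes bucketed downsampling over possibly out-of-order data, which QuestDB exposes natively in SQL. As a concept sketch only (not QuestDB's implementation, and with made-up readings), the semantics look like this in plain Python:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sensor readings: (timestamp, value), deliberately out of order.
readings = [
    (datetime(2024, 6, 1, 10, 0, 12), 3.0),
    (datetime(2024, 6, 1, 10, 2, 45), 5.0),
    (datetime(2024, 6, 1, 10, 0, 58), 1.0),  # late / unordered arrival
    (datetime(2024, 6, 1, 10, 3, 10), 7.0),
]

def bucket(ts):
    """Truncate a timestamp to the start of its 1-minute bucket."""
    return ts.replace(second=0, microsecond=0)

# Downsample: average value per bucket, tolerating unordered input.
sums = defaultdict(lambda: [0.0, 0])
for ts, v in readings:
    key = bucket(ts)
    sums[key][0] += v
    sums[key][1] += 1

downsampled = sorted((k, s / n) for k, (s, n) in sums.items())
for ts, avg in downsampled:
    print(ts.isoformat(), avg)
```

A time-series engine does the same thing at ingestion scale, which is why treating the timestamp as a first-class column (rather than just another data type) pays off.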
The Ipsos - AI - Monitor 2024 Report.pdf - Social Samosa
According to Ipsos AI Monitor's 2024 report, 65% of Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake - Walaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world, where data privacy and compliance are a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) they are auto-generated from declarative data annotations; (2) they respect user-level consent and preferences; (3) they are context-aware, encoding a different set of transformations for different use cases; and (4) they are portable: while the SQL logic is implemented in only one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
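Property (1), views auto-generated from declarative annotations, can be sketched in a few lines. This is not ViewShift's actual code; the annotation format, policy names, and table are all hypothetical, and a real engine would use proper hashing UDFs rather than the placeholder below:

```python
import sqlite3

# Hypothetical declarative annotations: column name -> policy.
annotations = {"email": "mask", "country": "pass", "user_id": "hash"}

def rendered_column(col, policy):
    """Render one column of the compliance view from its annotation."""
    if policy == "pass":
        return col
    if policy == "mask":
        return f"'***' AS {col}"
    if policy == "hash":
        # Stand-in for a real hashing UDF; sqlite has none built in.
        return f"'h_' || {col} AS {col}"
    raise ValueError(f"unknown policy: {policy}")

def compliance_view_sql(table, annotations):
    """Generate a CREATE VIEW statement enforcing the annotations."""
    cols = ", ".join(rendered_column(c, p) for c, p in annotations.items())
    return f"CREATE VIEW {table}_compliant AS SELECT {cols} FROM {table}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, country TEXT, user_id TEXT)")
conn.execute("INSERT INTO users VALUES ('a@example.com', 'DE', '42')")
conn.execute(compliance_view_sql("users", annotations))

rows = conn.execute(
    "SELECT email, country, user_id FROM users_compliant"
).fetchall()
print(rows)  # [('***', 'DE', 'h_42')]
```

The catalog-routing idea from the slides then amounts to resolving `users` to `users_compliant` for queries that lack the right entitlements, so callers never see the raw table.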
Global Situational Awareness of A.I. and where it's headed - vikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.