Session ID: HKG18-419
Session Name: HKG18-419 - OpenHPC on Ansible
Speaker: Renato Golin,Takeharu KATO
Track: Enterprise
★ Session Summary ★
OpenHPC is slowly moving to Ansible, but with radically different deployments (x86 vs. Arm, different components, network topology and technology, etc.), we need to find the right balance between what stays in OpenHPC and what remains private, as well as how to update our auto-generated documents from the Ansible rules.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-419/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-419.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-419.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Enterprise
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
OpenHPC Automation with Ansible - Renato Golin - Linaro Arm HPC Workshop 2018 (Linaro)
Speaker: Renato Golin
Speaker Bio:
He started programming in the late 80s in C on PCs, after a few years playing with 8-bit computers, but only began programming professionally in the late 90s during the dot-com bubble. After many years working on Internet back-ends, he moved to the UK and spent a few years on bioinformatics at EBI before joining Arm, where he worked on the DS-5 debugger and on the EDG-to-LLVM bridge, becoming the LLVM Tech Lead. More recently, he worked with large clusters and big data at HPCC before moving to Linaro.
Talk Title: OpenHPC Automation with Ansible
Talk Abstract: "In order to test OpenHPC packages and components, and to use OpenHPC as a platform to benchmark HPC applications, Linaro is developing an automated deployment strategy using Ansible, Mr-Provisioner and Jenkins to install the OS and OpenHPC and to prepare the environment on varied architectures (Arm, x86). This work is meant to replace the existing ageing Bash-based recipes upstream while keeping the documents intact. Our aim is to make it easier to vary the hardware configuration, allow for different provisioning techniques, and mix internal infrastructure logic from different labs, while still using the same recipes. We hope this will help more people use OpenHPC, with a better out-of-the-box experience and more robust results."
Compute 101 - OpenStack Summit Vancouver 2015 (Stephen Gordon)
OpenStack Compute (Nova) has been a core component of OpenStack since the original Austin release in 2010. In the intervening years, development has proceeded at a rapid pace, adding support for new virtualization technologies and exposing additional features. Learn how Compute fits into the OpenStack architecture, and how it interacts with other OpenStack components and the hypervisors it manages.
OpenNebula Conf 2014: CentOS, QA and OpenNebula - Christoph Galuschka (NETWAYS)
CentOS, the Community Enterprise OS, uses OpenNebula as the virtualization platform for its automated QA process. The OpenNebula setup consists of 3 nodes, all running CentOS-6, which handle the following tasks:
– Sunstone as cloud controller
– a local mirror/DNS server/HTTP server for the VMs to pull in packages
– one VM to run a Jenkins instance to launch the various tests (ci.de.centos.org)
– nginx on the cloud controller to forward HTTP traffic to the Jenkins VM
A public git repository (http://www.gitorious.org/testautomation) allows anyone who wants to contribute to pull the current test suite – t_functional, a series of Bash scripts used to do functional tests of various applications, binaries, configuration files and trademark issues. As new tests are added to the repo via personal clones and merge requests, those tests first need to complete a test run via Jenkins. Each test run currently consists of 4 VMs (one for each arch for C5 and C6 – C7 to come), which run the complete test suite. All VMs used for these tests are instantiated and torn down on demand, whenever the call to test-run a personal clone is issued (via IRC).
Once completed successfully, the request is merged into the main repo. The Jenkins node monitors this repository and automatically triggers another complete test run.
Besides these triggered test runs, the test suite is also run automatically every day. This is used to verify the functionality of published updates – a handful of faulty updates have already been discovered this way.
Besides t_functional, the Linux Test Project suite of tests is also run on a daily basis, likewise to verify the functionality of the OS and all updates.
A third setup is used to test the availability and functional integrity of published Docker images for CentOS.
All these tests are later – during the QA phase of a point release – used to verify the functionality of new packages inside the CentOS QA setup.
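The t_functional suite described above is a collection of Bash scripts; purely as an illustration of the kind of check it performs, the same shape can be sketched in Python. The function name and the choice of `python3` as the probed binary are hypothetical, not taken from the actual suite:

```python
# Sketch of a t_functional-style check, transposed to Python for illustration:
# does a binary exist on PATH, run, and answer its version flag cleanly?
import shutil
import subprocess

def check_binary(name, version_flag="--version"):
    """Return (ok, detail) for a basic functional test of `name`."""
    path = shutil.which(name)
    if path is None:
        return False, f"{name}: not found on PATH"
    result = subprocess.run([path, version_flag],
                            capture_output=True, text=True, timeout=10)
    if result.returncode != 0:
        return False, f"{name}: exited with {result.returncode}"
    first_line = result.stdout.splitlines()[0] if result.stdout else ""
    return True, first_line

ok, detail = check_binary("python3")
print(ok)
```

A real suite would chain many such checks and report each failure rather than stopping at the first one.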
The Linux Kernel Scheduler (For Beginners) - SFO17-421 (Linaro)
Session ID: SFO17-421
Session Name: The Linux Kernel Scheduler (For Beginners) - SFO17-421
Speaker: Viresh Kumar
Track: Power Management
★ Session Summary ★
This talk will take you through the internals of the Linux Kernel scheduler.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/sfo17/sfo17-421/
Presentation:
Video: https://www.youtube.com/watch?v=q283Wm__QQ0
---------------------------------------------------
★ Event Details ★
Linaro Connect San Francisco 2017 (SFO17)
25-29 September 2017
Hyatt Regency San Francisco Airport
---------------------------------------------------
Keyword:
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://twitter.com/linaroorg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
The Linux Block Layer - Built for Fast Storage (Kernel TLV)
The arrival of flash storage introduced a radical change in the performance profiles of direct-attached devices. At the time, it was obvious that the Linux I/O stack needed to be redesigned in order to support devices capable of millions of IOPS and with extremely low latency.
In this talk we revisit the changes to the Linux block layer over the last decade or so that made it what it is today: a performant, scalable, robust and NUMA-aware subsystem. In addition, we cover the new NVMe over Fabrics support in Linux.
Sagi Grimberg
Sagi is Principal Architect and co-founder at LightBits Labs.
Session ID: SFO17-509
Session Name: Deep Learning on ARM Platforms - SFO17-509
Speaker: Jammy Zhou
Track:
★ Session Summary ★
A new era of deep learning is coming, driven by evolving algorithms, powerful computing platforms and the availability of large datasets. This session will focus on existing and potential heterogeneous accelerator solutions (GPU, FPGA, DSP, etc.) for ARM platforms and the work ahead from a platform perspective.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/sfo17/sfo17-509/
Presentation:
Video:
---------------------------------------------------
★ Event Details ★
Linaro Connect San Francisco 2017 (SFO17)
25-29 September 2017
Hyatt Regency San Francisco Airport
---------------------------------------------------
Keyword:
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://twitter.com/linaroorg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
OpenWrt is a Linux distribution for embedded systems that runs on many routers and networking devices today. In this session we'll talk about OpenWrt's origins and architecture, and get down to building apps for the platform.
Along the way we will touch on some basic firmware concepts, and finally present the working OpenWrt router and its capabilities.
Anton Lerner, Architect at Sitaro, computer geek, developer and occasional maker.
Sitaro provides total cyber protection for small business and home networks, preventing massive-scale IoT cyber attacks.
Find out more information in the meetup event page - https://www.meetup.com/Tel-Aviv-Yafo-Linux-Kernel-Meetup/events/245319189/
Talk held by Jaime Melis at the CentOS Dojo in Cologne, August 4th (http://wiki.centos.org/Events/Dojo/Cologne2014)
In this talk we look at OpenNebula from the perspective of CentOS, explaining tips and considerations for power users.
OpenNebula Conf 2014 | Using Ceph to provide scalable storage for OpenNebula ... (NETWAYS)
Ceph is an open source distributed storage system which provides object, block and file interfaces. The Ceph block device interface (RBD) and object interface (RGW) are popular building blocks in private cloud deployments, and OpenNebula includes a datastore driver for Ceph.
Emerging Persistent Memory Hardware and ZUFS - PM-based File Systems in User ... (Kernel TLV)
In this talk, Dr. Amit Golander looks into emerging PM/NVDIMM devices, the value they bring to applications and most importantly how they revolutionize the storage stack.
In the second part, Boaz Harrosh and Shachar Sharon dive into new opportunities to develop memory-based filesystems in user space, leveraging a new open source project called ZUFS. ZUFS was presented at the last Linux Plumbers Conference and, unlike FUSE, focuses on delivering low latency and zero copy.
Dr. Amit Golander was the CTO of Plexistor, which developed the first enterprise-grade PM-based file system, and which was acquired earlier this year by NetApp.
Boaz Harrosh and Shachar Sharon are ZUFS maintainers and longtime Storage and Linux veterans.
OpenNebulaConf2015 1.09.02 Installgems Add-on - Alvaro Simon Garcia (OpenNebula Project)
Lightning talks are 5 minute plenary presentations focusing on one key point. This can be a new project, product, feature, integration, experience, use case, collaboration invitation, quick tip, or demonstration. This session is an opportunity for ideas to get the attention they deserve.
Talk held by Jaime Melis at the CentOS Dojo in Cologne, August 4th (http://wiki.centos.org/Events/Dojo/Cologne2014)
In this talk we talk about OpenNebula from the perspective of the project, the technology and the community.
Running at Scale: Practical Performance Tuning with Puppet - PuppetConf 2013 (Puppet)
"Running at Scale: Practical Performance Tuning with Puppet" by Sam Kottler, Engineer, Red Hat.
Presentation Overview: This session will cover some production issues I've seen running Puppet in large environments: from managing a single master with hundreds of hosts, to real-life patterns for building high-availability clusters that scale to tens of thousands of agents. Another important topic covered is how to deploy networked filesystems that perform well under high load while streaming files to many hosts simultaneously.
Speaker Bio: Sam Kottler is a software engineer in the Virtualization R&D group at Red Hat. He has helped build infrastructure for leading startups, including Digg.com, Acquia, and Venmo, and is a contributor to Puppet, the Fedora Project, Drupal, and Rubygems.org. Sam speaks around the world on the topics of internet security, systems automation, and software architecture.
Philip Derbeko presents past design decisions that influenced the design of current filesystems, takes a look at how Linux tackles those problems and compares it with other operating systems, and discusses the upcoming revolution in storage and filesystem design.
Older than he looks, Philip Derbeko has been programming for over 20 years.
He has been using Linux since the days of Slackware 3.0 with kernel 2.0.
Most of those years he has worked on storage, security systems and machine learning.
Currently, he develops and herds a team of Linux, OS X and Windows kernel developers at enSilo.
Utilizing AMD GPUs: Tuning, programming models, and roadmap (George Markomanolis)
A presentation at FOSDEM 2022 about AMD GPUs: tuning, programming models and the software roadmap. This is a continuation of the previous talk (FOSDEM 2021).
This presentation is about a methodology that allows patching of a running Linux kernel, covering its technical details and limitations as well as the kpatch tools.
The talk was delivered by Ruslan Bilovol (Associate Manager, Consultant, GlobalLogic) at GlobalLogic Embedded Career Day #2 on February 10, 2018.
More about GlobalLogic Embedded Career Day #2: https://www.globallogic.com/ua/events/globallogic-kyiv-embedded-career-day-2-materials
Talk held by Javier Fontan at the CentOS Dojo in Paris, August 25th (http://wiki.centos.org/Events/Dojo/Paris2014)
In this talk we look at OpenNebula from the perspective of CentOS, explaining tips and considerations for power users.
Kernel Recipes 2014 - Performance Does Matter (Anne Nicolas)
Deploying clouds is on everybody's mind, but how do you make an efficient deployment?
After setting up the hardware, it is essential to make a deep inspection of each server's performance.
In a farm of supposedly identical servers, many mis-installations or mis-configurations can seriously degrade performance. If you want to discover such under-performance before users complain about their VMs, you have to detect it before installing any software. Another performance metric to know is "how many VMs can I load on top of my servers?". Using the same methodology, it is possible to compare how a set of VMs performs against the bare-metal capabilities.
The challenge is this: How do you automatically detect servers that underperform? How do you ensure that a new server entering a farm will not degrade it? How do you measure the overhead of all the virtualization layers from the VM's point of view?
Erwan Velu – Performance Engineer @eNovance
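The "automatically detect servers that underperform" question above can be sketched with a simple robust statistic. This is an illustrative approach, not the methodology from the talk, and the server names and benchmark scores are invented:

```python
# Sketch: flag servers whose benchmark score deviates from the farm's
# median by more than a threshold, using the modified z-score
# (median absolute deviation), which is robust to a few bad outliers.
from statistics import median

def underperformers(scores, threshold=3.0):
    """Return names of servers that are outliers on the low side."""
    values = list(scores.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # all servers essentially identical
        return []
    flagged = []
    for name, v in scores.items():
        modified_z = 0.6745 * (v - med) / mad
        if modified_z < -threshold:  # only low-side outliers matter here
            flagged.append(name)
    return flagged

# Hypothetical benchmark scores for a small farm (higher is better):
farm = {"node01": 998, "node02": 1003, "node03": 1001,
        "node04": 997, "node05": 702}  # node05 is mis-configured
print(underperformers(farm))  # ['node05']
```

Running the same statistic inside the VMs and on bare metal would also give a rough view of the virtualization overhead the talk asks about.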
With the rise of Docker, we have seen an unprecedented interest in container technologies, with small companies and big enterprises alike betting their future on them. This trend rests on the immense adoption of containers by software developers, and it is widely agreed that containers are highly beneficial for modern engineering practices like Agile and DevOps. But there is a new kid in town that proclaims a more radical approach: Serverless, or FaaS (Function-as-a-Service). This paradigm suggests that a developer should only write functions and react to events.
The functions are written in high-level programming languages like JavaScript, Java or Python, and the underlying compute infrastructure, such as containers or VMs, is transparent to the user. That raises the question: is the container revolution already dead before it really started? And who still needs container technologies in a serverless world?
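The programming model described above can be sketched in a few lines. Every name here (`handler`, `invoke`) is a hypothetical stand-in, not any real provider's API:

```python
# Minimal sketch of the FaaS model: the developer writes only a function
# that reacts to one event; the platform (simulated here by `invoke`)
# owns the compute layer, so containers/VMs stay invisible to the user.
import json

def handler(event, context=None):
    """A developer-written function: takes an event, returns a result."""
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"greeting": f"hello, {name}"})}

def invoke(fn, event):
    """Stand-in for the platform runtime that routes an event to a function."""
    return fn(event, context={})

response = invoke(handler, {"name": "container"})
print(response["statusCode"])  # 200
```

The point of contention in the talk is exactly what hides behind `invoke`: a container, a microVM, or something else entirely.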
In this talk we discuss these questions from both a container advocate's and a serverless fanboy's viewpoint. We confront the two approaches, showing their differences, individual strengths and weaknesses, and where they complement each other. The talk will also discuss the motivations of the different parties involved, so that the audience can draw their own conclusions.
Vaclav Pavlin (Containers & OpenShift guru): Containers will rule the world!
Matthias Luebken (Developer tools PM): Serverless is the Visual Basic for the cloud-native generation.
Developed for DANS-KNAW. This presentation covers some of the fundamentals of the automation-tools: helper scripts for the automation of transfers in Archivematica. Designed to complement the API slide deck, the two resources can be consumed in either order. Knowing the API will help you understand the automation-tools, but knowing the automation-tools may help you understand what you want to create using the API.
API slide-deck here: https://www.slideshare.net/Archivematica/introduction-to-the-archivematica-api-september-2018-122548752
Embedded systems are basically Single Board Computers (SBCs) with limited and specific functional capabilities. All the components that make up a computer, such as the microprocessor, memory unit and I/O unit, are hosted on a single board. Their functionality is subject to constraints and is embedded as part of the complete device, including the hardware, in contrast to desktop and laptop computers, which are essentially general-purpose. The software part of embedded systems used to be vendor-specific instruction sets built in as firmware. However, drastic changes have been brought about in the last decade, driven by the spurt in technology and, thankfully, Moore's Law. New, smaller, smarter, more elegant yet more powerful and resource-hungry devices like smartphones, PDAs and cell phones have forced vendors to choose between hosting system firmware or full-featured operating systems on their devices. The choice is often crucial and is decided by parameters like scope, future expansion plans, modularity, scalability and cost. Since most of these features are built into operating systems, hosting an operating system more than compensates for the slightly higher cost overhead associated with it. Among the various embedded operating systems, such as VxWorks, pSOS, QNX, Integrity, VRTX, Symbian OS, Windows CE and many other commercial and open-source varieties, Linux has exploded onto the computing scene. Owing to its popularity and open source nature, Linux is evolving into an architecturally neutral OS, with reliable support for popular standards and features.
Linux has emerged as the number one choice for developing OS-based embedded systems. The open source development model, customizability, portability and toolchain availability are some of the reasons for this success. This course gives a practical perspective on customizing, building and bringing up a Linux kernel on ARM-based target hardware. It combines various previous modules you have learned, spanning Linux administration, hardware knowledge, Linux as an OS, and C/computer programming. After bringing up Linux, you can port any of the existing applications to the target hardware.
High-Performance Networking Using eBPF, XDP, and io_uring (ScyllaDB)
In the networking world there are a number of ways to increase performance over naive use of basic Berkeley sockets. These techniques range from polling blocking sockets, through non-blocking sockets controlled by epoll, all the way to completely bypassing the Linux kernel for maximum network performance, talking directly to the network interface card using something like DPDK or Netmap. All these tools have their place, and generally occupy a space from convenience to performance. But in recent years, that landscape has changed massively. The tools available to the average Linux systems developer have improved, from the creation of io_uring to the expansion of BPF from a simple filtering language into a full-on programming environment embedded directly in the kernel. Along with that came something called XDP (eXpress Data Path), the Linux kernel's answer to kernel-bypass networking. AF_XDP is the new socket type created by this feature, and it generally works very similarly to something like DPDK.
History lessons out of the way, this talk will look into and discuss the merits of this technology, its place in the broader ecosystem, and how it can be used to attain the highest level of performance possible. The talk will dive into crucial details, such as how AF_XDP works, how it can be integrated into a larger system, and finally more advanced topics such as request sharding and load balancing. There will be a detailed look at the design of AF_XDP, the eBPF code used, and the userspace code required to drive it all. It will also include performance numbers from this setup compared to regular kernel networking, and most importantly, how to put all this together to handle as much data as possible on a single modern multi-core system.
[HKOSCON][20220611][AlviStack: Hong Kong Based Kubernetes Distribution] - Wong Hoi Sing Edison
AlviStack is a Hong Kong based Kubernetes distribution that passes the CNCF Kubernetes Conformance Test and is now being submitted as a CNCF sandbox project. This workshop covers how AlviStack works, a quick demo of a Kubernetes deployment, and the ongoing public contribution roadmap.
https://2022.hkoscon.org/edisonwong/
Deep Learning Neural Network Acceleration at the Edge - Andrea Gallo - Linaro
Short
The growing amount of data captured by sensors and real-time constraints imply that not only big data analytics but also Machine Learning (ML) inference must be executed at the edge. The multiple options for neural network acceleration in Arm-based platforms provide an unprecedented opportunity for new intelligent devices. They also raise the risk of fragmentation and duplicated effort when multiple frameworks must support multiple accelerators.
Andrea Gallo, Linaro VP of Segment Groups, will summarise the existing NN frameworks, accelerator solutions, and will describe the efforts underway in the Arm ecosystem.
Abstract
The dramatically growing amount of data captured by sensors and ever more stringent latency and real-time constraints are paving the way for edge computing, which implies that not only big data analytics but also Machine Learning (ML) inference must be executed at the edge. The multiple options for neural network acceleration in recent Arm-based platforms provide an unprecedented opportunity for new intelligent devices with ML inference. They also raise the risk of fragmentation and duplicated effort when multiple frameworks must support multiple accelerators.
Andrea Gallo, Linaro VP of Segment Groups, will summarise the existing NN frameworks, model description formats, accelerator solutions, low cost development boards and will describe the efforts underway to identify the best technologies to improve the consolidation and enable the competitive innovative advantage from all vendors.
Audience
The session will be useful for everyone from executives to engineers. Executives will gain a deeper understanding of the issues and opportunities. Engineers at NN acceleration IP design houses will take away ideas on how to collaborate with the open source community in their area of expertise, and how to evaluate performance and accelerate multiple NN frameworks without modifying them for each new IP, whether targeting edge computing gateways, smart devices, or simple microcontrollers.
Benefits to the Ecosystem
The AI deep learning neural network ecosystem is just getting started, and it has implications for open source similar to those of GPUs and video accelerators in their early days: user-space drivers, binary blobs, proprietary APIs, and every possible way to protect IP. The session will outline a proposal for a collaborative ecosystem effort to create a common framework for managing multiple NN accelerators, while avoiding multiple forks of the deep learning frameworks.
Huawei’s requirements for the ARM based HPC solution readiness - Joshua Mora - Linaro
Talk Title: Huawei’s requirements for the ARM based HPC solution readiness
Talk Abstract:
A high-level review of a wide range of requirements for architecting a competitive ARM-based HPC solution. The review combines both industry and Huawei’s unique views, with the intent to communicate openly not only the alignment with and support for ongoing efforts by other key ARM players, but also to brief on the areas of differentiation where Huawei is investing in the research, development, and deployment of homegrown ARM-based HPC solutions.
Speaker: Joshua Mora
Speaker Bio:
20 years of experience in research and development of both software and hardware for high performance computing. Currently leading the architecture definition and development of ARM-based HPC solutions, both hardware and software, all the way up to the applications (i.e. turnkey HPC solutions for the different compute-intensive markets where ARM will succeed).
BUD17-113: Distribution CI using QEMU and OpenQA - Linaro
Delivering a well-working distribution is hard. There are a lot of different hardware platforms that need to be verified, and the software stack is in great flux during development phases. In rolling releases this gets even worse, as nothing ever stands still. The only sane answer to that problem is working Continuous Integration tests. The SUSE way to check whether any change breaks normal distribution behavior is OpenQA. Using OpenQA we can automatically run tests that hard-working QA people used to run manually. That way we have fast enough turnaround times to find and reject breaking changes. This session shows how OpenQA works, what pitfalls we hit making ARM work with OpenQA, and what we're doing to improve it for ARM-specific use cases.
HPC network stack on ARM - Linaro HPC Workshop 2018 - Linaro
Speaker: Pavel Shamis
Company: Arm
Speaker Bio:
Pavel is a Principal Research Engineer at ARM with over 16 years of experience in developing HPC solutions. His work is focused on the co-design of software and hardware building blocks for high-performance interconnect technologies, the development of communication middleware, and novel programming models. Prior to joining ARM, he spent five years at Oak Ridge National Laboratory (ORNL) as a research scientist in the Computer Science and Math Division (CSMD). In this role, Pavel was responsible for research and development on multiple projects in the high-performance communication domain, including Collective Communication Offload (CORE-Direct & Cheetah), OpenSHMEM, and OpenUCX. Before joining ORNL, Pavel spent ten years at Mellanox Technologies, where he led the Mellanox HPC team and was one of the key drivers in the enablement of the Mellanox HPC software stack, including the OFA software stack, OpenMPI, MVAPICH, OpenSHMEM, and others.
Pavel is a recipient of the prestigious R&D100 award for his contribution to the development of the CORE-Direct collective offload technology, and he has published in excess of 20 research papers.
Talk Title: HPC network stack on ARM
Talk Abstract:
Applications, programming languages, and libraries that leverage sophisticated network hardware capabilities have a natural advantage in today's and tomorrow's high-performance and data center computing environments. Modern RDMA-based network interconnects provide incredibly rich functionality (RDMA, atomics, OS bypass, etc.) that enables low-latency and high-bandwidth communication services. This functionality is supported by a variety of interconnect technologies such as InfiniBand, RoCE, iWARP, Intel OPA, Cray's Aries/Gemini, and others. Over the last decade, the HPC community has developed a variety of user- and kernel-level protocols and libraries that enable high-performance applications over RDMA interconnects, including MPI, SHMEM, UPC, etc. With the emerging availability of HPC solutions based on the ARM CPU architecture, it is important to understand how ARM integrates with RDMA hardware and the HPC network software stack. In this talk, we will overview the ARM architecture and system software stack, including MPI runtimes, OpenSHMEM, and OpenUCX.
It just keeps getting better - SUSE enablement for Arm - Linaro HPC Workshop 2018 - Linaro
Speaker: Jay Kruemcke
Speaker Company: SUSE
Bio:
Jay is responsible for the SUSE Linux server products for High Performance Computing, 64-bit ARM systems, and SUSE Linux for IBM Power servers.
Jay has built an extensive career in product management, including using social media for client collaboration, product positioning, driving future product directions, and evangelizing the capabilities and future directions of dozens of enterprise products.
Talk Title: It just keeps getting better - SUSE enablement for Arm
Talk Abstract:
SUSE has been delivering commercial Linux support for Arm-based servers since 2016. Initially the focus was on high-end servers for HPC and Ceph-based software-defined storage, but we have since enabled a number of other Arm SoCs and are even supporting the Raspberry Pi. This session will cover the SUSE products that are available for the Arm platform and a view to the future.
Intelligent Interconnect Architecture to Enable Next Generation HPC - Linaro ... - Linaro
Speakers: Gilad Shainer and Scot Schultz
Company: Mellanox Technologies
Talk Title: Intelligent Interconnect Architecture to Enable Next Generation HPC
Talk Abstract:
The latest revolution in HPC interconnect architecture is the development of In-Network Computing, a technology that enables handling and accelerating application workloads at the network level. By placing data-related algorithms on an intelligent network, we can overcome the new performance bottlenecks and improve the data center and applications performance. The combination of In-Network Computing and ARM based processors offer a rich set of capabilities and opportunities to build the next generation of HPC platforms.
Gilad Shainer Bio:
Gilad Shainer has served as Mellanox's vice president of marketing since March 2013. Previously, Mr. Shainer was Mellanox's vice president of marketing development from March 2012 to March 2013. Mr. Shainer joined Mellanox in 2001 as a design engineer and later served in senior marketing management roles between July 2005 and February 2012. Mr. Shainer holds several patents in the field of high-speed networking and contributed to the PCI-SIG PCI-X and PCIe specifications. Gilad Shainer holds a MSc degree (2001, Cum Laude) and a BSc degree (1998, Cum Laude) in Electrical Engineering from the Technion Institute of Technology in Israel.
Scot Schultz Bio:
Scot Schultz is an HPC technology specialist with broad knowledge of operating systems, high-speed interconnects, and processor technologies. Having joined the Mellanox team in 2013, Schultz is a 30-year veteran of the computing industry. Prior to joining Mellanox, he spent 17 years at AMD in various engineering and leadership roles in the area of high performance computing. Scot has also been instrumental in the growth and development of various industry organizations, including the OpenFabrics Alliance, and continues to serve as a founding board member of the OpenPOWER Foundation and as Director of Educational Outreach and a founding member of the HPC-AI Advisory Council.
Yutaka Ishikawa - Post-K and Arm HPC Ecosystem - Linaro Arm HPC Workshop Santa Clara 2018 - Linaro
Bio: Yutaka Ishikawa is the project leader of the development of the post-K supercomputer. From 1987 to 2001, he was a member of AIST (the former Electrotechnical Laboratory), METI. From 1993 to 2001, he was the chief of the Parallel and Distributed System Software Laboratory at the Real World Computing Partnership. He led the development of cluster system software called SCore, which was used in several large PC cluster systems around 2004. From 2002 to 2014, he was a professor at the University of Tokyo. He led a project to design a commodity-based supercomputer called the T2K open supercomputer; as a result, three universities, Tsukuba, Tokyo, and Kyoto, each obtained a supercomputer based on that specification in 2008. He was also involved with the design of the Oakleaf-PACS, the successor of the T2K supercomputer in both Tsukuba and Tokyo, whose peak performance is 25 PF.
Session Title: Post-K and Arm HPC Ecosystem
Session Description:
Post-K, a flagship supercomputer in Japan, is being developed by RIKEN and Fujitsu. It will be the first supercomputer with Armv8-A+SVE. This talk will give an overview of Post-K and how RIKEN and Fujitsu are currently working on the software stack for the Arm architecture.
Andrew J Younge - Vanguard Astra - Petascale Arm Platform for U.S. DOE/ASC Supercomputing - Linaro
Event: Arm Architecture HPC Workshop by Linaro and HiSilicon
Location: Santa Clara, CA
Speaker: Andrew J Younge
Talk Title: Vanguard Astra - Petascale Arm Platform for U.S. DOE/ASC Supercomputing
Talk Desc: The Vanguard program looks to expand the potential technology choices for leadership-class High Performance Computing (HPC) platforms, not only for the National Nuclear Security Administration (NNSA) but for the Department of Energy (DOE) and wider HPC community. Specifically, there is a need to expand the supercomputing ecosystem by investing and developing emerging, yet-to-be-proven technologies and address both hardware and software challenges together, as well as to prove-out the viability of such novel platforms for production HPC workloads.
The first deployment of the Vanguard program will be Astra, a prototype petascale Arm supercomputer to be sited at Sandia National Laboratories during 2018. This talk will focus on the architectural details of Astra and the significant investments being made towards maturing the Arm software ecosystem. Furthermore, we will share initial performance results from our pre-general-availability testbed system and outline several planned research activities for the machine.
Bio: Andrew Younge is an R&D Computer Scientist at Sandia National Laboratories in the Scalable System Software group. His research interests include cloud computing, virtualization, distributed systems, and energy-efficient computing. Andrew has a Ph.D. in Computer Science from Indiana University, where he was the Persistent Systems fellow and a member of the FutureGrid project, an NSF-funded experimental cyberinfrastructure test-bed. Over the years, Andrew has held visiting positions at the MITRE Corporation, the University of Southern California / Information Sciences Institute, and the University of Maryland, College Park. He received his Bachelor's and Master's of Science from the Computer Science Department at the Rochester Institute of Technology (RIT) in 2008 and 2010, respectively.
HKG18-501 - EAS on Common Kernel 4.14 and getting (much) closer to mainlineLinaro
Session ID: HKG18-501
Session Name: HKG18-501 - EAS on Common Kernel 4.14 and getting (much) closer to mainline
Speaker: Chris Redpath
Track: Mobile, Kernel
★ Session Summary ★
This session will introduce the changes to EAS planned for 4.14 kernel, and how Arm hopes that EAS will develop in future. EAS has already evolved from an Arm/Linaro joint project to involving a much wider community of SoC vendors, Google and interested device manufacturers. We will highlight the product-specific pieces remaining in the Android Common Kernel EAS implementation, and our plans to provide an upstreaming plan for each product feature. In particular, the new 'simplified energy model' is designed to provide mainline-friendliness and comparable performance using a simple DT expression of cpu power/performance.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-501/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-501.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-501.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Mobile, Kernel
'http://www.linaro.org'
'http://connect.linaro.org'
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
HKG18-315 - Why the ecosystem is a wonderful thing, warts and all - Linaro
Session ID: HKG18-315
Session Name: HKG18-315 - Why the ecosystem is a wonderful thing, warts and all
Speaker: Andrew Wafaa
Track: Ecosystem Day
★ Session Summary ★
The Arm ecosystem is a vibrant place, but it's not always smooth sailing. This presentation will go through the highs and lows of getting the ecosystem fully Arm enabled.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-315/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-315.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-315.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Ecosystem Day
HKG18-115 - Partitioning ARM Systems with the Jailhouse Hypervisor - Linaro
Session ID: HKG18-115
Session Name: HKG18-115 - Partitioning ARM Systems with the Jailhouse Hypervisor
Speaker: Jan Kiszka
Track: Security
★ Session Summary ★
The open source hypervisor Jailhouse provides hard partitioning of multicore systems to co-locate multiple Linux or RTOS instances side by side. It aims at low complexity and minimal footprint to achieve deterministic behavior and enable certifications according to safety or security standards. In this session, we would like to look at the ARM-specific status of Jailhouse and discuss applications, to-dos and possible collaborations around it with the ARM community. The session is intended to be half presentation, half Q&A / discussion.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-115/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-115.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-115.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Security
Session ID: HKG18-TR08
Session Name: HKG18-TR08 - Upstreaming SVE in QEMU
Speaker: Alex Bennée,Richard Henderson
Track: Enterprise
★ Session Summary ★
ARM's Scalable Vector Extension (SVE) is an innovative solution for processing highly data-parallel workloads. While several out-of-tree attempts at implementing SVE support for QEMU existed, we took a fundamentally different approach to solving the key challenges and pursued a from-scratch QEMU SVE implementation at Linaro. Our strategic choice was driven by several factors. First, as an "upstream first" organisation we were focused on a solution that would be readily accepted by the upstream project. This entailed doing our development in the open on the project mailing lists, where early feedback and community consensus can be reached.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-tr08/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-tr08.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-tr08.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Enterprise
HKG18-113 - Secure Data Path work with i.MX8M - Linaro
Session ID: HKG18-113
Session Name: HKG18-113 - Secure Data Path work with i.MX8M
Speaker: Cyrille Fleury
Track: Digital Home
★ Session Summary ★
An NXP presentation on Secure Data Path work with the i.MX8M SoC. It demonstrates 4K PlayReady playback with Android 8.1 running on i.MX8M, with a focus on security (MS SL3000 and Widevine Level 1).
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-113/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-113.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-113.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Digital Home
HKG18-120 - Devicetree Schema Documentation and Validation - Linaro
Session ID: HKG18-120
Session Name: HKG18-120 - Structured Documentation and Validation for Device Tree
Speaker: Grant Likely
Track: Kernel
★ Session Summary ★
Devicetree has become the dominant hardware configuration language used when building embedded systems. Projects using Devicetree now include Linux, U-Boot, Android, FreeBSD, and Zephyr. However, it is notoriously difficult to write correct Devicetree data files. The dtc tool performs limited tests for valid data, there is not yet a way to add validity tests for specific hardware descriptions, and neither is there a good way to document the requirements of specific bindings. Work is underway to solve these problems. This session will present a proposal for adding Devicetree schema files to the Devicetree toolchain that can be used both to validate data and to produce usable documentation.
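As a rough illustration of what schema-driven validation buys you, here is a hypothetical sketch in Python. The node layout, schema format, and `validate_node` helper are all made up for illustration; the actual proposal defines schemas consumed by the Devicetree toolchain itself, not this ad-hoc dictionary format:

```python
def validate_node(node: dict, schema: dict) -> list:
    """Return a list of error strings; an empty list means the node is valid."""
    errors = []
    # Required properties must be present.
    for prop in schema.get("required", []):
        if prop not in node:
            errors.append(f"missing required property: {prop}")
    # Present properties must have the expected type.
    for prop, expected in schema.get("types", {}).items():
        if prop in node and not isinstance(node[prop], expected):
            errors.append(f"{prop}: expected {expected.__name__}")
    return errors

# A made-up binding for a UART-like node.
uart_schema = {
    "required": ["compatible", "reg"],
    "types": {"compatible": str, "reg": list, "clock-frequency": int},
}

good = {"compatible": "acme,uart", "reg": [0x1000, 0x100]}
bad = {"reg": "0x1000", "clock-frequency": "fast"}

good_errors = validate_node(good, uart_schema)
bad_errors = validate_node(bad, uart_schema)
```

The same schema that drives the checks can also be rendered into binding documentation, which is the dual use the session proposes.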
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-120/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-120.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-120.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Kernel
Session ID: HKG18-223
Session Name: HKG18-223 - Trusted Firmware M : Trusted Boot
Speaker: Tamas Ban
Track: LITE
★ Session Summary ★
An overview of the trusted boot concept and firmware update on ARMv8-M based platforms, and how MCUBoot acts as a BL2 bootloader for TF-M.
Trusted Firmware M
In October 2017, Arm announced the vision of Platform Security Architecture (PSA) - a common framework to allow everyone in the IoT ecosystem to move forward with stronger, scalable security and greater confidence. There are three key stages to the Platform Security Architecture: Analysis, Architecture and Implementation which are described at https://developer.arm.com/products/architecture/platform-security-architecture.
Trusted Firmware M, i.e. TF-M, is the Arm project to provide an open source reference implementation of firmware that will conform to the PSA specification for M-class devices. Early access to TF-M was released in December 2017 and it is being made public during Linaro Connect. The implementation should be considered a prototype until the PSA specifications reach release state and the code aligns.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-223/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-223.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-223.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: LITE
HKG18-500K1 - Keynote: Dileep Bhandarkar - Emerging Computing Trends in the Datacenter - Linaro
Session ID: HKG18-500K1
Session Name: HKG18-500K1 - Keynote: Dileep Bhandarkar - Emerging Computing Trends in the Datacenter
Speaker: Not Available
Track: Keynote
★ Session Summary ★
For decades we have been able to take advantage of Moore's Law to improve single-thread performance and reduce power and cost with each generation of semiconductor technology. While technology has continued to advance since the end of Dennard scaling more than 10 years ago, the advances have slowed down, and server performance increases have relied on growing core counts and power budgets.
At the same time, workloads have changed in the era of cloud computing. Scale-out is becoming more important than scale-up, and domain-specific architectures have started to emerge to improve the energy efficiency of emerging workloads like deep learning.
This talk will provide a historical perspective and discuss the emerging trends driving the development of modern server processors.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-500k1/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-500k1.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-500k1.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Keynote
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to the user experience and to the promise of efficient work through technology, and automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Essentials of Automations: Optimizing FME Workflows with Parameters
HKG18-419 - OpenHPC on Ansible
1. HKG18-419: OpenHPC on Ansible
- Easier deployment with Ansible -
Renato Golin and Takeharu Kato, HPC
2. Outline
● What OpenHPC is
● The Problem - OpenHPC has no Configuration Management System
● What Ansible is
● OpenHPC + Ansible = easier OpenHPC deployment
● Future work
3. OpenHPC
● OpenHPC is a collaborative, community effort that initiated from a desire to
aggregate a number of common ingredients required to deploy and manage
High Performance Computing (HPC) Linux clusters including provisioning
tools, resource management, I/O clients, development tools, and a variety of
scientific libraries.
○ http://www.openhpc.community/
● OpenHPC packages are built on the Open Build Service
○ https://build.openhpc.community/
● Packages hosted on GitHub
○ https://github.com/openhpc/ohpc
4. OpenHPC Deployments Today
● A recipe with all rules described in the official documents
● LaTeX snippets containing shell code
● These are converted and merged into an (OS-dependent) bash script
● Plus an input.local file with some cluster-specific configuration
5. OpenHPC Recipe - Problems
● The input.local file exports shell variables, and doesn’t contain enough information
○ Not all parts of the script are correctly conditionalized based on those options
○ Most people who use those scripts heavily modify them
○ Others deploy in a completely different way
● The recipe.sh is not idempotent
○ It modifies the contents of root config files using Perl and regex
○ It echoes lines at the end of config files
○ It sets up networking and services that can’t be set twice (moving files around)
● Extensibility is impossible without editing the files
○ It neither conditionalizes everything, nor provides external hooks to add local rules
○ We need to at least edit the input.local configuration, which needs to be kept and updated
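The idempotence problem above is exactly what Ansible modules solve: instead of echoing lines into config files, a task declares the desired state and is safe to re-run. A minimal sketch (the file path and server name are illustrative, not taken from the OpenHPC recipe):

```yaml
# recipe.sh-style `echo "server ntp.example.org" >> /etc/chrony.conf`
# duplicates the line on every run. The equivalent Ansible task declares
# the desired state and only edits the file when the line is missing.
# NOTE: the path and hostname below are illustrative assumptions.
- name: Ensure the NTP server line is present exactly once
  lineinfile:
    path: /etc/chrony.conf
    regexp: '^server '
    line: 'server ntp.example.org'
```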
6. OpenHPC Recipe - Attempted Solutions
● We tried to change the scripts to conditionalize everything
○ Split into different scripts, one for each chapter
○ Heavily modified the input.local file
○ Automation (or people, manually) can run only selected scripts
● Problems
○ We created yet another layer (bash_local), which sets up input.local
○ Still using environment variables
○ We haven’t tested every possible scenario, no idea if it would work elsewhere… :(
● Why reinvent the wheel?
○ After long discussion with the communities, folks were quite happy to properly automate it
○ Ansible wasn’t the only solution people were using, but was by far the most popular
○ So we’re having a go with Ansible…
○ With the high hopes of moving the upstream deployments and other community builds to it
7. Biggest Obstacle in OpenHPC Deployment
● Lack of a “Configuration Management System” (CMS)
➔ A CMS is needed to build various HPC clusters easily.
8. OpenHPC + Ansible = Easier Deployment
● What Ansible is
A simple automation language that describes the structure and configuration of IT
infrastructure in Ansible playbooks (YAML).
● Why Ansible is the best solution for OpenHPC deployment:
● The most promising CMS tool (Ansible development is led by Red Hat).
● Ansible playbooks are idempotent.
● Ansible can manage nodes and tasks according to the structure of the cluster.
Ansible is becoming the mainstream tool for server deployment.
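As a concrete illustration of the playbook format (the package and service names here are generic examples, not the OpenHPC playbook itself):

```yaml
# A play maps an inventory group of hosts to a list of tasks; each task
# describes a desired state rather than a procedure, so re-running is safe.
- hosts: all
  become: yes
  tasks:
    - name: Ensure chrony (NTP) is installed
      yum:
        name: chrony
        state: present
    - name: Ensure chronyd is running and enabled at boot
      service:
        name: chronyd
        state: started
        enabled: yes
```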
9. Improvements from recipe.sh
● Improvements from recipe.sh:
○ We can configure where packages are installed, according to node type.
○ We can replay the installation process many times.
○ We can use the same recipe for various HPC systems (Intel, AArch64, bare-metal
clusters, cloud, containers).
● Time required for OpenHPC installation:
We can now install OpenHPC in 3 hours; installing it by hand used to take a week.
10. Current Status
● We’ve ported recipe.sh to an Ansible playbook.
Our playbook does not depend on HPC-specific provisioning tools (Warewulf, xCAT)
● What we are doing now:
○ Testing … We are testing on x86-64 machines, and we’ll test on AArch64 soon.
○ Rewriting for common use … Dividing the playbook into Ansible roles, one per
package, so the playbook can be reused as building blocks for installation tasks.
○ Upstreaming … We’ll collaborate with the OpenHPC community to make OpenHPC
installation easy.
The playbook could be applied to other LEG tasks with minor changes.
Our playbook will be published and upstreamed soon.
11. Screen shot (Install phase)
[Inventory file]
[sms]
192.168.33.11
[cnodes]
192.168.33.21
192.168.33.22
[ionodes]
192.168.33.11
[devnodes]
192.168.33.21
192.168.33.22
It can handle the types of nodes properly.
(e.g., Install slurm-server on the master node only)
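With an inventory like the one above, node-type handling is just a matter of which group a play targets. A hedged sketch (the package names follow the OpenHPC `ohpc-*` convention but are illustrative here):

```yaml
# Plays target inventory groups, so packages land only on the right nodes.
- hosts: sms                      # master node group from the inventory
  become: yes
  tasks:
    - name: Install the Slurm server only on the master
      yum:
        name: ohpc-slurm-server   # illustrative OpenHPC meta-package name
        state: present

- hosts: cnodes                   # compute node group
  become: yes
  tasks:
    - name: Install the Slurm client on compute nodes
      yum:
        name: ohpc-slurm-client   # illustrative OpenHPC meta-package name
        state: present
```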
12. Screen shot(Test phase)
Run an MPI job with Slurm in OpenHPC.
Compile an MPI application with mpicc in OpenHPC.
We are writing another playbook to run
tests automatically.
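A sketch of what such a test playbook might look like (the module names, source path, and job size are assumptions, not the actual playbook):

```yaml
# Compile an MPI hello-world with mpicc, then run it through Slurm.
- hosts: devnodes
  tasks:
    - name: Compile a sample MPI program
      shell: |
        module load gnu openmpi    # illustrative OpenHPC Lmod modules
        mpicc -o hello hello_mpi.c # hello_mpi.c is an assumed test source
      args:
        chdir: /home/test          # illustrative working directory

- hosts: sms
  tasks:
    - name: Run the MPI job under Slurm
      shell: |
        module load gnu openmpi
        srun -n 4 ./hello          # 4 ranks; size chosen arbitrarily
      args:
        chdir: /home/test
```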
13. Promote Use of Ansible in HPC
Covered by OpenHPC today:
● CentOS + Warewulf
● InfiniBand
● Lustre/BeeGFS client
● HPC software
Still needed:
● xCAT
● Lustre/BeeGFS server
● 10/25/40Gb Ethernet
● Other OSes (SUSE, Debian, ...)
Ansible will help to support various packages, devices, and OSes in HPC.
14. Future Work
● Ansible in OpenHPC … Improving maintainability
○ Considering the way to generate a playbook from the document
(Are LaTeX snippets preferable?)
○ Taking advantage of the power of the Ansible community?
● Ansible in HPC … Making Ansible more popular in HPC
○ Sharing Ansible playbooks for HPC systems through Ansible Galaxy.
○ Introducing Ansible to OpenHPC community for upstreaming.
● Ansible in general … Easy to write site-specific configurations
Some ideas:
○ Creating a tool to generate host-specific configuration files.
○ Using Kconfig (make menuconfig) to choose packages to be installed.
○ A kind of tool for describing network topology to generate network configurations
(like OpenStack dashboard).
15. OpenHPC Ansible - Integration with Provision
● Before we can install OpenHPC on a cluster, we need to provision the master
○ This is done via a Jenkins job, using MrProvisioner’s API to flash kernel/initrd/preseed
○ Once the node is up (for now, by polling the machine), we fire Ansible up
○ SSH to the node, clone the repo, run the playbook
● OpenHPC provisions the compute nodes (via Warewulf/xCAT)
○ All machines can see MrP, so we need to avoid DHCP/TFTP clash
○ Essentially via MAC address binding, but this complicates the Jenkins job
○ We need the list of addresses beforehand
● Having one VLAN per cluster is less flexible
○ We restrict ourselves to fixed number of compute nodes per cluster
○ Changing involves networking configuration (not dynamic in MrP yet)
○ Forces master to provide NAT/DNS access for the compute nodes
17. OpenHPC Ansible - Introduction
● What is ansible?
A simple automation language that describes the structure and configuration of IT
infrastructure in Ansible playbooks (YAML).
● Characteristics of Ansible:
○ Idempotent … An Ansible playbook describes the state of nodes instead of operations
or procedures.
○ Lightweight … Ansible does not need an agent installed on each node.
○ Configuration management … It can handle groups of nodes (SMS, CN, etc.).
○ Easy to extend … Well-defined interfaces for plugin modules.
○ Orchestration … It can also be applied to clouds and container orchestration.
18. OpenHPC Ansible - Improvements from recipe.sh
● Improvements from recipe.sh:
○ Installing OpenHPC according to the role of nodes with Ansible inventory:
■ SMS...Management tools, Slurm server
■ Computing node...Libraries, Slurm client, Lustre client
■ Development node … Compiler, Debugger
○ Sharing the install methods among different architectures (Intel, AArch64)
■ Ansible can handle conditional installation and configuration
(e.g., install psm packages (Intel interconnect) on Intel machines only)
○ Supporting various HPC systems
■ Diskless environments (e.g., Warewulf)
■ Small clusters with node-local storage
■ Cloud/containers (e.g., OpenStack, AWS, Docker) in the future
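The conditional installation mentioned above maps onto Ansible's `when:` clause over gathered facts. A sketch (the package name is an assumption):

```yaml
# Install the PSM interconnect packages on Intel (x86-64) machines only;
# on AArch64 the task is skipped. `ansible_architecture` is a standard fact.
- name: Install PSM packages on Intel machines only
  yum:
    name: libpsm2        # illustrative package name for the psm stack
    state: present
  when: ansible_architecture == "x86_64"
```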
19. OpenHPC Ansible - Current Status
● We’ve ported recipe.sh to an Ansible playbook.
○ It can install OpenHPC (both Warewulf and non-Warewulf clusters on CentOS 7).
○ It can handle the structure of the cluster (both diskless and non-diskless environments).
■ It can set up the compute-node network (e.g., DHCP, hosts, and so on).
■ It can install and configure OpenHPC packages according to each node type
(SMS, computing node, development node).
■ It can be used for both AArch64 and Intel machines.
● What we are doing now:
○ Testing our playbook on virtual machines (x86-64)
■ Of course, we’ll test the playbook on AArch64 VMs soon...
○ Writing the playbook for testing
○ Dividing the playbook into an Ansible role per package, for flexible installation
and to share installation methods for HPC packages through Ansible Galaxy(*).
(*) A community hub for sharing Ansible content ( https://galaxy.ansible.com/ )
22. OpenHPC Ansible - Flexible installation with Ansible
[The basic structure of Ansible playbook]
● Improving installation flexibility:
We are dividing the configuration variables in recipe.sh into three parts (in progress):
○ Cluster-wide configurations (node name of SMS, FS server)
○ Compute-node-specific configurations (MAC/IP addresses, BMC)
○ Package-specific configurations (CHROOT for Warewulf)
ohpcOrchestration/
+-- group_vars/
|   +-- all.yml … cluster-wide configurations
|   +-- group1, group2 … node-group-specific configurations (e.g., compute nodes)
+-- host_vars/
|   +-- host1, host2 … host-specific configurations
+-- roles/ … package-specific tasks, templates, config files, config variables
    +-- package-name/
        +-- tasks/main.yml … YAML describing how to install package-name
        +-- vars/main.yml … package-name-specific configuration variables
        +-- templates/*.j2 … template files for configuration files
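Inside such a role, tasks/main.yml would tie the vars and templates together. A hypothetical sketch (variable and handler names are invented for illustration):

```yaml
# roles/package-name/tasks/main.yml (hypothetical)
- name: Install the package
  yum:
    name: "{{ pkg_name }}"        # defined in roles/package-name/vars/main.yml
    state: present

- name: Render its configuration file from a Jinja2 template
  template:
    src: config.j2                # from roles/package-name/templates/
    dest: "{{ pkg_conf_path }}"   # defined in vars/ or overridden in group_vars/
  notify: restart package-name    # handler name is illustrative
```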
23. OpenHPC Ansible - Scenarios
● We’re going to test on real AArch64 machines
○ CentOS + Warewulf
○ InfiniBand
○ Lustre/BeeGFS client
● Need to improve
○ xCAT support
○ Lustre/BeeGFS server support
○ 10/25/40Gb Ethernet support
○ Supporting more operating systems
(openSUSE, Debian, and so on; plugin development may be needed for this)
○ Container-based distribution (Ansible can create container images to distribute)
Ansible can be used to install OpenHPC on various HPC systems, including
bare-metal clusters, HPC clouds, and containers (e.g., Docker) in the future.
24. OpenHPC Ansible - Unsolved Problems
● Improving maintainability
○ Considering a way to generate a playbook from the document
(Are LaTeX snippets preferable?)
○ Taking advantage of the power of the Ansible community?
● Easy-to-write site-specific configurations
The following might be useful (ideas only, for now):
○ Creating a tool to generate host-specific configuration files from a CSV file
○ Using Kconfig (make menuconfig) to choose packages to be installed
○ Some kind of tool for easily describing network topology (like the OpenStack
dashboard) to generate network configurations
● Making Ansible more popular in the HPC world
○ Sharing Ansible playbooks for HPC systems through Ansible Galaxy
○ Introducing Ansible to the OpenHPC community