OpenNebula Conf 2014: CentOS, QA and OpenNebula - Christoph Galuschka (NETWAYS)
CentOS, the Community Enterprise OS, uses OpenNebula as the virtualization platform for its automated QA process. The OpenNebula setup consists of three nodes, all running CentOS-6, which handle the following tasks:
– Sunstone as cloud controller
– local mirror/DNS server/HTTP server for the VMs to pull in packages
– one VM running a Jenkins instance to launch the various tests (ci.de.centos.org)
– nginx on the cloud controller to forward HTTP traffic to the Jenkins VM
A public git repository (http://www.gitorious.org/testautomation) allows anyone who wants to contribute to pull the current test suite – t_functional, a series of bash scripts used to run functional tests of various applications, binaries, configuration files and trademark issues. As new tests are added to the repo via personal clones and merge requests, those tests first need to complete a test run via Jenkins. Each test run currently consists of 4 VMs (one per architecture for C5 and C6 – C7 to come), which run the complete test suite. All VMs used for these tests are instantiated and torn down on demand, whenever a test run of a personal clone is requested (via IRC).
Once completed successfully, the request is merged into the main repo. The Jenkins node monitors this repository and automatically triggers another complete test run.
Besides these triggered test runs, the test suite also runs automatically every day. This is used to verify the functionality of published updates – a handful of faulty updates have already been discovered this way.
Besides t_functional, the Linux Test Project (LTP) test suite is also run on a daily basis to verify the functionality of the OS and all updates.
The third setup is used to test the availability and functional integrity of published Docker images for CentOS.
All these tests are later – during the QA phase of a point release – used to verify the functionality of new packages inside the CentOS QA setup.
Manage your bare-metal infrastructure with a CI/CD-driven approach - inovex GmbH
If you want to provide Kubernetes on your bare-metal infrastructure, you need to configure and manage multiple components, such as PKI, DHCP, iPXE and more. Deploying your infrastructure as immutable, without SSH access, confronts you with additional challenges, because traditional configuration management tools like Puppet or Ansible require SSH access to each node. In this workshop we will present a practical way to manage your bare-metal Kubernetes clusters using a CI/CD-driven approach, including GitLab CI, Terraform, confd, Vault and others. As an extra, we will keep a fully functional local development setup ready, so you can gain real hands-on experience. The whole configuration is written in code (Infrastructure as Code – IaC), and all components are updated automatically the moment you update your configuration.
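A pipeline of this kind can be sketched roughly as below. This is a hypothetical `.gitlab-ci.yml`, not the workshop's actual pipeline; the stage names, image tag and manual-apply gate are illustrative assumptions:

```yaml
# Hypothetical GitLab CI sketch for Terraform-managed infrastructure.
stages:
  - validate
  - plan
  - apply

validate:
  stage: validate
  image: hashicorp/terraform:light
  script:
    - terraform init -backend=false
    - terraform validate

plan:
  stage: plan
  image: hashicorp/terraform:light
  script:
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths: [tfplan]

apply:
  stage: apply
  image: hashicorp/terraform:light
  script:
    - terraform init
    - terraform apply tfplan
  when: manual   # keep a human gate before touching bare metal
```

The key idea matches the abstract: every configuration change goes through a reviewed pipeline run instead of an SSH session.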
The complete workshop can be found here: https://github.com/inovex/fluffy-unicorn
Speaker: Johannes M. Scheuermann
Event: ContainerDays 2018
More tech talks: https://www.inovex.de/de/content-pool/vortraege/
More tech articles: https://www.inovex.de/blog/
While probably the most prominent, Docker is not the only tool for building and managing containers. Originally meant as a "chroot on steroids" to help debug systemd, systemd-nspawn provides a fairly uncomplicated approach to working with containers. Being part of systemd, it is available out of the box on most recent distributions and requires no additional dependencies.
This deck will introduce a few concepts involved in containers and will guide you through the steps of building a container from scratch. The payload will be a simple service, which will be automatically activated by systemd when the first request arrives.
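The socket-activated payload described above could be wired up with a unit pair along these lines. The unit names, port and binary path are illustrative, not taken from the deck:

```ini
# demo.socket – systemd owns the listening socket; the service is only
# started when the first connection arrives.
[Unit]
Description=Listening socket for the demo payload

[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# demo.service – started on demand; receives the socket via fd passing.
[Unit]
Description=Demo payload

[Service]
ExecStart=/usr/local/bin/demo-server
```

After `systemctl enable --now demo.socket`, the service itself stays stopped until the first request arrives on port 8080.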
Strategies for developing and deploying your embedded applications and images - Mender.io
We will delve into multiple strategies you can use for developing and deploying code to embedded devices. We will compare and contrast the following:
– Lightweight package managers: ipkg/opkg
– Desktop package managers: rpm/deb
– Configuration Management Tools
– Smart Package Manager
– Yocto Runtime Package Management
– PXE boot
– OTA updaters: Mender
As with any decision, it is rarely black-and-white, and we will cover some of the benefits and limitations of all the methods mentioned, to make sure you have the most critical information needed to decide for yourself whether a given strategy is a good fit for your embedded application development.
This talk will cover how different mechanisms are implemented in the real world and how choosing the right strategy, understanding its benefits and drawbacks, can speed up and improve the whole development process.
Kernel Recipes 2016 - New hwmon device registration API - Jean Delvare (Anne Nicolas)
The hwmon subsystem originates from the lm-sensors project, started in 1998. Along the way, a lot of effort has gone into having all drivers present a standard interface to user space and into consolidating the common plumbing into an easy-to-use, hard-to-get-wrong API. The final step of this long-running effort is happening right now.
Jean Delvare, SUSE
Experience shows that many developers working on the Xen/Linux kernel mainly use only a small set of debugging tools. Often these are sufficient for everyday work. However, when an unusual problem arises that cannot easily be debugged with the familiar tools, they sometimes end up reinventing the wheel. The goal of this session is to present a wide range of debugging tools, from the simplest to the most feature-rich, in the context of Xen/Linux kernel debugging. It will describe the pros and cons of printk (serial, debug console, etc.), gdb, gdbsx, kgdb, QEMU, kdump and others. Additionally, there will be some information about possible new solutions and current kexec/kdump developments for Xen.
Kernel Recipes 2015 - Kernel dump analysis - Anne Nicolas
Cloud this, cloud that… it's making everything easier, especially for web-hosted services. But what about the servers that are not supposed to crash? For applications that assume the OS won't fault or go down, what can you write in your post-mortem once the server has frozen and been restarted? How do you track down the bug that led to the service unavailability?
In this talk, we'll see how to set up kdump and how to panic a server to generate a core dump. Once you have the vmcore file, we'll see how to track down the issue with the crash tool to find out why your OS went down. Last but not least: with crash you can also modify your live kernel, the same way you would with gdb.
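As a rough illustration of the setup discussed in the talk, a RHEL/CentOS-style kdump configuration looks something like the fragment below. Paths, sizes and options vary by distribution; treat this as a sketch, not the speaker's exact setup:

```shell
# Kernel command line: reserve memory for the capture kernel
# (in /etc/default/grub, then regenerate the grub config):
#   GRUB_CMDLINE_LINUX="... crashkernel=256M"

# /etc/kdump.conf – where and how to save the vmcore:
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31

# After `systemctl enable --now kdump`, a test panic can be forced with
# the magic SysRq key (this WILL crash the machine):
#   echo 1 > /proc/sys/kernel/sysrq
#   echo c > /proc/sysrq-trigger
```

The resulting /var/crash/.../vmcore is what you would then open with the crash utility.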
Adrien Mahieux – system administrator obsessed with performance and uptime, tracking down microseconds from hardware to software since 2011. The application must be seen as a whole to provide the requested service efficiently. This includes searching for bottlenecks and tradeoffs, design issues or hardware optimization.
OpenWrt is a Linux distribution for embedded systems that runs on many routers and networking devices today. In this session we'll talk about OpenWrt's origins, architecture and get down to building apps for the platform.
Along the way we will touch on some basic firmware concepts and at last present the final working OpenWrt router and its capabilities.
Anton Lerner, Architect at Sitaro, computer geek, developer and occasional maker.
Sitaro provides total cyber protection for small business and home networks. Sitaro prevents massive scale IoT cyber attacks.
Find out more information in the meetup event page - https://www.meetup.com/Tel-Aviv-Yafo-Linux-Kernel-Meetup/events/245319189/
The presentation deals with the set of tools and features that Linux kernel developers can use for kernel debugging. Static analysis of kernel patches was also addressed during the talk. Special attention was given to access tools, tracing tools and interactive debugging tools, namely DebugFS, ftrace and GDB.
This presentation by Aleksandr Bulyshchenko (Software Engineer, Consultant, GlobalLogic Kharkiv) was delivered at GlobalLogic Kharkiv Embedded TechTalk #1 on March 13, 2018.
Kernel Recipes 2015: Kernel packet capture technologies - Anne Nicolas
Sniffing through the ages
Capturing packets running on the wire and sending them to analysis software seems at first sight a simple task. But one must not forget that on current networks this can mean capturing 30M packets per second. The objective of this talk is to show which methods and techniques have been implemented in Linux and how they have evolved over time.
The talk will cover AF_PACKET capture as well as PF_RING, DPDK and netmap. It will try to show how successive evolutions of hardware and software have had an impact on the design of these technologies. On the software side, a special focus will be put on the Suricata IDS, which implements most of these capture methods.
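As a minimal illustration of the AF_PACKET capture mentioned above, the Linux-only sketch below opens a raw packet socket and decodes Ethernet headers. The interface name and packet count are placeholders; this is a toy sniffer, not any of the high-performance methods the talk covers:

```python
import socket
import struct

def parse_ethernet(frame: bytes):
    """Decode the 14-byte Ethernet header: dst MAC, src MAC, EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype

def capture(interface: str = "eth0", count: int = 10):
    """Minimal AF_PACKET sniffer (Linux only, needs CAP_NET_RAW/root)."""
    # ETH_P_ALL (0x0003): receive frames of every protocol on the interface.
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                         socket.ntohs(0x0003))
    sock.bind((interface, 0))
    try:
        for _ in range(count):
            frame, _addr = sock.recvfrom(65535)
            print(parse_ethernet(frame))
    finally:
        sock.close()
```

Each recvfrom() here copies one packet into user space; avoiding exactly that per-packet cost is what PF_RING, DPDK and netmap are about.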
Eric Leblond, Stamus Networks
Upgrade-UX is an open source framework developed to assist in patching and/or updating Unix operating systems in a consistent and repeatable way. In industry especially, it is often forbidden to simply run yum update (on Linux) to update your system; therefore, upgrade-ux may prove to be a handy tool to guide you through the patching and/or update process, as it follows a track you control (evidence gathering, pre/post execution of scripts, logging, and so on).
The U-Boot project has evolved over a time span of more than 17 years, and so have its complexity and its uses. This has made getting started with its development and use a daunting task. This talk will address these issues, starting with an overview of U-Boot's features, the efforts created by the U-Boot community and future plans.
In this talk Jagan Teki (maintainer for the Allwinner SoC, SPI and SPI flash subsystems) will introduce U-Boot from scratch with a brief overview of U-Boot history, U-Boot proper, SPL, TPL, the build process and the startup sequence. He will also cover other preliminaries such as image booting, Falcon mode, secure boot and U-Boot features like device tree, device tree overlays, the driver model and DFU.
After this introduction, he will talk about the steps to port U-Boot to new hardware, with a demo, along with the U-Boot testing process. Finally, he will address and review ongoing development work, open issues and future plans for U-Boot.
Containers are incredibly convenient to package applications and deploy them quickly across the data center.
This talk will introduce RunX, a new project under LF Edge that aims at bringing containers to the edge with extra benefits. At the core, RunX is an OCI-compatible container runtime to run software packaged as containers as Xen micro-VMs. RunX allows traditional containers to be executed with a minimal overhead as virtual machines, providing additional isolation and real-time support.
It also introduces new types of containers designed with edge and embedded deployments in mind. RunX enables RTOSes and bare-metal apps to be packaged as containers, delivered to the target using the powerful container infrastructure, and deployed at runtime as Xen micro-VMs. Physical resources, such as accelerators and FPGA blocks, can be dynamically assigned to them.
This presentation will go through the architecture of RunX and the new deployment scenarios it enables. It will provide an overview of the integration with Yocto Project via the meta-virtualization layer and describe how to build a complete system with Xen and RunX.
The presentation will come with a live demo on embedded hardware.
Linux Kernel Platform Development: Challenges and Insights - GlobalLogic Ukraine
This presentation is about the main tasks that Linux kernel platform engineers take care of. The talk includes real-life cases that help explain the role of these specialists and may be helpful to those considering such a change in their careers.
The talk was delivered by Sam Protsenko (Software Engineer, Consultant, GlobalLogic) at GlobalLogic Embedded Career Day #2 on February 10, 2018.
More about GlobalLogic Embedded Career Day #2: https://www.globallogic.com/ua/events/globallogic-kyiv-embedded-career-day-2-materials
A talk presented at the Automotive Grade Linux All-Members meeting on September 8, 2015. It focuses on why AGL should adopt systemd and highlights two of the more difficult integration issues that may arise while doing so. The embedded SVG image, courtesy of Marko Hoyer of ADIT, is at http://she-devel.com/2015-07-23_amm_demo.svg
The Linux Block Layer - Built for Fast Storage - Kernel TLV
The arrival of flash storage introduced a radical change in the performance profiles of direct-attached devices. At the time, it was obvious that the Linux I/O stack needed to be redesigned in order to support devices capable of millions of IOPS with extremely low latency.
In this talk we revisit the changes to the Linux block layer over the last decade or so that made it what it is today: a performant, scalable, robust and NUMA-aware subsystem. In addition, we cover the new NVMe over Fabrics support in Linux.
Sagi Grimberg
Sagi is Principal Architect and co-founder at LightBits Labs.
Evolution of OTA Update in the IoT World - Stefano Babic
Updating the software of an embedded Linux system has gained importance and is nowadays an essential part of any product. But upgrading an embedded system in the field is a complex task, and the process must be robust and secure. The increasing number of devices connected to public networks has led to new features and requirements that a FOSS update agent must fulfil. Stefano is the author and maintainer of the FOSS project SWUpdate, a framework for building your own update strategy. In this presentation he will point out the new requirements coming from industry regarding an updater, and he will show which direction the project will take in the future.
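To give an idea of what "building your own update strategy" with SWUpdate involves, an update archive is driven by a sw-description metadata file roughly like the one below. The version, compatibility string, file name and device node are illustrative, not from the talk:

```text
software =
{
    version = "1.0.0";
    hardware-compatibility: [ "1.0" ];
    images: (
        {
            filename = "rootfs.ext4";
            device = "/dev/mmcblk0p2";
            type = "raw";
        }
    );
}
```

SWUpdate reads this description from the .swu archive and applies each listed image to its target device.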
Creating new Tizen profiles using the Yocto Project - Leon Anavi
Presentation for Tizen Developer Conference 2015 Shenzhen.
Tizen is an open source Linux-based software platform for Internet of Things, mobile, wearable and embedded devices. Tizen:Common provides a generic development environment for Tizen 3, whose key features include Wayland, Weston, EFL and the Crosswalk web runtime. The Yocto Project offers easy-to-use tools to create meta layers for new Tizen 3 profiles that inherit and expand the features of Tizen:Common. This talk will focus on the Tizen architecture and will provide guidelines for creating and building new Tizen profiles, based on Tizen:Common, using the Yocto Project for devices with Intel or ARM processors. It will also provide information about hidden gems in Tizen on Yocto and practical examples for packaging and deploying HTML5 applications through Yocto recipes for the open source hardware development boards MinnowBoard MAX (Intel) and HummingBoard (Freescale i.MX6 ARM SoC).
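As a hedged sketch of what packaging an HTML5 app through a Yocto recipe can look like, the hypothetical BitBake recipe below installs a web-app payload. The app name, paths and variable values are made up (and a real recipe also needs license checksum handling via LIC_FILES_CHKSUM):

```shell
# example-app_1.0.bb – hypothetical recipe, not from the talk
SUMMARY = "Example HTML5 application packaged for Tizen on Yocto"
LICENSE = "MIT"

SRC_URI = "file://app"
S = "${WORKDIR}/app"

do_install() {
    install -d ${D}/opt/usr/apps/example-app
    cp -r ${S}/* ${D}/opt/usr/apps/example-app/
}

FILES_${PN} = "/opt/usr/apps/example-app"
```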
These are the notes of a presentation I gave to our IT department, people who know a lot about VMs! They include a description of the differences between a VM and a container, why someone would want to use Docker, how it works (at 30,000 feet), some hints about what the hub and orchestration are, some Dockerfile examples (Jenkins slave, Jenkins master, sinopia server, etc.), and finally some new features Docker is going to propose in the future and how I intend to combine configuration tools, such as Ansible, with Docker.
We open-sourced LinuxKit in April 2017 at DockerCon in Austin. In this session, we'll take a detailed look at some advanced LinuxKit topics, ranging from the general read-only filesystem setup and multi-arch image support for x86_64 and arm64 to custom network configuration and kernel debugging and testing.
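For context, a LinuxKit image is described by a YAML file along these lines; the image tags and component choices below are illustrative placeholders, not from the session:

```yaml
# Hypothetical linuxkit.yml sketch
kernel:
  image: linuxkit/kernel:4.14.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:v0.6
  - linuxkit/runc:v0.6
  - linuxkit/containerd:v0.6
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:v0.6
services:
  - name: getty
    image: linuxkit/getty:v0.6
```

`linuxkit build` assembles these container images into a bootable, largely read-only OS image, which is what makes the multi-arch and debugging topics above interesting.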
Key concepts for mastering Docker. This is the slides presented during this Docker local meetup: https://events.docker.com/events/details/docker-sidi-bel-abbes-presents-master-docker-1st-meetup/
Real-World Docker: 10 Things We've Learned - RightScale
Docker has taken the world of software by storm, offering the promise of a portable way to build and ship software - including software running in the cloud. The RightScale development team has been diving into Docker for several projects, and we'll share our lessons learned on using Docker for our cloud-based applications.
This presentation will go through the basics of Docker and illustrate its importance in modern DevOps. It will also go through a step-by-step demo of setting up a Docker image for the LAMP stack (Linux, Apache, MySQL, PHP), together with a working sample application.
Slides & codes: http://bit.ly/thomasdocker
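A LAMP-stack image of the kind demoed can be sketched with a Dockerfile like the following; the base image, package names and paths are assumptions, not the presenter's exact demo:

```dockerfile
# Hypothetical LAMP image sketch (MySQL typically runs in its own container)
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y \
        apache2 \
        php \
        php-mysql \
        libapache2-mod-php \
    && rm -rf /var/lib/apt/lists/*

# Application code served by Apache
COPY ./app /var/www/html/

EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```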
Docker is the Open Source container engine. It lets you author, run, and manage software containers. Escape from dependency hell, and make deployment a breeze! This presentation includes the standard Docker intro (actualized for Docker 0.11) as well as some insights about how to perform orchestration and multi-host container linking.
This is a simple introduction to the container world, starting from LXC and arriving at the Docker platform.
The presentation focuses on the first steps in the Docker environment and scenarios from a developer's point of view.
Use the same Docker container for Java applications during development … - NLJUG
Docker is an extremely popular and relatively new open source project that can be used to build containers for (almost all) applications. A container based on Ubuntu with GlassFish and your favourite application is one of the many possibilities. The biggest advantage is that Docker containers run on (all) Linux distributions. This means the same container can be used locally for development and in the cloud for customers. Docker is already used by large companies such as eBay and Spotify, and Google actively supports it as well. This presentation will cover the advantages of Docker and best practices. I will also demonstrate how Docker works, so that after this session you can get started with Docker yourself.
Balena: a Moby-based container engine for IoT Balena
An introduction to balena, a Moby-based container engine for IoT. Presented by Petros Angelatos, CTO at resin.io, at the DockerCon Europe Moby Summit in Copenhagen, October 2017. Read more at https://balena.io
resin.io: Docker for IoT
Introducing resinOS: An Operating System Tailored for Containers and Built fo...Balena
This presentation, from the Embedded Linux Conference Europe in October 2016, discusses how resinOS was built, highlights some of its key features, and shares a roadmap for future development and contribution.
resinOS is the latest open-source tool built by resin.io to enable the future of hardware with the tools of modern software. resinOS is a simple yet powerful operating system that brings standard Docker containers to embedded devices and works on a wide variety of device types and architectures. resinOS was born from the team’s experience deploying embedded containers across device types and has been battle-tested in production environments.
You can download resinOS at https://resinos.io
11. 1.x ➞ 2.x updates considerations
Lots of differences between 1.x/2.x
Update-relevant changes:
● Partition layout / size
● State partition
● File system labels
● BTRFS ➞ Ext4 file system
● Docker storage driver change
12. 1.x ➞ 2.x updates process
● Stop supervisor & user application
● Backup /data (user data)
● Move partitions
● Pull OS image
● Resize Root A / recreate Root B
● Export root and boot content
● Reformat data partition, restore /data
aka. “the juggle”...
13. 1.x ➞ 2.x updates schematics
[Diagram: partition layout with Root A, Root B, Boot, and User Data]
14. 1.x ➞ 2.x updates
● Updater is a shell script
○ Lightweight
○ On failure, easy to resume
from an intermediate state
● Update logs are transient (/tmp)
● Quite slow (`docker export | gz`)
● OS needs to do more work on reboot
● 1.8.0-1.30.1 ➞ 2.2.0-2.4.2
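The step sequence from slide 12 can be sketched as a dry-run shell script that only prints the commands it would run; the image tag, device paths, and partition labels below are illustrative assumptions, not the actual resinOS updater internals:

```shell
#!/bin/sh
# Dry-run sketch of the 1.x -> 2.x "juggle". Nothing is executed:
# each step is printed instead, since the real updater repartitions
# the disk and must run on the device itself.
set -eu

OS_IMAGE="resin/resinos:2.2.0-raspberrypi3"   # hypothetical tag

run() { printf '+ %s\n' "$*"; }               # print instead of execute

run systemctl stop resin-supervisor            # stop supervisor
run docker stop my-user-app                    # stop user application
run tar -czf /mnt/data/backup.tar.gz -C /mnt/data resin-data  # back up /data
run sgdisk ...                                 # move partitions (details elided)
run docker pull "$OS_IMAGE"                    # pull new OS image
run resize2fs /dev/disk/by-label/resin-rootA   # resize Root A
run mkfs.ext4 -L resin-rootB /dev/mmcblk0p3    # recreate Root B
# the slow step called out on slide 14:
run "docker export rootfs | gzip > /mnt/data/root.tar.gz"
run mkfs.ext4 -L resin-data /dev/mmcblk0p6     # reformat data partition
run tar -xzf /mnt/data/backup.tar.gz -C /mnt/data  # restore /data
run reboot
```

The intermediate tarballs land on the data partition, which is why the slide's schematic shows the pull writing into the user-data area.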
15. 2.x ➞ 2.x updates (v1)
● Updater is a shell script
● Logic similar to 1.x➞1.x updates
○ Stop supervisor & user app
○ Pull resinOS image
○ Export to secondary root
○ Update supervisor
○ Switch boot
○ Reboot
16. 2.x ➞ 2.x updates (v2) - hostapp-update
● Devices boot directly into a
resinOS container (mobynit)
● Root is within the container
● Two root partitions accessible:
○ sysroot/active; sysroot/inactive
● Secondary Balena(Docker):
○ balena-host (docker-host)
● OS update: ~ a “balena pull”
17. 2.x ➞ 2.x updates (v2) - hostapp-update
[Diagram: hostapp images (Image A, Image B, Image C) across the active and inactive sysroots, alongside the Boot, User Data, and Supervisor areas]
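The active/inactive sysroot switch described above can be illustrated with a minimal runnable sketch, assuming a symlink-style slot pointer; the paths and the "balena pull" stand-in are assumptions for illustration, not the real mobynit mechanics:

```shell
#!/bin/sh
# Minimal sketch of A/B root slot selection for hostapp-style updates:
# pull the new OS into the inactive slot, then flip the pointer.
set -eu

SYSROOT=${SYSROOT:-/tmp/sysroot-demo}
mkdir -p "$SYSROOT/slot-a" "$SYSROOT/slot-b"
ln -sfn "$SYSROOT/slot-a" "$SYSROOT/active"   # system currently runs slot A

inactive_slot() {
  # whichever slot "active" does not point to is the update target
  if [ "$(readlink "$SYSROOT/active")" = "$SYSROOT/slot-a" ]; then
    echo "$SYSROOT/slot-b"
  else
    echo "$SYSROOT/slot-a"
  fi
}

target=$(inactive_slot)
echo "would run: balena pull <os-image> into $target"  # the actual pull
ln -sfn "$target" "$SYSROOT/active"                    # switch for next boot
echo "active is now $(readlink "$SYSROOT/active")"
```

The point of the layout is that the running system is never modified in place: the pull lands entirely in the inactive slot, and only the pointer flip commits the update.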
18. 2.x ➞ 2.x updates (v2) - hostapp-update
● No need to shut down supervisor &
user application anymore
● Balena deltas for OS updates
● Can create new OS versions by
Dockerfile and `hostapp-update`
● Update hooks do /boot or firmware
changes (with rollback on error)
● 2.5.0 ➞ current
19. 2.x ➞ 2.x updates (v1➞v2)
● Manually set up “balena-host”
● Pull resinOS image
● Export image
● Run resinOS image (update runs in
the hostapp system)
● Import image to be able to use
hostapp-update within it
… aka “the return of the juggle”
20. From beginning to end
From 1.0.0-pre / 0.0.10 (~2015)
to 2.12.6+rev1 / 7.4.3 (~1 week ago)
● Total time ~2h
● ~4-5 reboots
21. Running a HUP
Self-service:
● Triggering through the Proxy’s
action server
● Backend logs into device
● Runs the updater script
Manual:
● Log into the device and run script
23. hostapp-updates with custom image
FROM resin/resinos:2.12.6_rev1-intel-nuc
RUN echo "Kilroy was here" > /tagged
Then
$ docker build . -t imrehg/resinos-mod:kilroy
$ docker push imrehg/resinos-mod:kilroy
On the device:
# hostapp-update -i imrehg/resinos-mod:kilroy -r
24. Device support
Aiming to support all device types
on hostapp-enabled resinOS versions.
For 1.x➞2.x updates we might extend
the range of currently supported
versions (Pi, BBB, NUC)
1.x➞1.x updates probably won’t be
expanded further
25. Further Goals
● Easy management of host OS
variants/modifications
● The host being able to update
itself (just as the supervisor can
self-update via the
`update-resin-supervisor` service)
● Delta updates all around
26. ResinOS in a container
The same image that is used for updates
can now be run as a full-fledged
system:
resin-os/resinos-in-container
(GitHub)
Host OS updates:
● Run new image w/ existing volumes
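A hedged sketch of what such an invocation might look like, with the boot, state, and data partitions mapped to named Docker volumes; the flags, volume names, and mount points are assumptions for illustration (the real scripts in resin-os/resinos-in-container do considerably more setup):

```shell
#!/bin/sh
# Print (not run) an illustrative "resinOS in a container" command.
# Reusing the same named volumes with a newer image is what makes
# "run new image w/ existing volumes" act as a host OS update.
set -eu

IMAGE="resin/resinos:2.12.6_rev1-intel-nuc"

CMD="docker run --privileged \
  -v resin-boot:/mnt/boot \
  -v resin-state:/mnt/state \
  -v resin-data:/mnt/data \
  $IMAGE"

echo "$CMD"
```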
It’s indeed a lot of information, and can be quite involved. I will skip over some technical details to keep things focused; nevertheless, feel free to ask questions at any time!
“git push to devices” was one of our earlier mottos (even though it’s changing) - everyone’s familiar with application updates, but we are managing the entire device - including the host OS. The “host OS” is the minimal system that mainly gets Docker up and running (plus VPN, time management, ...)
Anyone who hasn’t done a device update on the team before? (Why?)
This is the conceptual diagram of resinOS layout that is often quoted.
OS + Balena on that + Application images within balena.
A slightly modified version for explanation.
Dual root partition (enables os updates)
Boot partition (storing config.json, config.txt, uEnv, device trees….)
Rest is pretty much bundled
Images, user data, supervisor are all currently part of the same “data” section of the OS
This is not directly mapped onto partitions, there’s a bit more to it, but this is it in a nutshell.
The updates are packaged as docker images, and posted on Docker Hub (currently)
They contain the root fs, but also specific locations for the files that go to the boot (kernel, firmware, device trees) and additional scripts (hooks)
This section goes through how updates were done historically from earlier versions to latest ones, to appreciate changes.
The original setup of host OS updates
The shell script coordinates the update steps, while HUP is mainly for the host-os part of the update
3-4 docker pulls for the entire process (supervisor, updater, OS image, migrator (for pre-1.10 Docker))
Of course the switch can work from Root B ➞ Root A too, if the device was updated before already. The angled arrow means that during the pull, the intermediate data is stored on the same partition as the user data.
Partition layout / size: increased rootA/rootB, state partition, moved data partition
File system labels: resin-rootA, resin-rootB labels for the two roots
Docker storage driver: btrfs ➞ aufs (overlay2 in some versions on resinOS, but fortunately those devices are not updated 1.x➞2.x)
A simplified version of what’s going on, but still…
Root B is recreated elsewhere on the disk, so the system has to run from Root A for this to work. If not, the partition is switched and redone.
gz is very slow on some devices (like Pi, even Pi 3)
Updates can take ~5 minutes on a NUC and ~20-30 minutes on a Pi (slower still on a Pi Zero, e.g.), even on a fast network
Extra OS work: the supervisor starting up after reboot will need to pull the new supervisor image, and that new supervisor will have to repull the user application
The ranges of supported updates vary between devices (e.g. BBB needs the latest 1.x to update, while the Pi can update from a wider range of versions)
Still have a bit below 900 devices on 1.x.
Faster, though, as there is one less pull
Mobynit is part of balena: bootable containers (original PR is linked)
The two root partitions are there to use with the secondary, host balena: active where the currently running system’s balena storage, while the inactive is where the update would land if run, currently empty
The pull is directly to inactive, maybe using the active as delta
We are working on making this sort of update a central part of the platform (the actual “hostapps”). In the meantime it might be handy for hacking.
Note that when building an image, using RUN requires building on a compatible architecture. On incompatible ones you can only use COPY (for example, when building a modified Raspberry Pi image on an x86 work machine).
For 2.x➞2.x updates that are not hostapp-enabled, we will add support if it’s low-hanging fruit; if there are any issues, some updates will be exempted.
For 1.x➞2.x updates it depends on the number of provisioned devices we have on 1.x for a given device type that gains a 2.x release. Also, if there’s no intermediate non-hostapp version, then it cannot have such an update at the moment.
The variants and modifications can be tools added to the host OS (extra binaries one needs) or conveniences (adding extra authorized_keys) - many of the things people would do this way could later become part of the platform too; this is a good way to prototype.
Supervisor updates - run once every 24h (kicking off ~15m after device start); the supervisor queries the API and pulls an update if needed.
Delta updates should reduce the amount of data moved, in line with our theme of “getting more and more lightweight” - just like the update is using less and less data: a reduced HUP container, then done away with completely, a smaller and smaller supervisor…
When running resinOS in a container, the boot, state and data partitions are added volumes.
This is delivering over-the-air updates...
Anyone who wants to go and do an update now?