This presentation, from the Embedded Linux Conference Europe in October 2016, discusses how resinOS was built, highlights some of its key features, and shares a roadmap for future development and contribution.
resinOS is the latest open-source tool built by resin.io to enable the future of hardware with the tools of modern software. resinOS is a simple yet powerful operating system that brings standard Docker containers to embedded devices and works on a wide variety of device types and architectures. resinOS was born from the team’s experience deploying embedded containers across device types and has been battle-tested in production environments.
You can download resinOS at https://resinos.io
Presented by: Elizabeth Joseph, IBM
Presented at All Things Open 2020
Abstract: Many enterprises, and as many of us learned during the COVID-19 outbreak, many governments, rely on mainframes to do the bulk of their data-driven work, and the modern mainframe is very good at what it does. But what if you're looking to modernize your platform and bring DevOps methodologies, tooling, and practices into your organization?
Today, there is an entire product line of mainframes that exclusively run Linux (RHEL, SLES, or Ubuntu). With Linux, you get access to the vast ecosystem of open source software that’s already been ported to the mainframe architecture (s390x), with more being ported every month.
If your organization is using z/OS, the Open Mainframe Project has a series of open source projects targeted specifically at the mainframe and improving usability. Zowe, for instance, helps create a consolidated API for accessing resources and workload on your system and Feilong is a z/VM connector that allows you to manage your virtual machines with familiar open source tooling like OpenStack. There are even connectors for Jenkins that allow you to integrate CI/CD pipelines with your workloads.
In this talk I'll explore all of this and more to show you how an automated, modern environment can thrive on today's mainframe.
Pull, push, clone, it is all in your daily workflow. But what if this wasn't your source code or your container, but the state of your whole computer? Push your production database over to another machine? No problem!
This talk shows how you can use Dotmesh with LinuxKit to work with persistent data on your server as simply as you work with git. This workflow helps unleash new ways of working with servers and data. Immutable infrastructure from LinuxKit meets controlled and manageable data storage from Dotmesh. Combining these two open source projects allows new possibilities in how to manage your infrastructure.
Securing Your Resources with Short-Lived Certificates! - All Things Open
Presented by: Allen Vailliencourt
Presented at All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: There is a better way to manage access to servers, databases, and Kubernetes than using passwords and/or public and private keys. Come and see how this is done with short-lived certificates, and see a demo of Teleport!
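As a sketch of the principle behind short-lived certificates (not Teleport's actual implementation), the following Python snippet issues credentials that carry an expiry time, so verification fails automatically once the window closes. The HMAC secret stands in for a real CA signing key:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # stands in for the CA's private key

def issue_credential(user, ttl_seconds=300):
    """Issue a credential that expires after ttl_seconds (5 minutes by default)."""
    payload = json.dumps({"user": user, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode(), sig

def verify_credential(payload_b64, sig):
    """Accept only if the signature matches AND the credential has not expired."""
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    return json.loads(payload)["exp"] > time.time()

token, sig = issue_credential("alice", ttl_seconds=300)
print(verify_credential(token, sig))        # a fresh credential verifies: True

expired, esig = issue_credential("bob", ttl_seconds=-1)  # already expired
print(verify_credential(expired, esig))     # False: no revocation list needed
```

The design point is that nothing has to be revoked: a stolen credential is useless minutes later, which is the core argument for short-lived certificates over long-lived keys.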
Securing the Software Supply Chain with TUF and Docker - Justin Cappos and Sa... - Docker, Inc.
If you want to compromise millions of machines and users, software distribution and software updates are an excellent attack vector. Using public cryptography to sign your packages is a good starting point, but as we will see, it still leaves you open to a variety of attacks. This is why we designed TUF, a secure software update framework. TUF helps to handle key revocation securely, limits the impact a man-in-the-middle attacker may have, and reduces the impact of repository compromise. We will discuss TUF's protections and integration into Docker's Notary software, and demonstrate new techniques that could be added to verify other parts of the software supply chain, including the development, build, and quality assurance processes.
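One of the TUF protections mentioned above, resistance to replaying stale metadata, can be illustrated with a minimal sketch (this is not TUF's actual format, and the HMAC key stands in for real public-key signatures): the client remembers the last metadata version it accepted and refuses anything older, so a man-in-the-middle cannot roll a victim back to a vulnerable release even if the old metadata was validly signed:

```python
import hashlib
import hmac
import json

REPO_KEY = b"repository-signing-key"  # simplified stand-in for public-key crypto

def sign_metadata(meta):
    """Serialize and 'sign' repository metadata (illustrative only)."""
    blob = json.dumps(meta, sort_keys=True).encode()
    return blob, hmac.new(REPO_KEY, blob, hashlib.sha256).hexdigest()

class Updater:
    """Rejects metadata whose version number does not increase (rollback protection)."""
    def __init__(self):
        self.last_version = 0

    def apply(self, blob, sig):
        expected = hmac.new(REPO_KEY, blob, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, sig):
            raise ValueError("bad signature")
        meta = json.loads(blob)
        if meta["version"] <= self.last_version:
            raise ValueError("rollback attempt: stale metadata replayed")
        self.last_version = meta["version"]
        return meta

u = Updater()
u.apply(*sign_metadata({"version": 1, "targets": {"app.tar.gz": "sha256-of-v1"}}))
u.apply(*sign_metadata({"version": 2, "targets": {"app.tar.gz": "sha256-of-v2"}}))
# Replaying the (validly signed) version-1 metadata now raises ValueError.
```

This is why "just sign your packages" is not enough: a plain signature check accepts the replayed version-1 metadata, while a version counter in signed metadata, as in TUF, does not.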
Presentation on Pesantren Kilat Code Security
Tangerang, 2016-06-06
We talk about Docker: what it is, why it matters, and how it can benefit us.
This presentation is an introduction delivered to a local meetup in Indonesia.
Networking in Docker EE 2.0 with Kubernetes and Swarm - Abhinandan P.b
The presentation covers the operator's goals from a networking perspective and how they are influenced by both Swarm and Kubernetes on the Docker EE platform.
Secure Substrate: Least Privilege Container Deployment - Docker, Inc.
Riyaz Faizullabhoy - Security Engineer, Docker
Diogo Mónica - Security Lead, Docker
The popularity of containers has driven the need for distributed systems that can provide a substrate for container deployments. These systems need the ability to provision and manage resources, place workloads, and adapt in the presence of failures. In particular, container orchestrators make it easy for anyone to manage their container workloads using their cloud-based or on-premises infrastructure. Unfortunately, most of these systems have not been architected with security in mind. Compromise of a less-privileged node can allow an attacker to escalate privileges to either gain control of the whole system or to access resources it shouldn't have access to. In this talk, we will go over how Docker has been working to build secure building blocks that allow you to run a least-privilege infrastructure, where any participant of the system only has access to the resources that are strictly necessary for its legitimate purpose. No more, no less.
OSCON: Unikernels and Docker: From revolution to evolution - Docker, Inc.
with Richard Mortier and Anil Madhavapeddy
Unikernels are a growing technology that augments existing virtual machine and container deployments with compact, single-purpose appliances. Two main flavors exist: clean-slate unikernels, which are often language specific, such as MirageOS (OCaml) and HaLVM (Haskell), and more evolutionary unikernels that leverage existing OS technology recreated in library form, notably the Rump Kernel used to build Rumprun unikernels.
Containers in depth – Understanding how containers work to better work with c... - All Things Open
Presented by: Brent Laster, SAS
Presented at All Things Open 2020
Abstract: Containers are all the rage these days – from Docker to Kubernetes and everywhere in-between. But to get the most out of them it can be helpful to understand how containers are constructed, how they depend and interact with the operating system, and what the differences and interactions are between layers, images, and containers. Join R&D Director, Brent Laster as he does a quick, visual overview of how containers work and how applications such as Docker work with them. Topics to be discussed include:
What containers are and the benefits they provide
How containers are constructed
The differences between layers, images, and containers
What immutability really means
The core Linux functionalities that containers are based on
How containers reuse code
The differences between containers and VMs
What Docker really does
The Docker storage drivers
How overlays work
The Open Container Initiative
A good analogy for understanding all of this
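The layer and overlay ideas in the topic list above can be modeled in a few lines of illustrative Python (a toy model, not how overlayfs is actually implemented): each layer maps paths to file content, the topmost layer wins on reads, and a None entry plays the role of a whiteout that hides a file from a lower layer:

```python
# Toy model of image layers: each layer maps path -> content, or None for a
# "whiteout" that deletes a file from a lower layer. Reads walk the stack
# top-down, which is roughly how an overlay filesystem resolves lookups.

base_layer   = {"/etc/os-release": "Ubuntu", "/usr/bin/app": "v1"}
update_layer = {"/usr/bin/app": "v2"}                  # shadows the lower copy
container_rw = {"/tmp/scratch": "runtime data",
                "/usr/bin/app": None}                  # whiteout: app deleted

def read(path, layers):
    """Return the visible content of path, honoring shadowing and whiteouts."""
    for layer in reversed(layers):        # topmost layer wins
        if path in layer:
            if layer[path] is None:
                raise FileNotFoundError(path)  # hidden by a whiteout
            return layer[path]
    raise FileNotFoundError(path)

image = [base_layer, update_layer]        # read-only image layers
container = image + [container_rw]        # image plus the writable container layer

print(read("/usr/bin/app", image))        # "v2": upper layer shadows lower
print(read("/etc/os-release", container)) # "Ubuntu": falls through to the base
```

The split between `image` and `container` mirrors the distinction in the talk: images are immutable stacks of read-only layers, and a container is just one more writable layer placed on top.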
Empower Your Docker Containers with Watson - DockerCon 2017 Austin - Phil Estes
A community theater talk given at DockerCon in Austin, Texas, on April 18th, 2017 by Lin Sun and Phil Estes from IBM Cloud. This talk first describes the growth of Watson cognitive services and APIs since Watson's origins beating human participants on Jeopardy! years ago, and then takes a simple containerized application and adds cognitive capability using the Watson Conversation service.
Building a Secure and Resilient Foundation for Banking at Intesa Sanpaolo wit... - Docker, Inc.
Intesa Sanpaolo is one of the top banking groups in the Eurozone, with over 12 million customers and 4,600 branches in Italy. With many traditional monolithic applications that are difficult to maintain and evolve, Intesa turned to Docker to help them both modernize the applications and improve their portability so that they could consider a multi-site architecture across multiple data centers. Using Docker Enterprise Edition (EE), Intesa took the first step to "break the monolith" by containerizing their infrastructure, self-described as an "Infrastructure-as-code" pattern, and now use Docker EE to orchestrate the applications across sites.
In this talk, Diego Braga, Infrastructure System Specialist at Intesa, and Lorenzo Fontana, DevOps Engineer at Kiratech, will share how they implemented Docker EE along with software-defined networking and storage solutions to validate Intesa's architectural model and to build a geographically distributed multi-data-center cluster, all while saving infrastructure costs and remaining compliant with regulations.
They will highlight their CI/CD process using Docker and Jenkins, how the developer and ops teams are now working together to implement a DevOps methodology, and Intesa's ROI from using Docker EE. They will also share Intesa's future plans, including creating mixed Linux/Windows clusters that use the same overlay network, and opportunities for on-prem/public cloud clusters.
Presented by: Lin Sun, IBM
Presented at All Things Open 2020
Abstract: Do you really need microservices? The Istio team made an architectural decision to change the Istio control plane from a set of microservices to a monolith in order to simplify Istio. Come and hear why we did it and how it simplifies the Istio operational experience, along with many other changes we made to simplify Istio.
Talking TUF: Securing Software Distribution - Docker, Inc.
The Update Framework (TUF) secures new or existing software update systems by providing a specification and library that can be flexibly and universally integrated or natively implemented. The update procedure is notoriously susceptible to malicious attacks and TUF is designed to prevent these and other updater weaknesses.
Docker's Notary project integrates the Go implementation of TUF with Docker Content Trust to verify the publisher of Docker images.
https://github.com/theupdateframework/tuf
Becoming the Docker Champion: Bringing Docker Back to Work - Docker, Inc.
You're at DockerCon and have spent the last two days deep in sessions, the Hallway Track, and networking. You've heard the stories, learnings, and benefits from large and small organizations that are on their DevOps and app-modernization journey with Docker. You may have even begun to identify multiple use cases for Docker at your work and how it could benefit your business and other teams.
In this session, Jim Armstrong of Docker will share how other Docker users have built their cases for broader use of Docker in their organizations. He will share real experiences of developers convincing their ops teams, ops teams introducing Docker to their developers, and passionate Docker users convincing IT executives to adopt Docker.
Managing Open Source software in the Docker era - nexB Inc.
Heather Meeker from O'Melveny & Myers and Michael Herzog from nexB discuss the specific impact of Docker on open source software governance and compliance.
Continuous Packaging is also Mandatory for DevOps - Docker, Inc.
While DevOps teams are comfortable with continuous integration and automated tests, continuous packaging has not been given the attention it deserves.
Even with containers, delivering an application as software packages provides multiple advantages over file-based installation: it makes it easier to manage dependencies, to provide metadata, checksum, and signature mechanisms, and to work with package repositories.
Doing this as continuous packaging means that the generation of these packages is fully automated and part of the software's build process. As a consequence, it eases the various steps of a solution's lifecycle (controlled impact of installation/uninstallation, identical deliveries all the way to the customer, avoidance of code or metadata duplication).
This presentation will detail the methodological approach around continuous packaging and demonstrate how it can be put in place using an open source tool such as project-builder.org, and how this allows the MondoRescue project to deliver packages at will for many distribution tuples through the same number of Docker containers.
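As a small illustration of the checksum-and-metadata side of continuous packaging (a sketch of the idea, not how project-builder.org works), a fully automated build step might generate a per-file checksum manifest for each package like this:

```python
import hashlib
import json
import pathlib
import tempfile

def build_manifest(pkg_dir, name, version):
    """Collect per-file SHA-256 checksums into a metadata manifest, the kind of
    artifact a continuous-packaging step would regenerate and sign on every build."""
    files = {}
    root = pathlib.Path(pkg_dir)
    for f in sorted(root.rglob("*")):
        if f.is_file():
            files[str(f.relative_to(root))] = hashlib.sha256(f.read_bytes()).hexdigest()
    return {"name": name, "version": version, "files": files}

# Simulate a staged package tree and build its manifest.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "app.bin").write_text("binary payload")
    pathlib.Path(d, "README").write_text("docs")
    manifest = build_manifest(d, "demo-pkg", "1.0.0")
    print(json.dumps(manifest, indent=2))
```

Because the manifest is produced by the build itself rather than by hand, every delivery carries consistent, verifiable metadata, which is precisely the lifecycle benefit the abstract describes.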
Discussion and demo (available via video) of Open Container Initiative (OCI) status and the runc reference implementation. Given at Open Container Day during OSCON 2016 in Austin, TX.
Securing Applications and Pipelines on a Container Platform - All Things Open
Presented at: Open Source 101 at Home
Presented by: Veer Muchandi, Red Hat Inc
Abstract: While everyone wants to do containers and Kubernetes, they don't know what they are getting into from a security perspective. This session intends to take you from "I don't know what I don't know" to "I know what I don't know," helping you make informed choices on application security.
Kubernetes as a container platform is becoming the de facto standard for every enterprise. In my interactions with enterprises adopting a container platform, I come across common questions:
- How does application security work on this platform? What do I need to secure?
- How do I implement security in pipelines?
- What about vulnerabilities discovered at a later point in time?
- What do newer technologies like the Istio service mesh bring to the table?
In this session, I will be addressing these commonly asked questions that every enterprise trying to adopt an Enterprise Kubernetes Platform needs to know so that they can make informed decisions.
Docker is the open source container engine. It lets you author, run, and manage software containers. Escape from dependency hell, and make deployment a breeze! This presentation includes the standard Docker intro (updated for Docker 0.11) as well as some insights about how to perform orchestration and multi-host container linking.
Why everyone is excited about Docker (and you should too...) - Carlo Bonamic... - Codemotion
In less than two years, Docker went from its first line of code to a major open source project with contributions from all the big names in IT. Everyone is excited, but what's in it for me as a Dev or Ops? In short, Docker makes creating development, test, and even production environments an order of magnitude simpler, faster, and completely portable across both local and cloud infrastructure. We will start with Docker's main concepts: how to create a Linux container from base images, run your application in it, and version your runtimes as you would source code, and finish with a concrete example.
Docker and Containers are proven solutions, but are they ready to replace your current deployment? And more importantly, are you aware of the changes you'll have to make to accommodate them? Are there any risks involved? This talk will answer these questions and talk about how to plan, automate, build, deploy, and orchestrate the whole process.
Real-World Docker: 10 Things We've Learned - RightScale
Docker has taken the world of software by storm, offering the promise of a portable way to build and ship software - including software running in the cloud. The RightScale development team has been diving into Docker for several projects, and we'll share our lessons learned on using Docker for our cloud-based applications.
Accelerate your software development with Docker - Andrey Hristov
Docker is in all the news, and this talk presents the technology and shows you how to leverage it to build your applications according to the 12-factor application model.
An overview of Docker and the container technology behind it. Lastly, we discuss a few tools that come in handy when managing a large number of containers.
Remix of two other open source presentations along with my own content, 40 slides set to play at 20 seconds auto-timed (similar to Pecha-Kucha style timing). This was delivered via Caribbean Tech Dev forum's monthly Google Hangout in November 2015, and video can be viewed at https://www.youtube.com/watch?v=xANrsSin_-0
JDD2014: Docker.io - versioned linux containers for JVM devops - Dominik Dorn - PROIDEA
This presentation will introduce you to Docker, the new shiny star on the DevOps horizon. It will teach you everything you need to know to get started with Docker, why you'd want to use it, and which tools to use to get the most out of it. In addition to showing the basics, it will introduce helpful libraries available for the JVM and show how they can be used together with Docker to create secure, scalable, and maintainable cloud setups for your applications.
This workshop will shed light on a modern solution to application portability, building, delivery, packaging, and system-dependency issues. Containers, especially Docker, have seen accelerated adoption in the web, the cloud, and recently the enterprise. HPC environments are seeing something similar with the introduction of the HPC containers Singularity and Shifter. They provide a good use case for solving software portability, and they help ensure repeatability of results. Their ecosystem also provides better development, delivery, and testing workflows that were alien to most HPC environments. This workshop will cover the theory and hands-on practice of containers and their ecosystem, introducing Docker and Singularity: Docker as a general-purpose container for almost any app, and Singularity as the container technology particular to HPC. The workshop will go over the foundations of the container platform, including an overview of the platform's system components: images, containers, repositories, clustering, and orchestration. The strategy is to demonstrate through live demos and hands-on exercises the use case of containers in building a portable distributed application cluster running a variety of workloads, including an HPC workload.
Docker Intro at the Google Developer Group and Google Cloud Platform Meet Up - Jérôme Petazzoni
Docker is the open source engine to author, run, and manage Linux Containers. This is a short introduction to Docker: what it is and what it is for. It was given in the context of the Google Developer Group and Google Cloud Platform Meet-Up in San Francisco at the end of March 2014.
Docker - Demo on PHP Application deployment - Arun prasath
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application from an external folder using custom configuration files.
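A Dockerfile for such a demo might look roughly like the following; the base image tag, configuration file name, and paths are illustrative assumptions, not the deck's actual files:

```dockerfile
# Illustrative sketch: image tag, config file, and app path are assumptions.
FROM php:7.4-apache

# Drop in a custom Apache site configuration from the build context
COPY my-site.conf /etc/apache2/sites-available/000-default.conf

# Copy the PHP application into Apache's document root
COPY app/ /var/www/html/

EXPOSE 80
```

To serve the application from an external folder instead of baking it into the image, the second COPY can be replaced by a bind mount at run time, e.g. `docker run -p 8080:80 -v $PWD/app:/var/www/html my-php-image`.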
Similar to Introducing resinOS: An Operating System Tailored for Containers and Built for the Embedded World (20)
Balena: a Moby-based container engine for IoT Balena
An introduction to balena, a Moby-based container engine for IoT. Presented by Petros Angelatos, CTO at resin.io, at the DockerCon Europe Moby Summit in Copenhagen, October 2017. Read more at https://balena.io
resin.io: Docker for IoT
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
4. Be the embedded OS of choice for containers in IoT
Create a community around containers for IoT
Modern security features
Minimal footprint
Production ready
Mission
5. Started 4 years ago
Modern devops practices to the embedded world
Naturally leaned towards containers
Ported Docker to ARMv6
Ported Docker to ARMv5
Fixes upstreamed
History - resin.io
6. Needed an OS for our platform
Tried a modified Arch
Tried a modified TinyCore
Both had important shortcomings
History - resinOS
7. Started in January 2014 as internal project
Used Yocto as a base
Open sourced in July 2015
Currently under very active development
It’s been running in production for 2.5 years
History - resinOS
11. meta-resin
meta-resin is split into meta-resin-common plus per-Yocto-release overlayers (Jethro, Fido, Daisy)
Main resinOS layer
Automatic aufs patching
BSP independent kernel configuration
Can prepopulate docker images
Kernel headers for out-of-tree module development
https://github.com/resin-os/meta-resin
12. Environment defined in a Dockerfile
Predictable host configuration
Docker image artifacts
You can use the OS as a container
resin/resinos:<version>-<board>
Build system
https://github.com/resin-os/resin-yocto-scripts
13. Separate rootfs and root state
We know exactly which services write to disk
Dual root partition
data partition auto-expands on first boot
Partition layout
boot, rootA, rootB, state, data
14. Forced us to investigate all writes
Configuration stored in state partition
Network configuration
Random seed
Clock at shutdown
Some state is stored in tmpfs
DHCP leases
Limited logs
Read-only root
16. Compartmentalisation of failures
Device can survive data partition corruption
Most I/O activity happens in there
Root partition is never written to while in use
We strive to do atomic operations everywhere
Reliability
19. Leverage a lot of systemd features
Adjusting OOM score for critical services
Running services in separate mount namespaces
Very easy dependency management
NTP
Socket activation for SSH
Saves RAM since ssh is running only when needed
Systemd
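Socket activation for SSH follows the standard systemd pattern: a `.socket` unit listens on port 22 and sshd is only spawned when a connection actually arrives. A minimal sketch of the two units (contents abbreviated, not resinOS's exact files):

```ini
# sshd.socket -- systemd holds the listening socket; sshd is not running yet
[Unit]
Description=SSH socket for per-connection servers

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target
```

```ini
# sshd@.service -- one instance is started per accepted connection
[Unit]
Description=SSH per-connection server

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
```

With `Accept=yes`, systemd spawns one `sshd@.service` instance per connection, which is what lets the OS save RAM when nobody is logged in.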
20. DNS is hard
dnsmasq
Integration of Docker with host’s dnsmasq
NetworkManager
Excellent D-Bus API
ModemManager
Excellent D-Bus API
Lots of documentation
Networking
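NetworkManager stores connections as keyfiles, which makes it straightforward to persist WiFi credentials on the state partition. A sketch of such a keyfile (the SSID and password echo the `rdt configure` example; a real file also carries a generated `uuid` and must be mode 600):

```ini
# /etc/NetworkManager/system-connections/super_wifi
[connection]
id=super_wifi
type=wifi

[wifi]
ssid=super_wifi
mode=infrastructure

[wifi-security]
key-mgmt=wpa-psk
psk=super_secure_password

[ipv4]
method=auto
```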
21. AUFS driver
Allows support for NAND based devices
Currently on docker 1.10.3
Backported stability patches
Journald logging driver
Avoids SD card wear
Seccomp enabled
Docker
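The storage backend and logging driver above are daemon-level settings. One plausible way to wire them up on a systemd system of that era is a service drop-in (path and flags shown for illustration; the exact resinOS unit may differ):

```ini
# /etc/systemd/system/docker.service.d/resin.conf (hypothetical sketch)
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --storage-driver=aufs --log-driver=journald
```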
22. All logs end up in journald
8MB in-RAM buffer by default
Configurable log persistence
Journald allows for structured logs
Container logs are annotated with metadata
Easy to send logs to a central location to store and process
Log management
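The journald side of this is a handful of settings in `journald.conf`; the values below are illustrative, not resinOS's shipped configuration:

```ini
# /etc/systemd/journald.conf sketch: 8MB in-RAM buffer by default
[Journal]
Storage=volatile
RuntimeMaxUse=8M
# To enable persistent logging instead:
# Storage=persistent
# SystemMaxUse=32M
```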
24. Some boards have internal storage
Image for these boards is a flasher
Automatic copying to internal storage
Feedback through LEDs
Two stage flashing
25. So many options
It’s one of our biggest focus areas
resinhup is our current approach
Takes advantage of dual root partition
Validates everything before changing the state
It’s still experimental
Host OS updates
https://github.com/resin-os/resinhup/
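The dual-root flow that resinhup takes advantage of can be sketched in a few lines of shell. All names and the `validate` stub here are hypothetical; the real implementation is considerably more involved:

```shell
#!/bin/sh
# Dual-root bookkeeping sketch (hypothetical names): write the new OS
# image to the inactive slot, validate it, and only then flip the
# active marker, so a failed update can never become the boot target.
active="rootA"

# Return the slot that is not currently active
other() { if [ "$1" = "rootA" ]; then echo "rootB"; else echo "rootA"; fi; }

target="$(other "$active")"
echo "writing new OS image to $target"

# Stand-in for checksumming/verifying the freshly written image
validate() { return 0; }

if validate "$target"; then
  # e.g. atomically rewrite a flag file on the boot partition
  active="$target"
fi
echo "active=$active"   # prints "active=rootB"
```

Because the active marker only changes after validation succeeds, an interrupted or corrupt update leaves the device booting the old, known-good root.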
26. ● Used by
CoreOS, ChromiumOS, Ubuntu Snappy
Brillo, Mender.io
But wastes a lot of space
We’re experimenting with more advanced approaches
ostree
docker
Dual root partition method
27. Integration with docker
It uses docker to pull the OS image
It then unpacks and applies it
Leveraging important docker features
Signed images
Programmatic API for fetching
Open question: can unify containers and host?
ResinHUP
https://github.com/resin-os/resinhup/
28. Automatic emulated testing
● We support virtual QEMU boards
● Automated basic testing on every PR
○ Booting
○ Networking
● Integrated with our Jenkins
https://github.com/resin-io/autohat
29. Automatic hardware testing
● Manual testing doesn’t scale
○ Currently 22 boards
● We built a board that instruments boards
○ GPIO
○ Provisioning
○ SD muxing
○ Wifi testing
https://github.com/resin-io/autohat-rig
30. ARM64
● Coming soon
ARMv6
● RPI Zero
● RPI model 1 A+
ARMv5
● TS7700
Device support
ARMv7
● Raspberry Pi 2
● Raspberry Pi 3
● Samsung Artik 5
● Samsung Artik 10
● Beaglebone Black
● Beaglebone Green
● Beaglebone Green Wireless
● Odroid C1/C1+
● Odroid XU4
● SolidRun Hummingboard i2
● Boundary Devices Nitrogen6x
● Parallella Board
● VIA 820 board
● Zynq zc702
● TS4900 Single and Quad
X86_32
● Intel Edison
X86_64
● Intel NUC
31. Device support
● Easy to add new boards
● Meta-resin handles
○ Userspace
○ Image generation
○ Kernel configuration
33. ● How do you..
○ Configure network credentials?
○ Provision a device?
○ Develop on the board?
○ Get logs?
Development tools
34. ● Development images have
○ Open SSH server
○ Docker socket exposed over TCP
○ mDNS exposed metadata
● Device is at <hostname>.local
Development mode
35. ● Image configuration
● Wifi credentials
● Hostname
● Persistent logging
Resin Device Toolbox
$ rdt configure ~/Downloads/resinos-dev.img
? Network SSID super_wifi
? Network Key super_secure_password
? Do you want to set advanced settings? Yes
? Device Hostname resin
? Do you want to enable persistent logging? no
Done!
36. ● Automatically detects removable storage
● Won’t wipe your drive!
● Validates after writing
Resin Device Toolbox
$ sudo rdt flash ~/Downloads/resinos-dev.img
? Select drive /dev/disk3 (7.9 GB) - STORAGE DEVICE
? This will erase the selected drive. Are you sure? Yes
Flashing [========================] 100% eta 0s
Validating [========================] 100% eta 0s
37. Docker development
Finds device in local network
Continuously syncs code into the container
Rebuilds when necessary
Resin Device Toolbox
$ rdt push --source .
* Building..
- Stopping and Removing any previous 'myapp' container
- Removing any existing container images for 'myapp'
- Building new 'myapp' image
38. ● More than 500 images for each supported device type
● Debian, Fedora, Alpine
● Node.js, Python, Go, Java
● Follow docker conventions
Base Images
https://github.com/resin-io-library/base-images
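Because the base images follow Docker conventions, consuming one is an ordinary Dockerfile; the image name and tag below are illustrative:

```Dockerfile
# Hypothetical app container built on a resin base image
FROM resin/raspberrypi3-node:6
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . ./
CMD ["npm", "start"]
```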
Who we are
Why we build resin
How resin works
We will upload the slides
The links next to the GitHub logo point to the repo for what is mentioned in the slide
We believe containers give some unique capabilities when applied to IoT
Developers use very familiar tools
They don’t have to reboot for every iteration
If their container completely crashes, the OS is still there to recover
But also have some unique challenges
A container in the cloud interacts with its environment by consuming computation, memory, networking, and storage
All of these are very well abstracted, and it makes no difference whether they are virtual or not
You can’t have a virtual GPIO
Containers in IoT have to interface with hardware sometimes
Devices can be connected to very poor networks or via 3G, so bandwidth matters
We’re also aiming at making the OS developer friendly and providing the tools that allow you to build great applications
We’ve made a lot of progress during the past years and now we’re making an official release as a standalone project
We were managing a fleet of 200 devices across London
Most of the tooling felt primitive. It was a hard problem
We wanted to bring more modern tools to the embedded world
Bridge the Yocto world with the Docker/cloud world
Started with `git push` workflow, only supported nodejs projects
Most distributions are not optimised for remote management and embedded devices
We had a very early prototype which would sometimes start being slow
Turned out filesystem indexing was kicking off in the background
We had to manually build tooling around creating images, installing software
Most distributions support a limited set of architectures
resinOS is the result of our efforts to have a reliable system in production
Has been running on thousands of devices
We moved the development to the open a bit more than a year ago
We’re now making it a proper open source project
Yocto had drawbacks, namely steep learning curve, resource-hungry builds
The benefits outweigh them by far
We can disable compile-time features that we don’t use
Very big community in the embedded space
Provides the right tools for distribution maintainers
For each board we support there is a GitHub repo with a well-defined structure
We manage yocto dependencies by using git submodules
We can move each board independently through versions without blocking the development of meta-resin
Normally each such repo has no code, but sometimes some glue code is put there in a non-submoduled layer
This is where the bulk of the work is happening
We need AUFS for docker - we’ll talk about it later on why we chose this backend
Meta-resin auto-detects if the kernel provided by the BSP has aufs support (like the Beaglebone) and automatically applies patches if not
Meta-resin automatically extends the kernel config provided by the BSP to enable the right features for container to run
Prepopulating docker images means easier mass-production/provisioning
We use a docker container for the build env yocto runs in
Well defined versions of host dependencies
One of the artifacts of the build process is a docker image containing the userspace and boot files
We are actually pushing this image in dockerhub for every release
This can be used as a base for a container or for testing
It will also become relevant in the talk when we talk about OS updates
Meta-resin creates this layout by default
Data partition is where all containers and container state is kept
State partition is where host OS state, such as configuration, is kept
We generally try to keep the boot partition as a FAT partition
This allows our development tools to read and write to it cross-platform
We chose to have a read-only root partition to make failures much less likely
During board boot we bind-mount a lot of relevant directories from the state partition to the appropriate locations
Another approach, used by OpenWRT, would be having a read-only root (squashfs in their case) and using overlayfs on top.
We didn’t want to be dependent on a union filesystem for the rootfs
Some POSIX semantics are lost
You can still get random mutations of the rootfs that will bite you later when you update
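The bind-mount approach mentioned above can be expressed as ordinary fstab entries; the mount point and paths here are hypothetical, just to show the shape:

```
# state partition mounted at /mnt/state; host state bind-mounted over a
# read-only root (paths illustrative)
/mnt/state/root-overlay/etc/hostname        /etc/hostname        none  bind  0 0
/mnt/state/root-overlay/etc/NetworkManager  /etc/NetworkManager  none  bind  0 0
/mnt/state/root-overlay/var/lib/systemd     /var/lib/systemd     none  bind  0 0
```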
The goal of the OS is to be fully updatable but also flexible. If all the application code was living in the host then a complex host OS update would be required for every change.
This is where containers come in.
Our userspace is designed to do the absolute necessary things to bring up the board and connect to the internet
Using this distinction we can have very fast reboot-less updates for containers
But we can still update the host with traditional methods but much more infrequently
The initial versions of our OS were based on sysvinit
But we quickly found ourselves having a hard time managing dependencies
Sometimes restarting a service wouldn’t kill all its processes
Each program was managing its own logs
We plan to leverage it a lot more in the future
Networking outside the data center is a wild place
glibc only supports a maximum of 3 nameservers
In some networks DHCP-provided DNS is broken
In some networks Google DNS is broken
In some networks DNS servers take forever to reply
In some networks DNS servers reply with the wrong data
D-Bus can be easily exposed to containers
Have also tried connman
We had to patch it to convince it to not do NTP
Documentation is buried in the repo, with examples in the test suite
We didn’t have a good way of persisting a config file describing 3G connections
ModemManager gets us that
In general much more developer friendly
Overlay needs 3.18+ and uses a lot of inodes
Overlay2 needs 4.0+
BTRFS is not stable enough, especially for low storage devices
Devicemapper doesn’t work on NAND
Zfs… yeah
Docker’s default logging driver will keep appending your stdout into a very big JSON file on disk
We configure docker to send logs to journald
Meta-resin also enables seccomp in the kernel and docker uses a default profile for all containers
We don’t allow anything to write its own logs
One can easily send the stream of journald logs to a remote server
If configured, resinOS can store logs persistently on disk
Yocto by default produces the host OS image but sometimes this needs to be written to internal storage
Depending on the board this might involve a few steps
ResinOS can create a “flasher” type image that copies itself into internal storage and shuts down the board
There are so many options for OS updates. I think there are 4 talks on this subject in this ELCE
Resinhup is how we are addressing our current production needs
It's still experimental and tested with a very carefully selected set of customers
Host OS updates are a very hot topic and an active area of development
One important feature we added recently is integration with docker
Pulling down an fs image using a tarball felt wrong when we had docker
Currently we’re using docker only for pulling an image and then copy it to the rootfs
Docker has a lot of nice features that can be leveraged. Signed images for example
* Automate all possible ways of booting a board (SD card / DFU / network booting?)
* Emulate the peripherals that could be connected to the SBC (GPIO/SPI/I2C/Bluetooth)
* All external hardware used to run the tests should be on USB to enable portability of the entire rig
* Simulate and test different network conditions under which an SBC operates
* Accompanying software components should have an easy DSL/way of writing and extending tests, so that peripheral abstractions can be re-used
* Accompanying software components should enable continuous integration with Jenkins
* All custom hardware designs should be in KiCad to allow for easy contribution, as the project is open source
20 boards
Normally you have to connect with ethernet/serial cable
Configuration is tricky
Flashing
All of our code is open source and our goal is to build a community around this project
We’ll be more than happy to chat with you on our gitter channel