The first release of Docker only supported AUFS, and AUFS was available (out of the box) only with Debian and Ubuntu kernels. Then Red Hat wanted Docker to run on its distros, and contributed the Device Mapper driver, later the BTRFS driver, and recently the overlayfs driver.
Jérôme presents how those drivers compare from a high-level perspective, explaining their pros and cons.
Then he shows each driver in action and looks at low-level implementation details. We won't dive into the Go implementation code itself, but we will explain the concepts behind each driver. This will help to better understand how they work, and give some hints when it comes to troubleshooting their behaviour.
Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic - LinuxConJérôme Petazzoni
Containers are everywhere. But what exactly is a container? What are they made from? What's the difference between LXC, systemd-nspawn, Docker, and the other container systems out there? And why should we care about specific filesystems?
In this talk, Jérôme will show the individual roles and behaviors of the components making up a container: namespaces, control groups, and copy-on-write systems. Then, he will use them to assemble a container from scratch, and highlight the differences (and similarities) with existing container systems.
Describes what lightweight virtualization and containers are, and the low-level mechanisms in the Linux kernel that they rely on: namespaces and cgroups. It also gives details on AUFS. Those components together are the key to understanding how modern systems like Docker (http://www.docker.io/) work.
Virtual machines are generally considered secure. At least, secure enough to power highly multi-tenant, large-scale public clouds, where a single physical machine can host a large number of virtual instances belonging to different customers. Containers have many advantages over virtual machines: they boot faster, have less performance overhead, and use less resources. However, those advantages also stem from the fact that containers share the kernel of their host, instead of abstracting a new independent environment. This sharing has significant security implications, as kernel exploits can now lead to host-wide escalations.
We will show techniques to harden Linux Containers, including kernel capabilities, mandatory access control, hardened kernels, user namespaces, and more, and discuss the remaining attack surface.
Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.
Cgroups, namespaces, and beyond: what are containers made from? (DockerCon Eu...Jérôme Petazzoni
Linux containers are different from Solaris Zones or BSD Jails: they use discrete kernel features like cgroups, namespaces, SELinux, and more. We will describe those mechanisms in depth, as well as demo how to put them together to produce a container. We will also highlight how different container runtimes compare to each other.
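The namespaces this abstract mentions are visible from userspace under /proc. A minimal Python sketch (illustrative, not from the talk) that lists the namespaces of the current process — processes that share a namespace see the same inode number in these links:

```python
import os

def list_namespaces(pid="self"):
    # Each entry in /proc/<pid>/ns is a symlink whose target names the
    # namespace type and its inode number, e.g. "pid:[4026531836]".
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in list_namespaces().items():
        print(f"{name:10s} {ident}")
```

Comparing the output for two processes shows which namespaces they share; a containerized process will report different inode numbers from its host for most entries.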
This talk was delivered at DockerCon Europe 2015 in Barcelona.
Virtualization with KVM (Kernel-based Virtual Machine)Novell
As a technical preview, SUSE Linux Enterprise Server 11 contains KVM, which is the next-generation virtualization software delivered with the Linux kernel. In this technical session we will demonstrate how to set up SUSE Linux Enterprise Server 11 for KVM, install some virtual machines and deal with different storage and networking setups.
To demonstrate live migration we will also show a distributed replicated block device (DRBD) setup and a setup based on iSCSI and OCFS2, which are included in SUSE Linux Enterprise Server 11 and SUSE Linux Enterprise 11 High Availability Extension.
How Netflix Tunes EC2 Instances for PerformanceBrendan Gregg
CMP325 talk for AWS re:Invent 2017, by Brendan Gregg.
At Netflix we make the best use of AWS EC2 instance types and features to create a high-performance cloud, achieving near bare-metal speed for our workloads. This session will summarize the configuration, tuning, and activities for delivering the fastest possible EC2 instances, and will help other EC2 users improve performance, reduce latency outliers, and make better use of EC2 features. We'll show how we choose EC2 instance types, how we choose between EC2 Xen modes (HVM, PV, and PVHVM), and the importance of EC2 features such as SR-IOV for bare-metal performance. SR-IOV is used by EC2 enhanced networking, and recently by the new i3 instance type for enhanced disk performance as well. We'll also cover kernel tuning and observability tools, from basic to advanced. Advanced performance analysis includes the use of Java and Node.js flame graphs, and the new EC2 Performance Monitoring Counter (PMC) feature released this year.
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, XilinxThe Linux Foundation
This talk will introduce Dom0-less: a new way of using Xen to build mixed-criticality solutions. Dom0-less is a Xen feature that adds a novel approach to static partitioning based on virtualization. It allows multiple domains to start at boot time directly from the Xen hypervisor, decreasing boot times dramatically. Xen userspace tools, such as xl and libvirt, become optional.
Dom0-less extends the existing device tree based Xen boot protocol to cover information required by additional domains. Binaries, such as kernels and ramdisks, are loaded by the bootloader (u-boot) and advertised to Xen via new device tree bindings.
The audience will learn how to use Dom0-less to partition the system. U-Boot and device tree configuration details will be explained to enable the audience to get the most out of this feature. The talk will include a status update and details on future plans.
Basics of Linux Commands, Git and GithubDevang Garach
Teachers Day 2020 - Basics of Linux Commands, Git and Github
History of Linux? (Fast Forward)
Brief overview of Linux OS files/ folders system
Basics Commands on Linux (Useful in daily routine)
What is Git? How to use?
Difference between Git and GitHub
How can we host an HTML-based website and get a github.io domain, free of cost (₹0)?
In computing, ZFS is a combined file system and logical volume manager designed by Sun Microsystems, a subsidiary of Oracle Corporation. The features of ZFS include support for high storage capacities, integration of the concepts of file system and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs. Unlike traditional file systems, which reside on single devices and thus require a volume manager to use more than one device, ZFS file systems are built on top of virtual storage pools called zpools. A zpool is constructed of virtual devices (vdevs), which are themselves constructed of block devices: files, hard drive partitions, or entire drives, with the last being the recommended usage.[7] Thus, a vdev can be viewed as a group of hard drives. This means a zpool consists of one or more groups of drives.
In addition, pools can have hot spares to compensate for failing disks. ZFS also supports both read and write caching, for which special devices can be used: solid-state drives can serve as the L2ARC (Level 2 ARC) to speed up read operations, while NVRAM-buffered SLC memory, boosted with supercapacitors, can implement a fast, non-volatile write cache that improves synchronous writes. Finally, when mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the face of the failure of an entire chassis. Storage pool composition is not limited to similar devices but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space to diverse file systems as needed. Arbitrary storage device types can be added to existing pools to expand their size at any time. The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.
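To make the pool/vdev hierarchy concrete, here is a toy Python model (purely illustrative, not the ZFS implementation, and the capacity rules are simplified): a zpool aggregates vdevs, each vdev groups block devices, and the pooled capacity is shared by all filesystems subject to quotas:

```python
from dataclasses import dataclass, field

@dataclass
class Vdev:
    # A vdev groups block devices. In this simplified model, a mirror's
    # usable space is its smallest member; a stripe's is the sum.
    device_sizes_gb: list
    kind: str = "stripe"  # or "mirror"

    def usable_gb(self):
        if self.kind == "mirror":
            return min(self.device_sizes_gb)
        return sum(self.device_sizes_gb)

@dataclass
class Zpool:
    vdevs: list
    quotas_gb: dict = field(default_factory=dict)  # per-filesystem quotas

    def capacity_gb(self):
        # Every filesystem in the pool draws from the combined capacity
        # of all vdevs, limited only by its own quota.
        return sum(v.usable_gb() for v in self.vdevs)

pool = Zpool(vdevs=[Vdev([4000, 4000], kind="mirror"),
                    Vdev([2000, 2000])])
pool.quotas_gb["tank/home"] = 1000
print(pool.capacity_gb())  # mirror contributes 4000, stripe 4000 -> 8000
```

Real ZFS adds parity overhead for RAID-Z, metadata reservations, and dynamic allocation, but the shape of the hierarchy is the same.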
An introduction to Linux Container, Namespace & Cgroup.
Virtual Machine, Linux operating principles. Application constraint execution environment. Isolate application working environment.
XPDS13: Xen in OSS based In–Vehicle Infotainment Systems - Artem Mygaiev, Glo...The Linux Foundation
The role of Xen, implementation details, and problems in a sample solution based on OSS (Android, Linux and Xen) that addresses automotive requirements such as ultra-fast RVC boot time, quick IVI system boot time, cloud connectivity and multimedia capabilities, and reliability and security through hardware virtualization. Secure CAN/LIN/MOST bus integration is handled by Linux on Dom0, while Android runs a customizable QML-based HMI in a sandbox of DomU. These case studies will include, but not be limited to: computing power requirements, memory requirements, virtualization, stability, boot-time sequence and optimization, and video clips showing results of the work done. The case study is built on a Texas Instruments OMAP5 SoC.
Virtualization, Containers, Docker and scalable container management servicesabhishek chawla
In this presentation we take you through the concept of virtualization, including the different types of virtualization; understanding Docker as a software containerization platform, covering Docker's architecture and building and running custom images in Docker containers; and scalable container management services, including an overview of Amazon ECS and Kubernetes, and how at LimeTray we harnessed the power of Kubernetes for scalable, automated deployment of our microservices.
Docker Intro at the Google Developer Group and Google Cloud Platform Meet UpJérôme Petazzoni
Docker is the Open Source engine to author, run, and manage Linux Containers. This is a short introduction to Docker: what it is and what it is for. It was given in the context of the Google Developer Group and Google Cloud Platform Meet-Up in San Francisco at the end of March 2014.
Cgroups, namespaces and beyond: what are containers made from?Docker, Inc.
Linux containers are different from Solaris Zones or BSD Jails: they use discrete kernel features like cgroups, namespaces, SELinux, and more. We will describe those mechanisms in depth, as well as demo how to put them together to produce a container. We will also highlight how different container runtimes compare to each other.
Introduction to Docker (and a bit more) at LSPE meetup SunnyvaleJérôme Petazzoni
What's Docker, why does it matter, how does it use Linux Containers, why should you use it, and how? You'll find answers to those questions (and a bit more) in this presentation, given February 20th 2014 at the Large Scale Production Engineering Meet-Up at Yahoo, in Sunnyvale.
Why everyone is excited about Docker (and you should too...) - Carlo Bonamic...Codemotion
In less than two years Docker went from first line of code to major Open Source project with contributions from all the big names in IT. Everyone is excited, but what's in it for me, as a Dev or Ops? In short, Docker makes creating Development, Test and even Production environments an order of magnitude simpler, faster and completely portable across both local and cloud infrastructure. We will start with Docker's main concepts: how to create a Linux Container from base images, run your application in it, and version your runtimes as you would with source code, and finish with a concrete example.
OSDC 2016 - Interesting things you can do with ZFS by Allan Jude&Benedict Reu...NETWAYS
ZFS is the next-generation filesystem originally developed at Sun Microsystems. Available under the CDDL, it uniquely combines volume manager and filesystem into a powerful storage management solution for Unix systems, regardless of big or small storage requirements. ZFS offers features, for free, that are usually found only in costly enterprise storage solutions. This talk will introduce ZFS and give an overview of its features like snapshots and rollback, compression, deduplication as well as replication. We will demonstrate how these features can make a difference in the datacenter, giving administrators the power and flexibility to adapt to changing storage requirements.
Real world examples of ZFS being used in production for video streaming, virtualization, archival, and research are shown to illustrate the concepts. The talk is intended for people considering ZFS for their data storage needs and those who are interested in the features ZFS provides.
History of Linux
Brain behind development
Why Linux
GNU
Why GNU ?
Where can you find Linux?
Linux is Best!!
Core components of Linux
File system
Drive letters
Security
Facts about Linux
Docker is the Open Source container engine. It lets you author, run, and manage software containers. Escape from dependency hell, and make deployment a breeze! This presentation includes the standard Docker intro (updated for Docker 0.11) as well as some insights about how to perform orchestration and multi-host container linking.
All Things Open 2015: DOCKER: EVERYTHING YOU SHOULD KNOWDocker, Inc
Docker is an open platform to build, ship, and run any Linux application, anywhere. It can be used in many ways: providing clean, isolated development environments; quickly spinning up test instances for CI purposes; ensuring coherence between development and production platform; and much more. You will learn about Docker basic concepts, how to run containers, create images, leverage Docker Hub, and stack multiple containers to orchestrate distributed applications.
Topics include
- What’s Docker?
- Understanding Docker images
- Building images with Dockerfile
- Pushing and pulling images
- Development workflow with Docker
- Orchestrating distributed applications.
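A minimal Dockerfile of the kind the "Building images with Dockerfile" topic covers might look like this (an illustrative sketch, not taken from the talk; the image tag and script name are assumptions):

```dockerfile
# Start from a small base image, add the application, declare how to run it.
FROM alpine:3.18
COPY app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
```

Building and running it follows the workflow the talk describes: `docker build -t myapp .` creates the image, and `docker push` shares it via Docker Hub.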
This first presentation in the training "Introduction to Linux for bioinformatics" gives an introduction to Linux, and the concepts by which Linux operates.
Docker Tips And Tricks at the Docker Beijing MeetupJérôme Petazzoni
This talk was presented in October at the Docker Beijing Meetup, in the VMware offices.
It presents some of the latest features of Docker, discusses orchestration possibilities with Docker, then gives a briefing about the performance of containers; and finally shows how to use volumes to decouple components in your applications.
Containerize Your Game Server for the Best Multiplayer Experience Docker, Inc.
Raymond Arifianto, AccelByte and
Mark Mandel, Google -
We have been deploying containerized micro-services for our Game Backend Services for a while. Now we are tackling the challenge to scale up fleets of game dedicated servers in multiple regions, multiple data centers and multiple providers - some in bare metal, some in Cloud. So we leverage docker containerization to deploy Game Servers to achieve Portability, Fast Deployment and Predictability, enabling us to scale up to thousands of servers, on demand, without a sweat.
How to Improve Your Image Builds Using Advance Docker BuildDocker, Inc.
Nicholas Dille, Haufe-Lexware + Docker Captain -
Docker continues to be the standard tool for building container images. For more than a year Docker has shipped with BuildKit as an alternative image builder, providing advanced features for secret and cache management. These features help to make image builds faster and more secure. In this session, Docker Captain Nicholas Dille will teach you how to use BuildKit features to your advantage.
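BuildKit's secret mounts, one of the features this session covers, let a build step read a credential without it ever being stored in an image layer. A short sketch (the secret id and file are hypothetical):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.18
# The secret is mounted only for the duration of this RUN step and is
# never written into the resulting image.
RUN --mount=type=secret,id=npm_token \
    cat /run/secrets/npm_token > /dev/null
```

The secret is supplied at build time, e.g. `DOCKER_BUILDKIT=1 docker build --secret id=npm_token,src=./token.txt .`, so the token lives on the build host, not in the image history.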
Build & Deploy Multi-Container Applications to AWSDocker, Inc.
Lukonde Mwila, Entelect -
As the cloud-native approach to development and deployment becomes more prevalent, it's an exciting time for software engineers to be equipped on how to dockerize multi-container applications and deploy them to the cloud.
In this talk, Lukonde Mwila, Software Engineer at Entelect, will cover the following topics:
- Docker Compose
- Containerizing an Nginx Server
- Containerizing a React App
- Containerizing a Node.js App
- Containerizing a MongoDB App
- Running a Multi-Container App Locally
- Creating a CI/CD Pipeline
- Adding a build stage to test containers and push images to Docker Hub
- Deploying Multi-Container App to AWS Elastic Beanstalk
Lukonde will start by giving an overview of how Docker Compose works and how it makes it very easy and straightforward to startup multiple Docker containers at the same time and automatically connect them together with some form of networking.
After that, Lukonde will take a hands-on approach to containerize an Nginx server, a React app, a Node.js app and a MongoDB instance to demonstrate the power of Docker Compose. He'll demonstrate the usage of two Dockerfiles for an application, one production-grade and the other for local development and running of tests. Lastly, he'll demonstrate creating a CI/CD pipeline in AWS to build and test our Docker images before pushing them to Docker Hub or AWS ECR, and finally deploying our multi-container application to AWS Elastic Beanstalk.
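A docker-compose.yml along the lines this talk describes (service names, build paths, and ports are hypothetical, not from the talk) might wire the pieces together like this:

```yaml
version: "3.8"
services:
  web:
    build: ./react-app        # React front end served by Nginx
    ports:
      - "80:80"
  api:
    build: ./node-api         # Node.js backend
    environment:
      - MONGO_URL=mongodb://db:27017/app
    depends_on:
      - db
  db:
    image: mongo:4.4          # MongoDB with a named volume for its data
    volumes:
      - db-data:/data/db
volumes:
  db-data:
```

A single `docker-compose up` then starts all three containers on a shared default network, where each service can reach the others by service name (e.g. the API reaches MongoDB at `db:27017`).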
Securing Your Containerized Applications with NGINXDocker, Inc.
Kevin Jones, NGINX -
NGINX is one of the most popular images on Docker Hub and has been at the forefront of the web since the early 2000s. In this talk we will discuss how and why NGINX's lightweight and powerful architecture makes it a very popular choice for securing containerized applications as a sidecar reverse proxy within containers. We will highlight important aspects of application security that NGINX can help with, such as TLS, HTTP, AuthN, AuthZ and traffic control.
How To Build and Run Node Apps with Docker and ComposeDocker, Inc.
Kathleen Juell, Digital Ocean -
Containers are an essential part of today's microservice ecosystem, as they allow developers and operators to maintain standards of reliability and reproducibility in fast-paced deployment scenarios. And while there are best practices that extend across stacks in containerized environments, there are also things that make each stack distinct, starting with the application image itself.
This talk will dive into some of these particularities, both at the image and service level, while also covering general best practices for building and running Node applications with database backends using Docker and Compose.
Jessica Deen, Microsoft -
Helm 3 is here; let's go hands-on! In this demo-fueled session, I'll walk you through the differences between Helm 2 and Helm 3. I'll offer tips for a successful rollout or upgrade, go over how to easily use charts created for Helm 2 with Helm 3 (without changing your syntax), and review opportunities where you can participate in the project's future.
Distributed Deep Learning with Docker at SalesforceDocker, Inc.
Jeff Hajewski, Salesforce -
Docker storage drivers by Jérôme Petazzoni
1. Deep dive into
Docker storage drivers
*
Jérôme Petazzoni - @jpetazzo
Docker - @docker
1 / 71
2. Not so deep dive into
Docker storage drivers
*
Jérôme Petazzoni - @jpetazzo
Docker - @docker
2 / 71
3. Who am I?
@jpetazzo
Tamer of Unicorns and Tinkerer Extraordinaire¹
Grumpy French DevOps person who loves Shell scripts
Go Away Or I Will Replace You Wiz Le Very Small Shell Script
Some experience with containers
(built and operated the dotCloud PaaS)
¹ At least one of those is actually on my business card
3 / 71
4. Outline
Extremely short intro to Docker
Short intro to copy-on-write
History of Docker storage drivers
AUFS, BTRFS, Device Mapper, Overlayfs, VFS
Conclusions
4 / 71
6. What's Docker?
A platform made of the Docker Engine and the Docker Hub
The Docker Engine is a runtime for containers
It's Open Source, and written in Go
http://www.slideshare.net/jpetazzo/docker-and-go-why-did-we-decide-to-write-docker-in-go
It's a daemon, controlled by a REST-ish API
What is this, I don't even?!?
Check the recording of this online "Docker 101" session:
https://www.youtube.com/watch?v=pYZPd78F4q4
6 / 71
7. If you've never seen Docker in action ...
This will help!
jpetazzo@tarrasque:~$ docker run -ti python bash
root@75d4bf28c8a5:/# pip install IPython
Downloading/unpacking IPython
  Downloading ipython-2.3.1-py3-none-any.whl (2.8MB): 2.8MB downloaded
Installing collected packages: IPython
Successfully installed IPython
Cleaning up...
root@75d4bf28c8a5:/# ipython
Python 3.4.2 (default, Jan 22 2015, 07:33:45)
Type "copyright", "credits" or "license" for more information.
IPython 2.3.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
In [1]:
7 / 71
8. What happened here?
We created a container (~lightweight virtual machine),
with its own:
filesystem (based initially on a python image)
network stack
process space
We started with a bash process
(no init, no systemd, no problem)
We installed IPython with pip, and ran it
8 / 71
9. What did not happen here?
We did not make a full copy of the python image
The installation was done in the container, not the image:
We did not modify the python image itself
We did not affect any other container
(currently using this image or any other image)
9 / 71
10. How is this important?
We used a copy-on-write mechanism
(Well, Docker took care of it for us)
Instead of making a full copy of the python image, keep
track of changes between this image and our container
Huge disk space savings (1 container = less than 1 MB)
Huge time savings (1 container = less than 0.1s to start)
10 / 71
13. Copy-on-write for memory (RAM)
fork()(process creation)
Create a new process quickly
... even if it's using many GBs of RAM
Actively used by e.g. Redis BGSAVE,
to obtain consistent snapshots
mmap() (mapped files) with MAP_PRIVATE
Changes are visible only to current process
Private maps are fast, even on huge files
Granularity: 1 page at a time (generally 4 KB)
13 / 71
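The MAP_PRIVATE behavior above is easy to observe from user space. Here is a minimal Python sketch (assuming a POSIX system): writes through a private mapping fault, get a private copy of the touched page, and never reach the underlying file.

```python
import mmap
import os
import tempfile

# Create a one-page file full of "A" bytes.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * mmap.PAGESIZE)

# Map it MAP_PRIVATE: the first write triggers a page fault and a
# copy-on-write duplication of just that one page.
m = mmap.mmap(fd, mmap.PAGESIZE, flags=mmap.MAP_PRIVATE)
m[0:5] = b"HELLO"

in_memory = bytes(m[0:5])            # what our process sees
with open(path, "rb") as f:
    on_disk = f.read(5)              # what the file actually contains

m.close()
os.close(fd)
os.unlink(path)

assert in_memory == b"HELLO"         # the private mapping was modified...
assert on_disk == b"AAAAA"           # ...but the file was never touched
```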
14. Copy-on-write for memory (RAM)
How does it work?
Thanks to the MMU! (Memory Management Unit)
Each memory access goes through it
Translates memory accesses (location¹ + operation²) into:
actual physical location
or, alternatively, a page fault
¹ Location = address = pointer
² Operation = read, write, or exec
14 / 71
15. Page faults
When a page fault occurs, the MMU notifies the OS.
Then what?
Access to non-existent memory area = SIGSEGV
(a.k.a. "Segmentation fault" a.k.a. "Go and learn to use pointers")
Access to swapped-out memory area = load it from disk
(a.k.a. "My program is now 1000x slower")
Write attempt to code area = seg fault (sometimes)
Write attempt to copy-on-write area = duplication operation
Then resume the initial operation as if nothing happened
Can also catch execution attempt in no-exec area
(e.g. stack, to protect against some exploits)
15 / 71
16. Copy-on-write for storage (disk)
Initially used (I think) for snapshots
(E.g. to take a consistent backup of a busy database,
making sure that nothing was modified between the
beginning and the end of the backup)
Initially available (I think) on external storage (NAS, SAN)
(Because It's Complicated)
16 / 71
17. Copy-on-write for storage (disk)
Initially used (I think) for snapshots
(E.g. to take a consistent backup of a busy database,
making sure that nothing was modified between the
beginning and the end of the backup)
Initially available (I think) on external storage (NAS, SAN)
(Because It's Complicated)
Suddenly,
Wild CLOUD appeared!
17 / 71
18. Thin provisioning for VMs¹
Put system image on copy-on-write storage
For each machine¹, create copy-on-write instance
If the system image contains a lot of useful software,
people will almost never need to install extra stuff
Each extra machine will only need disk space for data!
WIN $$$ (And performance, too, because of caching)
¹ Not only VMs, but also physical machines with netboot, and containers!
18 / 71
19. Modern copy-on-write on your desktop
(In no specific order; non-exhaustive list)
LVM (Logical Volume Manager) on Linux
ZFS on Solaris, then FreeBSD, Linux ...
BTRFS on Linux
AUFS, UnionMount, overlayfs ...
Virtual disks in VM hypervisors
19 / 71
20. Copy-on-write and Docker: a love story
Without copy-on-write...
it would take forever to start a container
containers would use up a lot of space
Without copy-on-write "on your desktop"...
Docker would not be usable on your Linux machine
There would be no Docker at all.
And no meet-up here tonight.
And we would all be shaving yaks instead.
☹
20 / 71
21. Thank you:
Junjiro R. Okajima (and other AUFS contributors)
Chris Mason (and other BTRFS contributors)
Jeff Bonwick, Matt Ahrens (and other ZFS contributors)
Miklos Szeredi (and other overlayfs contributors)
The many contributors to Linux device mapper, thinp target,
etc.
... And all the other giants whose shoulders we're sitting on top of, basically
21 / 71
23. First came AUFS
Docker used to be dotCloud
(PaaS, like Heroku, Cloud Foundry, OpenShift...)
dotCloud started using AUFS in 2008
(with vserver, then OpenVZ, then LXC)
Great fit for high density, PaaS applications
(More on this later!)
23 / 71
24. AUFS is not perfect
Not in mainline kernel
Applying the patches used to be exciting
... especially in combination with GRSEC
... and other custom fancery like setns()
24 / 71
25. But some people believe in AUFS!
dotCloud, obviously
Debian and Ubuntu use it in their default kernels,
for Live CD and similar use cases:
Your root filesystem is a copy-on-write between
- the read-only media (CD, DVD...)
- and a read-write media (disk, USB stick...)
As it happens, we also ♥ Debian and Ubuntu very much
First version of Docker is targeted at Ubuntu (and Debian)
25 / 71
26. Then, some people started to believe in Docker
Red Hat users demanded Docker on their favorite distro
Red Hat Inc. wanted to make it happen
... and contributed support for the Device Mapper driver
... then the BTRFS driver
... then the overlayfs driver
Note: other contributors also helped tremendously!
26 / 71
27. Special thanks:
Alexander Larsson
Vincent Batts
+ all the other contributors and maintainers, of course
(But those two guys have played an important role in the initial support, then
maintenance, of the BTRFS, Device Mapper, and overlay drivers. Thanks again!)
27 / 71
30. In Theory
Combine multiple branches in a specific order
Each branch is just a normal directory
You generally have:
at least one read-only branch (at the bottom)
exactly one read-write branch (at the top)
(But other fun combinations are possible too!)
30 / 71
31. When opening a file...
With O_RDONLY - read-only access:
look it up in each branch, starting from the top
open the first one we find
With O_WRONLY or O_RDWR - write access:
look it up in the top branch;
if it's found here, open it
otherwise, look it up in the other branches;
if we find it, copy it to the read-write (top) branch,
then open the copy
That "copy-up" operation can take a while if the file is big!
31 / 71
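The lookup and copy-up logic above can be sketched in a few lines of Python. This is a toy model under obvious assumptions (the real logic lives in the kernel; `aufs_open` and the branch layout are invented for illustration), with branches[0] as the read-write top branch:

```python
import os
import shutil
import tempfile

def aufs_open(branches, name, mode="r"):
    """Toy AUFS open(): search branches top-down, copy-up on write access."""
    writable = any(c in mode for c in "wa+")
    top = branches[0]
    for branch in branches:
        path = os.path.join(branch, name)
        if os.path.exists(path):
            if not writable or branch == top:
                return open(path, mode)
            # Write access to a file found in a lower branch: "copy-up"
            # to the top branch first. This is the operation that can
            # take a while on big files.
            copy_path = os.path.join(top, name)
            shutil.copy2(path, copy_path)
            return open(copy_path, mode)
    raise FileNotFoundError(name)

lower, top = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(lower, "etc.conf"), "w") as f:
    f.write("original")

with aufs_open([top, lower], "etc.conf", "r+") as f:  # copy-up happens here
    f.write("changed!")

with open(os.path.join(lower, "etc.conf")) as f:
    lower_data = f.read()      # the read-only branch is untouched
with open(os.path.join(top, "etc.conf")) as f:
    top_data = f.read()        # all writes landed in the top branch

assert lower_data == "original"
assert top_data == "changed!"
```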
32. When deleting a file...
A whiteout file is created
(if you know the concept of "tombstones", this is similar)
# docker run ubuntu rm /etc/shadow
# ls -la /var/lib/docker/aufs/diff/$(docker ps --no-trunc -lq)/etc
total 8
drwxr-xr-x 2 root root 4096 Jan 27 15:36 .
drwxr-xr-x 5 root root 4096 Jan 27 15:36 ..
-r--r--r-- 2 root root    0 Jan 27 15:36 .wh.shadow
32 / 71
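A toy model of the whiteout mechanism (the function names `aufs_unlink` and `aufs_exists` are invented for illustration): deleting a file only drops a `.wh.` marker in the writable top branch, and lookups treat that marker as "this file does not exist" without ever touching the read-only layers below.

```python
import os
import tempfile

def aufs_unlink(top_branch, name):
    # "Delete" by creating a whiteout marker in the writable top branch.
    open(os.path.join(top_branch, ".wh." + name), "w").close()

def aufs_exists(branches, name):
    # branches[0] is the top; a whiteout shadows any copy in lower branches.
    for branch in branches:
        if os.path.exists(os.path.join(branch, ".wh." + name)):
            return False
        if os.path.exists(os.path.join(branch, name)):
            return True
    return False

lower, top = tempfile.mkdtemp(), tempfile.mkdtemp()
open(os.path.join(lower, "shadow"), "w").close()   # file lives in the image layer

assert aufs_exists([top, lower], "shadow")
aufs_unlink(top, "shadow")
assert not aufs_exists([top, lower], "shadow")         # gone, per the union view
assert os.path.exists(os.path.join(lower, "shadow"))   # but the image is intact
```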
33. In Practice
The AUFS mountpoint for a container is
/var/lib/docker/aufs/mnt/$CONTAINER_ID/
It is only mounted when the container is running
The AUFS branches (read-only and read-write) are in
/var/lib/docker/aufs/diff/$CONTAINER_OR_IMAGE_ID/
All writes go to /var/lib/docker
dockerhost# df -h /var/lib/docker
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdb        15G  4.8G  9.5G  34% /mnt
33 / 71
34. Under the hood
To see details about an AUFS mount:
look for its internal ID in /proc/mounts
look in /sys/fs/aufs/si_.../br*
each branch (except the two top ones)
translates to an image
34 / 71
36. Performance, tuning
AUFS mount() is fast, so creation of containers is quick
Read/write access has native speeds
But initial open() is expensive in two scenarios:
when writing big files (log files, databases ...)
with many layers + many directories in PATH
(dynamic loading, anyone?)
Protip: when we built dotCloud, we ended up putting all
important data on volumes
When starting the same container 1000x, the data is
loaded only once from disk, and cached only once in
memory (but dentries will be duplicated)
36 / 71
38. Preamble
Device Mapper is a complex subsystem; it can do:
RAID
encrypted devices
snapshots (i.e. copy-on-write)
and some other niceties
In the context of Docker, "Device Mapper" means
"the Device Mapper system + its thin provisioning target"
(sometimes noted "thinp")
38 / 71
39. In theory
Copy-on-write happens on the block level
(instead of the file level)
Each container and each image gets its own block device
At any given time, it is possible to take a snapshot:
of an existing container (to create a frozen image)
of an existing image (to create a container from it)
If a block has never been written to:
it's assumed to be all zeros
it's not allocated on disk
(hence "thin" provisioning)
39 / 71
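A toy model of thin provisioning, under stated assumptions (class names are invented; the real thinp target lives in the kernel): the "metadata" is the virtual-to-physical block mapping, the "data" is the shared pool, a snapshot copies only the mapping, and blocks that were never written cost nothing and read as zeros.

```python
BLOCK = 64 * 1024   # Docker uses the smallest thinp block size, 64 KB

class Pool:
    """The shared "data" area: a finite supply of physical blocks."""
    def __init__(self, nblocks):
        self.free = list(range(nblocks))
        self.blocks = {}                    # physical block -> contents
    def alloc(self):
        if not self.free:
            raise RuntimeError("pool exhausted: real writes would stall here")
        return self.free.pop()

class ThinDevice:
    """One container's or image's block device, backed by the pool."""
    def __init__(self, pool, mapping=None):
        self.pool = pool
        self.map = dict(mapping or {})      # the "metadata": virtual -> physical
    def snapshot(self):
        return ThinDevice(self.pool, self.map)   # copy mappings, share data
    def read(self, vblock):
        if vblock not in self.map:
            return b"\0" * BLOCK            # never written: all zeros, zero cost
        return self.pool.blocks[self.map[vblock]]
    def write(self, vblock, data):
        phys = self.pool.alloc()            # copy-on-write: fresh physical block
        self.pool.blocks[phys] = data.ljust(BLOCK, b"\0")
        self.map[vblock] = phys

pool = Pool(nblocks=8)
image = ThinDevice(pool)
image.write(0, b"base image data")
container = image.snapshot()                # instant: no data copied
container.write(0, b"container change")     # goes to a newly allocated block

assert image.read(0).rstrip(b"\0") == b"base image data"   # image untouched
assert container.read(0).rstrip(b"\0") == b"container change"
```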
40. In practice
The mountpoint for a container is
/var/lib/docker/devicemapper/mnt/$CONTAINER_ID/
It is only mounted when the container is running
The data is stored in two files, "data" and "metadata"
(More on this later)
Since we are working on the block level, there is not much
visibility on the diffs between images and containers
40 / 71
41. Under the hood
docker info will tell you about the state of the pool
(used/available space)
List devices with dmsetup ls
Device names are prefixed with docker-MAJ:MIN-INO
MAJ, MIN, and INO are derived from the block major, block minor, and inode number
where the Docker data is located (to avoid conflict when running multiple Docker
instances, e.g. with Docker-in-Docker)
Get more info about them with dmsetup info, dmsetup status
(you shouldn't need this, unless the system is badly borked)
Snapshots have an internal numeric ID
/var/lib/docker/devicemapper/metadata/$CONTAINER_OR_IMAGE_ID
is a small JSON file tracking the snapshot ID and its size
41 / 71
42. Extra details
Two storage areas are needed:
one for data, another for metadata
"data" is also called the "pool"; it's just a big pool of blocks
(Docker uses the smallest possible block size, 64 KB)
"metadata" contains the mappings between virtual offsets
(in the snapshots) and physical offsets (in the pool)
Each time a new block (or a copy-on-write block) is
written, a block is allocated from the pool
When there are no more blocks in the pool, attempts to
write will stall until the pool is increased (or the write
operation aborted)
42 / 71
43. Performance
By default, Docker puts data and metadata on a loop
device backed by a sparse file
This is great from a usability point of view
(zero configuration needed)
But terrible from a performance point of view:
each time a container writes to a new block,
a block has to be allocated from the pool,
and when it's written to,
a block has to be allocated from the sparse file,
and sparse file performance isn't great anyway
43 / 71
44. Tuning
Do yourself a favor: if you use Device Mapper,
put data (and metadata) on real devices!
stop Docker
change parameters
wipe out /var/lib/docker(important!)
restart Docker
docker -d --storage-opt dm.datadev=/dev/sdb1 --storage-opt dm.metadatadev=/dev/sdc1
44 / 71
45. More tuning
Each container gets its own block device
with a real FS on it
So you can also adjust (with --storage-opt):
filesystem type
filesystem size
discard (more on this later)
Caveat: when you start 1000x containers,
the files will be loaded 1000x from disk!
45 / 71
48. In theory
Do the whole "copy-on-write" thing at the filesystem level
Create¹ a "subvolume" (imagine mkdir with Super Powers)
Snapshot¹ any subvolume at any given time
BTRFS integrates the snapshot and block pool
management features at the filesystem level, instead of the
block device level
¹ This can be done with the btrfs tool.
48 / 71
49. In practice
/var/lib/docker has to be on a BTRFS filesystem!
The BTRFS mountpoint for a container or an image is
/var/lib/docker/btrfs/subvolumes/$CONTAINER_OR_IMAGE_ID/
It should be present even if the container is not running
Data is not written directly, it goes to the journal first
(in some circumstances¹, this will affect performance)
¹ E.g. uninterrupted streams of writes.
In that case, throughput can drop to half of the "native" performance.
49 / 71
50. Under the hood
BTRFS works by dividing its storage in chunks
A chunk can contain data or metadata
You can run out of chunks (and get No space left on device)
even though df shows space available
(because the chunks are not full)
Quick fix:
# btrfs filesys balance start -dusage=1 /var/lib/docker
50 / 71
51. Performance, tuning
Not much to tune
Keep an eye on the output of btrfs filesys show!
This filesystem is doing fine:
# btrfs filesys show
Label: none  uuid: 80b37641-4f4a-4694-968b-39b85c67b934
        Total devices 1 FS bytes used 4.20GiB
        devid    1 size 15.25GiB used 6.04GiB path /dev/xvdc
This one, however, is full (no free chunk) even though there is
not that much data on it:
# btrfs filesys show
Label: none  uuid: de060d4c-99b6-4da0-90fa-fb47166db38b
        Total devices 1 FS bytes used 2.51GiB
        devid    1 size 87.50GiB used 87.50GiB path /dev/xvdc
51 / 71
53. Preamble
What's with the grayed-out fs?
It used to be called (and have filesystem type) overlayfs
When it was merged in 3.18, this was changed to overlay
53 / 71
54. In theory
This is just like AUFS, with minor differences:
only two branches (called "layers")
but branches can be overlays themselves
54 / 71
55. In practice
You need kernel 3.18
On Ubuntu¹:
go to http://kernel.ubuntu.com/~kernel-ppa/mainline/
locate the most recent directory, e.g. v3.18.4-vivid
download the linux-image-..._amd64.deb file
dpkg -i that file, reboot, enjoy
¹ Adaptation to other distros left as an exercise for the reader.
55 / 71
56. Under the hood
Images and containers are materialized under
/var/lib/docker/overlay/$ID_OF_CONTAINER_OR_IMAGE
Images just have a root subdirectory
(containing the root FS)
Containers have:
lower-id→ file containing the ID of the image
merged/→ mount point for the container (when running)
upper/→ read-write layer for the container
work/→ temporary space used for atomic copy-up
56 / 71
57. Performance, tuning
Implementation detail:
identical files are hardlinked between images
(this avoids doing composed overlays)
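A quick illustration of why hardlinking saves space: two directory entries point at one inode, so the data exists once on disk (and once in the page cache). The paths below are just temp files, not real image layers.

```shell
# Two "images" sharing an identical file via a hardlink
d=$(mktemp -d)
echo "layer content" > "$d/image1_file"
ln "$d/image1_file" "$d/image2_file"   # hardlink, not a copy
stat -c %h "$d/image1_file"            # link count: 2
stat -c %i "$d/image1_file"            # same inode number...
stat -c %i "$d/image2_file"            # ...for both names
rm -rf "$d"
```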
Not much to tune at this point
Performance should be slightly better than AUFS:
no stat() explosion
good memory use
slow copy-up, still (nobody's perfect)
57 / 71
59. In theory
No copy on write. Docker does a full copy each time!
Doesn't rely on those fancy-pesky kernel features
Good candidate when porting Docker to new platforms
(think FreeBSD, Solaris...)
Space inefficient, slow
59 / 71
60. In practice
Might be useful for production setups
(If you don't want / cannot use volumes, and don't want /
cannot use any of the copy-on-write mechanisms!)
60 / 71
62. The nice thing about Docker storage drivers,
is that there are so many of them to choose from.
62 / 71
63. What do, what do?
If you do PaaS or other high-density environment:
AUFS (if available on your kernel)
overlayfs (otherwise)
If you put big writable files on the CoW filesystem:
BTRFS or Device Mapper (pick the one you know best)
Wait, really, you want me to pick one!?!
63 / 71
65. The best storage driver to run your production
will be the one with which you and your team
have the most extensive operational experience.
65 / 71
67. TRIM
Command sent to a SSD disk, to tell it:
"that block is not in use anymore"
Useful because on SSD, erase is very expensive (slow)
Allows the SSD to pre-erase cells in advance
(rather than on-the-fly, just before a write)
Also meaningful on copy-on-write storage
(if/when every snapshot has trimmed a block, it can be
freed)
67 / 71
68. discard
Filesystem option meaning:
"can I has TRIM on this pls"
Can be enabled/disabled at any time
Filesystem can also be trimmed manually with fstrim
(even while mounted)
68 / 71
69. The discardquandary
discardworks on Device Mapper + loopback devices
... but is particularly slow on loopback devices
(the loopback file needs to be "re-sparsified" after
container or image deletion, and this is a slow operation)
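The "re-sparsify" step can be shown in miniature on any regular file: once blocks inside a backing file are no longer needed, they have to be punched back into holes before the space is actually released. fallocate -d (dig holes) from util-linux does this for zero-filled blocks; this assumes the underlying filesystem supports hole punching.

```shell
# Allocate 8 MB of real blocks, then turn the zeroes back into holes
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 status=none
du -k "$f" | awk '{print $1}'   # ~8192 KB actually allocated
fallocate -d "$f"               # dig holes where blocks are all zero
du -k "$f" | awk '{print $1}'   # back down to ~0 KB, same file size
rm -f "$f"
```

This is a slow, whole-file scan, which is why doing it after every container or image deletion hurts on loopback setups.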
You can turn it on or off depending on your preference
69 / 71
71. Questions?
To get those slides, follow me on twitter: @jpetazzo
Yes, this is a particularly evil scheme to increase my follower count
Also WE ARE HIRING!
infrastructure (servers, metal, and stuff)
QA (get paid to break things!)
Python (Docker Hub and more)
Go (Docker Engine and more)
Rumor says Docker UK office might be hiring but what do I know!
(I know nothing, except that you should send your resume to jobs@docker.com)
71 / 71