Docker is a popular open-source containerization platform. It packages software into standardised units for development, shipment, and deployment. In this hands-on introductory session, I introduce the concept of containers and give an overview of Docker. Participants learn key Docker concepts step by step and learn by example by running commands along with me. The main session uses the Docker CLI (Command Line Interface) and covers all the key concepts, such as creating images and managing containers. The workshop ends with a complete example of getting real work done with ease using Docker. Presented at OSI Days '16: http://opensourceindia.in/osidays/workshops-osi-2016/
Overview of Docker 1.11 features (covers the Docker release summary up to 1.11, runc/containerd, DNS load balancing, IPv6 service discovery, labels, macvlan/ipvlan)
Delve Labs was present during the GoSec 2016 conference, where our lead DevOps engineer presented an overview of the current options available for securing Docker in production environments.
https://www.delve-labs.com
Under the Hood with Docker Swarm Mode - Drew Erny and Nishant Totla, Docker, Inc.
Join SwarmKit maintainers Drew and Nishant as they showcase features that have made Swarm Mode even more powerful, without compromising the operational simplicity it was designed with. They will discuss the implementation of new features that streamline deployments, increase security, and reduce downtime. These substantial additions to Swarm Mode are completely transparent and straightforward to use, and users may not realize they're already benefiting from these improvements under the hood.
A look at how people with current deployments can start using Docker without having to replace anything, along with a migration path that allows testing the separate pieces and migrating over slowly without painting yourself into a corner. Also covers why you might want to do this and the problems it may help solve.
Docker - Demo on PHP Application Deployment - Arun Prasath
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application from an external folder using custom configuration files.
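A sketch of the kind of Dockerfile such a demo might use (the image tag and the `config/` and `src/` paths are illustrative assumptions, not the presenter's actual files):

```shell
# Write a minimal Dockerfile for a PHP app served by Apache.
# php:8.2-apache bundles Apache with mod_php; all paths are hypothetical.
cat > Dockerfile <<'EOF'
FROM php:8.2-apache
# custom Apache configuration from the project folder
COPY config/apache-vhost.conf /etc/apache2/sites-available/000-default.conf
# the PHP application living in an external folder
COPY src/ /var/www/html/
EXPOSE 80
EOF
# Build and run (requires a Docker daemon):
#   docker build -t demo-php-app .
#   docker run -d -p 8080:80 demo-php-app
```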
Deploying Windows Containers on Windows Server 2016 - Ben Hall
An introduction to the new Windows Containers and Windows Hyper-V Containers coming in Windows Server 2016.
Presented at WinOps Meetup #5 on Wednesday 20th April 2016. http://www.meetup.com/WinOps/events/229065341/
Plug-ins: Building, Shipping, Storing, and Running - Nandhini Santhanam and T... - Docker, Inc.
At Docker, we are striving to enable the extensibility of Docker via "Plugins" and make them available for developers and enterprises alike. Come attend this talk to understand what it takes to build, ship, store and run plugins. We will deep dive into plugin lifecycle management on a single engine and across a swarm cluster. We will also demonstrate how you can integrate plugins from other enterprises or developers into your ecosystem. There will be fun demos accompanying this talk! This session will be beneficial to you if you: 1) are an ops team member trying to integrate Docker with your favorite storage or network vendor; 2) are interested in extending or customizing Docker; or 3) want to become a Docker partner and want to make the technology integration seamless.
Continuous Integration: SaaS vs Jenkins in Cloud - Ideato
After the spread of cloud computing and Docker, is it still preferable
to adopt the classic Continuous Integration SaaS offerings over a
Jenkins system in the cloud?
This talk presents a use case from Ideato: migrating from a SaaS such
as Travis to a Jenkins system in the cloud, exploiting on-demand
capabilities via the Amazon Web Services cloud and containerization via
Docker.
Taking into account the technical aspects of the implementation, as
well as those with a potential economic impact such as the lack of
automation and the setup times, we will show the pros and cons of this
system and how it can be applied to a series of projects. Finally, we
will list a number of recently released products that can further
evolve the current system.
Dockerize your Symfony application - Symfony Live NYC 2014 - André Rømcke
With the advent of Docker it is now easier than ever to make sure you develop, test and deploy using the same environment, resulting in no more issues caused by environment differences or missing libraries. The talk will cover the basics of containers and Docker, and showcase how you might set up a basic PHP + MySQL environment for your Symfony app.
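A present-day sketch of such an environment using a Compose file (Compose post-dates this 2014 talk, which would have used plain `docker run` or fig; service names, image tags, and credentials are illustrative):

```shell
# Write a minimal docker-compose.yml for a PHP + MySQL stack.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: php:8.2-apache
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: symfony
EOF
# Start the stack (requires a Docker daemon):
#   docker compose up -d
```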
https://joind.in/12188
A simple introduction to the container world, starting from LXC and arriving at the Docker platform.
The presentation focuses on the first steps in the Docker environment and the scenarios from a developer's point of view.
This presentation takes a deep look at the concept of containerization: what containerization is, how it differs from VMs, how it is achieved using Linux containers (LXC), control groups (cgroups) and copy-on-write file systems, and current trends in containerization and Docker.
The Internals and the Latest Trends of Container Runtimes - Akihiro Suda
Containers are a set of lightweight methods to isolate filesystems, CPU resources, memory resources, system permissions, etc. Containers are similar to virtual machines in many senses, but they are more efficient and often less secure. This talk roughly consists of the following three parts:
1. Introduction to containers and how they spread in the last decade
2. Internals of container runtimes: namespaces, cgroups, capabilities, seccomp, etc.
3. Latest trends: Non-Docker containers, User Namespaces, Rootless Containers, Kata Containers, gVisor, WebAssembly, etc.
http://www.cce.i.kyoto-u.ac.jp/danwa23.html
Containers: from development to production at DevNation 2015 - Jérôme Petazzoni
In Docker, applications are shipped using a lightweight format, managed with a high-level API, and run within software containers which abstract the host environment. Operating details like distributions, versions, and network setup no longer matter to the application developer.
Thanks to this abstraction level, we can use the same container across all steps of the life cycle of an application, from development to production. This eliminates problems stemming from discrepancies between those environments.
Even so, these environments will always have different requirements. If our quality assurance (QA) and production systems use different logging systems, how can we still ship the same container to both? How can we satisfy the backup and security requirements of our production stack without bloating our development stack?
In this session, you will learn about the unique features in containers that allow you to cleanly decouple system administrator tasks from the core of your application. We’ll show you how this decoupling results in smaller, simpler containers, and gives you more flexibility when building, managing, and evolving your application stacks.
How Secure Is Your Container? ContainerCon Berlin 2016 - Phil Estes
A conference talk at ContainerCon Europe in Berlin, Germany, given on October 5th, 2016. This is a slightly modified version of my talk first used at Docker London in July 2016.
Dojo given at ESEI, Uvigo.
The slides include several slides from a presentation by Elvin Sindrilaru at CERN.
Docker is an open platform for building, shipping and running distributed applications. It gives programmers, development teams and operations engineers the common toolbox they need to take advantage of the distributed and networked nature of modern applications.
A talk given at Docker London on Wednesday, July 20th, 2016. This talk is a fast-paced overview of the potential threats faced when containerizing applications, married to a quick run-through of the "security toolbox" available in the Docker engine via Linux kernel capabilities and features enabled by OCI's libcontainer/runc and Docker.
A video recording of this talk is available here: https://skillsmatter.com/skillscasts/8551-container-security
Internal presentation of Docker, Lightweight Virtualization, and linux Containers; at Spotify NYC offices, featuring engineers from Yandex, LinkedIn, Criteo, and NASA!
Tokyo OpenStack Summit 2015: Unraveling Docker Security - Phil Estes
A Docker security talk that Salman Baset and Phil Estes presented at the Tokyo OpenStack Summit on October 29th, 2015. In this talk we provided an overview of the security constraints available to Docker cloud operators and users and then walked through a "lessons learned" from experiences operating IBM's public Bluemix container cloud based on Docker container technology.
History and basics of containers, LXC, Docker and Kubernetes. This presentation was given to engineering college students at VIT DevFest 2018. Beginner to intermediate level.
2. $ whoami
● Patrick Kleindienst
● Computer Science & Media (CS3)
● student trainee at Bertsch Innovation GmbH (since 2014)
● interested in Linux, software development, infrastructure etc.
3. Outline
● Review: Hardware virtualization and VMs
● Docker at a glance
● Container internals (using the example of Docker)
● Container security: How secure is Docker?
● Conclusion and further thoughts
● Discussion
5. Review: Virtual Machine basics
● VM = replication of a computer system
● runs a whole operating system with its
own OS kernel
● hypervisor creates a virtual environment
for each VM (RAM, CPU, Storage, ..)
● hypervisor as an abstraction layer
between host and guest(s)
● each host may run multiple guest VMs
6. Hardware Virtualization: Pros and Cons
single kernel per VM offers high degree of
isolation
hypervisor reduces attack surface
VM escape is considered very difficult
improvement of hardware resources
utilization
guest OS may be different from host OS
● full kernel = almost certainly many bugs
● hypervisor may also ship with bugs
● not as efficient as an ordinary host
● running on virtual hardware is slower than
physical hardware
● highly elastic infrastructure based on VMs
is not so easy
7. Docker at a
glance
● About Docker
● The Container approach
● Docker architecture
● Demo
8. About Docker
● started as dotCloud (shipping Software with LXC)
● release of Docker as Open Source Project (2013)
● slogan: “Build, Ship, Run”
● ease of packaging and deploying applications
● focused on usability
● trigger for DevOps movement
9. The Container approach
● no more hypervisor, no more VMs
● lightweight Docker Engine running on top
of host OS
● Docker engine runs apps along with their
dependencies as isolated processes
sharing the host kernel (Containers)
● Starting/Stopping a container takes
seconds instead of minutes (or even
hours)
10. Docker architecture (1)
Docker Image:
read-only template containing a minimal OS
(e.g. Ubuntu, Debian, CentOS, ..)
may also contain additional layers (JRE,
Python, Apache, VIM, ..)
published and shared by means of Dockerfiles
Docker Container:
additional read-write layer on top of an image
does not manipulate the underlying image
11. Docker architecture (2)
● Docker Client:
○ for interaction with Docker Daemon
○ shares a UNIX socket with the daemon
● Docker Daemon:
○ connects to the same UNIX socket as the
client
○ responsible for starting, stopping and
monitoring containers
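The client/daemon split above can be poked at directly: the daemon exposes a REST API on the UNIX socket the slide mentions. A minimal sketch (the function name is ours; it requires a running daemon with access to /var/run/docker.sock, so the call is wrapped in a function rather than executed):

```shell
# Talk to the Docker daemon over its UNIX socket, as the CLI does.
docker_api_version() {
  curl --silent --unix-socket /var/run/docker.sock http://localhost/version
}
# Usage: docker_api_version   # prints a JSON document describing the engine
```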
12. What we’ve learned so far:
● In contrast to VMs, containers running on the same host share the
underlying kernel
● Therefore, they’re lightweight and save lots of resources
● As for starting/stopping/setup, they’re also much faster than traditional
VMs
● Docker distinguishes between Images and Containers
● Docker Images ship with at least a single minimal OS layer
13. What we DON’T know so far:
Thinking about the underlying technology:
What exactly are file system layers and Copy-on-Write?
How to provide isolation between multiple containers running on same host?
Did Docker really invent all this stuff??
Thinking about security:
Eeehm, … how secure is running a container in the first place?
And what about Docker?
15. Union file systems (AUFS)
● unification filesystem: stack of multiple directories on a Linux host which
provides a single unified view (like stacked sheets on an overhead
projector)
● involved directories need a shared mount point (union mount)
● shared mount point provides a single view on the mounted directories
● a directory participating in a union mount is called a branch
● result: each layer simply stores what has changed compared to the layers below it
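AUFS never entered the mainline kernel; OverlayFS, its in-kernel successor, demonstrates the same stacking idea. A sketch (directory names are arbitrary; the mount itself needs root, so it is left as a comment):

```shell
# Prepare the branches of a union mount.
mkdir -p lower upper work merged
echo "read-only content" > lower/base.txt
# Union-mount them (root required):
#   sudo mount -t overlay overlay \
#        -o lowerdir=lower,upperdir=upper,workdir=work merged
# After the mount, merged/ shows base.txt; writing merged/new.txt
# lands in upper/ only -- lower/ is never modified (copy-on-write).
```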
18. Namespaces
● isolation mechanism of the Linux kernel
● provide processes with a different views on global resources
● examples: PIDs, network interfaces, mount points
● processes can work on that views without affecting the global
configuration
● Linux makes use of certain system calls for namespace creation
19. Mount Namespaces
● Linux OS maintains data structure
containing all existing mount points
(which fs is mounted on which path?)
● Kernel allows for cloning this data
structure and pass it to a process or
group of processes
● Process(es) can change their mount
points without side-effects
● e.g. allows for changing the root fs
(similar to chroot)
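A minimal sketch of a mount namespace using util-linux's `unshare` (our own demo function; it needs root, so it is defined but not called):

```shell
# Give a shell its own copy of the mount table, then mount a private
# tmpfs on /mnt -- the host's mount table is not affected, because
# unshare(1) makes the cloned mounts private by default.
mount_ns_demo() {
  sudo unshare --mount sh -c '
    mount -t tmpfs tmpfs /mnt
    grep /mnt /proc/mounts     # visible inside the namespace ...
  '
  grep /mnt /proc/mounts || echo "host unaffected"   # ... but not outside
}
```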
20. PID Namespaces
● in a single process tree, a privileged process
may inspect or kill other processes
● Linux kernel allows for nested PID
namespaces
● Processes inside a PID namespace are not
aware of what’s going on outside
● However, processes in the outer PID
namespace consider them as regular
members of the outer process tree
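A sketch of PID namespace isolation with `unshare` (our demo function; root required, so it is defined but not called):

```shell
# Start a process in a new PID namespace: it sees itself as PID 1 and
# cannot see (or signal) processes outside. --mount-proc remounts /proc
# so that tools like ps reflect the namespace, not the host.
pid_ns_demo() {
  sudo unshare --pid --fork --mount-proc ps ax   # shows only ps itself
}
```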
21. Network Namespaces
● allows a process/group of processes to
see a different set of network interfaces
● each container gets assigned a virtual
network interface
● each virtual network interface is
connected to the Docker daemon’s
docker0 interface
● docker0 routes traffic between containers
and the host (depending on settings)
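Network namespaces can be explored with `ip netns` (the namespace name is arbitrary; root required, so the commands are wrapped in a function):

```shell
# A fresh network namespace contains only a (down) loopback interface
# until a veth pair connects it to the outside -- which is exactly what
# the Docker daemon sets up against its docker0 bridge.
net_ns_demo() {
  sudo ip netns add demo
  sudo ip netns exec demo ip link show   # lists only "lo"
  sudo ip netns del demo
}
```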
22. Control groups (cgroups)
● mechanism for limiting the resources a process/group of processes can
consume
● e.g. CPU, Memory, device access, network (QoS), ..
● a cgroup as a whole can be “frozen” and later “unfrozen”
● freeze mechanism allows to easily stop associated idling processes and to
wake them up if necessary
● might prevent a container from “running amok” (e.g. binding all resources, ..)
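On current kernels the same idea is exposed via the cgroup v2 filesystem. A sketch (the group name is arbitrary; it assumes a cgroup2 mount at /sys/fs/cgroup with the memory controller enabled, and root privileges, so it is wrapped in a function):

```shell
# Cap memory for a group of processes via the cgroup v2 filesystem.
cgroup_demo() {
  sudo mkdir /sys/fs/cgroup/demo
  # limit the group to 100 MiB of memory
  echo "100M" | sudo tee /sys/fs/cgroup/demo/memory.max
  # move the current shell (and its children) into the group
  echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
}
```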
23. What we’ve learned:
● union file systems enable re-use of single image layers
● a container makes use of CoW in order to work on read-only images
● multiple namespaces provided by host kernel allow for isolated execution
of container processes
● cgroups as a means for limiting access and resource consumption
24. As for security, questions remain:
● The container default user is root. What happens if anyone succeeds in
breaking out of a container?
● Is container breakout even possible?
● What about container threats in general?
● What about client-side authentication/authorization in Docker?
● Is there any option to verify the publisher of Docker images in order to
avoid tampering and replay attacks?
25. Container security:
How secure is
Docker?
● uid 0 - one account to rule them
all
● Demo: Container breakout
● User namespaces, capabilities
and MAC
● Common container threats
● Docker CLI AuthN/AuthZ
● Docker Content Trust
26. uid 0 - one account to rule them all
● to be clear: considering the mechanisms introduced so far, there’s
actually no difference between host root and container root!
● it gets worse: any user allowed to access the Docker
daemon is effectively root on the host!
● this is also true for otherwise unprivileged users belonging to the docker
group
● Sounds incredible? Watch and be astonished ;)
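The demo boils down to a one-liner; a sketch of what any member of the `docker` group can do:

```shell
# Bind-mount the host's root filesystem and chroot into it --
# the resulting shell is root on the host:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
# From here the host's /etc/shadow is readable and writable,
# SSH keys can be planted, users added, etc.
```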
27. User namespaces to the rescue
● problem: container root is root on the host in case of breakout
● solution: “root remapping” (introduced with Docker 1.10)
● maps uid 0 inside the container to an arbitrary uid outside the container
● caution: user namespaces are disabled by default!
● limitation: there’s only one single user namespace per Docker daemon,
not per container
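Enabling the remapping is a daemon-side setting; a sketch (the `default` value makes Docker create and use a `dockremap` user):

```shell
# /etc/docker/daemon.json:
#   { "userns-remap": "default" }
# then restart the daemon.

# uid 0 inside the container...
docker run --rm alpine id -u
# ...is mapped to an unprivileged subordinate uid on the host
# (ranges are taken from /etc/subuid and /etc/subgid),
# so a breakout no longer yields host root.
```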
28. Taming root with Capabilities?
● another problem: setuid-root binaries (e.g. /bin/ping)
● these binaries are also executed with the rights of their owner (guess which
user owns ping)
● heavily increases the risk of privilege escalation in case of flaws
● capabilities idea: grant fine-granular access only to what’s absolutely needed
(network sockets in case of ping)
● allows for unprivileged containers (missing in Docker)
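The principle can be applied from the CLI; a sketch (NET_RAW is the capability that ping's raw ICMP socket traditionally requires):

```shell
# Drop everything, then grant back only what the workload needs:
docker run --rm --cap-drop=ALL --cap-add=NET_RAW alpine ping -c1 127.0.0.1

# The effective capability sets can be inspected from inside a container:
docker run --rm alpine grep Cap /proc/self/status
```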
29. One last try: Mandatory Access Control
(MAC)
● Linux standard: Discretionary Access
Control (DAC)
● grants access based on the actor’s identity
(access rights per user)
● every resource (file, directory, ..) has an
owning user/group
● Linux manages access rights for owner,
group and world (rwx)
● another approach: Mandatory Access
Control (MAC)
● access is granted by a policy, or rather by
fine-granular rules
● Linux implementations: SELinux,
AppArmor (rules per file/directory)
● there are ready-to-use templates offered by
Docker for both
● writing your own policy is tricky and error-prone
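Both MAC systems are wired in via `--security-opt`; a sketch (profile and label names here depend on the host's policy and are examples, not fixed values):

```shell
# AppArmor hosts: Docker loads and applies the docker-default profile
# automatically; it can also be selected explicitly:
docker run --rm --security-opt apparmor=docker-default alpine true

# SELinux hosts use label options instead, e.g.:
docker run --rm --security-opt label=type:svirt_lxc_net_t alpine true
```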
30. Container threats: Escaping
● What’s happening?
○ compromise of the host
○ worst case: attacker can do anything on the host system
● Why does it happen?
○ lack of user namespaces
○ insecure defaults/weak configuration (user namespaces disabled)
31. Container threats: Cross-container attacks
● What’s happening?
○ compromise of sensitive containers (e.g. database containers)
○ ARP spoofing and theft of credentials
○ DoS attacks (e.g. XML bombs)
● Why does it happen?
○ weak network defaults (default bridge configuration in Docker)
○ poor/missing resource limitation defaults
32. Container threats: Inner-container attacks
● What’s happening?
○ attacker gains unauthorized access to a single container
○ root cause of the previous container threats
● Why does it happen?
○ typically due to non-container-related flaws (e.g. webapp vulnerabilities)
○ out-of-date software
○ exposing a container to insecure/untrusted networks
34. Docker Registry: Challenges
● Docker Registry = a kind of git repository for Docker images (public/private)
● handy for collaboration (e.g. in an organizational context)
➔challenge 1: Make sure that docker pull actually gives us exactly the
content that we want (identity/integrity)
➔challenge 2: Make sure that we always get the latest version of a
requested software (freshness)
35. Docker Registry: Attacks (1)
Scenario 1: Attacker hacks into the registry server and tampers with a single
layer of an up-to-date image (image forgery)
36. Docker Registry: Attacks (2)
Scenario 2: Attacker hacks into the registry server and serves content which is
actually out of date (replay attack)
37. Docker Content Trust: Notary
● implementation of The Update Framework (TUF), which has its origins in
the TOR project (Apache license)
● focus: publisher identity & freshness guarantees
● relies on several keys which are stored at physically different places
● concept of online and offline keys (compared to simple GPG)
● the offline (root) key remains on a USB stick, smart card, ..
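Content trust is toggled by an environment variable; a sketch (the registry name is a placeholder):

```shell
# All pulls and pushes are now verified/signed via Notary:
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest                    # fails if no signed trust data exists
docker push registry.example.com/myapp:1.0   # signs transparently on push;
                                             # keys are kept under ~/.docker/trust/
```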
39. Docker Content Trust: Registry V2
● content-addressable system (key-value store)
● pull by hash/pull by digest: key = hash(object)
● self-verifying system (integrity)
● docker pull is a secure operation, as long as we get the correct hash
➔quiz question: How can we ensure we always get the correct hash?
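Pinning by digest takes the mutable tag mapping out of the equation; a sketch (the digest is a placeholder, not a real value):

```shell
# Pull exactly this content, whatever the tags currently point to:
docker pull ubuntu@sha256:<digest>

# Digests of locally available images can be listed with:
docker images --digests
```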
40. Conclusion
● Container systems become more and more security-aware
● However, container security is still work in progress
● Namespaces and Capabilities are relatively new kernel features (buggy?)
● root seems to be a never-ending problem
● Docker provides useful defaults, but lacks support for fine-granular
AuthN/AuthZ
41. Research Questions (1)
Q1: “Will container technology and its security become part of a developer’s
everyday life?”
What I think: “Clearly yes! Facing more and more attack vectors every day,
shipping software by means of containers requires at least a basic
understanding of the underlying container system’s security properties. The
DevOps movement relocates responsibilities like deployment, reliability and
security to developers!”
42. Research Questions (2)
Q2: “What about Docker’s future approach in terms of security?”
What I think: “Docker seems to have understood the importance of security for
their tool stack in order to stay successful. In my opinion, the biggest challenge
they have to face is integrating security features without damaging the great
usability they offer, since this is what sets them apart from alternative solutions.”
43. Research Questions (3)
Q3: “What about Unikernels? How might this technology help improving Docker
security and Docker in general?”
What I think: “Hard to answer. According to one of their blog posts, Docker uses
Unikernels for spawning minimal hypervisors and combines them with the
Docker Engine, creating lightweight apps that contain everything they need to run
Docker on non-Linux environments. I’m very excited to hear about their future
plans with Unikernels.”
44. Research Questions (4)
Q4: “Will container systems ever be really secure some day?”
What I think: “In my opinion, there will never be 100% security. The point is: we
saw that containers rely completely on kernel features; they couldn’t even exist
without a kernel. As a consequence, containers will probably be as secure, or
rather as insecure, as the operating system they run on.”
45. Thanks for your attention!
Any questions?
Contact:
pk070@hdm-stuttgart.de
Twitter: @Apophis1990
46. Sources (1)
Internet:
Docker Inc. (2016): Docker Docs [https://docs.docker.com/]
Docker Inc. (2016): What is Docker? [https://www.docker.com/what-docker]
Ridwan, Mahmud (2016): Separation Anxiety: A Tutorial for Isolating Your System with Linux Namespaces
[https://www.toptal.com/linux/separation-anxiety-isolating-your-system-with-linux-namespaces]
Wikipedia (2016): Discretionary Access Control [https://de.wikipedia.org/wiki/Discretionary_Access_Control]
Wikipedia (2016): Virtuelle Maschine [https://de.wikipedia.org/wiki/Virtuelle_Maschine]
Literature:
Hypervisor = Virtual Machine Monitor
virtual hardware: each VM is given the illusion of having exclusive access to the hardware (simulated by virtual hardware)
also called “hardware virtualization”
here: a hypervisor implemented in software that runs on a host (as an application program), e.g. VirtualBox (type-2 hypervisor ~> runs on a host OS)
alternatively: type-1 hypervisor (runs directly on the hardware), e.g. Xen
allows running different operating systems (Windows, Linux, ..) or different versions of them
a hypervisor can itself contain bugs (can lead e.g. to DoS, since the hypervisor is responsible for controlling and allocating resources)
efficiency: the hypervisor itself consumes resources; high load on one VM can affect other VMs due to the shared hardware
dynamic scaling: make resources available when they are needed
costs, energy efficiency
example: Netflix (AWS zones)
setting up, booting etc. of VMs takes time
consequence: no fast reaction to outages, or redundant VMs sitting idle
Deployment:
production environments differ from test and development environments (installed software, services, etc.)
requires elaborate testing
unforeseen errors occur due to the heterogeneity of live and test systems
Docker’s focus: easing the packaging and deployment process
building so-called “images” and subsequently running them as containers
DevOps = softening of the strict separation between development and system administration
nevertheless: focus on keeping the overhead for developers as low as possible
images = read-only templates containing a minimal OS (e.g. Ubuntu) plus e.g. a web application (together with an nginx server)
an image is pulled from a repository if it is not available locally on the host
container = RW layer on top of an image, created from an image
a UNIX socket is used for process communication (IPC = Inter-Process Communication)
short demo afterwards: starting and stopping containers
important: the listed points are kernel features
consequence: Docker generally uses existing features and did not reinvent the wheel!
AUFS = Advanced Multi-Layered Unification File System
each “layer” is stored in a separate directory
when a container starts, all layers of an image are mounted at a shared mount point (union mount)
AUFS provides a transparent, unified view of all directories mounted at the shared mount point (cf. an overhead projector)
advantages:
each layer only stores what has changed compared to the layers below it
the individual layers are thus effectively decoupled from one another and can be reused (storage efficiency!)
a container is nothing more than an additional directory placed “on top” (but with RW permissions)
a container can be saved, i.e. a new RO image is created with the changes of the container layer as its topmost layer
copy-on-write: when a file is to be modified, it is searched for through the individual layers from top to bottom; once found, a copy is created inside the container layer, so that from then on the newly created file “shadows” (i.e. hides) the old version
deletion is done with so-called “whiteout files”
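The layer mechanics described in these notes can be observed with stock Docker commands; a sketch:

```shell
# Show the layer stack of an image:
docker history alpine

# Modify a file in a container and inspect its RW layer:
docker run --name layer-demo alpine sh -c 'echo hello > /etc/motd'
docker diff layer-demo                   # lists changed (C) and added (A) paths
docker commit layer-demo demo:snapshot   # freezes the RW layer as a new RO layer
docker rm layer-demo
```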
clear so far: the structure of images and the creation of containers
unclear: if the kernel is shared among all containers on a host, how can an isolated environment be created for each container?
how do I get to my root fs inside the container?
the mount namespace is used, among other things, for mounting volumes (host directories, drives, devices, ..)
checking the current mount points under Linux: $ mount
processes inside the individual containers must be shielded from one another
they must not be able to “see” each other
under Linux, user space is started with the “init” process (PID 1)
- creating a new PID namespace: a process calls the clone() syscall with a specific flag > the resulting process receives PID 1 inside the new namespace (which is created first)
typical network interfaces: eth0 or lo (loopback, 127.0.0.1)
reusing image layers saves resources (primarily disk space)
setuid binaries owned by root massively increase the risk of privilege escalation (through programming errors in the respective programs)
the capability model is still in a state of development!
unprivileged containers: containers created by non-root users (but with the appropriate capability)
capabilities need sensible defaults (see Docker)
DAC: access decisions for a resource are based on the identity of the actor (user)
access rights are defined PER USER
MAC: access based on general rules (SELinux, AppArmor in the Linux world)
insecure defaults:
enabling insecure capabilities
overly weak cgroup restrictions
accidentally exposing host directories
insecure network:
many services bind to all interfaces by default (0.0.0.0, exposing numerous daemons)
this includes the bridge interface used for containers (used by Docker by default)
it serves the communication between containers (docker0 acts like a switch)
enables attacks over the network
kernel ring buffer:
contains all kinds of messages (ring buffer)
these are subsequently written to /var/log/messages etc.
avoids constant I/O (slow)
resource consumption:
the actual cause can be an application, e.g. parser recursion (Jackson in Java for JSON/XML)
harmful if memory consumption is not restricted
container management systems:
strictly necessary for orchestrating containers at scale (resilience, fault tolerance)
need a certain degree of access to clients (e.g. health checks, which require open connections in both directions)
causes of unauthorized container access:
vulnerabilities in web applications (e.g. SQL injection)
command injection (if interpreters such as Python or bash are invoked directly)
weak (or no) passwords
usually results in privilege escalation (extension of existing rights)
causes of inner-container attacks:
outdated software (containers need updates, too!)
large base images (which always means: many potential vulnerabilities and a large amount of software that has to be updated)
authentication:
TLS/SSL disabled by default
possible: enabling it and then authenticating with certificates (client-daemon)
a compromisable mechanism, not very robust
so far, however, no distinction between users (access to the socket = root)
authorization:
to this day there is NO authorization mechanism
according to Docker, one is planned
no GPG due to susceptibility to replay attacks/man-in-the-middle (server delivers outdated packages that are nevertheless correctly signed)
the private key resides on the server (compromisable)
no recourse if the GPG key is compromised
TUF has its origins in TOR (it was developed for TOR)
TUF:
freshness: ensure that no outdated content is delivered (most recent version of a piece of software)
surviving key compromise (distinction between online and offline keys, splitting keys for robustness)
the offline (root) key signs the online key (simple key rotation if the online key is compromised) -> kept on a USB stick or smart card
an additional timestamp key ensures that no content older than what is in the repo can be pushed
a compromised timestamp key merely leads to the loss of the freshness guarantee (no push into the repo possible)
root of trust:
TOFU (trust on first use, over TLS)
the first connection goes over TLS; after that, TLS/SSL is completely irrelevant
after that, TLS no longer matters: the server has no way of signing content in the repo (even if the server is compromised)
Docker Registry V2 enables “pull by digest” (pull by hash), e.g. mapping Ubuntu:latest to a specific hash (which identifies the object to be pulled)
registry -> content-addressable system
the hash is at the same time a cryptographic checksum that allows clients to verify the content
an object is hashed -> the result must match the hash determined by Notary