Docker Demystified - Virtual VMs without the Fat - Erik Osterman
DevOps guru Erik Osterman has been at the forefront of large-scale cloud architectures as the Director of Cloud Architecture for CBS Interactive and advisor for numerous successful startups. Now he’s ready to show you why Docker is all the rage.
A new movement is taking the cloud by storm: Docker is evolving the way organizations deploy services so that they can operate more efficiently at scale, both in the cloud and on bare metal. In the same way shipping containers revolutionized the cargo industry, cheap, near-zero-overhead Linux Containers (LXC) are like shrink-wrapped VMs without the fat. What’s not obvious, however, is how to roll your own Docker deployments and all the tools you’ll need to leverage along the way.
Tune in to learn how you too can run a microservices architecture that supports thousands of containers, controlled effortlessly from your laptop’s command line.
This webinar is free to attend and will cover:
• Principles of Immutable Infrastructure
• Docker Basics
• Docker for Dev & QA
• Docker in Production
• Business Drivers
• Answering the Question: Is Docker Ready for Prime Time?
Webcast at http://webcast.cloudposse.com/
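To make the "shrink-wrapped VM" idea concrete, a service image is described by a Dockerfile; a minimal, hypothetical example (image name and paths are illustrative, not from the webinar):

```dockerfile
# Start from a small official base image; containers share the host
# kernel, so there is no guest OS to boot
FROM alpine:3.4

# Bake the application into the image at build time; the resulting
# image is immutable and can be promoted unchanged from dev
# through QA to production
COPY ./app /opt/app

# The single process the container runs
CMD ["/opt/app/run.sh"]
```

Built with `docker build -t myorg/myapp:1.0 .` and started with `docker run myorg/myapp:1.0`, the same artifact runs identically in the cloud or on bare metal.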
Docker is the world's leading software containerization platform.
This is a comprehensive introduction to Docker, suitable for delivery at introductory meetups to an audience that does not yet know Docker.
In case you want to deliver this presentation somewhere, kindly drop me an email at aditya.konarde@gmail.com
You can contact me at:
Connect with me on LinkedIn: https://www.linkedin.com/in/adityakonarde
Add me on Facebook: https://www.facebook.com/Aditya.Konarde
Tweet to me @aditya_konarde
Thinking Inside the Container: A Continuous Delivery Story by Maxfield Stewart - Docker, Inc.
Riot builds a lot of software. At the start of 2015 we were looking at 3000 build jobs across over a hundred different applications and dozens of teams. We were handling nearly 750 jobs per hour, and our build infrastructure needed to grow rapidly to meet demand. We needed to give teams total control of the “stack” used to build their applications, and we needed a solution that enabled agile delivery to our players. On top of that, we needed a scalable system that would allow a team of four engineers to support over 250 engineers.
After a few explorations, we built an integrated Docker solution using Jenkins that accepts Docker images submitted as build environments by engineers around the company. Our “containerized” farm now creates over 10,000 containers a week and handles nearly 1000 jobs at a rate of about 100 jobs an hour.
In this occasionally technical talk, we’ll explore the decisions that led Riot to consider Docker, the evolutionary stages of our build infrastructure, and how we combined open source and in-house software to achieve our goals at scale. You’ll come away with some best practices, plenty of lessons learned, and insight into some of the more unique aspects of our system (like automated testing of submitted build environments, or testing Node.js apps in containers with Chromium and Xvfb).
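A submitted build environment of the kind described might look like this hypothetical Dockerfile (the tool versions and label keys are assumptions, not Riot's actual conventions):

```dockerfile
# A team declares everything its builds need in the image itself,
# instead of asking farm operators to install it on shared agents
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y \
    openjdk-7-jdk maven nodejs

# Metadata the farm's automated validation could use to attribute
# and test submitted environments
LABEL owner="team-example" purpose="jenkins-build-env"
```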
Jacopo Nardiello - Monitoring Cloud-Native applications with Prometheus - Cod...Codemotion
We are going to talk about Prometheus and how to use it to monitor microservice "Cloud-Native" applications. We will dive deep into the Prometheus monitoring model, see which components sit behind this system, and see how they integrate with each other to provide an efficient, modern monitoring system. We will also glance at Prometheus's native integrations with cloud-native environments such as Kubernetes.
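As a flavor of the model the talk covers, Prometheus pulls metrics from targets listed in its configuration; a minimal, hypothetical `prometheus.yml` (job and target names are illustrative):

```yaml
global:
  scrape_interval: 15s            # how often Prometheus pulls metrics

scrape_configs:
  # A statically configured microservice exposing /metrics
  - job_name: 'my-service'
    static_configs:
      - targets: ['my-service:8080']

  # Native Kubernetes service discovery: nodes are found automatically
  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
      - role: node
```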
Learn best practices in container security to make your containers seaworthy through the build, ship, and run lifecycle.
Demos temporarily living at github.com/endophage/apps (look under wordpress dir)
Market overview of Docker orchestrators. A detailed architectural comparison of Kubernetes and Docker Swarm, including benefits and issues. Which orchestrator works better for microservice and highly available applications?
Justin Cormack - The 10 Container Security Tricks That Will Help You Sleep At...Codemotion
Containers, and the tooling around them, make some parts of application security that much easier. There are some simple things you can do to make a substantial difference to the security of your applications without making any big changes to what you do. This talk will give you some small changes you can make in a few hours that will make it that much more difficult to hack your applications.
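The talk has its own list of tricks; one widely recommended, low-effort hardening step of this kind is simply not running the container process as root, set in the Dockerfile (user, image, and paths below are illustrative):

```dockerfile
FROM alpine:3.6

COPY ./app /opt/app

# Create an unprivileged user and switch to it, so compromising the
# application does not immediately yield root inside the container
RUN addgroup -S app && adduser -S app -G app
USER app

CMD ["/opt/app/run.sh"]
```

Run-time flags such as `--read-only` and `--cap-drop ALL` extend the same idea without changing the application.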
How to Be Successful Running Docker in Production - Docker, Inc.
John’s presentation will cover his lessons learned from running Docker in production at SalesforceIQ. Learn how to scale your registry using AWS and S3, whether you should use Device Mapper or AUFS, why you might run Swarm, Mesos, Kubernetes, or neither, and finally how persistent storage (Kafka, Cassandra, or SQL) can be run successfully with Docker in production.
His team focuses on Docker based solutions to power their SaaS infrastructure and developer operations.
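On the registry-scaling point: the open source Docker registry supports S3 as a storage backend through its configuration file. A minimal, hypothetical `config.yml` sketch (bucket and region are assumptions; credentials are omitted):

```yaml
version: 0.1
storage:
  s3:
    region: us-east-1            # assumed AWS region
    bucket: my-registry-images   # hypothetical bucket name
http:
  addr: :5000                    # port the registry listens on
```

With image blobs stored in S3, the registry process itself is stateless and can be scaled horizontally behind a load balancer.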
DockerCon EU 2015: Shipping Manifests, Bill of Lading and Docker Metadata and... - Docker, Inc.
Presented by Gareth Rushgrove, Sr. Software Engineer, Puppet Labs
The shipping container metaphor for Docker points to many of the advantages of building and running software using containers. But what about other essential parts of the shipping container ecosystem like the shipping manifest and bill of lading?
Many of the most powerful features of traditional package management tools like apt or yum are based on metadata associated with the packages. You can find out who created a package and when, check where a particular file came from, whether the package has a known vulnerability and more. What would this capability look like for Docker containers?
This talk will look at the power of metadata for containers, in particular:
* Docker provides labels for associating metadata with images and containers, but how best to use them?
* What problems can be solved by agreeing on standards for container metadata?
* Exposing standard commands and endpoints that surface metadata about what is inside a container
* A demo of some open source tooling, and a look at the sort of tools we might build atop those standards and low-level tools.
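For context, Docker's label mechanism looks like this; the reverse-DNS keys below follow Docker's recommended naming convention but are otherwise illustrative:

```dockerfile
# Attach build-time metadata to the image as key/value labels
LABEL com.example.vendor="Example Corp" \
      com.example.release-date="2015-11-01" \
      com.example.vcs-ref="abc1234"
```

The metadata can be read back later, for example with `docker inspect --format '{{ index .Config.Labels "com.example.vcs-ref" }}' <image>`.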
Docker for Any Type of Workload and Any IT Infrastructure - Docker, Inc.
This presentation discusses the different types of workloads typical enterprises are required to run, which use cases exist for containerizing them, and how leading-edge workload orchestration can be used to deploy, run and manage the containerized workloads on various types of scale-out infrastructure, such as on-premise clusters, public clouds or hybrid clouds.
Docker Online Meetup: InfraKit Update and Q&A - Docker, Inc.
While working on Docker for AWS and Azure, we realized the need for a standard way to create and manage infrastructure state that was portable across any type of infrastructure, from different cloud providers to on-prem. One challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision five servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. And in the case of server failures (especially unplanned), that sudden change needs to be reconciled against the desired state to ensure that any required servers are re-provisioned with the necessary configuration. We started InfraKit to solve these problems and to provide the ability to create a self-healing infrastructure for distributed systems.
CI and CD at Scale: Scaling Jenkins with Docker and Apache Mesos - Carlos Sanchez
In this presentation Carlos Sanchez will share his experience running Jenkins at scale, using Docker and Apache Mesos to create one of the biggest (if not the biggest) Jenkins clusters to date.
By taking advantage of Apache Mesos, the Jenkins platform is dynamically scaled to run jobs across hundreds of Jenkins masters, on Docker containers distributed across the Mesos cluster. Jenkins slaves are dynamically created based on load, using the Jenkins Mesos and Docker plugins, running in containers distributed across multiple hosts, and isolating job execution.
This presentation will allow a better understanding of Apache Mesos and the challenges of running Docker containerized and distributed applications, particularly JVM ones, by sharing a real world use case, including good and bad decisions and how they affected the development.
Presented at DockerCon 2018 EU, I go through using Docker and the Swarm orchestrator (a simpler Kubernetes) to stack different tools up from the base OS to a full-featured production server cluster. Also, sci-fi. The video for this deck will be at https://www.bretfisher.com/docker once it is posted.
Divide and Conquer: Easier Continuous Delivery using Micro-Services - Carlos Sanchez
Docker has revolutionized the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers allow services to run in isolation with a minimal performance penalty, increased speed, easier configuration, and less complexity, making them ideal for continuous integration and continuous delivery workloads. But testing a distributed micro-services architecture is no easy task, requiring a shift in mindset and tooling to accommodate the new architecture.
We will provide insight from our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java and JVM-based ones.
Versioning an API can be a somewhat daunting task for the uninitiated. Even worse, some of the most common approaches are less than ideal. In this session I discuss the struggles and outcomes of my first foray into versioning and deploying. I will show how a combination of immutable Docker containers, nginx, and a few other friendly tools enabled a fully automated versioning and deployment system at the push of a button.
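One way such a setup can be sketched is to route each API version prefix to its own immutable container behind nginx; upstream names, ports, and paths here are hypothetical, not the speaker's exact configuration:

```nginx
# Each API version is served by its own immutable container
upstream api_v1 { server api-v1:8080; }
upstream api_v2 { server api-v2:8080; }

server {
    listen 80;

    # Route by version prefix; deploying v3 means starting a new
    # container and adding one upstream plus one location block
    location /v1/ { proxy_pass http://api_v1; }
    location /v2/ { proxy_pass http://api_v2; }
}
```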
Orchestration Tool Roundup: Kubernetes vs. Docker vs. Heat vs. Terraform vs... - Nati Shalom
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It’s no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. However, things usually become more complex because an application is often built out of multiple containers. What’s more, setting up a cluster of container images can be fairly cumbersome: you need to make one container aware of another and expose the intimate details they require to communicate, which is not trivial, especially if they’re not on the same host.
These scenarios have driven the demand for some kind of orchestrator, and the list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there - from Heat to Kubernetes to TOSCA - and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
The immutable pattern in IT infrastructure architecture: building your own OSes and containers to deliver software.
Examples of delivery pipelines. Pros and cons of containers and configuration managers: Docker, Ansible, Chef, AWS CloudFormation, GCE, Terraform.
Building a Secure App with Docker - Ying Li and David Lawrence, Docker - Docker, Inc.
Built-in security is one of the most important features in Docker. But to build a secure app, you have to understand how to take advantage of these features. Security begins with the platform, but also requires conscious secure design at all stages of app development. In this session, we'll cover the latest features in Docker security and how you can leverage them. You'll learn how to add them to your existing development pipeline, as well as how you can streamline your workflow while making it more secure.
Docker is definitely one of the hottest technologies at the moment, and one that is already dramatically changing the way we build, package and deploy applications. In this session we’ll have a look at how a project went on a quest to containerize most of its components and services while increasing the value delivered.
This presentation was delivered at Wildcard Conference 2015, on 16 May 2015 in Riga.
Current status of Docker on Windows and Windows Containers.
Link to slides with working hyperlinks
http://stefanscherer.github.io/talks/20160225_DockerMeetupBamberg_DockerOnWindows
Try it out yourself:
Azure: https://github.com/StefanScherer/docker-windows-azure
Win10+HyperV: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/container_setup
Other platforms, e.g. with Packer and Vagrant:
https://github.com/StefanScherer/docker-windows-box
Automation and Collaboration Across Multiple Swarms Using Docker Cloud - Marc...Docker, Inc.
This session will cover how Docker Cloud can help you and your team easily deploy and manage multiple Swarms across different cloud providers in a secure and platform-agnostic way. We will also cover how we provide a secure authentication framework for Swarms backed by Docker Cloud, and how that enables seamless collaboration across your team.
We talk about Docker: what it is, why it matters, and how it can benefit us. This presentation is an introduction, delivered at a local meetup in Indonesia.
Presentation on Pesantren Kilat Code Security
Tangerang, 2016-06-06
We talk about Docker: what it is, why it matters, and how it can benefit us.
This presentation is an introduction, delivered at a local meetup in Indonesia.
Docker - Demo on PHP Application Deployment - Arun Prasath
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application from an external folder using custom configuration files.
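The demo's idea can be sketched with a Dockerfile along these lines (the base image, paths, and file names are assumptions, not the presenter's exact files):

```dockerfile
# Official Apache + PHP base image
FROM php:5.6-apache

# Custom Apache configuration baked into the image
COPY ./config/vhost.conf /etc/apache2/sites-enabled/000-default.conf

# The PHP application copied in from an external folder on the host
COPY ./app/ /var/www/html/
```

`docker build -t my-php-app .` followed by `docker run -p 80:80 my-php-app` would serve the application; during development the external folder could instead be bind-mounted with `-v` so edits appear without rebuilding.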
Gianluca Arbezzano - WordPress: managing installations and scalability with ... - Codemotion
One of the most important topics for WordPress users is sharing a single server; unpleasant as it sounds, in this field it is common to install a very large number of sites on one machine. Let's look at how Docker can help us organize our installations better, and how it can become an opportunity to manage resources better, also in view of sudden growth and the need to scale horizontally.
Using Kubernetes for Continuous Integration and Continuous Delivery - Carlos Sanchez
Learn how to scale your Continuous Integration and Continuous Delivery environment using containers. The Kubernetes project provides a container orchestration solution that greatly simplifies app deployments in large clusters and you can use Jenkins and Kubernetes together to run jobs on-demand.
Building and testing is a great use case for containers, both due to the dynamic and isolation aspects, but it increases complexity when scaling to multiple nodes and clusters.
Jenkins is an example of an application that can take advantage of Kubernetes technology to run Continuous Integration and Continuous Delivery workloads. Jenkins and Kubernetes can be integrated to transparently use on demand containers to run build agents and jobs, and isolate job execution. It also supports CI/CD-as-code using Jenkins Pipelines and automated deployments to Kubernetes clusters. The presentation will allow a better understanding of how to use Jenkins on Kubernetes for container based, totally dynamic, large scale CI and CD.
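With the Jenkins Kubernetes plugin, the on-demand agents described above are declared in the pipeline itself; a hypothetical scripted-pipeline sketch (labels and images are illustrative):

```groovy
// Define a pod template whose containers serve as the build agent
podTemplate(label: 'build-pod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8',
                      ttyEnabled: true, command: 'cat')
]) {
    node('build-pod') {            // Kubernetes starts a pod on demand
        container('maven') {
            checkout scm           // fetch the project source
            sh 'mvn -B verify'     // build and test inside the container
        }
    }                              // the pod is torn down when the job ends
}
```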
Using Kubernetes for Continuous Integration and Continuous Delivery, Java2days - Carlos Sanchez
Docker moves very fast, with an edge channel released every month and a stable release every 3 months. Patrick will talk about how Docker introduced Docker EE and a certification program for containers and plugins with Docker CE and EE 17.03 (from March), the announcements from DockerCon (April), and the many new features planned for Docker CE 17.05 in May.
This talk will be about what's new in Docker and what's next on the roadmap
AtlanTEC 2017: Containers! Why Docker, Why NOW? - Phil Estes
A talk given at the AtlanTEC festival/conference (http://atlantec.ie/#atlantec-conference) in Galway, Ireland on Thursday, May 25th, 2017. This talk provides the background of how container popularity exploded in the past few years, the impact of Docker to this ecosystem, and why containers are interesting for developers and the enterprise in 2017.
Scaling Jenkins with Docker: Swarm, Kubernetes or Mesos? - Carlos Sanchez
The Jenkins platform can be dynamically scaled by using several Docker cluster and orchestration platforms, using containers to run slaves and jobs and also isolating job execution. But which cluster technology should be used? Docker Swarm? Apache Mesos? Kubernetes? How do they compare? All of them can be used to dynamically run jobs inside containers. This talk will cover these main container clusters, outlining the pros and cons of each, the current state of the art of the technologies and Jenkins support.
Docker is a key player in the microservices movement and is arguably the leader in containerization technology.
That said, there are many ways to “do Docker”.
Between the leading cloud providers AWS, Azure, and Google, plus other platform stacks like Docker/Swarm, Apache Mesos – DC/OS, and Kubernetes, it can get confusing. In this session, Michele will bring her customer experiences building solutions across most of these platforms to provide you with the highlights, the architecture topologies, and some perspective on the way she helps her customers choose the right platform for their cloud, on-premise, or hybrid solutions.
DCEU 18: Building Your Swarm Tech Stack for the Docker Container Platform - Docker, Inc.
This session will focus on the practicals of building a fully-functional stack of container cluster tools, with different options for stacking those tools from the OS up. We’ve all seen examples of common technology stacks, like the good ol’ LAMP and MEAN stacks for applications, but what about lower-level infrastructure? And can we get it without cloud vendor lock-in, please? Oh, and pure containers and infrastructure-as-code too? With Docker, sure thing! This session will cover: which OS/distro and kernel to use; VMs or bare metal; recommended Swarm architectures; tool stacks for “pure open source”, “cloud-service based”, and “Docker Enterprise” scenarios; and demos of these tools working together, including InfraKit, Docker Engine, Swarm, Flow-Proxy, ELK, Prometheus, REX-Ray, and more.
Unlimited Staging Environments on Kubernetes - Erik Osterman
How to run complete, disposable apps on Kubernetes for Staging and Development
What if you could rapidly spin up new environments in a matter of minutes entirely from scratch, triggered simply by the push of a button or automatically for every Pull Request or Branch. Would that be cool?
That’s what we thought too! Companies running complex microservices architectures need a better way to do QA, prototype new features & discuss changes. We want to show that there’s a simpler way to collaborate and it’s available today if you’re running Kubernetes.
Tune in to learn how you can assemble 100% Open Source components with a CodeFresh CI/CD Pipeline to deploy your full stack for any branch and expose it on a unique URL that you can share. Not only that, we ensure that it’s fully integrated with CI/CD so console expertise is not required to push updates. Empower designers and front-end developers to push code freely. Hand it over to your sales team so they can demo upcoming features for customers! The possibilities are truly unlimited. =)
Learn how Cloud Posse recently architected and implemented Wordpress for massive scale on Amazon EC2. We'll show you exactly the tools that we used and our recipe to both secure and power Wordpress setups on AWS using Elastic Beanstalk, EFS, CodePipeline, Memcached, Aurora and Varnish.
Secrets are any sensitive piece of information (like a password, API token, TLS private key) that must be kept safe. This presentation is a practical guide covering what we've done at Cloud Posse to lock down secrets in production. It includes our answer to avoid the same pitfalls that Shape Shift encountered when they were hacked. The techniques presented are compatible with automated cloud environments and even legacy systems.
An Ensemble Core with Docker - Solving a Real Pain in the PaaS - Erik Osterman
Docker by itself is only an engine powering containers. You need a containership to run it in production. CoreOS is a purpose-built containership that powers Docker containers; however, without higher-level orchestration, managing hundreds or thousands of containers is impractical. Ensemble is the answer for running containers at scale on top of CoreOS.
3. Shipping Software is Difficult
Different Stages (test, qa, prod, etc)
More Dependencies = More Problems
Low Density with Poor Utilization
Repeatable Deployments
Lifecycle Management
Version Conflicts
A/B Testing
13. Configuration Management (Hell)
Complicated by Design, Fragile
Declarative ~ “Just Trust Me”
Not Easy to Ensure Consistency
No Guarantee QA == Production
PAINFULLY SLOW, EXPENSIVE
14. Immutable Infrastructure (Heaven)
Build Once, Run Anywhere
Imperative (WYSIWYG) vs Declarative (TRUST ME)
3 Layers
Persistent ~ data that changes, like /var/lib/mysql
Immutable ~ should never change, like /usr/bin/mysql
Identity ~ configuration, like /etc/mysql/my.cnf
Easier Rollbacks, Faster Deploys
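The three layers above map naturally onto Docker primitives. A minimal sketch, assuming a hypothetical `our-repo/mysql` image with hypothetical tags:

```shell
# Persistent layer: a named volume survives container replacement.
docker volume create mysql-data

# Immutable layer: the binaries are baked into the image, never patched in place.
# Identity layer: configuration is injected at run time, not baked in.
docker run -d --name mysql \
  -v mysql-data:/var/lib/mysql \
  -v "$PWD/my.cnf:/etc/mysql/my.cnf:ro" \
  our-repo/mysql

# Rollback = replace the container; data and config are untouched.
docker rm -f mysql
docker run -d --name mysql -v mysql-data:/var/lib/mysql our-repo/mysql:previous
```

Because only the immutable layer changes between releases, a rollback is just re-running the previous image tag.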
15. Virtual Machines?
Necessary but Expensive to Operate
Microservices on Individual VMs Impractical
Too Rigid / One Size Does Not Fit All
Portability Issues with Hypervisors & Clouds
Slow to Boot & Clunky to Manage
Resource Hogs / Redundant Services
20. What if I told you….
There was a way to magically run
any application*, on
any distribution, from
any vendor, using
any cloud and
it would just work? =)
(*dependent on Linux kernel and CPU architecture)
30. Linux Containers
A way to securely run processes
Looks like a <Shrink Wrapped> VM
Share the Same Kernel, which is usually OK
Penalty Free Execution
Easy to Ship, Instant Boot
Throttle CPU & Memory, I/O
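The throttling above is exposed through `docker run` resource flags backed by cgroups. A hedged sketch using the stock `busybox` image:

```shell
# --cpu-shares: relative CPU weight under contention (default 1024)
# --memory / --memory-swap: hard memory cap; equal values disable swap overflow
# --blkio-weight: relative block-I/O weight (10..1000)
docker run -d --name worker \
  --cpu-shares=512 \
  --memory=256m --memory-swap=256m \
  --blkio-weight=300 \
  busybox sleep 3600
```

The limits are enforced by the kernel, so the container boots just as fast with them as without.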
32. Docker in a Nutshell
An abstraction for managing LXC (libcontainer)
Docker Daemon Runs / Connects Containers
Dockerfile DSL to package apps
Repositories to ship containers
Run anywhere you have Linux
Chroot on steroids
34. Docker Hub
Storage for Docker Containers
Maintains Lineage / All Versions
Public, Private & Self-Hosted Repositories
Like GitHub, but for Docker Images
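The workflow mirrors Git-style push/pull. A sketch, where `myaccount/myapp` is a hypothetical image name:

```shell
docker build -t myapp .                  # bake the image locally
docker tag myapp myaccount/myapp:1.0.0   # lineage: every version keeps its tag
docker login                             # authenticate against Docker Hub
docker push myaccount/myapp:1.0.0        # publish to the registry
docker pull myaccount/myapp:1.0.0        # any host can now fetch this exact image
```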
36. Docker Analogs to Java Ecosystem
JVM is like the Linux Kernel + LXC
Jar files are like Docker Images
Maven is like Docker Client
pom.xml is like the Dockerfile
Artifactory is like the Docker Hub (Repository)
Tomcat is like the Docker Daemon
37. The Dockerfile
FROM ubuntu:14.04
MAINTAINER erik@cloudposse.com
ENV MYSQL_USER app
WORKDIR /
RUN apt-get update && apt-get -y install mysql-server
ADD ./start.sh /start.sh
VOLUME /var/lib/mysql/
EXPOSE 3306
USER nobody
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/start.sh"]
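Run-time behavior of ENTRYPOINT and CMD can be exercised like this (a sketch; the image name is the slide's hypothetical `our-repo/mysql`):

```shell
# Run with the image's default ENTRYPOINT/CMD.
docker run -d --name mysql our-repo/mysql
# Arguments after the image name replace CMD.
docker run --rm our-repo/mysql hostname
# --entrypoint overrides the baked-in ENTRYPOINT, e.g. for an interactive shell.
docker run --rm -it --entrypoint /bin/sh our-repo/mysql
```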
38. Docker Command Line
export DOCKER_HOST=tcp://192.168.59.103:2376
docker build --tag our-repo/mysql -f Dockerfile .
docker push our-repo/mysql
docker pull our-repo/mysql
docker stats my-container
docker logs my-container
docker kill my-container
# a total of ~40 commands
39. Docker Command Line
export DOCKER_HOST=tcp://192.168.59.103:2376
docker run --name=mysql --restart=always \
  --memory=512m --cpu-shares=256 \
  --blkio-weight=256 --memory-swappiness=20 \
  --env="INNODB_CACHE_SIZE=256m" \
  --dns-search=qa.domain.local --dns=1.2.3.4 \
  --volumes-from=mysql-data-vol \
  our-repo/mysql:latest \
  mysqld_safe
41. Development Possibilities
Dozens of Containers on a Laptop
“Docker Compose” Environments
Vagrant Docker Provider
Run Locally with Boot2Docker or Kitematic
Bake Image → Ship it to QA
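A “Docker Compose” environment is just a YAML file describing the stack. A minimal sketch with hypothetical web and db services, using the era-appropriate v1 Compose format:

```shell
# Write a minimal docker-compose.yml, then bring up the whole stack locally.
cat > docker-compose.yml <<'EOF'
web:
  build: .
  ports:
    - "8080:80"
  links:
    - db
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: example
EOF
docker-compose up -d   # dozens of containers, one command
docker-compose ps      # list the running services
```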
44. Production Possibilities
Run EXACTLY same image from QA
Rollback Easily / Assassinate
Reduce Errors from Inconsistencies
Isolate Failures of Microservices
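Deploying the exact image QA tested, and rolling back, can be sketched like this (image names and tags are hypothetical):

```shell
# Promote the EXACT image QA signed off on: no rebuild, no drift.
docker pull our-repo/myapp:build-42
docker rm -f myapp 2>/dev/null || true   # assassinate the old container
docker run -d --name myapp our-repo/myapp:build-42

# Rollback is the same operation with the previous known-good tag.
docker rm -f myapp
docker run -d --name myapp our-repo/myapp:build-41
```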
45. Business Drivers
Maximize CapEx Investment / Higher Utilization
Reduce OpEx through Increased Density
Move Faster with Reduced Risk
Conduct More A/B Tests
47. Production Ready? YES
Containers are definitely stable
Docker v1.5+ is stable
Tools exist that tie it all together
“Containerships” = GCE, Triton, CoreOS, etc.
Many large companies run Docker
49. But are you ready?
Do your apps run on Linux?
Use the 12-Factor methodology?
Know how to leverage cloud orchestration?
Have an expert devops team handy?
Excited to retool everything (again)?
Have Operational Competency?
52. Production Requirements
A Purpose Built Containership
Service Composition, Orchestration
Private Image Repository
Zero Downtime Deployments & Rollbacks
Cross-Container Networking
Log Management, Monitoring & Alerting
Data Persistence & Backups
Version Pinning
54. Docker Gotchas
You still need to be an expert sysadmin
Requires some configuration management
No Built-in Auto-scaling
Docker Hub incomplete
Security Concerns / Public Images / Lineage
55. The Future
Docker Swarm
Kubernetes, Mesos, Mesosphere
PaaSification like Deis, Flynn, Joyent Triton
Massive Vendor Adoption
App Container / Open Standards
56. Docker - The Real Deal
Reduce Configuration Management
Reliably Ship Less Data, Faster
Run Services with Greater Isolation
True Cross-cloud Portability
57. Don’t stop here...
Cloud Posse provides (kickass) advisory and implementation assistance for medium to large-scale cloud deployments.
Erik Osterman
erik@cloudposse.com
(310) 496-6556
Editor's Notes
Alright everyone, it’s time to get started.
All questions will be answered at the end.
I’ll also be sharing a link to the final slides
My name is Erik Osterman and today it’s my EXTREME pleasure to talk about Docker
It is something that I always dreamed of having, but seemed always out of reach
My background is in software development, principally web-based stuff and moved into cloud architecture out of necessity
Most recently I was the director of cloud architecture for CBS Interactive
Prior to that, I advised lots of startups
So what’s my objective? To convince you that Docker is the evolution of cloud
It’s not a revolution
It’s logical next step you need to take.
Why? Because it will improve your operations while reducing risk.
What I’m about to cover is complicated. Don’t worry. I’ve lots of pretty pictures.
Also, since this is a Java group, there are some great analogs which will help you better understand all the moving pieces.
So what’s the problem?
It’s that software companies are still struggling with shipping software
Shipping software is difficult
There are a lot of moving pieces
Many pieces are outside of your control.
Most solutions have been symptomatic / Not natural selection / Job preservation
We’ve been doing the same thing for decades => It’s called configuration management.
There are things we don’t do because it’s either too tedious or risky, but we should….
A/B testing
Continuous Integration
Version pinning
Staggered deployments
If we could deploy faster, roll back easier -- shipping software wouldn’t be so bad.
Today the most common prescription for scaling a website is a microservices architecture like this one
A microservices architecture is one where you break apart your application into individual components that can be individually scaled as necessary. That means both vertically, as in throwing bigger, badder machines at the problem, or horizontally, which is to add more machines and split up the computation.
Here’s what WordPress might look like in a microservices architecture
Chances are you run more than just wordpress.
You run all of this and then some. I mean, this is what you were already running a few years ago.
How do ya get all of this to work together?
There are dependencies
There are upgrades
One version breaks another, so pin versions. Deploy the software. And then realize you didn’t want Cassandra after all because Elasticsearch has more of a je ne sais quoi
It’s going to drive you insane like it did to me.
On top of that, we have more flavors of cloud than ever.
And you probably have some diehard bare metal fans in your organizations holding onto bare servers for dear life
It shouldn’t really matter that much. Right?
You want compute capacity.
You need some place to run your software.
Let’s focus on delivering that one way or another.
Because what we have today, is the matrix from hell.
With every new software component we add,
the system’s complexity is multiplied by every place you’ll need to run it.
Until recently, there haven’t been any novel solutions.
In fact, it got so bad I quit my job
I never wanted to work with cloud again;
living on beach sounded pretty good to me
So I went traveling for a year around the world; I saw 14+ countries; 30+ cities. It was awesome
It was because cloud computing had started to look more like a Rube Goldberg machine than I would have liked
Here’s one designed to wipe the sweat from your face when amazon crashes.
They all solve the job, but does the job need to be solved at all?
tweaking configuration file snippets
ensuring packages are installed
shipping the kitchen sink
Heck, maybe shipping the entire kitchen along with the Chef
They are called configuration management tools
They are the traditional way software has been deployed forever.
But maybe the configuration of software isn’t the problem? Maybe it is the OS?
If we can solve the problem of delivering bundled “services”, things get a lot simpler
Case in point.
This is what a typical execution of configuration management software looks like.
This happens to be puppet.
This is not a criticism of Puppets.
I love Puppets and muppets.
But they belong on stage.
The point is something so complex is fragile by nature.
We should cut the strings.
We want something antifragile, to quote Nassim Taleb
The declarative nature of Configuration Management is not bulletproof
Moreover, even when it works well enough, it’s painfully slow.
Imagine all the wasted compute cycles spent reevaluating configurations.
Why go to hell and back to ship software?
We want to go to Heaven.
That’s Immutable Infrastructure.
Build it once and be done with it;
There’s no incremental patching, so there’s less risk
It’s simple
2 actions: deploy/destroy
Any time you want to change something, you bake a new image. Yes, that’s slow if you only have one server. But if you only have one server, then EVERYTHING I am about to talk about is probably overkill. If you have dozens, shipping a golden image is DEFINITELY faster.
The difference: imperative vs declarative;
define the process and not the outcome
Not a new concept, but the tools until recently hadn’t caught up
The other dilemma we have is related to Virtual Machines
The realization that we could emulate servers with a Hypervisor was a BRILLIANT one
They got us to where we are today. Kind of like training wheels. They were a necessary step of evolution.
But VMs are boxy.
Sure, not as bad as the bare metal, but they aren’t exactly vacuum sealed.
Suppose you have 10 VMs on a server.
Then imagine all the redundant processes like sshd, syslog, crond, and a dozen others taking up precious resources without much value
Redundant kernels, filesystems, page caches, etc.
Each machine image is huge, usually a couple of gigs.
We still depend on Configuration Management because VMs aren’t exactly “penalty free” to instantiate
They take minutes to boot and billed by the hour
I think milliseconds are a more appropriate unit of measurement.
There’s no reason for machines to linger if you don’t need them. For example, a crontab server should only run for the few seconds it takes to run the job and then exit. Free up the resources for another process.
But we don’t do this in reality because it’s too expensive.
Come to think of it, VMs are a lot like trucks.
Both have payloads.
Compared to trains and cargo ships, trucks are WAY more expensive to operate.
You shouldn’t need an engine, a chassis, gas, or a driver for every container
So you shouldn’t need a VM for every service.
You can fit a lot of containers on a ship, but the 405, my friend, is at capacity
I think you get it.
That’s why modern day cloud has come to this.
It’s a bunch of heavy machinery on even heavier machines
All we really care about is the payload.
If this was an MTA bus, we could all agree it’s absurd.
So can’t there be a better way?
Well thank god that someone was hard at work while I was sipping a piña colada. True story.
You see… what if I told you there was a way…
and you get this - along with a badass architecture & design that your application begs for.
There’s a movement happening that might change your opinion of what’s possible.
It was all inspired by this - the shipping container.
Back in the ‘50’s they created an ISO Standard for Containers.
These are set measurements for how the containers are built,
Everything from how they are opened, locked, stacked, loaded.
Today there are a few standards, but principles remain the same.
If you need to move something, make it fit inside one of these containers. And you’re good to go.
Combined with railways, highways and ships, containers are moved all around the world
Trucks are just for the “last mile” of delivery.
Unfortunately, some companies haven’t yet figured that out.
They also invented orchestration.
Cranes like these load the containers on to the cargo ships and distribute the load
-----------------------
(fun fact: Some say these massive cranes in the Port of Oakland were the inspiration behind the Walkers aka All Terrain Armored Transport (AT-AT) in Star Wars, but Lucas denies it; I beg to differ)
MASSIVE cargo ships take thousands of containers at a time across open oceans
Shipping companies lease these containers to you along with space on ships
They work around the clock and reach every part of the world.
It makes for a very efficient process.
You see it’s been solved in other industries.
Let’s try to ship software like IKEA ships furniture.
We’ll call it - The IKEA pattern.
If it fits, it ships.
So what is the secret sauce? Hire me to find out ;P
It’s Linux Containers or LXC for short
They are a way to ship pre-bundled Linux operating systems that operate in isolation, just like VMs
There’s something called the Duck Test…. you might have heard of.
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
Well, containers quack like VMs
They provide all the essential benefits of VMs but without the overhead.
Containers are elastic or, as I like to say, shrink-wrapped.
They take only as many resources as the underlying processes.
No pre-allocation necessary. One size fits all.
Best of all, you can start them up in milliseconds
You can fire up a container for cronjob and be done with it
You can still throttle CPU, cap memory usage, limit I/O both network and disk, and much, much more.
VMs and containers are similar but different.
Technically speaking, they are entirely unrelated technologies
But effectively, they are used to accomplish much of the same things providing different assurances.
Think of a light switch. There are many kinds. Some are simple circuits. Others are software defined. Most of you live your lives perfectly happy not knowing exactly how your lightswitch works.
It’s not complicated.
Containers are designed to share as much as possible while maintaining reasonable isolation for most applications
e.g. linux kernel, linux page caches and entropy pools
The way I like to think about it, is that Docker containers virtualize the Linux operating system.
VMs on the other hand are designed to share as little as possible, just the bare metal.
therefore traditional VMs virtualize the hardware, which is quite a feat.
Because containers share as much as possible, they take fewer resources.
It’s that simple.
Containers are not designed to be a universal drop-in replacement for every single use-case, but they certainly fit the bill for most companies.
Because containers share the same machine, kernel exploits are pretty bad. I don’t have any consolation there.
But consider this - since containers allow you to isolate more of your runtime environment than you would have otherwise done if running virtual machines or bare metal, in practice your overall security posture will be strengthened.
A little known fact
containers have been around since 2008
Google wrote the initial code for LXC to run their internal stuff and then gave it to Linus (Thanks!)
The problem is LXC by itself is too complicated and the implementation varies by Linux distributions.
So that’s why it’s taken so long for containers to become a mainstream technology.
The reason it’s front-page news is Docker.
Docker created the necessary abstractions to bring LXC to the masses.
Essentially it combines (3) things to create app Isolation
chroot, cgroups & namespaces
then they created a public repo system and it took off like wildfire.
Inside a container, processes only see other processes in the container. The first process is PID 1.
Containers only have access to their own filesystem. If you want to share files or directories, across containers, you mount them.
Processes are jailed. They can’t break out or read the memory of other processes.
Inside a container it feels a lot like a VM.
Here’s a kicker: inside a container, by default there’s not even a single process running, not even init. Best practice is to run only one process per container.
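You can see that for yourself (a sketch, assuming Docker is installed and the daemon is running): start a throwaway container and list its processes; the only one visible is the one you asked for, and it is PID 1.

```shell
# Inside the container's PID namespace, the command we run IS the first process.
if docker info >/dev/null 2>&1; then
  docker run --rm alpine ps -o pid,comm   # ps itself shows up as PID 1
fi
```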
The Docker daemon is a single process that typically runs once per machine or VM (not per container).
The Docker daemon is like the container hypervisor
but not really a hypervisor since the Kernel does most of the heavy lifting
libcontainer is responsible for the low-level interaction with the kernel’s container facilities; it’s kind of like libvirt for cloud
Dockerfiles are like GNU Makefiles and define how the image should be built.
They are super simple ~ max 30 seconds to learn.
The product is a fully baked image
Baked images are like AMIs, VMDKs, or JAR files
They are based on AUFS, a layered filesystem.
If you’re familiar with journaling, it’s a lot like that.
Every change, results in a new layer.
Shipping a new image simply involves shipping the new layers.
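You can inspect those layers directly (a sketch, assuming Docker is installed and the daemon is running): `docker history` lists every layer of an image, newest first.

```shell
# Each instruction that built the image left behind a layer; list them.
if docker info >/dev/null 2>&1; then
  docker pull alpine:latest
  docker history alpine:latest
fi
```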
Containers are running images. Think of them like instances.
You can attach to them much like you attach to a virtual machine.
Inside containers, you typically don’t run syslog, sshd, crond, chef agent, etc. That would be wasteful.
Repositories are not a new concept. They work as you can imagine. There’s even a public one called Docker Hub. Think of it like “GitHub for operating systems”
The docker client uses the API to communicate. It’s a simple RESTful protocol.
All communication is done over encrypted SSL sockets to control the docker daemon
And the docker client can be run locally on your laptop just as easily as on a server
Docker Hub is like GitHub for “operating systems”.
It’s VERY popular.
You can probably find an example of every opensource application somewhere on Docker Hub.
what’s cool about all of this is you can get pre-bundled images straight from the vendors
There are A LOT of repositories
And A LOT of people downloading from them.
But there’s a caveat: Most images are not “official” or ever audited. So proceed with caution.
Technically speaking, containers, VMs, and JVMs are in no way similar.
That is, what they virtualize and how they accomplish it varies drastically.
But practically speaking, the way they are used is not that different.
In fact, Docker accomplishes many of the same goals that Java delivers on without limiting us to a particular language.
The JVM is called a virtual machine because it defines an abstract virtual CPU complete with registers and a stack.
It provides a sort of guarantee that if you feed it java byte code, it will execute that code anywhere you have a JVM.
Well, Docker is to Linux what the JVM is to Java byte code. That is to say, if you have an application that executes under the Linux kernel, Docker lets you run it irrespective of the Linux distribution. Of course, the machine architecture needs to be the same since, after all, it’s not a virtual machine. It’s a virtual operating system.
Java gives you a convenient way of moving code around in “jar files” along with assets. In Docker we have images. They do the same thing, but they move an OS.
Many of you are probably familiar with Maven. Maven is used to download and assemble all the dependencies for a Java project. You can think of the Dockerfile as your pom.xml, only it’s not XML. Thank god.
Now to transport your Java apps and dependencies, you have a distribution layer provided by Artifactory. This is what the Docker Repository does for Docker Images. There’s even a public one called Docker Hub.
Once you’ve built your app, deployed to Artifactory, you’ll want to run it somewhere like under Tomcat if it’s a web app. Well, that’s what the Docker Daemon is good for. It executes your container.
I said the Dockerfile is simple. I meant it. Here’s what a Dockerfile looks like.
If you’ve ever written a bash script, you’ll learn how to write a Dockerfile in about 30 seconds.
It’s that easy.
You have several keywords like FROM, MAINTAINER, ENV, ADD, RUN (&&), CMD, USER, WORKDIR
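Here’s a toy sketch using those keywords (the base image, paths, packages, and final command are all illustrative, not from the original slide):

```shell
# Write a minimal Dockerfile to the current directory.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
MAINTAINER you@example.com

ENV APP_HOME /opt/app
WORKDIR /opt/app
ADD . /opt/app

RUN apt-get update && \
    apt-get install -y curl

CMD ["./run.sh"]
EOF
```

From there, `docker build -t myapp .` bakes it into an image.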
The docker command-line tool is intuitive for any Linux admin
No crazy arcane syntax to worry about
Here we have an example of a new image being generated and pushed to the repository
It’s sort’a like “git commit” followed by a “git push origin master”
The magic is that you can change the DOCKER_HOST environment variable
It can be your local machine or a remote Docker cluster of thousand nodes
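In practice it’s a one-liner (the address below is illustrative): point DOCKER_HOST at a remote daemon, and every subsequent docker command targets that host instead of your laptop.

```shell
# Aim the client at a remote daemon; no other configuration changes.
export DOCKER_HOST=tcp://10.0.0.5:2376
docker ps 2>/dev/null || true   # would list containers on the remote host (if reachable)
unset DOCKER_HOST               # back to the local daemon
```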
So now let’s entertain a new possibility
There are runtime environments to run Docker on your laptop
For OSX you have 2 easy options: Boot2Docker or Kitematic
They are trivial to install and take zero configuration
You can use Docker Compose (Fig) which has a simple YAML configuration
it describes how to run and link your containers
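A minimal sketch of that YAML, in the original Fig-style format (service names, ports, and images are illustrative): a web container built from the local Dockerfile, linked to a redis container.

```shell
# Write a minimal docker-compose.yml describing two linked containers.
cat > docker-compose.yml <<'EOF'
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - redis
redis:
  image: redis
EOF
# docker-compose up -d   # would build and start both containers together
```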
Or if you prefer, use Vagrant with the Docker provider, if that’s easier
The magic is that you can run dozens of containers on an average Laptop
That’s not possible with traditional VMs!
Here’s a quick glimpse of Kitematic. Incidentally, Docker acquired the company earlier this year.
Personally, I use Boot2Docker (as do most I know) because I live on the command line. I don’t care for a GUI.
Developers can ship the code exactly as they had developed it to QA
QA can then test that code to see if it’s kosher and ship it to production.
Containers are cheap, so take advantage of it.
Let a few canaries loose in production to see if they fly as expected.
If they don’t, just shoot ‘em down with “docker kill”
In production, you can have the peace of mind of knowing you’re running exactly the same code the developer tested and QA verified.
You’re reducing risk because containers are immutable. Exactly what you defined.
To automate rollouts, just start new containers pinned at the right version, and leave the old ones lying around “just in case”
The way you do a rollback is easy too. Kill the problematic containers and go back to your stalwarts.
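As a sketch (the image names and tags are illustrative, and the commands assume a running Docker daemon), a canary rollout and rollback is just a pair of commands:

```shell
# Run the new version alongside the old; if it misbehaves, shoot it down
# and the v1 containers keep serving.
if docker info >/dev/null 2>&1; then
  docker run -d --name web-v2 myorg/myapp:2.0 2>/dev/null || true  # canary
  docker kill web-v2 2>/dev/null || true                           # rollback
  docker rm web-v2 2>/dev/null || true                             # clean up
fi
```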
The more containers and microservices you leverage, the more isolated your failures.
Get this: Gilt
treats every page of their website as a standalone application.
That’s SERIOUS isolation
But it means that they can change any part of the website without a full blown rollout.
That’s cool.
It’s taking microservices to the extreme
Use them to their maximum advantage.
It’s a new way of thinking.
You cannot do that with VMs. It would never be cost efficient.
Because containers are so cheap, run more A/B tests
In fact, you can run those tests with totally different dependencies.
Test if certain libraries are faster.
Because deploying software is easier, it can be done more frequently.
The faster you can determine the results of a deployment,
the quicker you can minimize risk. Think “high-frequency trading”: the less market exposure you have, the safer you are.
Businesses want to minimize risk and maximize reward.
If a test is performing poorly, nuke it and you’re back to square one.
Now, CapEx
The way to maximize your CapEx investment is to better utilize existing hardware (your sunk cost)
Containers are dense by nature, so they are your best bet
Run more software on the same servers
Related to this, you can reduce OpEx
As a result of the increased density of services, fewer servers mean less power, less heat, and fewer network ports.
You get the idea.
Here’s a good way to visualize that.
The question on everyone’s mind: is it production ready?
YES, without a doubt it is.
Remember, the underlying technology is mature, even if Docker itself is pretty new.
LXC is used by Google
And Docker is up to v1.5 and in serious production use by large companies
But don’t take my word for it.
------------------------------------
With version 1.5 of Docker they explicitly addressed many of the features necessary for production
IPv6, --read-only, -f, stats command
1.7 added lots of ways to limit resource consumption. This is essential as you add more and more services to a machine.
Take theirs.
If you google some of these names along with the keyword docker, you’ll find great videos at meetups talking about how they cracked the nut.
Not all of them have gone off the deep end, but they’re committed to a future that includes Docker.
The real question to ask is -- are you ready?
If you don’t engineer for the cloud or use patterns such as 12-factor apps, Docker isn’t going to do miracles for you.
Docker is designed to make cloud architecture easier
It is not Miracle-Gro; your architecture won’t suddenly scale or grow like weeds.
By far, the biggest reason you don’t see more companies running docker is a lack of operational proficiency.
As with any new technology, you have to hone your skills. New tools are required and processes put in place.
If you don’t already practice the advanced art of DevOps jujitsu, Docker will be overwhelming, especially outside the comfort of a sandbox.
Remember, Docker is just the engine block.
You still need this… the containership.
Building it, requires a strong opinion for how you want to run containers.
That’s your choice.
Most of what you read online are simple use-cases of single server installations.
They leave out all the fun parts of doing it at scale, which involve scheduling, orchestration, volume management, and race conditions.
Taking the leap to production is big, but it does not require a leap of faith.
It’s important to note, these are not NEW concepts for cloud deployments.
It’s just that Docker itself does not solve these things.
Solving it would require docker to become very opinionated and that is not a good thing.
Docker is only a tool. It replaces the configuration management layer for operating systems.
Using Docker, however, will make things easier; best of all, you can reduce the size of your Rube Goldberg apparatus.
Covering Docker in an hour is impossible.
The ecosystem has grown rapidly.
Everyone’s first thought is to go build their own PaaS.
STOP.
There’s a lot of legit software written to scale Docker across hosts.
Please research what’s out there. Ask someone for advice. Maybe ask me?
Namely, check out some of these things. They’ll get you on your way.
The major ones to look at are Apache Mesos (used heavily by Twitter) and Kubernetes by Google, plus CoreOS, Tectonic, and Mesosphere.
There are also some Security Concerns that you need to be aware of
Most of these security concerns are no different than using any off-the-shelf open-source software
Docker Hub is public community like GitHub.
Anyone can publish images, including bitcoin miners and botnet entrepreneurs
There is no oversight beyond public due-diligence and community support
Some images will be certified by Docker, but those are few-and-far between
Your safest bet is to use your own private repository and borrow from Dockerfiles on Docker Hub as needed
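Running your own registry is itself just a container (a sketch, assuming Docker is installed and the daemon is running; the port and names are illustrative): start the official `registry` image, then tag and push to it instead of Docker Hub.

```shell
# Stand up a private registry on localhost:5000, then push an image to it.
if docker info >/dev/null 2>&1; then
  docker rm -f registry 2>/dev/null || true
  docker run -d -p 5000:5000 --name registry registry:2
  sleep 3                                        # give the registry a moment to start
  docker pull alpine:latest
  docker tag alpine:latest localhost:5000/alpine
  docker push localhost:5000/alpine
  docker rm -f registry                          # clean up the demo registry
fi
```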
Kernel exploits are especially evil since rooting the kernel roots every container on that machine
Depending on the network fabric that’s used, it can be difficult to limit connectivity between containers.
Probably the coolest thing to expect in the near future is Docker Swarm
adds the ability to stitch all your docker daemons into one HUGE virtual server
It’s what cloud always promised but NEVER delivered.
All Docker client tools work the same! They don’t know the difference in who they’re talking to.
Much like Docker Swarm, keep your eye out for Triton by Joyent.
Joyent has more experience than any other company at running Containers at scale
They’ve adapted SmartOS to run Docker; for the record, SmartOS is actually really smart.
They are the only cloud provider that implements something that looks like Docker Swarm and has ZFS as the backing store
With Triton, I anticipate Joyent becoming a major player in the container space… if they play their cards right.
Did I mention it’s open source? You can self host it. How awesome is that??
The other option is OpenStack Magnum by RackSpace, which is something similar to Triton.
It lets you run Containers on OpenStack
They pull this off using Kubernetes and Mesos.
You’re going to continue to see MASSIVE vendor adoption.
This technology is here to stay.
It’s amazing that AWS, GCE, Azure, RackSpace and Joyent all jumped on board in less than a year
More amazing is that the boat didn’t capsize
You can bet VMware is watching VERY closely
There will be some big acquisitions this year.
To summarize, here’s what I want you to walk away with.
Docker is the real deal.
Docker gives you raw compute power, it’s up to you what to do with it.
It’ll let you do more with less
It’ll let you move faster with reduced risk
And best of all, you get it all without vendor lock-in