Novacoast uses Docker and DevOps practices to streamline their development and operations processes. Previously, setting up servers and deploying code was manual and time-consuming. Now, Docker containers allow applications and their dependencies to be packaged together for easy, consistent deployment. Continuous integration using Jenkins builds and tests code changes automatically. Successful builds are pushed to an internal Docker registry for deployment via Chef to production servers. This automated workflow allows for faster, more reliable releases while improving security.
SUSECon 2015 Session CAS20148 covering Docker use cases, business use cases, and what environments and applications are most appropriate for containers.
Recording here: https://www.youtube.com/watch?v=5W4n9K3PIVg
Since Docker was open sourced in 2013, the community and adoption around Docker containers has grown to over 6 billion downloads and over 1000 contributors. Learn about why this is, and why you should start using containers for your own applications.
DCSF 19: How Entergy is Mitigating Legacy Windows Operating System Vulnerabili... - Docker, Inc.
Jason Brown - Program Manager, Entergy
Jeff Hummel - IT Infrastructure, Architect, Entergy
Entergy, a large utility company headquartered in New Orleans, LA, has launched an initiative to modernize their application infrastructure. During the initial analysis, Entergy recognized that the existing legacy infrastructure’s lack of compatibility with more recent operating systems would stand in the way of progress. As a result, containerization was fast-tracked as the solution that could help them with the various tenets of their strategy: hyperconvergence, SaaS (ServiceNow), and workload portability. Docker Enterprise proved to be the right solution to migrate roughly 850 legacy applications from Windows Server 2003 and 2008 to Windows Server 2016 quickly, securely, and economically. Entergy IT has now delivered the ability for the business to run applications on-premises and in the cloud, and has future-proofed the applications for migration to new versions of Windows Server. In this session, Entergy will talk about how they are modernizing their infrastructure to become more agile and secure and to enable workload portability.
Docker provides PODA (Package Once Deploy Anywhere) and complements WORA (Write Once Run Anywhere) provided by Java. It also helps you reduce the impedance mismatch between dev, test, and production environments and simplifies Java application deployment.
This session will explain how to:
* Run your first Java application with Docker
* Package your Java application with Docker
* Share your Java application using Docker Hub
* Deploy your Java application using Maven
* Deploy your application using Docker for AWS
* Scale Java services with Docker Engine swarm mode
* Package your multi-container application and use service discovery
* Monitor your Docker + Java applications
* Build a deployment pipeline using common tools
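The packaging step above can be sketched with a minimal Dockerfile. This is an illustrative example, not the session's own code: the base image, jar path, and port are assumptions for a typical Maven build that produces `target/app.jar`.

```dockerfile
# Minimal sketch: package a prebuilt Java application as a Docker image.
# Assumes `mvn package` produced target/app.jar; names are illustrative.
FROM eclipse-temurin:17-jre
WORKDIR /opt/app
COPY target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Building with `docker build -t myorg/app .` and pushing with `docker push myorg/app` is one way to share the image via Docker Hub (`myorg/app` is a placeholder name).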
Docker in Production, Look No Hands! by Scott Coulton - Docker, Inc.
In this session we will talk about HealthDirect’s journey with Docker. We will follow the life cycle of a container through our CD process to its home in our swarm cluster, with just a git commit, thanks to configuration management. We will cover the CD process for Docker, Docker Swarm, Docker networking, and service discovery. The audience will leave with a solid foundation for building a production-ready swarm cluster (a GitHub repo with code will be provided), as well as the knowledge to implement a CD framework using Docker.
Containers - Transforming the data centre as we know it (2016) - Keith Lynch
These innovative technologies are at the heart of the microservices and DevOps revolution currently sweeping through the IT industry. They are fuelling digital transformation and accelerating cloud adoption. They're helping organisations develop infrastructure-agnostic applications that can be deployed anywhere: bare metal, virtualised data centres, private and public cloud. They’re helping organisations significantly reduce infrastructure costs and accelerate agile application delivery by automating application deployments and operational management. After this talk you’ll know what these open source technologies and open standards are, what they mean to you and your organisation, and where you can go to try them out.
Immutable Awesomeness by John Willis and Josh Corman - Docker, Inc.
This presentation will show the combination of two ideas that can create two-to-three-order-of-magnitude efficiencies in service delivery. We will discuss an example from an insurance company that has experienced these efficiencies. Josh Corman will present the concept of using Open Source and Toyota Supply Chain principles as a weapon for eliminating operational costs of service delivery. By applying first-order principles like fewer suppliers (e.g., fewer logging frameworks) and image manifests (i.e., bills of materials), he will show how an organization can cut down on bugs and issue resolution times. John Willis will then cover how these principles fit like peanut butter and chocolate when used in an immutable delivery model based on Docker. This presentation was the third highest rated session at the 2015 DevOps Enterprise Summit.
Continuous Delivery leveraging Docker CaaS by Adrien Blind - Docker, Inc.
At Societe Generale GBIS, time to market & quality matter; hence we do love continuous delivery. In this context, we’re considering the Container-as-a-Service pattern: artifacts produced by the continuous integration chain would become self-sufficient “dockerized” application modules, onboarding both code and subsequent system requirements; then, a CaaS cloud would host these containers. In this talk, I’ll present our use case and current findings, considering both technical & operational aspects. We’ll talk about software factories, immutable IT, registries, container configuration, API-driven infrastructure, and DevOps role shifts. Finally, we’ll discuss the pros and cons of this solution versus regular IaaS and PaaS.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/29kVhDV.
Justin Cormack talks about Docker's build, ship, and run pipelines for unikernels and how the changes they are seeing lead to unikernels in production. Filmed at qconlondon.com.
Justin Cormack is a developer at Docker, working on unikernels.
DCSF19: Transforming a 15+ Year Old Semiconductor Manufacturing Environment - Docker, Inc.
Jeanie Schwenk, Jireh Semiconductor
Jireh Semiconductor bought the Hillsboro fab and its contents including the manufacturing tools, servers, and software running the fab. The previous company had been winding down for years so server and software upgrades had not been on the radar for some time. In 2011 Jireh became the proud owner of the building, the tools, and its legacy software running on servers that weren’t even made any more.
That's when I started my adventure with Jireh in September 2016 with a charter to modernize the applications running the manufacturing facility process and move them into VMs with no impact to manufacturing. That led me down a path of exploration and questions. “What’s the goal?”
The goal wasn't to move to VMs. It was to become independent of the aging PA-RISC architecture, bring forward the ~230 Java 1.4.2 applications (10-15 years old), and scale to handle increased load on the software and hardware in order to ramp factory output to numbers never seen previously. And do it without manufacturing downtime.
The solution included a transition from waterfall and silo development to agile scrum. Rather than simply migrating to VMs, it became obvious that the linchpin for a successful software transition with the required uptime, flexibility, and scalability was Docker Enterprise.
Join me for this session where I'll talk about my journey modernizing 15+ year old applications and infrastructure at Jireh.
Don’t have a Meltdown! Practical Steps for Defending Your Apps - Docker, Inc.
Security is a key concern for application developers and operations teams, as well as security professionals. Have I done enough? What do I need to do in the face of new threats like Meltdown and Spectre? What happens when the next big issue comes along? What should my priorities be? How do containers help?
In this talk we’ll demonstrate some common attacks live, and show how you can effectively defend your container deployment against them, using a combination of best practices, configuration, and tools.
Taking inspiration from highlights of the OWASP Top 10, and other high profile exploits and attacks, in this talk we will look at risks and preventative measures related to:
- authentication
- injection
- updates
- sensitive data
- configuration
By the end of the talk you should understand the most important security risks in your applications, and how to go about mitigating them.
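As one illustration of the configuration-level defences the abstract mentions, here is a hedged Compose sketch that hardens a container at runtime. The service and image names are hypothetical, not from the talk:

```yaml
# Illustrative hardening sketch: drop capabilities, run read-only and
# non-root, and block privilege escalation. Names are placeholders.
services:
  web:
    image: myorg/web:1.0        # hypothetical image
    read_only: true             # immutable root filesystem
    cap_drop: [ALL]             # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true  # block setuid privilege escalation
    user: "10001"               # run as a non-root UID
    tmpfs:
      - /tmp                    # writable scratch space only where needed
```

Each setting removes a class of attack surface: a read-only filesystem blunts many injection payloads, and dropped capabilities limit what a compromised process can do.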
Container technology is shaping the future of software development and is causing a structural change in the cloud-computing world. Developers are embracing container technology and enterprises are adopting it at an explosive rate. Containers are a very powerful tool: they streamline your development and ops processes, save companies money, and make life much easier for developers.
Node.js Rocks in Docker for Dev and Ops - Bret Fisher
DockerCon 2019 session
Learn the best practices of managing Node and JavaScript projects when developing, testing, and operating containers from Docker Captain Bret Fisher, who's been building and deploying Node apps in containers since the early days of the Docker project.
This session will take you on a journey, starting with local development of Node and js-specific projects and how to optimize your Docker Desktop and Compose configs for "the best of both worlds" with js and Docker. You'll see examples of cutting-edge features like macOS bind-mount performance enhancements and multi-stage image targeting.
Then Bret will walk you through examples of optimizing your builds, testing, and CI/CD of Node with new features like test stages in multi-stage builds.
Finally, you'll get some examples around Node in production orchestration, and how you can optimize your cluster updates for zero-downtime scenarios on Kubernetes and Swarm using Node connection management techniques.
Node apps rock in containers, so come join Bret for a fun ride through the best parts and learn solutions for the problems that you'll need to solve along the way.
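The test-stage idea mentioned above can be sketched in a multi-stage Dockerfile. This is an assumed layout (stage names, `src/index.js`, npm scripts), not Bret's actual demo code:

```dockerfile
# Sketch of a multi-stage Node build with a dedicated test stage.
FROM node:20-slim AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# Test stage: CI can run it with `docker build --target test .`
FROM base AS test
RUN npm test

# Production stage: prod dependencies only, non-root user.
FROM node:20-slim AS prod
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=base /app/src ./src
USER node
CMD ["node", "src/index.js"]
```

Targeting the `test` stage in CI keeps test tooling out of the final image while still failing the build when tests fail.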
Docker Practice in Alibaba Cloud by Li Yi (Mark) & Zuhe Li (Sogo) - Docker, Inc.
China is the biggest emerging market for cloud computing, with strong momentum in both business and technology. Docker is starting to be adopted rapidly by Chinese organizations in their development and production environments. As the leading cloud provider in China, Alibaba Cloud is committed to open container technologies, and provides Aliyun Container Service as the open platform for cloud native applications.
In this session, we will share use cases and lessons learned from Docker practice in Alibaba Cloud. It will cover topics including container technology in the life-cycle of microservice applications; a highly scalable, distributed Docker registry for global image distribution; and more. Join us to hear how to align customers' business needs with cutting-edge container technologies.
Container technologies such as Docker are rapidly becoming the de facto way to deploy cloud applications, and Java is committed to being a good container citizen. This session will explain how OpenJDK fits into the world of containers, specifically how it fits with Docker images and containers.
The session will focus on the production of optimized Docker images containing a JDK. We will introduce technologies such as jlink that can be used to reduce the size of the created image. The session will explain Alpine/musl support for an effective image and runtime. The session will also talk about the inclusion of Class Data Sharing (CDS) archives and Ahead-of-Time (AOT) shared object libraries for improving startup time.
Attendees will learn about the recent work that has gone into OpenJDK for interacting with container resource limitations.
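The jlink approach described above can be sketched as a two-stage Dockerfile. The module list, paths, and base images here are illustrative assumptions, not the session's material:

```dockerfile
# Sketch: build a trimmed Java runtime with jlink, then copy it into a
# small final image. Module list and jar path are illustrative.
FROM eclipse-temurin:17 AS jlink
RUN jlink --add-modules java.base,java.logging \
          --strip-debug --no-man-pages --no-header-files \
          --output /javaruntime

FROM debian:bookworm-slim
COPY --from=jlink /javaruntime /opt/java
COPY target/app.jar /opt/app/app.jar
ENTRYPOINT ["/opt/java/bin/java", "-jar", "/opt/app/app.jar"]
```

A runtime containing only the modules the application actually uses can be dramatically smaller than a full JDK image, which is the point the abstract makes about jlink.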
Docker for .NET Developers - Michele Leroux Bustamante, Solliance - Docker, Inc.
Millions of developers use .NET to build high-performance apps, from enterprises to hobbyists. Docker enables .NET developers to build containerized applications that can be deployed natively to Windows or Linux. Windows containers support applications that leverage the full .NET Framework, and with AspNetCore on Linux, developers can target either Linux-based Docker containers or Windows containers. In both cases you can develop your applications on Windows using your favorite .NET developer tools, then build Docker images and run them as containers on Windows Server or Linux machines. In this session, you will learn how to build or migrate full .NET Framework applications and deploy them as Windows containers. Then you will learn to build AspNetCore applications that can target either Windows or Linux containers, without any changes to your code. Topics covered include:
- Common considerations as you work locally
- Running local Docker containers and preserving environment settings
- Unit testing
- Choosing the right base image
- Working with IIS or Kestrel
- Composing multiple containers
- Working with a Docker registry
Networking in Docker EE 2.0 with Kubernetes and Swarm - Abhinandan P.b
This presentation covers the operator's goals from a networking perspective and how they are shaped by both Swarm and Kubernetes on the Docker EE platform.
Develop and deploy Kubernetes applications with Docker - IBM Index 2018 - Patrick Chanezon
Docker Desktop and Enterprise Edition now both include Kubernetes as an optional orchestration component. This talk will explain how to use Docker Desktop (Mac or Windows) to develop and debug a cloud native application, then how Docker Enterprise Edition helps you deploy it to Kubernetes in production.
Talking TUF: Securing Software Distribution - Docker, Inc.
The Update Framework (TUF) secures new or existing software update systems by providing a specification and library that can be flexibly and universally integrated or natively implemented. The update procedure is notoriously susceptible to malicious attacks and TUF is designed to prevent these and other updater weaknesses.
Docker's Notary project integrates the Go implementation of TUF with Docker Content Trust to verify the publisher of Docker images.
https://github.com/theupdateframework/tuf
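On the client side, the Notary-backed signature verification described above is exposed through Docker Content Trust, which is toggled with a single environment variable:

```shell
# Opt in to Docker Content Trust: with this set, `docker pull` and
# `docker push` verify and require signed image tags via Notary
# (a TUF implementation).
export DOCKER_CONTENT_TRUST=1

# docker pull alpine:3.19   # would now fail unless the tag is signed
echo "DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"
```

With the variable unset (the default), pulls proceed without signature verification, so enabling it per-shell or per-CI-job is a common pattern.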
The majority of the container security discussion revolves around containers on Linux, while the security of containers on Windows is left as a mystical black box. In this talk we'll peel back the curtain and dive into how Windows containers are secured.
Does Windows have namespaces? How does it compose the layers of a container's filesystem? How does it limit resource usage of containers? I heard there's a Hyper-V isolation thing, what's that about?
We'll answer all these questions and more!
Shipping and Shifting ~100 Apps with Docker EE - Docker, Inc.
Alm. Brand has been successfully running greenfield Dockerized workloads in production for nearly two years. However, enterprises are known for their very long-lived and ill-maintained monoliths which are not easily rewritten or relocated, and we have our fair share of those. Focusing on freeing up precious ops time, Alm. Brand ventured to transform all legacy WebLogic apps to run in Docker. The move has provided a golden opportunity to restructure our platform, and has helped push the DevOps agenda in what is probably the oldest company yet to present at DockerCon (1792).
Through an awesome live demo, we will demonstrate:
* as much as we can of our entire working production setup, boiled down to a Swarm stack file;
* how we are able to convert and deploy applications during office hours, unbeknown to the end users;
* how to smoothly and transparently handle the transition of users to the Dockerized environment;
* how we have streamlined monitoring, logging and deployment across greenfield and legacy apps
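A Swarm stack file of the kind mentioned in the demo list can be sketched as follows. The service, image, and registry names are hypothetical, not Alm. Brand's actual configuration:

```yaml
# Illustrative Swarm stack file; deploy with
#   docker stack deploy -c stack.yml legacy
version: "3.7"
services:
  weblogic-app:
    image: registry.example.com/legacy/weblogic-app:1.0
    deploy:
      replicas: 2
      update_config:
        parallelism: 1      # roll one task at a time
        order: start-first  # start the new task before stopping the old
    ports:
      - "8080:8080"
```

The `start-first` rolling update order is one way to achieve the kind of office-hours, end-user-invisible deployments the abstract describes.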
OSCON: Unikernels and Docker: From revolution to evolution - Docker, Inc.
with Richard Mortier and Anil Madhavapeddy
Unikernels are a growing technology that augment existing virtual machine and container deployments with compact, single-purpose appliances. Two main flavors exist: clean-slate unikernels, which are often language specific, such as MirageOS (OCaml) and HaLVM (Haskell), and more evolutionary unikernels that leverage existing OS technology recreated in library form, notably Rump Kernel used to build Rumprun unikernels.
Containers - Transforming the data centre as we know it 2016Keith Lynch
These innovative technologies are at the heart of the microservices and DevOps revolution currently sweeping through the IT industry. They are fuelling digital transformation and accelerating cloud adoption. They're helping organisations develop infrastructure agnostic applications that can be deployed anywhere i.e. Bare Metal, Virtualised Data Centres, Private and Public Cloud. They’re helping organisations to significantly reduce infrastructure costs and accelerating agile application delivery by automating application deployments and operational management. After this talk you’ll know what these open source technologies and open standards are, what they mean to you and your organisation and where you can go to try them out.
Immutable Awesomeness by John Willis and Josh CormanDocker, Inc.
This presentation will show the combination of two ideas that can create 2 to 3 order of magnitude efficiencies in service delivery. We will discuss an example used in an insurance company that has experienced these efficiencies. Josh Corman will present the concept of using Open Source and Toyota Supply Chain principles as a weapon for eliminating operational costs of service delivery. By applying first order principles like fewer suppliers (e.g, less logging frameworks) and image manifests (i.e., bill of materials) he will show how an organization can cut down on bugs and issue resolution times. John Willis will then cover how these principles fit like peanut butter and chocolate when used in an immutable delivery model based on Docker. This presentation was the third highest rated session at the 2015 Devops Enterprise Summit.
Continuous Delivery leveraging on Docker CaaS by Adrien BlindDocker, Inc.
At Societe Generale GBIS, time to market & quality matters; hence we do love continuous delivery. In this context, we’re considering the Container as a Service pattern: artifacts produced by the continuous integration chain would become self-sufficient “dockerized” application modules, onboarding both code and subsequent system requirements; then, a CaaS cloud would enable to host these containers. In this talk, I’ll present our usecase and current findings, considering both technical & operational aspects. We’ll talk about software factories, immutable IT, registries, containers configuration, API-driven infrastructure, DevOps roles shifts. Finally, we’ll discuss pros/cons of this solution toward regular IaaS and PaaS.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/29kVhDV.
Justin Cormack talks about the Docker unikernels build, ship and run pipelines and how the changes they are seeing lead to unikernels in production. Filmed at qconlondon.com.
Justin Cormack is a developer at Docker, working on unikernels.
DCSF19 Transforming a 15+ Year Old Semiconductor Manufacturing EnvironmentDocker, Inc.
Jeanie Schwenk, Jireh Semiconductor
Jireh Semiconductor bought the Hillsboro fab and its contents including the manufacturing tools, servers, and software running the fab. The previous company had been winding down for years so server and software upgrades had not been on the radar for some time. In 2011 Jireh became the proud owner of the building, the tools, and its legacy software running on servers that weren’t even made any more.
That's when I started my adventure with Jireh in September 2016 with a charter to modernize the applications running the manufacturing facility process and move them into VMs with no impact to manufacturing. That led me down a path of exploration and questions. “What’s the goal?”
The goal wasn't to move to VMs. It was to become independent of the aging PA-RISC architecture, bring forward the ~230 java 1.4.2 applications (10-15 years old), scale to allow increased the load on the software and hardware in order to ramp the factory output to numbers never seen previously. And do it without manufacturing downtime.
The solution included a transition from waterfall and silo development to agile scrum. Rather than simply migrating to VMs, it became obvious the lynch pin for a successful software transition with the required uptime, flexibility, and scalability was Docker Enterprise.
Join me for this session where I'll talk about my journey modernizing 15+ year old applications and infrastructure at Jireh.
Don’t have a Meltdown! Practical Steps for Defending Your AppsDocker, Inc.
Security is a key concern for application developers and operations teams, as well as security professionals. Have I done enough? What do I need to do in the face of new threats like Meltdown and Spectre? What happens when the next big issue comes along? What should my priorities be? How do containers help?
In this talk we’ll demonstrate some common attacks live, and show how you can effectively defend your container deployment against them, using a combination of best practices, configuration, and tools.
Taking inspiration from highlights of the OWASP Top 10, and other high profile exploits and attacks, in this talk we will look at risks and preventative measures related to:
- authentication
- injection
- updates
- sensitive data
- configuration
By the end of the talk you should understand the most important security risks in your applications, and how to go about mitigating them.
Container technology is shaping the future of software development and is causing a structural change in the cloud-computing world. Developers are embracing container technology and enterprises are adopting it at an explosive rate. Containers are portion of "IT" in technology as they're a very powerful tool which streamline your development and ops processes, save company's money & make life for developers much easier.
Node.js Rocks in Docker for Dev and OpsBret Fisher
DockerCon 2019 session
Learn the best practices of managing Node and JavaScript projects when developing, testing, and operating containers from Docker Captain Bret Fisher, who's been building and deploying Node apps in containers since the early days of the Docker project.
This session will take you on a journey, starting with local development of Node and js-specific projects and how to optimize your Docker Desktop and Compose configs for "the best of both worlds" with js and Docker. You'll see examples of cutting edge features like macOS mind-mount performance enhancements, and multi-stage image targeting.
Then Bret will walk you through examples of optimizing your builds, testing, and CI/CD of Node with new features like test stages in multi-stage builds.
Finally, you'll get some examples around Node in production orchestration, and how you can optimize your cluster updates for zero-downtime scenarios on Kubernetes and Swarm using Node connection management techniques.
Node apps rock in containers, so come join Bret for a fun ride through the best parts and learn solutions for the problems that you'll need to solve along the way.
Docker Practice in Alibaba Cloud by Li Yi (Mark) & Zuhe Li (Sogo)Docker, Inc.
China is the biggest emerging market for Cloud computing, with strong momentum in both business and technology. Docker is starting to get adopted rapidly by Chinese organizations in their development and production environments. As the leading cloud provider in China, Alibaba Cloud commits to open container technologies, and provides Aliyun Container Service as the open platform for cloud native applications.
In this session, we will share use cases and experiences learned from Docker practices in Alibaba Cloud. It will cover topics including container technology in life-cycle process of Micro-Service applications; highly scalable, distributed Docker registry for global image distribution, and more. Join us to hear how to align customer's business needs with cutting edge container technologies.
"Container technologies such as Docker are rapidly becoming the de-facto way to deploy cloud applications, and Java is committed to being a good container citizen. This session will explain how OpenJDK fits into the world of containers, specifically how it fits with Docker images and containers.
The session will focus on the production of optimized Docker images containing a JDK. We will introduce technologies such as jlink, that can be used to reduce the size of the created image. The session will explain Alpine/musl support for an effective image and runtime. The session will also talk about and the inclusion of Class Data Sharing (CDS) archives and Ahead of Time (AOT) shared object libraries for improving startup time.
The attendees will learn about the recent work that has gone into OpenJDK for interacting with container resource limitations."
Docker for .NET Developers - Michele Leroux Bustamante, SollianceDocker, Inc.
Millions of developers use .NET to build high performance apps, from Enterprise to hobbiests. Docker enables .NET developers to build containerized applications that can be deployed natively to Windows or Linux. Windows containers support applications that leverage the full .NET Framework. And with AspNetCore on Linux developers can target both Linux-based Docker containers or Windows containers. In both cases you can develop your applications on Windows using your favorite .NET developer tools - then build Docker images and run them as containers on Windows Server or Linux machines. This session in this session, you will learn how to build or migrate full .NET Framework applications and deploy them as Windows Containers. Then you will learn to build AspNetCore applications that can target either Windows or Linux containers, without any changes to your code. Topics covered include - Common considerations as you work locally - Running local Docker containers, and preserving as environment settings - Unit testing - Choosing the right base image - Working with IIS or Kestrel - Composing multiple containers - Working with a Docker Registry
Networking in Docker EE 2.0 with Kubernetes and SwarmAbhinandan P.b
The presentation is about the operator goal from networking perspective and how it is influenced by both swarm and kubernetes on the Docker EE platform
Develop and deploy Kubernetes applications with Docker - IBM Index 2018Patrick Chanezon
Docker Desktop and Enterprise Edition now both include Kubernetes as an optional orchestration component. This talk will explain how to use Docker Desktop (Mac or Windows) to develop and debug a cloud native application, then how Docker Enterprise Edition helps you deploy it to Kubernetes in production.
Talking TUF: Securing Software DistributionDocker, Inc.
The Update Framework (TUF) secures new or existing software update systems by providing a specification and library that can be flexibly and universally integrated or natively implemented. The update procedure is notoriously susceptible to malicious attacks and TUF is designed to prevent these and other updater weaknesses.
Docker's Notary project integrates the Go implementation of TUF with Docker Content Trust to verify the publisher of Docker images.
https://github.com/theupdateframework/tuf
"The majority of the container security discussion revolves around containers on Linux while the security of containers in Windows is left as a mystical black box. In this talk we'll peel back the curtain and dive in to how Windows containers are secured.
Does Windows have namespaces? How does it compose the layers of a container's filesystem? How does it limit resource usage of containers? I heard there's a Hyper-V isolation thing, what's that about?
We'll answer all these questions and more!"
Shipping and Shifting ~100 Apps with Docker EEDocker, Inc.
Alm. Brand has been successfully running greenfield Dockerized workloads in production for nearly two years. However, enterprises are known for their very long-lived and ill-maintained monoliths which are not easily rewritten or relocated, and we have our fair share of those. Focusing on freeing up precious ops time, Alm. Brand ventured to transform all legacy WebLogic apps to run in Docker. The move has provided a golden opportunity to restructure our platform, and has helped push the DevOps agenda in what is probably the oldest company yet to present at DockerCon (1792).
Through an awesome live demo, we will demonstrate:
* as much as we can of our entire working production setup, boiled down to a Swarm stack file;
* how we are able to convert and deploy applications during office hours, unbeknown to the end users;
* how to smoothly and transparently handle the transition of users to the Dockerized environment;
* how we have streamlined monitoring, logging and deployment across greenfield and legacy apps
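A Swarm stack file of the kind demoed can be boiled down to a few lines; a minimal sketch (service name, image, and registry host are placeholders, not Alm. Brand's actual setup):

```shell
# Write a minimal Swarm stack file; rolling updates let containers
# be replaced during office hours without users noticing.
cat > stack.yml <<'EOF'
version: "3.3"
services:
  legacy-app:
    image: registry.example.com/weblogic-app:1.0   # placeholder image
    ports:
      - "8080:8080"
    deploy:
      replicas: 2
      update_config:
        parallelism: 1   # replace one task at a time
EOF

# Deployed to the swarm with:
#   docker stack deploy -c stack.yml legacy
```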
OSCON: Unikernels and Docker: From revolution to evolution - Docker, Inc.
with Richard Mortier and Anil Madhavapeddy
Unikernels are a growing technology that augment existing virtual machine and container deployments with compact, single-purpose appliances. Two main flavors exist: clean-slate unikernels, which are often language specific, such as MirageOS (OCaml) and HaLVM (Haskell), and more evolutionary unikernels that leverage existing OS technology recreated in library form, notably Rump Kernel used to build Rumprun unikernels.
A CI/CD Pipeline to Deploy and Maintain OpenStack - cfgmgmtcamp2015 - Simon McCartney
An intro to the pipeline and related tools we built for building and maintaining a package-based OpenStack installation via CI/CD, with realistic, portable multi-machine development environments.
Enhancing the application development process in all its phases—building, scaling, shipping, deploying and running—plays a vital role in today’s competitive IT industry by shortening the time between writing code and running it.
Docker allows simple environment isolation and repeatability so that we can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple.
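That create-once, run-anywhere loop can be sketched in two steps (image and file names are illustrative):

```shell
# Package the run-time environment once...
package_app() {
  docker build -t myapp:1.0 .                       # app + all dependencies
  docker save myapp:1.0 | gzip > myapp-1.0.tar.gz   # portable artifact
}

# ...then reproduce the same environment on any other Docker host.
run_app_elsewhere() {
  gunzip -c myapp-1.0.tar.gz | docker load
  docker run --rm myapp:1.0
}
```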
In this presentation we’ll provide a basic introduction: what is Docker, and why use it? We’ll then demonstrate how to use Docker to compose and deploy an application.
Thanks to tools like Vagrant, Puppet/Chef, and Platform-as-a-Service offerings like Heroku, developers are extremely used to being able to spin up a development environment that is the same every time. What if we could go a step further and make sure our development environment is not only using the same software, but is 100% configured and set up like production? Docker will let us do that, and so much more. We'll look at what Docker is, why you should look into using it, and all of the features that developers can take advantage of.
The DevOps Tool Kit: Building the Software Supply Chain - Mark Miller
This was presented as a lightning talk at DevOpsDays Boston 2015. It is a short overview to introduce Software Supply Chain principles through the examination of Reference Architectures.
Avi Cavale presentation at DevOpsDays India, September 2015
2014 was the year of Docker. The container-based world exploded on the scene with the promise to reinvent how you think about distributed applications. Continuous Integration/Continuous Delivery in support of DevOps is proving to be a successful early use case for a container-based architecture. Learn how Shippable has designed its Continuous Integration/Continuous Delivery system by fully leveraging containers and a microservices architecture, resulting in reduced Dev/Test cycle times and lower infrastructure costs.
PaaS Design & Architecture: A Deep Dive into Apache Stratos - WSO2
The design and architecture of Stratos present some unique advantages to users. The multi-tenancy model, which allows high tenant density within a deployment, is a key advantage. The ability to control IaaS resources per cloud, per region, and per zone paves the way to easily achieve high availability and disaster recovery. Multi-factor auto scaling, dynamic load balancing, and cloud bursting are some of the other noteworthy differentiators of the Stratos PaaS. This session will highlight the advantages of using Apache Stratos (Incubating) as your PaaS framework.
I tried to dockerize my app but I had to PaaS - Jorge Morales
In this talk I describe how I tried to run my application in Docker containers in production, how difficult and painful the process was, and why a PaaS platform helped me with many things I hadn’t thought of before.
Build, Publish, Deploy and Test Docker images and containers with Jenkins Wor... - Docker, Inc.
This lightning talk will show you how simple it is to apply CI to the creation of Docker images, ensuring that each time the source is changed, a new image is created, tagged, and published. I will then show how easy it is to then deploy containers from this image and run tests to verify the behaviour.
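The flow described might look like this as a single hypothetical CI job body (image name, port, and health endpoint are placeholders):

```shell
# Rebuild and tag on every source change, smoke-test a running
# container, and publish only on success.
ci_build_and_verify() {
  tag=$(git rev-parse --short HEAD)
  docker build -t myapp:"$tag" .
  docker run -d --name myapp-test -p 8080:8080 myapp:"$tag"
  curl -fsS http://localhost:8080/health    # verify behaviour of the running container
  docker rm -f myapp-test
  docker push myapp:"$tag"
}
```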
DCEU 18: Building Your Development Pipeline - Docker, Inc.
Oliver Pomeroy - Solution Engineer, Docker
Laura Frank Tacho - Director of Engineering, CloudBees
Enterprises often want to provide automation and standardisation on top of their container platform, using a pipeline to build and deploy their containerized applications. However this opens up new challenges… Do I have to build a new CI/CD Stack? Can I build my CI/CD pipeline with Kubernetes orchestration? What should my build agents look like? How do I integrate my pipeline into my enterprise container registry? In this session full of examples and “how-to”s, Olly and Laura will guide you through common situations and decisions related to your pipelines. We’ll cover building minimal images, scanning and signing images, and give examples on how to enforce compliance standards and best practices across your teams.
DCSF 19 Building Your Development Pipeline - Docker, Inc.
Oliver Pomeroy, Docker & Laura Tacho, Cloudbees
Enterprises often want to provide automation and standardisation on top of their container platform, using a pipeline to build and deploy their containerized applications. However this opens up new challenges; Do I have to build a new CI/CD Stack? Can I build my CI/CD pipeline with Kubernetes orchestration? What should my build agents look like? How do I integrate my pipeline into my enterprise container registry? In this session full of examples and how-to's, Olly and Laura will guide you through common situations and decisions related to your pipelines. We'll cover building minimal images, scanning and signing images, and give examples on how to enforce compliance standards and best practices across your teams.
Webinar: From Development to Production with Docker and MongoDB - MongoDB
In this talk we review what Docker is and why it's important to Developers, Admins and DevOps.
We also cover the following topics:
- Using Docker to Orchestrate a multi container application (Flask + MongoDB)
- Injecting HAProxy and other production requirements as we deploy to production
- Scaling the Web and MongoDB cluster to grow to meet demand
This presentation includes an interactive demo showcasing the core Docker components (Machine, Engine, Swarm and Compose) as well as some of Docker's new components (libnetwork, runC) from the experimental branch along with MongoDB. We hope you will see how much simpler Docker can make building and deploying multi-node applications.
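A minimal Compose file for the Flask + MongoDB pairing might look like this (service names, ports, and the Mongo tag are illustrative):

```shell
# Two-service application: a Flask web tier and a MongoDB back end.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  web:
    build: .                  # assumes the Flask app has its own Dockerfile
    ports:
      - "5000:5000"
    environment:
      MONGO_URL: mongodb://mongo:27017/app
    depends_on:
      - mongo
  mongo:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db   # keep data across container restarts
volumes:
  mongo-data:
EOF

# Bring it up, then grow the web tier to meet demand:
#   docker-compose up -d
#   docker-compose scale web=3
```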
Docker in Production: How RightScale Delivers Cloud Applications - RightScale
Combining Docker, cloud infrastructure, and continuous integration and delivery practices can create a highly automated and efficient way to get new applications and features to market. The RightScale development team has been using Docker from development to continuous integration, and now the operations team has taken Docker into the production environment.
The Docker in Production: How RightScale Delivers Cloud Applications webinar will cover:
Approach and use case for adopting Docker
How RightScale has adopted Docker for development, CI, and production
Overcoming technical and process challenges
The RightScale process before and after Docker
Benefits for both developers and operations teams
Developer Experience Cloud Native - From Code Gen to Git Commit without a CI/... - Michael Hofmann
Developing cloud native applications brings a lot of complexity for developers. Without tools to compensate for this complexity, you will not become very efficient. Additionally, cloud developers often suffer rising frustration fighting these problems.
Before I push my code into Git, I want to test different things in my cloud environment. Therefore it is essential to have a fast and easy round trip. A classic round trip starts with writing or generating code, then creating a Docker image, deploying it into Kubernetes, and testing or remote-debugging the application in Docker or in Kubernetes. Without some elementary tools, this round trip will not be very fast or simple, and will therefore be error prone.
This lab will show you some open source tools that make your life as a developer easier. Short demos will demonstrate the simple handling of these tools. The starting point is the generation of a MicroProfile and a Spring Boot application. By using the different tools (e.g. Helm, shell completion, kubectl cp, Ksync, Stern, Kubefwd, Telepresence, …) on these applications, the complete round trip will be shown. Most of these tools can also be used with other programming languages. Every tool works on its own, which makes it easy to switch between them.
Finally, you will get an evaluation of these tools and an outlook on tools that are more focused on larger developer teams.
Slides from DockerCon SF 2015 –
Docker at Lyft: Speeding up development w/ Matthew Leventi
Talk description: Learn how Docker enables Lyft to increase developer productivity across our engineering organization. We'll go through a local development model that decreases our developer onboard time, and keeps our teams focused on delivering product goals. We'll also talk about how we use Docker to test changes to our servers and allow QA testing of our mobile clients. You'll come out of the talk with techniques and reasons for integrating Docker not just in the cloud but also onto developers' laptops.
Real-World Docker: 10 Things We've Learned - RightScale
Docker has taken the world of software by storm, offering the promise of a portable way to build and ship software - including software running in the cloud. The RightScale development team has been diving into Docker for several projects, and we'll share our lessons learned on using Docker for our cloud-based applications.
Building Distributed Systems without Docker, Using Docker Plumbing Projects -... - Patrick Chanezon
Docker provides an integrated and opinionated toolset to build, ship and run distributed applications. Over the past year, the Docker codebase has been refactored extensively to extract infrastructure plumbing components that can be used independently, following the UNIX philosophy of small tools doing one thing well: runC, containerd, swarmkit, hyperkit, vpnkit, datakit and the newly introduced InfraKit.
This talk will give an overview of these tools and how you can use them to build your own distributed systems without Docker.
Patrick Chanezon & David Chung, Docker & Phil Estes, IBM
PuppetConf 2017: What’s in the Box?! - Leveraging Puppet Enterprise & Docker - ... - Puppet
“Docker, Docker, Docker.” It’s a phrase we hear often, but what are containers, what can they be used for, and why should you know more about them? In this session, Grace (Puppet) and Tricia (AppDynamics) will introduce attendees to Docker and help them build and deploy their first container with Puppet. They will leverage the docker_image_build module from the Puppet Forge and take attendees through the proper workflow for coupling Docker and Puppet together. The session will focus on how to use some of the newest Docker features, such as multi-stage build files and password stores within Docker so you can pass "secrets" to a swarm for login credentials. The goal is to provide newcomers with a working proficiency of how to get started deploying containers using Puppet as their automation tool.
What is Docker | Docker Tutorial for Beginners | Docker Container | DevOps To... - Edureka!
This DevOps Docker tutorial on what Docker is (Docker Tutorial Blog Series: https://goo.gl/32kupf) will help you understand how to use Docker Hub, Docker images, Docker containers and Docker Compose. It explains Docker's architecture and Docker Engine in detail. The tutorial also includes a hands-on session, by the end of which you will learn to pull a CentOS Docker image and spin up your own Docker container. You will also see how to launch multiple Docker containers using Docker Compose. Finally, it covers the role Docker plays in the DevOps life-cycle.
The hands-on session is performed on a 64-bit Ubuntu machine with Docker installed.
Similar to SDLC Using Docker for Fun and Profit (20)
2. Your Presenters Today…
Dan Elder
Linux Services Manager, Novacoast
delder@novacoast.com
800.949.9933 x1337
Ryan Trauntvein
Infrastructure and DevOps Lead
rtrauntvein@novacoast.com
+1 805.568.0171 x4805
3. Novacoast, Inc.
Who we are…
‣ IT Services & Development
‣ 4 Internal ops engineers
‣ 85 Field Engineers
‣ 40 Developers
‣ 45 Sales / Admin
‣ Internal User Base 170+
6. Pre-DevOps
From a Novacoast Ops Team Perspective
‣ Code is given to the Developer
‣ Developer works on “Dev server”
‣ Developer hands off code to Ops
‣ Likely deployed manually
‣ Something is broken in Production
‣ Needs to be fixed in Production. Now.
7. DevOps
‣ Continuous integration (CI)
‣ Getting changes to users quickly, reliably, and securely.
‣ Many releases per day or hour.
‣ More confidence due to automated testing
‣ Portability
‣ Reproducibility
‣ (Too) many tools to choose from
Communication, collaboration and integration
12. Docker: Containers for everyone
‣ A platform for devs and ops to build, ship, and run application images
‣ Containers run on Linux hosts
‣ Dockerfiles to define images
‣ Version control for an app and its whole environment
‣ Official openSUSE images
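A minimal Dockerfile on the official openSUSE base might look like this (the package name and start command are illustrative, not our production file):

```shell
# The whole environment, version-controlled alongside the app.
cat > Dockerfile <<'EOF'
FROM opensuse/leap
RUN zypper --non-interactive install apache2
COPY htdocs/ /srv/www/htdocs/
EXPOSE 80
CMD ["/usr/sbin/start_apache2", "-D", "FOREGROUND"]
EOF

# Built and tagged with:
#   docker build -t myorg/opensuse-apache .
```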
13. CI and Docker builds
‣ Jenkins (Running in Docker)
‣ Merge / Pull request integration
‣ Run tests on code, and on running containers
‣ Merge request builder - Feedback dictates next step
‣ “Master” and “Prod” branches built and tagged
‣ Successful build pushes to Internal Docker Registry
https://github.com/timols/jenkins-gitlab-merge-request-builder-plugin
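The publish step at the end of a successful build can be sketched as follows (registry host and image name are placeholders):

```shell
# Tag the freshly built image for the internal registry and push it;
# Jenkins runs this only after the build and tests succeed.
publish_build() {
  branch=$1   # e.g. master or prod
  docker tag myapp:latest registry.internal:5000/myapp:"$branch"
  docker push registry.internal:5000/myapp:"$branch"
}
```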
14. Deployment
‣ We chose to go with Chef
‣ Provisions Docker Hosts
‣ Provisions Docker Containers on hosts
‣ Re-deploy (update) Containers as needed
‣ Configures AppArmor, and docker-bench
‣ Runs on a schedule, or triggered
https://github.com/bflad/chef-docker
https://github.com/opscode-cookbooks/chef-client
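The re-deploy step boils down to pull, compare, replace; a sketch of that logic with placeholder names:

```shell
# Pull the latest image; replace the running container only if the
# image ID actually changed (otherwise leave it untouched).
redeploy_if_updated() {
  image=$1
  name=$2
  old=$(docker inspect --format '{{.Image}}' "$name" 2>/dev/null)
  docker pull "$image"
  new=$(docker inspect --format '{{.Id}}' "$image")
  if [ "$old" != "$new" ]; then
    docker rm -f "$name"
    docker run -d --name "$name" "$image"
  fi
}
```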
15. Docker vs. VM: Overview
VM: emulates a computing environment, managed by a virtualization layer which translates requests to the underlying physical hardware.
Containers: operating system-level capabilities that make it possible to run multiple isolated Linux containers on one control host.
19. Docker Registry and the Docker Hub
‣ Docker image version control
‣ Push & Pull Images
‣ Image Tags
‣ Self Hosted (Private): Portus by SUSE, or Docker’s own
‣ Private 3rd Party (quay.io)
‣ Public / Private Official + Trusted Builds: hub.docker.com
20. Trusted builds using the Docker Hub
‣ Built on Docker’s servers
‣ Linked to Github or Bitbucket repository
‣ Dockerfile & Code audit visibility
‣ Per branch builds
‣ docker pull
‣ Web hooks
‣ Private repos for sale
36. Put it all together
‣ Critical vulnerability discovered (e.g., Shellshock)
‣ Vendor patch is mirrored automatically to local build server
‣ Based on severity rating, automatic Docker image rebuild is triggered
‣ New images are run through automated testing
‣ Validated images are pushed to prod, load balancer picks them up
‣ Admins receive email notifying them of automatic deployment
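The steps above can be condensed into a hypothetical script (the severity threshold, image names, and notification address are all placeholders):

```shell
# Severity-gated rebuild: the mirror job has already synced the
# vendor patch, so a --no-cache rebuild picks it up.
respond_to_cve() {
  severity=$1   # e.g. a CVSS score
  if [ "$severity" -ge 7 ]; then
    docker build --no-cache -t myapp:patched .
    docker run --rm myapp:patched /run-tests.sh   # automated validation
    docker push myapp:patched                     # prod hosts pull the new image
    echo "myapp redeployed automatically" | mail -s "Auto-deploy: myapp" admins@example.com
  fi
}
```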
37. Security Benefits of Docker and DevOps
‣ No access to production environment (SSH, CLI, etc…)
‣ Stateless nature of environment mitigates against APT
‣ Minimal images eliminate majority of attack vectors
‣ Deployment methodology allows rapid response to threats
‣ Full audit trail for entire lifecycle of deployment
‣ Breaks down communication barriers between Dev, Ops, and Security
‣ Automation ensures consistency and mitigates human error
‣ AppArmor and/or SELinux to confine applications at kernel level
38. Beyond the simple demo…
Other considerations
‣ Further automated or manual testing within the built image prior to deployment
‣ Automated Deployment / Clustering
‣ Using another set of VCS and CI tools
39. How we can help
‣ Docker workflow consulting and training
‣ Private Registry configuration
‣ Application “Dockerization”
‣ Deployment, monitoring and management
41. Give it a Spin
Try our demo out at:
‣ GitHub: https://github.com/novacoast/opensuse-apache-docker
‣ Docker Hub: https://registry.hub.docker.com/u/novacoast/opensuse-apache
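On any Docker host, the demo image from the Docker Hub link above can be tried with:

```shell
# Pull the demo image and serve it on port 80
# (requires a running Docker daemon, hence wrapped in a function).
try_demo() {
  docker pull novacoast/opensuse-apache
  docker run -d -p 80:80 novacoast/opensuse-apache
}
```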
Editor's Notes
Intro
We want to share with you a bit of background on Novacoast, and how we came to use Docker in our production and development environment and workflows.
Novacoast is an IT Professional services and product development company that is Headquartered in Santa Barbara. We are a long time partner of the Attachmate group, along with Novell, NetIQ, and SUSE. We manage and consult on large linux, identity management and security projects.
This talk focuses on Docker and DevOps in Novacoast’s internal infrastructure.
Our userbase is Novacoast Staff - broken into:
Roughly 100 total technical staff - A Development team of 25, and about 75 field engineers / consultants nationwide.
Sales and Administrative staff of about 40, also not listed are users from our Staffing services, which we also run internal apps for.
For some context, here is a quick overview of our internal system breakdown by OS, translating to roughly 75+ or so services that we provide to our user base.
Novacoast ops was very much the traditional IT shop. Manually building and maintaining ~100 servers for applications and services. Some servers around for years, built and updated manually. Black boxes at this point, there is no way for us to know all of the changes that have been made, who has had access, and how to rebuild it again the exact same way.
This posed a problem for our developers, who had to resort to creative means to reproduce issues, which ultimately led to the “It worked in dev, but is broken in production” problem.
One of the analogies in the DevOps community is that in the “old style” of IT, people make manual changes to their servers, and you end up with servers that are like special snowflakes.
Manually configuring systems, years down the road, re-creating the exact same server will be nearly impossible, just as no two snowflakes are alike. And because it takes a miracle to really re-create a production server, you must do everything in your power to protect it from changes that can break it.
*Developer may get access to version control, or sent a tarball
*Kind of a combined dev environment & testing server, not managed well
*Hopefully in version control, probably a tarball
*Likely will be staying late after hours to deploy, schedule downtime
*install, ssh to the system, run install docs if provided. Maybe a git pull if possible
*Something broke because the dev or qa server is configured differently (a snowflake)
*Now the app is live and receiving traffic, so need to fix it ASAP!
Moving forward a few years, we started discussing and reading about this “DevOps” movement. Things like automation, rapid deployment, and configuration management & auditing were all things we wanted to improve upon.
The ability to quickly, reliably, and accurately reproduce systems between dev and production was something we were not doing well.
The ability to terminate a server with no fear of losing some undocumented configuration also stuck out to us on the ops team.
CI is the practice, in software engineering, of merging all developer working copies with a shared mainline several times a day.
The old “traditional” way of doing IT makes special snowflakes, this new method of DevOps IT help realize the goal of disposable, “carbon copy” systems.
New DevOps tools come out every day, there are almost too many options. Define a process, then pick the right tools for the job. Just like building a house, you start with a blueprint, then select the correct materials and tools to build it the way you want.
Let’s take a look at the components and how they fit into our blueprint.
The first component, is version control. It is the focal point for collaboration, and is a building block for the rest of the workflow.
* Many options here, use what you are comfortable and good at. We prefer Git.
We felt it was important to have integrated issue tracking. Easy for anyone (technical or non-technical) to submit their issues.
More visibility into what is changing and what needs attention, even if it’s not something we’re working on (better transparency).
Allows open contributions without risk of merging mystery code that could potentially break things or be insecure.
Protected branches and forking are useful because of pull requests. Control over master branch, code review can happen here.
DOCKER - Now we’re going to talk about Docker, the one constant in this whole equation.
The next piece, and the one constant in our equation.
Docker containers are the intermodal shipping containers of the development world; they are standardized in a way that allows them to be shipped using any one of many different methods, but ultimately the contents of the container arrive at their destination in the same state or configuration as they started.
What is Docker?
Essentially a wrapper around Linux containers, which have existed for a while. Makes them easier to use.
Like a very minimal Linux virtual machine with a focused purpose.
* Dockerfile = Text document that contains commands to build a Docker image.
* Image = The environment and application in a portable Docker format.
* Container = A running (or exited) image.
What are the advantages of using Docker?
Version controlled - Ability to make and test an image locally, push to a central repository, then pull and run on another system.
Run anywhere with the assurance that it will be the same on any platform.
Only dependency is Docker.
Hands-off, consistent approach to ensuring quality code while avoiding the pitfalls of manual checks.
Many options here; go with what you are comfortable with and what provides the features you need. We went with Jenkins because it is flexible and has an easy learning curve.
Works by triggering builds and tests when, e.g., a merge request is submitted, and giving feedback.
Can stop bugs or problems before they make it beyond the pull request. If it doesn’t pass tests, it won’t be accepted.
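For the demo project, the CI check amounts to a lint step like the following (the demo's build fails on a PHP lint error; this loop is a sketch of what the CI service runs, not Codeship's actual configuration):

```shell
# Lint every PHP source file; a non-zero exit fails the CI build,
# which blocks the pull request from being accepted.
for f in *.php; do
  php -l "$f" || exit 1
done
```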
Chef - We needed something agent-based, as all hosts are two-factor enabled for SSH (requires a key + a token).
One tool to handle Docker and non-Docker hosts (even Windows).
Redeploy does a pull, then compares the images; if a new image is received, the old one is stopped and the new one is started.
Handles all security configuration and distribution of secrets to containers at runtime.
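The redeploy step Chef performs can be sketched in shell (container and image names are illustrative; requires a Docker daemon):

```shell
# Record the currently deployed image ID, then pull the latest.
old=$(docker inspect --format '{{.Id}}' novacoast/opensuse-apache-docker)
docker pull novacoast/opensuse-apache-docker
new=$(docker inspect --format '{{.Id}}' novacoast/opensuse-apache-docker)

# If a new image was received, stop the old container and start a new one.
if [ "$old" != "$new" ]; then
  docker stop webapp && docker rm webapp
  docker run -d --name webapp -p 80:80 novacoast/opensuse-apache-docker
fi
```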
Containers are scoped to an instance of Linux. It might be a different flavor of Linux (e.g. an Ubuntu container on a CentOS host), but it’s still Linux.
Linux containers serve as a lightweight alternative to VMs, as they don’t require a hypervisor.
VMs have a broader scope: Windows, NetWare, etc.
Moving on from Docker, next we’ll discuss where automated building and testing fits into the workflow: the CI server triggers builds and tests whenever a pull request is submitted and gives feedback, stopping problems before they make it beyond the pull request.
DOCKER REGISTRY - Finally, we’re going to talk about using a Docker registry to hold and transport your images in a manner very similar to version control systems.
A central repository for images.
Much like you use Git or SVN for versioning code, this is for tracking the entire Docker image.
Easy to share images, and to reuse images as the base for your own (a single line in a Dockerfile).
Tagging allows version releases, and can be used alongside branches and tags in your version control system.
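For example, a release can be tagged and pushed alongside the default tag (the version number is illustrative):

```shell
# Tag the current image as a 1.0 release and push it to the registry.
docker tag novacoast/opensuse-apache-docker novacoast/opensuse-apache-docker:1.0
docker push novacoast/opensuse-apache-docker:1.0
```

Reusing that shared image in another build is then the single line `FROM novacoast/opensuse-apache-docker:1.0` at the top of a Dockerfile.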
Different ways to achieve this, depending on your data security requirements.
Public hub has special feature of “trusted builds” (segue to next slide)
Feature of the official Docker registry
Trusted builds:
Are built on known, trusted infrastructure
Can link to VCS to automate builds
Allow tracking of everything that went into your container.
Dockerfile
Link back to VCS repository
Can have different versions, which help facilitate releases
Are available to anyone (if you wish) with a single line of code or a single command
Can trigger other things when build completes
Integrates into further testing of the image
Private images
As we mentioned, just about all of the pieces in the workflow are interchangeable. Our demo will utilize Github, Codeship, the official Docker Hub, and a Docker hosting provider, tutum.co. With the exception of Tutum, these are essentially free for public projects.
We chose these for this first demo due to their simplicity and public availability. It is quite easy to swap out pieces for self-hosted solutions such as Gitlab, Jenkins, a Docker registry container, and on-prem or cloud hosting.
Now we will show a demo of this workflow.
Talk to us after if you want more information about using some of these other options.
Here is an example of a Docker workflow and a real-world demo using free services.
For this demo, we will be using Github, Codeship (CI), and Docker Hub, then pulling and running on Linux.
Starting with a Github project containing a Dockerfile and our web application, we will go through a pull request workflow with automated testing, automated docker image builds, and then pull and run our newly modified image.
Here we have our Github project containing our Dockerfile, and webapp code.
Notice the red “Failing” Codeship CI badge displayed on the page. In this demo we are going to make a pull request to fix that issue, and then have automated testing run before we accept the pull request, triggering an automated build of our image on the Docker Hub.
We have now gone ahead and forked the upstream project (by pressing the “Fork” button on the upper right corner).
You can see the namespace has changed from “novacoast/opensuse-apache-docker” to “rtrauntvein/opensuse-apache-docker”
We determined that the project’s Codeship test is failing a simple PHP lint check, due to an extra set of parentheses.
Within our forked repository, we will go ahead and fix the syntax issue, and commit our changes.
Now our forked copy of the repository shows that we are one commit ahead of the upstream “novacoast:master” project.
We will now create a “Pull request” to request that our changes be merged into the upstream project. Here a submitter can explain what their commit is changing, and why it should be accepted into the upstream project.
Once we have submitted our pull request, Codeship will run a “build”, which in our case means running the PHP lint checks again. We can click on the “Details” link to see our build status.
Here is the Codeship status for our test run, and we can see that no syntax errors have been detected.
Here we have gone ahead and accepted the pull request, which automatically merged our forked branch into the master branch of the upstream project.
Our Codeship status badge is now showing as green also!
Over on the official Docker Hub, we have an “Automated Build” repository set up which is linked to the Github project. We have configured the build to trigger whenever a change is pushed to the master branch of our project.
Clicking on a build ID will show the Dockerfile used, and the logging output for the build.
Once the build completes, we are able to use the “docker pull” command to download the image.
Then we run the container from our image with the “docker run” command, exposing port 80 to the host’s networking stack.
We are then able to browse to http://<docker host IP>/phpinfo.php and view our page.
Here are some other items that we don’t have time to demo, but are things to think about going beyond what we have shown.
Unit tests / integration tests on the images after being built.
Deploying using config management tools, or via a build system like Codeship or Jenkins
We used Github, Codeship, and the Docker Hub registry for the demo. We could just as easily use SVN, Jenkins, and a privately hosted registry. Go with what meets your needs and strengths.