This document provides an informal summary of Docker and related container technologies presented by Santosh Koti on May 29, 2015. It discusses Docker and how it helps standardize application packaging and runtimes. Microservices and how they decompose applications are covered. Distributed systems challenges like failures are also summarized. Kubernetes, an open source container cluster manager from Google, is introduced as a way to manage containers at scale across multiple clouds.
Business Insider puts Docker at no. 22 on its list of 40 tech skills that will land you a $120K-plus salary. A good factoid to know if you are driven by money. On the other hand, Docker's technology is just flat-out fun if you are a Linux techie, delight in good DevOps, or just like cutting-edge innovation. This talk covers both the fun and funds of Docker technology. You'll learn essential container concepts and see them in action. You'll also get practical insight for applying container technology at your company.
My (very brief!) presentation at Interzone.io on March 11, 2015. A more in depth exploration of these ideas can be found at http://www.slideshare.net/bcantrill/docker-and-the-future-of-containers-in-production video: https://www.joyent.com/developers/videos/docker-and-the-future-of-containers-in-production
2014, April 15, Atlanta Java Users Group (Todd Fritz)
Server to Cloud – convert a legacy platform to a micro-PaaS using Docker and related containerization technologies
Video: http://vimeo.com/94556976
The talk will begin with how to set up a local Docker development environment (Windows or Mac OS X), as Docker runs atop Linux. The basics of Docker will be examined, including how to use image repositories, along with a brief description of available UIs for managing Docker containers (Shipyard and DockerUI).
Next, example applications will be built for progressively more robust use cases and deployments, to demonstrate the power, flexibility, and scalability of containerization with Docker. The first example will discuss a simple two-container model to encapsulate a database and application layer, which will lead to demonstration and discussion of more robust deployments that include features such as service discovery, automatic load balancing, and abstractions to simplify linking of containers. The context of the talk will be how containerization enables architectural choice, scalability, and polyglot environments.
Docker and supporting technologies will be discussed to expose the multitude of tools within the ecosystem, such as Flynn, Serf (from the makers of Vagrant), CoreOS, Deis, HAProxy, and more.
Technologies that may be employed within containers during the demonstration include Java, Scala, Akka, Docker, Vert.x or Node.js, memcached, MySQL, and MongoDB.
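The simple two-container model described above can be sketched with plain Docker CLI commands of that era, using container links. This is a hedged illustration only: the image names, environment variables, and ports are placeholders, not taken from the talk.

```shell
# Database layer: official MySQL image assumed; the password is a placeholder.
docker run -d --name db -e MYSQL_ROOT_PASSWORD=changeme mysql

# Application layer: --link injects the db container's address into the app
# container's environment, so the app can reach it by the alias "db".
docker run -d --name app --link db:db -p 8080:8080 my-java-app:latest
```

Later Docker releases superseded `--link` with user-defined networks, but the talk's point stands either way: each layer lives in its own container, and an abstraction wires them together.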
This document discusses virtualization, containers, and hyperconvergence. It provides an overview of virtualization and its benefits, including hardware abstraction and multi-tenancy. However, virtualization also has challenges like significant overhead and repetitive configuration tasks. Containers provide similar benefits with less overhead by abstracting at the operating system level. The document then discusses how hyperconvergence combines compute, storage, and networking to simplify deployment and operations. It notes that many hyperconverged solutions still face virtualization challenges. The presentation argues that combining containers and hyperconvergence can provide both the efficiency of containers and the scale of hyperconvergence. Stratoscale is presented as a solution that provides containers as a service with multi-tenancy and SLA-driven performance.
This document summarizes the evolution of cloud computing technologies from virtual machines to containers to serverless computing. It discusses how serverless computing uses cloud functions that are fully managed by the cloud provider, providing significant cost savings over virtual machines by only paying for resources used. While serverless computing reduces operational overhead, it is not suitable for all workloads and has some limitations around cold start times and vendor lock-in. The document promotes serverless computing as the next wave in cloud that can greatly reduce costs and complexity while improving scalability and availability.
Virtual machines (VMs) allow users to run multiple operating systems on a single physical machine concurrently. VMs act like independent computers and have their own OS, applications, and storage. Containers provide operating system-level virtualization where the kernel runs directly on the host machine and containers share resources but are isolated. Common VM environments include VirtualBox, VMware, AWS, and OpenStack. Common container environments include LXC and Docker. While VMs are heavier, containers are lighter and more portable. The author currently prefers VMs due to industry use, customization, security, and ease of backups and recovery.
Containers vs. VMs: It's All About the Apps! (Steve Wilson)
There has been much hype about whether containers will replace virtual machines for use in cloud architectures. We'll look at the strengths of each technology and how they apply in real-world usage. By taking a top-down (application-first) approach to requirements analysis, versus a bottom-up (infrastructure-first) approach, we can see how unique architectures will emerge that can balance the needs of developers, DevOps, and corporate IT.
Docker's Remote API allows for implementations of Docker that are radically different than the reference Docker implementation. Joyent implemented the Docker Remote API in their SmartDataCenter product to virtualize the Docker host and allow Docker containers to run on any machine in their data center. This allows them to leverage capabilities of SmartOS like ZFS, DTrace and virtualized networking. By unlocking innovation down the stack, the Remote API is Docker's killer feature as it does not imply physical co-location of containers and is flexible enough to accommodate different implementations.
Discussing the difference between Docker containers and virtual machines (Steven Grzbielok)
This presentation is designed to give an overview of the differences between the two virtualization methods, providing the reader with the fundamental knowledge to decide, for each use case, which technology is more suitable.
Server to Cloud: converting a legacy platform to an open source PaaS (Todd Fritz)
This session discusses the process to move legacy applications "into the cloud". It is intended for a diverse audience including developers, architects, and managers. We will discuss techniques, methodologies, and thought processes used to analyze, design, and execute a migration strategy and implementation plan -- from planning through rollout and operations.
An important aspect of this is the necessity for technical staff to effectively communicate to mid-level management how these design decisions and strategies translate into cost, complexity and schedule.
Commonly used migration strategies, cloud technologies, architecture options, and low level technologies will be discussed.
The case will be made that investing in strategic refactoring and decomposition during the migration will reap the benefits of a modern, decoupled and simplified system.
The end game is alignment with and adoption of current best practices around PaaS, SaaS, SOA, event-driven architectures, and message-oriented middleware, at scale in the cloud, to provide quantifiable business value.
This talk will focus more on the big picture, at times delving into technical architectures and discussion of certain technologies and service providers.
Use of Containers (Docker) is evangelized for decoupling and decomposing legacy systems.
This document summarizes a presentation given at the Jenkins User Conference in Herzelia, Israel on July 5, 2012. It discusses how Israel Direct Insurance (IDI) has implemented continuous integration practices using tools like Jenkins, Subversion, Maven, Artifactory, and Jira. The presentation outlines IDI's development environment, processes, challenges, and how Jenkins has helped address issues with build synchronization, testing integration, and deployment.
With Urs Stephan Alder (CEO, Kybernetika), Michael Abmayer (Senior Consultant, Opvizor), and Dennis Zimmer (CEO, Opvizor), no fewer than three top-class speakers presented at the recent VMware@Night at Digicomp. Together they showed what impact containers in virtualized environments have on day-to-day operations as well as on performance and capacity planning.
Docker in particular is currently on everyone's lips and is the best-known and most widely used container technology. Containers are frequently run inside virtual machines and present a new challenge for VMware administrators as well as IT managers. Ensuring and monitoring performance, along with capacity planning that is as accurate as possible, are challenges that must be tackled promptly.
After a short introduction to containers, which also highlighted the differences from virtualization, the speakers turned to working with containers, using the example of Docker with VMware vSphere. Finally, performance monitoring and capacity planning were covered.
As Docker containers become the new standard, learn about what's catapulting them to the head of the pack and how to best protect their assets now and later with the help of Unitrends.
Lightweight virtualization uses container technology to isolate processes and their resources through namespaces and cgroups. Docker is a container management system that provides lightweight virtualization. Baidu chose Docker for its BAE platform because containers provide better isolation than sandboxes, with fewer restrictions and lower costs. Docker meets BAE's needs but was improved with additional security and resource constraints for its PaaS platform.
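The namespace and cgroup primitives mentioned above can be exercised directly on a Linux host, without Docker. The following is a minimal sketch, assuming a Linux machine with util-linux and cgroup v2 mounted at the usual path; the cgroup name `demo` and the 100 MiB limit are arbitrary placeholders.

```shell
# Start a shell in fresh PID and mount namespaces; inside it,
# `ps` only sees this shell's own process tree.
sudo unshare --pid --mount --fork --mount-proc /bin/sh

# Cap a process group's memory with a cgroup (cgroup v2 layout assumed):
sudo mkdir /sys/fs/cgroup/demo
echo 104857600 | sudo tee /sys/fs/cgroup/demo/memory.max   # 100 MiB cap
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs        # enroll this shell
```

Container runtimes such as Docker combine exactly these two mechanisms: namespaces for isolation, cgroups for resource limits.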
In this session we'll discuss some of Kubernetes' basic concepts and talk about the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling.
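To give a flavor of the model the session describes, a minimal declare-expose-scale interaction might look like the following. This is a hedged sketch: the deployment name `web`, the `nginx` image, and the replica count are placeholders, and exact flags vary across Kubernetes versions.

```shell
# Declare a Deployment, then scale it to three replicas; the scheduler
# places the pods across the cluster's nodes.
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Expose the replicas behind a single stable in-cluster address.
kubectl expose deployment web --port=80 --type=ClusterIP

# Observe the resulting pods.
kubectl get pods -l app=web
```

The point of the model is that you declare desired state (three replicas of `web`) and the system continuously reconciles reality toward it.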
node.js in production: Reflections on three years of riding the unicorn (bcantrill)
Node.js was initially challenging to use in production due to memory leaks and lack of debugging tools. Over three years, Joyent developed tools like DTrace probes, MDB for debugging core dumps, Bunyan for logging, and node-restify for building HTTP services to make node.js more reliable and observable in production. These tools helped Joyent successfully deploy many internal services using node.js and identify issues through postmortem analysis. Joyent continues working to improve node.js for production use.
The document discusses microservices and how Docker can be used with microservices. Some key points about microservices are that they are small, focused services that are highly decoupled and independent. Docker is well-suited for microservices because it allows for lightweight, fast containers that are composable. The document also covers how Docker can be used for development, testing, continuous delivery and production of microservices-based applications.
Karthik Gaekwad presented on containers and microservices. He discussed the evolution of DevOps and how containers and microservices fit within the DevOps paradigm by allowing for collaboration between development and operations teams. He defined containers, microservices, and common containerization concepts. Gaekwad also provided examples of how organizations are using containers for standardization, continuous integration and delivery pipelines, and hosting legacy applications.
DCSF19: Transforming a 15+ Year Old Semiconductor Manufacturing Environment (Docker, Inc.)
Jeanie Schwenk, Jireh Semiconductor
Jireh Semiconductor bought the Hillsboro fab and its contents including the manufacturing tools, servers, and software running the fab. The previous company had been winding down for years so server and software upgrades had not been on the radar for some time. In 2011 Jireh became the proud owner of the building, the tools, and its legacy software running on servers that weren’t even made any more.
That's when I started my adventure with Jireh in September 2016 with a charter to modernize the applications running the manufacturing facility process and move them into VMs with no impact to manufacturing. That led me down a path of exploration and questions. “What’s the goal?”
The goal wasn't to move to VMs. It was to become independent of the aging PA-RISC architecture, bring forward the ~230 Java 1.4.2 applications (10-15 years old), and scale to handle increased load on the software and hardware in order to ramp factory output to numbers never seen previously. And do it without manufacturing downtime.
The solution included a transition from waterfall and silo development to agile scrum. Rather than simply migrating to VMs, it became obvious the linchpin for a successful software transition with the required uptime, flexibility, and scalability was Docker Enterprise.
Join me for this session where I'll talk about my journey modernizing 15+ year old applications and infrastructure at Jireh.
Demystifying Containerization Principles for Data Scientists (Dr Ganesh Iyer)
Demystifying Containerization Principles for Data Scientists – an introductory tutorial on how Docker can be used as a development environment for data science projects.
Microservices involve breaking up monolithic applications into smaller, independent services that work together. This allows for increased efficiency through scaling individual services as needed, easier updates by updating smaller code bases, and improved stability if one service fails. Containers are well-suited for microservices due to their lightweight nature and ability to easily move workloads.
Containers, Docker, and Microservices: the Terrific Trio (Jérôme Petazzoni)
One of the upsides of microservices is the ability to deploy often, at arbitrary schedules, and independently of other services, instead of requiring synchronized deployments happening at a fixed time.
But to really leverage this advantage, we need fast, efficient, and reliable deployment processes. That's one of the value propositions of Containers in general, and Docker in particular.
Docker offers a new, lightweight approach to application portability. It can build applications using easy-to-write, repeatable, efficient recipes; then it can ship them across environments using a common container format; and it can run them within isolated namespaces which abstract the operating environment, independently of the distribution, versions, network setup, and other details of that environment.
But Docker can do way more than deploy your apps. Docker also enables you to generalize microservices principles and apply them to operational tasks like logging, remote access, backups, and troubleshooting. This decoupling results in independent, smaller, simpler moving parts.
Brief overview of the Docker ecosystem and the paradigm change it brings to development and operations processes. While Docker has lots of potential, it is still working to mature into a production system that has proved itself secure, stable, and viable.
This document provides an introduction to Docker. It begins with an overview of the shift from monolithic to microservices architecture and how Docker addresses the complexity problems that arise. Docker is described as a tool that packages applications and dependencies into standardized units called containers that can run on any Linux server. Key differences between Docker containers and traditional virtual machines are outlined. The document then covers Docker concepts like images, containers, and the Docker Engine. It demonstrates the Docker build, ship, and run workflow and introduces common Docker commands and tools.
This document discusses new capabilities in CFEngine 3, an advanced configuration management system. Key points include:
- CFEngine 3 is declarative, ensures desired state is reached through convergence, is lightweight using 3-6MB of memory, and can run continuously to check configurations every 5 minutes.
- It supports both new platforms like ARM boards and older systems like Solaris.
- Recent additions allow managing resources like SQL databases, XML files, and virtual machines in a code-free manner using the Design Center.
- CFEngine treats all resources like files, processes, and VMs as maintainable and ensures they self-correct through convergence to the desired state.
Docker concepts and microservices architecture are discussed. Key points include:
- Microservices architecture involves breaking applications into small, independent services that communicate over well-defined APIs. Each service runs in its own process and communicates through lightweight mechanisms like REST/HTTP.
- Docker allows packaging and running applications securely isolated in lightweight containers from their dependencies and libraries. Docker images are used to launch containers which appear as isolated Linux systems running on the host.
- Common Docker commands demonstrated include pulling public images, running interactive containers, building custom images with Dockerfiles, and publishing images to Docker Hub registry.
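The command sequence described above (pull a public image, run it interactively, build a custom image, publish it) looks roughly like this; the repository and tag names are placeholders for illustration.

```shell
docker pull ubuntu                   # pull a public image from Docker Hub
docker run -it ubuntu /bin/bash      # run an interactive container from it
docker build -t myuser/myapp:1.0 .   # build a custom image from a Dockerfile
docker push myuser/myapp:1.0         # publish the image to the Docker Hub registry
```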
Dev Ops Geek Fest: Automating the ForgeRock Platform (ForgeRock)
This document discusses using DevOps tools and practices with ForgeRock identity platforms. It begins by explaining the needs of different roles that DevOps aims to address. It then covers topics like elastic scaling of ForgeRock, different DevOps tools available, and ForgeRock's role in supporting DevOps. The document demonstrates configuring all ForgeRock components like OpenIDM, OpenAM, OpenDJ and OpenIG using Ansible. It discusses experiences using tools like Docker and the benefits of containers. Finally, it presents a proposal to run OpenAM on Kubernetes and leverage its capabilities for container orchestration.
Docker right now provides great value in the enterprise but the value proposition is more about developer productivity than scale-out.
Docker benefits include resource management, environment management, continuous delivery, developer and operations collaboration, and hybrid workloads.
Take care in its introduction. Consider Docker as just part of an overall toolkit and you don't need to go "full stack" to gain value.
Docker's Remote API allows for implementations of Docker that are radically different than the reference Docker implementation. Joyent implemented the Docker Remote API in their SmartDataCenter product to virtualize the Docker host and allow Docker containers to run on any machine in their data center. This allows them to leverage capabilities of SmartOS like ZFS, DTrace and virtualized networking. By unlocking innovation down the stack, the Remote API is Docker's killer feature as it does not imply physical co-location of containers and is flexible enough to accommodate different implementations.
Discussing the difference between docker dontainers and virtual machinesSteven Grzbielok
This presentation is designed to give an overview about differences of both virtualization methods to provide the reader with the fundamental knowledge to decide in each use case which technology is more suitable.
server to cloud: converting a legacy platform to an open source paasTodd Fritz
This session discusses the process to move legacy applications "into the cloud". It is intended for a diverse audience including developers, architects, and managers. We will discuss techniques, methodologies, and thought processes used to analyze, design, and execute a migration strategy and implementation plan -- from planning through rollout and operational.
An important aspect of this is the necessity for technical staff to effectively communicate to mid-level management how these design decisions and strategies translate into cost, complexity and schedule.
Commonly used migration strategies, cloud technologies, architecture options, and low level technologies will be discussed.
The case will be made that investing in strategic refactoring and decomposition during the migration will reap the benefits of a modern, decoupled and simplified system.
The end game being alignment and adoption of current best practices around PaaS, Saas, SOA, event-driven architectures, and message-oriented middleware, at scale in the cloud, to provide quantifiable business value.
This talk will focus more on the big picture, at times delving into technical architectures and discussion of certain technologies and service providers.
Use of Containers (Docker) is evangelized for decoupling and decomposing legacy systems.
This document summarizes a presentation given at the Jenkins User Conference in Herzelia, Israel on July 5, 2012. It discusses how Israel Direct Insurance (IDI) has implemented continuous integration practices using tools like Jenkins, Subversion, Maven, Artifactory, and Jira. The presentation outlines IDI's development environment, processes, challenges, and how Jenkins has helped address issues with build synchronization, testing integration, and deployment.
Mit Urs Stephan Alder (CEO Kybernetika), Michael Abmayer (Senior Consultant Opvizor) und Dennis Zimmer (CEO Opvizor) präsentierten gleich 3 hochkarätige Referenten an der vergangenen VMware@Night bei Digicomp. Sie zeigten zusammen auf, welche Auswirkungen Container in der Virtualisierung auf den täglichen Betrieb sowie die Performance- und Kapazitätsplanung haben.
Vor allem Docker ist derzeit in aller Munde und die bekannteste und meist genutzte Container-Technologie. Container werden vielfach in virtuellen Maschinen betrieben und stellen eine neue Herausforderung für VMware- Administratoren, aber auch IT-Manager dar. Gewährleistung und Überwachung der Performance sowie eine möglichst genaue Kapazitätsplanung sind Herausforderungen, denen man sich zügig stellen muss.
Nach einer kurzen Einführung in die Thematik der Container, in der auch die Unterschiede zur Virtualisierung aufgezeigt wurde, widmeten sich die Referenten dem Umgang mit Conteinern am Beispiel von Docker mit VMware vSphere. Zum Abschluss wurde die Performanceüberwachung und Kapazitätsplanung behandelt.
As Docker containers become the new standard, learn about what's catapulting them to the head of the pack and how to best protect their assets now and later with the help of Unitrends.
Lightweight virtualization uses container technology to isolate processes and their resources through namespaces and cgroups. Docker is a container management system that provides lightweight virtualization. Baidu chose Docker for its BAE platform because containers provide better isolation than sandboxes with fewer restrictions and lower costs. Docker meets BAE's needs but was improved with additional security and resource constraints for its PAAS platform.
In this session we'll discuss some of Kubernetes' basic concepts and talk about the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling.
node.js in production: Reflections on three years of riding the unicornbcantrill
Node.js was initially challenging to use in production due to memory leaks and lack of debugging tools. Over three years, Joyent developed tools like DTrace probes, MDB for debugging core dumps, Bunyan for logging, and node-restify for building HTTP services to make node.js more reliable and observable in production. These tools helped Joyent successfully deploy many internal services using node.js and identify issues through postmortem analysis. Joyent continues working to improve node.js for production use.
The document discusses microservices and how Docker can be used with microservices. Some key points about microservices are that they are small, focused services that are highly decoupled and independent. Docker is well-suited for microservices because it allows for lightweight, fast containers that are composable. The document also covers how Docker can be used for development, testing, continuous delivery and production of microservices-based applications.
Karthik Gaekwad presented on containers and microservices. He discussed the evolution of DevOps and how containers and microservices fit within the DevOps paradigm by allowing for collaboration between development and operations teams. He defined containers, microservices, and common containerization concepts. Gaekwad also provided examples of how organizations are using containers for standardization, continuous integration and delivery pipelines, and hosting legacy applications.
DCSF19 Transforming a 15+ Year Old Semiconductor Manufacturing EnvironmentDocker, Inc.
Jeanie Schwenk, Jireh Semiconductor
Jireh Semiconductor bought the Hillsboro fab and its contents including the manufacturing tools, servers, and software running the fab. The previous company had been winding down for years so server and software upgrades had not been on the radar for some time. In 2011 Jireh became the proud owner of the building, the tools, and its legacy software running on servers that weren’t even made any more.
That's when I started my adventure with Jireh in September 2016 with a charter to modernize the applications running the manufacturing facility process and move them into VMs with no impact to manufacturing. That led me down a path of exploration and questions. “What’s the goal?”
The goal wasn't to move to VMs. It was to become independent of the aging PA-RISC architecture, bring forward the ~230 java 1.4.2 applications (10-15 years old), scale to allow increased the load on the software and hardware in order to ramp the factory output to numbers never seen previously. And do it without manufacturing downtime.
The solution included a transition from waterfall and silo development to agile scrum. Rather than simply migrating to VMs, it became obvious the lynch pin for a successful software transition with the required uptime, flexibility, and scalability was Docker Enterprise.
Join me for this session where I'll talk about my journey modernizing 15+ year old applications and infrastructure at Jireh.
Demystifying Containerization Principles for Data ScientistsDr Ganesh Iyer
Demystifying Containerization Principles for Data Scientists - An introductory tutorial on how Dockers can be used as a development environment for data science projects
Microservices involve breaking up monolithic applications into smaller, independent services that work together. This allows for increased efficiency through scaling individual services as needed, easier updates by updating smaller code bases, and improved stability if one service fails. Containers are well-suited for microservices due to their lightweight nature and ability to easily move workloads.
Containers, Docker, and Microservices: the Terrific TrioJérôme Petazzoni
One of the upsides of Microservices is the ability to deploy often, at arbitrary schedules, and independently of other services, instead of requiring synchronized deployments at a fixed time.
But to really leverage this advantage, we need fast, efficient, and reliable deployment processes. That's one of the value propositions of Containers in general, and Docker in particular.
Docker offers a new, lightweight approach to application portability. It can build applications using easy-to-write, repeatable, efficient recipes; then it can ship them across environments using a common container format; and it can run them within isolated namespaces which abstract the operating environment, independently of the distribution, versions, network setup, and other details of this environment.
But Docker can do way more than deploy your apps. Docker also enables you to generalize Microservices principles and apply them to operational tasks like logging, remote access, backups, and troubleshooting. This decoupling results in independent, smaller, simpler moving parts.
A brief overview of the Docker ecosystem and the paradigm change it brings to development and operations processes. While Docker has lots of potential, it is still working to mature into a production system that has proven itself secure, stable, and viable.
This document provides an introduction to Docker. It begins with an overview of the shift from monolithic to microservices architecture and how Docker addresses the complexity problems that arise. Docker is described as a tool that packages applications and dependencies into standardized units called containers that can run on any Linux server. Key differences between Docker containers and traditional virtual machines are outlined. The document then covers Docker concepts like images, containers, and the Docker Engine. It demonstrates the Docker build, ship, and run workflow and introduces common Docker commands and tools.
This document discusses new capabilities in CFEngine 3, an advanced configuration management system. Key points include:
- CFEngine 3 is declarative, ensures desired state is reached through convergence, is lightweight using 3-6MB of memory, and can run continuously to check configurations every 5 minutes.
- It supports both new platforms like ARM boards and older systems like Solaris.
- Recent additions allow managing resources like SQL databases, XML files, and virtual machines in a code-free manner using the Design Center.
- CFEngine treats all resources like files, processes, and VMs as maintainable and ensures they self-correct through convergence to the desired state.
Docker concepts and microservices architecture are discussed. Key points include:
- Microservices architecture involves breaking applications into small, independent services that communicate over well-defined APIs. Each service runs in its own process and communicates through lightweight mechanisms like REST/HTTP.
- Docker allows packaging and running applications securely isolated in lightweight containers from their dependencies and libraries. Docker images are used to launch containers which appear as isolated Linux systems running on the host.
- Common Docker commands demonstrated include pulling public images, running interactive containers, building custom images with Dockerfiles, and publishing images to Docker Hub registry.
Dev Ops Geek Fest: Automating the ForgeRock Platform – ForgeRock
This document discusses using DevOps tools and practices with ForgeRock identity platforms. It begins by explaining the needs of different roles that DevOps aims to address. It then covers topics like elastic scaling of ForgeRock, different DevOps tools available, and ForgeRock's role in supporting DevOps. The document demonstrates configuring all ForgeRock components like OpenIDM, OpenAM, OpenDJ and OpenIG using Ansible. It discusses experiences using tools like Docker and the benefits of containers. Finally, it presents a proposal to run OpenAM on Kubernetes and leverage its capabilities for container orchestration.
Docker right now provides great value in the enterprise but the value proposition is more about developer productivity than scale-out.
Docker benefits include resource management, environment management, continuous delivery, developer and operations collaboration, and hybrid workloads.
Take care in its introduction. Consider Docker as just part of an overall toolkit and you don't need to go "full stack" to gain value.
The document summarizes Day 2 of DockerCon. It discusses Docker being ready for production use with solutions for building, shipping, and running containers. It highlights Docker Hub growth and improvements to quality. Business Insider's journey with Docker is presented, covering lessons learned around local development and using Puppet and Docker Hub. Future directions discussed include orchestration tools and image security.
Cloud 2.0: Containers, Microservices and Cloud Hybridization – Mark Hinkle
In a very short time cloud computing has become a major factor in the way we deliver infrastructure and services, and we've quickly breezed through the ideas of hosted cloud and orchestration. This talk focuses on the next evolution of cloud: containers (like Docker), microservices (the way Netflix runs their cloud), and hybridization (applications running on Mesos across Kubernetes clusters in both private and public clouds).
Understanding Docker and IBM Bluemix Container Service – Andrew Ferrier
The document provides an overview of Docker and IBM Bluemix Container Service. It begins with explaining what Docker is, how it differs from virtual machines, and why it is useful. It then discusses what IBM Bluemix is and how it provides different compute models including containers. The document explains that IBM Bluemix Container Service (formerly IBM Containers) is based on Docker and provides features like persistent storage, integrated monitoring and logging, and works with the IBM Bluemix DevOps toolchain. It notes that Container Service will evolve to use Kubernetes as the runtime engine to provide additional capabilities like declarative topologies, self-healing, and service discovery.
The document discusses the emerging "cloud-native" ecosystem centered around containers. It identifies key characteristics like containers as modular compute units and microservices architectures. Popular early solutions are mentioned like Docker, CoreOS, Kubernetes, and Mesosphere, but the ecosystem remains immature with issues around persistence, security, and lack of best practices. Standards are emerging that may drive further innovation, and containers still lack a "killer app" business case like virtualization had with consolidation. The document provides a taxonomy of the technology stack and lists many active companies and projects in different layers.
The challenge of application distribution - Introduction to Docker (2014 dec ... – Sébastien Portebois
Live recording with the demos: https://www.youtube.com/watch?v=0XRcmJEiZOM
Contents
- The application distribution challenge
- The current solutions
- Introduction to Docker, Containers, and the Matrix from Hell
- Why people care: Separation of Concerns
- Technical Discussion
- Ecosystem, momentum
- How to build Docker images
- How to make containers talk to each other, how to handle data persistence
- Demo 1: isolation
- Demo 2: real case - installing Go Math! Academy, tail -f containers, unit tests
node.js and Containers: Dispatches from the Frontier – bcantrill
This document discusses node.js and containers for microservices architectures. It describes how microservices architectures break large monolithic applications into many smaller independent services. Node.js is well-suited for microservices due to its lightweight footprint and asynchronous nature. Containers provide an efficient way to run many independent services on a single machine by virtualizing at the operating system level. The document outlines lessons learned from rewriting a cloud orchestration system called SmartDataCenter using a microservices and container-based architecture.
This document provides an overview of DevOps, including:
1) It discusses Garry, a product owner whose team had slow and manual software delivery processes until discovering DevOps.
2) DevOps is defined as a culture change involving shared responsibility between development and operations teams to deliver software rapidly through practices like continuous integration, delivery, infrastructure as code, and monitoring.
3) The benefits of DevOps are outlined as faster and more frequent software delivery, improved quality, security, and compliance through automation and visibility across the software lifecycle.
4) Examples of DevOps in action include deploying a web application directly from a development machine versus through an automated build and test process on a separate build box.
Containers, microservices and serverless for realists – Karthik Gaekwad
The document discusses containers, microservices, and serverless applications for developers. It provides an overview of these topics, including how containers and microservices fit into the DevOps paradigm and allow for better collaboration between development and operations teams. It also discusses trends in container usage and orchestration as well as differences between platforms as a service (PaaS) and serverless applications.
This document provides an introduction and overview of containers, Kubernetes, IBM Container Service, and IBM Cloud Private. It discusses how microservices architectures break monolithic applications into smaller, independently developed services. Containers are presented as a standard way to package applications to move between environments. Kubernetes is introduced as an open-source system for automating deployment and management of containerized applications. IBM Cloud Container Service and IBM Cloud Private are then overviewed as platforms that combine Docker and Kubernetes to enable deployment of containerized applications on IBM Cloud infrastructure.
Docker is the developer-friendly container technology that enables creation of your application stack: OS, JVM, app server, app, database and all your custom configuration. So you are a Java developer but how comfortable are you and your team taking Docker from development to production? Are you hearing developers say, “But it works on my machine!” when code breaks in production? And if you are, how many hours are then spent standing up an accurate test environment to research and fix the bug that caused the problem?
This workshop/session explains how to package, deploy, and scale Java applications using Docker.
In this session we introduce administrators to the concepts of Docker and discuss architectural decisions that will come into play when deploying containers. Although this session was originally presented as part of IBM's New Way To Learn initiative, it does not discuss any specific aspects of IBM technology.
Microservices, Spring Cloud & Cloud Foundry – Emilio Garcia
The document discusses microservices architecture, distributed system patterns, Spring Boot, Spring Cloud, and Cloud Foundry. It defines microservices and compares monolithic vs microservices styles. Key advantages of microservices include using the right tool for each job and easier scaling. Challenges include complexity and coordination. Distributed patterns like centralized configuration, service registry, dynamic routing, and circuit breakers help address challenges. Spring Boot and Spring Cloud simplify building microservices and provide tools that implement common patterns. Cloud Foundry is a PaaS that makes deploying microservices applications easy.
In an increasingly competitive marketplace, speed and business agility are paramount. And integration between customer-facing systems and back-end applications is more crucial than ever.
At this event, you'll learn how open source software built by communities, like Apache Camel, Docker, Kubernetes, OpenShift Origin, and Fabric8, can help organizations integrate services and establish effective continuous integration and delivery (CI/CD) pipelines.
The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.
In this slide we have discussed monolithic applications vs microservices, applicable scenarios for adopting the architectural pattern, when we need microservices, what the benefits are, and a case study of an e-commerce platform compartmentalizing its scope into different sample microservices with Docker implementations.
The full talk has been recorded here: https://youtu.be/tNlp7HS533g
Introduction to Docker and Kubernetes. Learn how these help you build scalable and portable applications in the cloud. It introduces the basic concepts of Docker and its differences from virtualization, then explains the need for orchestration and does some hands-on experiments with Docker.
Comparing and contrasting monolithic systems to Lego pieces (microservices) at the 50,000 foot view. In this presentation we will compare and contrast monolithic systems to microservices. We will then take a look at some of the downsides to microservices. And then we will discuss some strategies for building microservices.
1. Docker & Beyond
-Santosh Koti
Given on May 29, 2015 at Equinix
- An opinionated & very informal talk
with a bit of fun, TGIF
2. Disclaimer
• This presentation represents my views
& opinions only
• It does not represent either my current or
future employer’s opinions.
• Some of the content may be dated and may
no longer hold true.
• It is a very opinionated, informal & funny talk
• Hope you enjoy it.
3. Is there Any Agenda ?
• Hmm.. Probably:
– Docker
– Impact of Containers
– Microservices
– Distributed Systems
– Container Orchestration at Scale
– Demo
• “But…nothing is guaranteed in the transit of
time”
5. What is Docker ?
• Sometimes containers can be good too (Tomcat ?)
• GitHub Says:
• Big Idea: Ship code/app with its runtime
environment/dependencies
6. Some more details please ?
Standardizes Application Binaries:
Package/ Image Format
Enables static-sealed/self-sufficient binaries / No external dependencies
Standardizes Application Runtime:
Enables both process isolation & process containment
Process Virtualization
Built for Cloud:
Optimized for large scale application deployment
Misc:
Standardizes the old Java PR – Write once, run anywhere
(for any apps, not just Java)
Sometimes history repeats for the better, if not always
7. Ok, How it helps ?
• Avoids “But it works on my system!” syndrome
- (Truth is more portable now )
• Enables Apps Portability
• Lighter-weight than VMs
• Enables Micro-services
• Enables better DevOps
8. Built on the shoulders of…?
• Built on number of Linux features
• cgroups: Restrict resources a process can consume:
CPU, memory, disk IO
• chroot: Determines what parts of the filesystem a
user can see
• capabilities: Limits what a user can do: mount, kill,
chown…
• namespaces: Change a process’s view of the system:
network interfaces, PIDs, users, mounts…
9. How is it different from VM ?
Heavyweight (VM) vs. Lightweight (Container)
Sometimes Less is Better
10. Hmm.... Better than VMs ? *
• VM:
More Isolation, better guaranteed resources
Heavyweight
Can run a handful of VMs on a single host.
Takes minutes to start
• Docker :
Resource Isolation is not very strict
Lightweight
Can run 100s of containers even on a single host.
Starts in seconds
Extremely Popular
(Resonance with the advent of Cloud/Microservices ?)
12. Sounds Fishy, Things can’t be so good ?
Security: “If a user or application has superuser
privileges within the container, the underlying operating
system could, in theory, be cracked.” *
Can get stale after running for a long time ?
So, run your services as non-root whenever possible
and grant minimal privileges
14. With Docker, is it Customer Container First ?
“The real value of Docker is not technology, it’s getting people
to agree on something”
- Solomon Hykes, Docker Founder
But making people agree on something is a hard thing ?
It is not very different here either
So there is a Docker Governance Board to define the evolving container-first
standards, with contributions from Redhat, Google, Docker, IBM, Microsoft etc.
Move over VMware, Docker is the new interface/API/standard….!
But then too many chefs can spoil the dish
(Remember the JSR/JEE Committee ?)
17. Docker - Impact on Infrastructure Level ?
• Container-First OS
• CoreOS (Google Backed) , Project Atomic (Redhat)
• One from Intel (ClearOS ?, Sorry can’t remember )
• Rancher OS (Rancher Inc) , Photon (VMware)
• Container-First { Networking, Storage }
• Weave, CoreOS Flannel, Flocker
• Container Scheduling/Orchestration
- Docker Swarm, Spotify Helios
- Google Kubernetes, CoreOS Fleet
• Some More Containers: Rocket (CoreOS)
• Startups: CoreOS, Rancher, Kismatic, ClusterHQ etc
Good on Economy ? Not sure (First law of thermodynamics here ?)
But then every generation needs its own heroes.
18. CoreOS: Container-First OS ?
• Minimal OS to host your containers
• Automatic Updates
• Atomic Updates/Rollbacks
• No package manager like rpm, instead use docker
• In other words, Docker is the new package manager ?
• Built for Cloud
• Enables Immutable OS (read only root fs)
• First class support for linux containers :
Docker & Rocket etc.
20. Docker - Impact on Application Level ?
• Fosters Micro-services
• So applications are structurally decomposed/distributed
• Embrace Fundamentally Distributed Systems
• Emergence of Lean Stacks across languages
(JavaScript: NodeJS, Java: Spring Boot,
Scala: Akka HTTP, Python: Flask, Go: Goji etc.)
• Better Developer’s Health [NOCC ? ]
• As a result, better software is shipped ?
• So can we recall KF/LG’s taglines : Good Times / Life’s Good
22. Why Microservices ?
• Decompose application into set of simple-cum-small services
• Often focused on one business capability
• Independently deployable
• Loosely coupled & communicate over HTTP
• Can be developed using different languages/tools
• Easy to develop/debug/prototype
• Easier Developer Onboarding
• Lightweight
• Asynchronous communication
• Fundamentally, distributed in nature
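The decomposition above can be sketched with nothing but the Python standard library: a hypothetical "pricing" service owns one business capability on its own port, and any other service reaches it over plain HTTP. The service name, port, and payload are made up for illustration; real deployments would of course run each service in its own container.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    """A tiny 'pricing' service: one business capability, its own port."""
    def do_GET(self):
        # Treat the path as a SKU and answer with a JSON quote.
        body = json.dumps({"sku": self.path.strip("/"), "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

def start_service(handler, port):
    """Run an HTTP service in a background thread; returns the server."""
    server = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

pricing = start_service(PriceHandler, 8041)
# Another service (or a client) talks to it over plain HTTP: loose coupling.
with urllib.request.urlopen("http://127.0.0.1:8041/widget-1") as resp:
    quote = json.loads(resp.read())
print(quote)  # {'sku': 'widget-1', 'price': 9.99}
pricing.shutdown()
```

Because the contract is just HTTP+JSON, the caller never cares what language or runtime sits behind the port, which is what enables the polyglot environments mentioned above.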
23. Ok, But What is the dark side ?
• Code Duplication in Polyglot environment
• Complexity of Distributed Systems : fault tolerance,
unreliable networks, asynchronicity, transactions etc
• More Operational Overhead
• System Testing gets Harder
• “Asynchronous systems are great when we can decompose work into
genuinely separate independent tasks which can happen out of order at
different times.”
• When things have to happen synchronously/ transactionally in an
inherent asynchronous architecture, it gets more complex.
For more : http://highscalability.com/blog/2014/4/8/microservices-not-a-free-lunch.html
24. Microservices – Major Challenge ?
• Determining the right level of granularity for service
component is one of the biggest challenges
• Define granularity by Business Functionality ?
27. Microservices Pattern 3 - Messaging ?
• Non-RESTful communication with other services
• Asynchronous Messaging, Error Handling, Reliability,
whenever data is produced/consumed at different velocity
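A minimal sketch of this messaging pattern, with queue.Queue standing in for a real broker (RabbitMQ, Kafka, etc.): the producer fires events without waiting on the consumer, so the two sides can run at different velocities. All names here are illustrative.

```python
import queue
import threading

def producer(q, n):
    """Emit n events without waiting on the consumer (fire-and-forget)."""
    for i in range(n):
        q.put({"event": i})
    q.put(None)  # sentinel: no more events

def consumer(q, results):
    """Drain events at its own pace until the sentinel arrives."""
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg["event"] * 2)  # stand-in for real processing

q = queue.Queue()
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()
producer(q, 5)
t.join()
print(results)  # [0, 2, 4, 6, 8]
```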
28. Microservices Pattern 4 - Orchestration ?
• Used for transactional request processing
• Generally required if you need to make (synchronous) inter-service
communication across service components
• Required when service components are too fine grained/ incorrectly
partitioned from a business standpoint.
• Generally complex, re-design, prefer messaging if possible *
• Rollbacks are harder
• Undesired Coupling
• Common practice: Violate DRY & copy the shared functionality
• (Sometimes it is good to break the rules)
• For more: Software Architecture Patterns, O’REILLY 2015
30. Distributed Systems – What ?
“A collection of independent systems that appears to its users as a single
system”
“Everything fails all the time” – Werner Vogels, Amazon CTO
Failure is the norm, not the exception
So, design for failure
CAP Theorem (Consistency, Availability & Partition tolerance)
- Choose Two
- FoundationDB defies ?
- Lacks formal verification,
unlike Hoare’s CSP ?
31. Distributed Systems – Little more ?
Simultaneity/Synchronicity is hard
“There is No Now ” in distributed systems
For more : https://queue.acm.org/detail.cfm?id=2745385
Design for “Eventual Consistency ”
As events get more complex, we tend to talk about probability,
not certainty, same is the case here (not just in software systems, everywhere)
So we express intent in distributed systems, as things keep going out of place
But then TCS’s tagline says – Experience Certainty ?
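One concrete way to "design for eventual consistency" is to make replica state mergeable. A toy grow-only counter (the simplest CRDT) shows the idea: each replica counts locally, and any two replicas converge once they exchange state, regardless of the order updates arrived in. This is an illustration of the principle, not anything from the talk's stack.

```python
class GCounter:
    """Grow-only counter: per-replica tallies that merge by taking the max."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other):
        # max per replica is commutative, associative, and idempotent,
        # so gossip in any order converges to the same state
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()  # two events on replica a
b.increment()                 # one concurrent event on replica b
a.merge(b); b.merge(a)        # gossip in either direction
print(a.value(), b.value())   # 3 3: the replicas agree eventually
```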
32. Distributed Systems – Blue Print ?
• Anything can fail.
• When worker nodes fail, the master node replicates the worker nodes
• Master nodes can fail too – Paxos (ZooKeeper) or Raft (etcd) to the rescue
-- One of the standby nodes will be elected by
consensus protocols (Paxos / Raft)
[Diagram: a Master Node over Worker Nodes 1-3, with Standby Master Nodes 1-2 kept consistent via Raft/Paxos]
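The quorum idea behind Paxos/Raft can be gestured at in a few lines: a candidate only wins with votes from a strict majority, so two leaders can never both win the same term. This is a toy sketch, not a real consensus implementation (no terms, logs, timeouts, or retries).

```python
def elect(candidate, voters, votes_for):
    """True if candidate has a strict majority of the votes this term.

    votes_for maps each voter to the candidate it voted for.
    """
    quorum = len(voters) // 2 + 1          # majority threshold
    won = sum(1 for v in voters if votes_for.get(v) == candidate)
    return won >= quorum

voters = ["n1", "n2", "n3", "n4", "n5"]
votes = {"n1": "n1", "n2": "n1", "n3": "n1", "n4": "n2", "n5": "n2"}
print(elect("n1", voters, votes))  # True: 3 of 5 votes, has quorum
print(elect("n2", voters, votes))  # False: only 2 votes
```

Since any two majorities of the same voter set overlap in at least one node, at most one candidate can reach quorum, which is the safety property the blueprint above relies on.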
34. Kubernetes – What ?
• Greek word for ‘The person who steers the ship’
• Orchestrator for running Docker containers at scale
• Supports multi-cloud environments
• Backed by Google’s decades of experience
• Endorsed by the big players : Redhat, Microsoft, IBM,
CoreOS, Mesosphere etc
• Open Source
• Still in beta, very active community
35. Kubernetes – How does it help me ?
• Enables building & managing container-based distributed
systems at large scale/cloud scale.
• Supports container deployment / scheduling
• Supports container orchestration / high availability of
services
• Enhances efficient resource utilization across the
cluster (saves money too )
• Can manage multiple clusters at the same time
• REST API Support
• Supports Docker & Rocket containers
• It is a distributed system by itself
36. Kubernetes – Big Picture ?
[Diagram: a Kube-Master server controlling Kubelet worker nodes]
37. Kubernetes - Key Concepts
• Cluster: A group of nodes on which containers are scheduled
• Container: A sealed application package (Docker)
• Pod: A small group of tightly coupled Containers
example: content syncer & web server
• Replication Controller: A loop that drives current state towards desired
state
• Service: A set of running pods that work together
• example: load-balanced backends
• Labels: Identifying metadata attached to other objects
• example: phase=canary vs. phase=prod
• Selector: A query against labels, producing a set result
• example: all pods where label phase == prod
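Label selection is just an equality query over metadata maps. A toy version of the idea, with made-up pod names and labels, might look like this:

```python
def select(objects, selector):
    """Return objects whose labels contain every key=value in selector."""
    return [o for o in objects
            if all(o["labels"].get(k) == v for k, v in selector.items())]

pods = [
    {"name": "web-1", "labels": {"app": "web", "phase": "prod"}},
    {"name": "web-2", "labels": {"app": "web", "phase": "canary"}},
    {"name": "db-1",  "labels": {"app": "db",  "phase": "prod"}},
]
print([p["name"] for p in select(pods, {"phase": "prod"})])
# ['web-1', 'db-1']
```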
39. Kubernetes – How do they all fit in ?
[Diagram of a worker node: the kubelet receives commands from the master and runs node health checks and the service proxy]
40. Kubernetes – Pods ?
• Small group of containers & volumes
• Tightly coupled
• The atom/unit of cluster scheduling & placement
• Shared namespace :
- share IP address & localhost, storage volume
• Ephemeral (like Snapchat ?)
- can die and be replaced
42. Kubernetes – Replication Controllers ?
• Ensures high availability of pods
• Recreates Pods, maintains desired state of cluster
• Fine grained control for scaling
• if too few, start new ones
• if too many, kill some
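The "loop that drives current state towards desired state" can be sketched in a few lines. This is a toy reconciliation pass over a list of pod names, not the real controller logic:

```python
import itertools

_ids = itertools.count(1)  # source of fresh pod names

def reconcile(pods, desired):
    """One pass of the control loop: return a pod list of the desired size."""
    pods = list(pods)
    while len(pods) < desired:           # too few -> start new ones
        pods.append(f"pod-{next(_ids)}")
    while len(pods) > desired:           # too many -> kill some
        pods.pop()
    return pods

pods = ["pod-a"]              # current state: 1 replica
pods = reconcile(pods, 3)     # desired state: 3 replicas
print(len(pods))              # 3
pods = reconcile(pods, 2)     # scale down
print(len(pods))              # 2
```

Running such a pass repeatedly is what makes the cluster self-healing: a crashed pod simply shows up as "too few" on the next iteration.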
43. Kubernetes – Services ?
• A group of pods that act as one == Service
• Load Balanced Access to Pods
• Gets a stable virtual IP and port
- called the service portal
- also a DNS name
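What the stable service portal buys you can be mimicked with a round-robin picker over the backing pods: clients hit one fixed entry point while requests are spread across whichever pods currently exist. A toy sketch with made-up pod names:

```python
import itertools

class Service:
    """One stable entry point that round-robins over its backing pods."""
    def __init__(self, pods):
        self._cycle = itertools.cycle(pods)

    def route(self, request):
        pod = next(self._cycle)  # pick the next backend in rotation
        return f"{pod} handled {request}"

svc = Service(["pod-1", "pod-2", "pod-3"])
print(svc.route("req-a"))  # pod-1 handled req-a
print(svc.route("req-b"))  # pod-2 handled req-b
print(svc.route("req-c"))  # pod-3 handled req-c
print(svc.route("req-d"))  # pod-1 handled req-d (wrapped around)
```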
46. Create a Container Cluster
1) gcloud auth login
2) gcloud alpha container clusters create CLUSTER_NAME --zone ZONE --user=admin --password=lostintime
There is a REST API too..