Newt Global provides DevOps transformation, cloud enablement, and test automation services. It was founded in 2004 and is headquartered in Dallas, Texas, with locations in the US and India. The company is a leader in DevOps transformations and has twice been named one of the 100 fastest-growing companies in Dallas. The document discusses an upcoming webinar on Docker 101 that will be presented by two Newt Global employees: Venkatnadhan Thirunalai, the DevOps Practice Leader, and Jayakarthi Dhanabalan, an AWS Solution Specialist.
DCSF19 Containerized Databases for Enterprise Applications - Docker, Inc.
Containerized Databases for Enterprise Applications
Containers are now being used in organizations of all sizes. From small startups to established enterprises, data persistence is necessary in many mission-critical applications. The claim that “containers are not for database applications” is a misconception; nothing could be further from the truth.
This session aims to help practitioners navigate the minefield of database containerization and avoid some of the major pitfalls that can occur. Discussion includes traditional enterprise database concerns surrounding data persistence and data security, and how they mesh with containerized deployment.
DCSF 19 How Entergy is Mitigating Legacy Windows Operating System Vulnerabili... - Docker, Inc.
Jason Brown - Program Manager, Entergy
Jeff Hummel - IT Infrastructure Architect, Entergy
Entergy, a large utility company headquartered in New Orleans, LA, has launched an initiative to modernize its application infrastructure. During the initial analysis, Entergy recognized that the existing legacy infrastructure’s lack of compatibility with more recent operating systems would stand in the way of progress. As a result, containerization was fast-tracked as the solution that could help with the various tenets of their strategy: hyperconvergence, SaaS (ServiceNow), and workload portability. Docker Enterprise proved to be the right solution to migrate roughly 850 legacy applications from Windows Server 2003 and 2008 to Windows Server 2016 quickly, securely, and economically. Entergy IT has now delivered the ability for the business to run applications on premises and in the cloud, and has future-proofed the applications for migration to new versions of Windows Server. In this session, Entergy will talk about how they are modernizing their infrastructure to become more agile and secure, and to enable workload portability.
Hypervisor "versus" Linux Containers!
Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.
Less hardware, less pain, and more scalability in production, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above. "Do more with less" is all that matters!
Automating server and application deployments has never been so easy and fast. It also brings productivity to a new level, in data centers and cloud environments alike.
Francisco Gonçalves, Dec 2013 (francis.goncalves@gmail.com)
Docker Paris Event: Anticipate new business models and reduce your... - Docker, Inc.
On the agenda: setting up agile platforms to adapt to new business models, optimizing IT costs in your application deployments, succeeding with a Kubernetes rollout, securing your applications throughout their lifecycle, and much more.
With Urs Stephan Alder (CEO, Kybernetika), Michael Abmayer (Senior Consultant, Opvizor), and Dennis Zimmer (CEO, Opvizor), three top-class speakers presented at the recent VMware@Night at Digicomp. Together they showed the impact that containers in virtualized environments have on day-to-day operations as well as on performance and capacity planning.
Docker in particular is currently on everyone's lips and is the best-known and most widely used container technology. Containers frequently run inside virtual machines and pose a new challenge for VMware administrators as well as IT managers. Ensuring and monitoring performance, along with capacity planning that is as accurate as possible, are challenges that must be tackled promptly.
After a short introduction to containers, which also covered the differences from virtualization, the speakers turned to working with containers using the example of Docker on VMware vSphere. Finally, performance monitoring and capacity planning were covered.
From the Amazon Web Services Singapore & Malaysia Summits 2015, Track 2 Breakout, 'Containerized Cloud Computing'. Presented by Sivaram Shunmugam, Manager, Infrastructure Practice, Red Hat.
Jessica Deen, Microsoft
Helm 3 is here; let's go hands-on! In this demo-fueled session, I'll walk you through the differences between Helm 2 and Helm 3. I'll offer tips for a successful rollout or upgrade, go over how to easily use charts created for Helm 2 with Helm 3 (without changing your syntax), and review opportunities where you can participate in the project's future.
Bare-metal, Docker Containers, and Virtualization: The Growing Choices for Cl... - Odinot Stanislas
A friendly introduction to cloud environments, with a particular focus on virtualization and containers (Docker).
Author: Nicholas Weaver – Principal Architect, Intel Corporation
Cloud Native Patterns with Bluemix Developer Console - Matthew Perrins
This presentation covers the cloud-native application patterns Mobile, Web, BFF (Backend for Frontend), and Microservices. It walks through the patterns and shows how they can be used to deliver public cloud solutions with IBM Cloud, using the Bluemix Developer Console.
Getting Started With Docker | Docker Tutorial | Docker Training | Edureka - Edureka!
This tutorial on "Getting started With Docker" will help you understand the fundamental concepts in Docker and how it is used for containerization. Below are the topics covered in this tutorial:
1. Challenges With Shipping & Transportation
2. How Does Docker Fit The Bill?
3. What Is Docker?
4. Benefits Of Docker Over Virtual Machines
5. Docker Terminology
6. Architecture Of Docker
7. Hands-On: Running Hello-World Docker Container
To take structured training on Deep Learning, you can check the complete details of our Deep Learning with TensorFlow course here: https://goo.gl/WF1RYI
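Topic 7 in the list above can be tried in a couple of commands (this is a minimal sketch, assuming Docker is already installed and the daemon is running):

```shell
# Check that the Docker client can reach the daemon
docker version

# Pull (if not cached) and run the official hello-world image from Docker Hub
docker run hello-world

# The container exits after printing its message; list it with:
docker ps -a --filter "ancestor=hello-world"
```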
The Container Evolution of a Global Fortune 500 Company with Docker EE - Docker, Inc.
In our new digital economy, keeping up can feel like a never-ending expansion of costly technical overhead. Each “trend” adds net-new operational and capital expenses to seemingly bloated run-rate measures already challenged by leadership. Containers may feel like just another one of these trends, bringing their own additional expense. At MetLife, however, we sought to make containerization self-funding, allowing us to fuel change and tap into innovation at large scale. To do this, MetLife’s ModSquad challenged established norms to prove that containers worked through production. Then, we asked Docker for help to modernize our traditional landscape to create funding sources to adopt containers, change holistically, and reduce overhead to our bottom line.
This talk picks up where the MetLife story presented at the Austin DockerCon ends: What happens after you’ve done one thing well and you need to expand the revolution? We'll discuss how MetLife leveraged the Modernize Traditional App Program. We’ll discuss planning, preparation, execution and our post-mortem learnings in addition to technical obstacles, mindsets, roles, addressing executive concerns and training. I’ll share how we created regional business cases and roadmaps to create a funding pipeline by technology. Finally, we’ll look at our new forecast and ultimately our new future.
Containers and VMs and Clouds: Oh My. by Mike Coleman - Docker, Inc.
As containers move from the developer's workstation into production environments there are many questions about how they fit into a company's existing infrastructure. Should a workload run in a VM or in a container? Should that container run on physical or virtual? In the data center or in the cloud?
The reality is that there is no "right" answer, just a series of questions that admins should be asking as they look to figure out where to run their application workloads. In this talk we'll take a look at the key differences between containers and VMs. From there we'll discuss the coexistence of VMs and containers, and finally we'll take a look at key factors to consider when making the decision where to run your applications. Throughout the presentation we'll highlight real world customers, their problems, and their ultimate deployment decisions.
Shipping and Shifting ~100 Apps with Docker EE - Docker, Inc.
Alm. Brand has been successfully running greenfield Dockerized workloads in production for nearly two years. However, enterprises are known for their very long-lived and ill-maintained monoliths which are not easily rewritten or relocated, and we have our fair share of those. Focusing on freeing up precious ops time, Alm. Brand ventured to transform all legacy WebLogic apps to run in Docker. The move has provided a golden opportunity to restructure our platform, and has helped push the DevOps agenda in what is probably the oldest company yet to present at DockerCon (founded in 1792).
Through an awesome live demo, we will demonstrate:
* as much as we can of our entire working production setup, boiled down to a Swarm stack file;
* how we are able to convert and deploy applications during office hours, unbeknownst to end users;
* how to smoothly and transparently handle the transition of users to the Dockerized environment;
* how we have streamlined monitoring, logging and deployment across greenfield and legacy apps
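As a rough illustration of what "boiled down to a Swarm stack file" can look like, here is a minimal stack-file sketch. The service name, image, and port are placeholders, not Alm. Brand's actual setup:

```shell
# Write a minimal Swarm stack file
cat > stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2
EOF

# Deploy it to a cluster (requires an initialized Swarm: docker swarm init)
docker stack deploy -c stack.yml demo
```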
How to containerize at speed and at scale with Docker Enterprise Edition, mov... - Kangaroot
"Containers are only meant for modern application architectures" is a commonly heard misconception.
During this talk we'll explain how you can benefit from containerizing your existing applications to reduce infrastructure footprint, make your application more portable and manage your existing application in a cloud native way. All without changing one line of code in your application itself.
Kubernetes made easy with Docker Enterprise - Tech deep dive on Docker/Kubern... - Kangaroot
With the release of version 2.0 of the Docker EE platform, Kubernetes was integrated into the platform, offering users a choice of orchestrators.
This talk is a deep dive into how Kubernetes is integrated with Docker EE and how we take away the complexity of maintaining a highly available and secure Kubernetes cluster.
DCEU 18: Edge Computing with Docker Enterprise - Docker, Inc.
Marc Meunier - Director of Business Development, Docker
Adam Parco - Director of Engineering, Edge & IoT, Docker
The Internet of Things (IoT) is pushing more computing to the edge - where data from devices can be aggregated, filtered, and analyzed before it’s sent somewhere else. As edge devices become more powerful and capable of running sophisticated applications, the edge servers have to keep pace with development. The challenge for edge computing is that these servers and devices are distributed geographically across many sites and sometimes inaccessible. The Docker platform is designed for distributed computing and provides an easy way to securely distribute and run applications at the edge. In this session, we will outline some of the major trends around edge computing and the common architectures and use cases across different industries. We will highlight some of the work we’re doing with our customers to deliver on these edge use cases and where Docker is headed.
Demystifying Containerization Principles for Data Scientists - Dr Ganesh Iyer
Demystifying Containerization Principles for Data Scientists: an introductory tutorial on how Docker can be used as a development environment for data science projects.
Two parts:
1. The evolution of Joyent's SmartDataCenter cloud infrastructure management software from a largely monolithic app to a microservices architecture.
2. How container infrastructure enables microservices.
More details at http://www.meetup.com/cloudclub/events/220026896/
Introduction to Docker and Kubernetes. Learn how these help you build scalable and portable applications in the cloud. It introduces the basic concepts of Docker and its differences from virtualization, then explains the need for orchestration and includes some hands-on experiments with Docker.
The challenge of application distribution - Introduction to Docker (2014 dec ... - Sébastien Portebois
Live recording with the demos: https://www.youtube.com/watch?v=0XRcmJEiZOM
Contents
- The application distribution challenge
- The current solutions
- Introduction to Docker, Containers, and the Matrix from Hell
- Why people care: Separation of Concerns
- Technical Discussion
- Ecosystem, momentum
- How to build Docker images
- How to make containers talk to each other, how to handle data persistence
- Demo 1: isolation
- Demo 2: real case - installing Go Math! Academy, tail -f containers, unit tests
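The "how to build Docker images", container-to-container networking, and data-persistence topics in the contents above can be sketched roughly as follows (image tags, container names, and the network/volume names are illustrative, not from the talk):

```shell
# Build a tiny image from an inline Dockerfile
cat > Dockerfile <<'EOF'
FROM alpine:3.19
CMD ["echo", "hello from my image"]
EOF
docker build -t demo/hello .

# Containers reach each other by name on a user-defined bridge network
docker network create demo-net
docker run -d --name web --network demo-net nginx:alpine
docker run --rm --network demo-net alpine:3.19 wget -qO- http://web

# Data persistence: a named volume outlives the containers that use it
docker volume create demo-data
docker run --rm -v demo-data:/data alpine:3.19 sh -c 'echo persisted > /data/f'
docker run --rm -v demo-data:/data alpine:3.19 cat /data/f
```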
Docker is the world's leading software containerization platform.
This is a comprehensive introduction to Docker, suitable for delivery at introductory meetups to an audience that does not know about Docker.
In case you want to deliver this presentation somewhere, kindly drop me a mail at aditya.konarde@gmail.com
You can contact me at:
Connect with me on LinkedIn: https://www.linkedin.com/in/adityakonarde
Add me on Facebook: https://www.facebook.com/Aditya.Konarde
Tweet to me @aditya_konarde
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they’re running on, and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
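The kernel-sharing point above is easy to observe directly (assuming a Linux host with Docker installed): the kernel version inside a container matches the host's, while the userland differs.

```shell
# Kernel version on the host
uname -r

# Same kernel reported from inside an Alpine container,
# because containers share the host kernel instead of booting their own
docker run --rm alpine:3.19 uname -r

# The userland, however, is Alpine's, not the host distribution's
docker run --rm alpine:3.19 head -1 /etc/os-release
```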
Docker is the developer-friendly container technology that enables creation of your application stack: OS, JVM, app server, app, database and all your custom configuration. So you are a Java developer but how comfortable are you and your team taking Docker from development to production? Are you hearing developers say, “But it works on my machine!” when code breaks in production? And if you are, how many hours are then spent standing up an accurate test environment to research and fix the bug that caused the problem?
This workshop/session explains how to package, deploy, and scale Java applications using Docker.
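A minimal sketch of packaging a Java application in the way the session describes; the base image, jar path, and tag are assumptions, not the workshop's actual files:

```shell
# Package a pre-built fat jar into an image
cat > Dockerfile <<'EOF'
FROM eclipse-temurin:17-jre
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF
docker build -t demo/java-app .

# Scale out as a Swarm service with several replicas
docker service create --name java-app --replicas 3 -p 8080:8080 demo/java-app
```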
OpenStack, Containers, and Docker: The Future of Application Deployment
Twenty years ago, developers built static applications on well-defined stacks that ran on proprietary, monolithic hardware. Developers today want the freedom to build applications using their choice of services and stacks and, ideally, want to be able to run those applications on any available hardware. Of course, this raises questions about service interaction, the practicality of migrating applications across environments, and the challenges of managing unlimited combinations of services and hardware environments.
By promoting an open-source approach to flexible and interoperable infrastructure, OpenStack goes a long way towards achieving this vision of the future. This talk discusses the application and platform side of the equation, and the interplay between OpenStack, container technology (e.g. LXC), and the open-source Docker.io project. Docker.io enables any application and its dependencies to be deployed as lightweight containers that run consistently virtually anywhere. The same containerized application that runs on a developer's laptop can run consistently on a bare-metal server, an OpenStack cluster, a Rackspace cloud, a VM, etc. While providing isolation and compatibility, containers have significant size, performance, and deployment advantages over traditional VMs.
Recently, the community created an integration between Docker and OpenStack Nova, opening up exciting possibilities for web scale application deployment, continuous integration and deployment, private PaaS, and hybrid cloud. This session will give an introduction to Docker and containers in the context of OpenStack, and will then demonstrate cross-environment deployment of applications.
Getting Started with Docker - Nick Stinemates - Atlassian
Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. In this session, you will learn how to get started building your first Docker container, and how to use Docker containers to simplify your CI process.
Docker - Demo on PHP Application Deployment - Arun Prasath
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this demo, I will show how to build an Apache image from a Dockerfile and deploy a PHP application from an external folder using custom configuration files.
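A sketch of the kind of Dockerfile the demo describes; the directory layout (app/, conf/) and the vhost file name are assumptions, not the demo's actual files:

```shell
cat > Dockerfile <<'EOF'
FROM php:8.2-apache
# Custom Apache configuration copied from an external folder (assumed layout)
COPY conf/my-vhost.conf /etc/apache2/sites-available/000-default.conf
# The PHP application itself, served from Apache's web root
COPY app/ /var/www/html/
EOF

docker build -t demo/php-app .
docker run -d -p 8080:80 demo/php-app
```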
Testing microservices is an area that cannot be avoided or procrastinated over at any point. Each service's build must be verified to pass the test criteria defined by the project team before it reaches the deployment stage.
The organization should be culturally aligned, and should provide a suitable environment for adopting a microservices architecture. Transitioning to, or developing applications with, a microservices architecture is definitely not a cakewalk.
There is not much complexity in terms of processes and communication between services in a monolithic application that deals with a single relational database. Most relational databases use ACID transactions to process each request from the client.
In the land of microservices, questions of analytics, algorithmic complexity, and schema reporting are well addressed with a resilient data model. The culture and design principles should embrace failure and faults, similar to anti-fragile systems.
Webinar Docker Tri Series
1. 2016 Newt Global | www.NewtGlobal.com | Confidential
Newt Global Services and Offerings
2. Docker 101 Tri Series Webinar
By:
Venkatnadhan Thirunalai
Jai Karthik
3. Founded in 2004, HQ in Dallas, TX; present in multiple locations in the USA and India
Leader in DevOps Transformation, Cloud Enablement and Test Automation
One of the top 100 fastest-growing companies in Dallas twice in a row
Clientele includes Fortune 50 companies
About Newt Global
4. Speakers
1/27/2017 Copyright 4
• Venkat is the DevOps Practice Leader. His areas of expertise include DevOps practice and consulting Fortune 100 customers on DevOps IT strategy. He is responsible for building the global pre-sales, consulting and delivery team for Newt Global
• He has 16+ years of IT industry experience and has delivered multiple enterprise-scale projects for a Fortune 500 customer base
Venkatnadhan Thirunalai
DevOps Practice Leader
Newt Global
• AWS solution specialist and DevOps strategist. His areas of expertise include AWS infrastructure management and architectural design, Docker container management solutions, DevOps automation strategy, Ansible scripting for automation, and Jenkins practices for design architecture. Responsible for AWS management, Docker management, and DevOps automation work with Jenkins and Ansible
• 6+ years of industry experience in IT; has worked on 24 projects with smooth deliverables for international/national enterprise clients
Jayakarthi Dhanabalan
AWS Solution Specialist
Newt Global
5. Housekeeping Instructions
• All phones are set to mute. If you have any questions, please type them in the Chat window located beside the presentation panel
• We have already received several questions from the registrants, which will be answered by the speakers during the Q&A session
• We will continue to collect questions during the session as we receive them and will try to answer them in today's session
• If you do not receive answers to your questions today, you will certainly receive them via email shortly
• Thanks for your participation and enjoy the session!
6. Contents
Introduction to Docker, Containers, and the Matrix from Hell
Why people care: Separation of Concerns
Technical Discussion
Ecosystem
Docker Basics
Dockerfile
Docker Compose
Docker Swarm
7. The Challenge
Multiplicity of stacks (services and apps): Static website, Web frontend, User DB, Queue, Analytics DB, Background workers, API endpoint
The stacks behind them: nginx 1.5 + modsecurity + openssl + bootstrap 2; postgresql + pgv8 + v8; hadoop + hive + thrift + OpenJDK; Ruby + Rails + sass + Unicorn; Redis + redis-sentinel; Python 3.0 + celery + pyredis + libcurl + ffmpeg + libopencv + nodejs + phantomjs; Python 2.7 + Flask + pyredis + celery + psycopg + postgresql-client
Multiplicity of hardware environments: Development VM, QA server, Public Cloud, Disaster recovery, Contributor's laptop, Production Servers, Production Cluster, Customer Data Center
Do services and apps interact appropriately? Can I migrate smoothly and quickly?
8. The Matrix From Hell
Rows (components): Static website, Web frontend, Background workers, User DB, Analytics DB, Queue
Columns (targets): Development VM, QA Server, Single Prod Server, Onsite Cluster, Public Cloud, Contributor's laptop, Customer Servers
Every cell in the matrix is a question mark: will each component run correctly on each target?
12. Docker is a shipping container system for code
An engine that enables any payload to be encapsulated as a lightweight, portable, self-sufficient container… that can be manipulated using standard operations and run consistently on virtually any hardware platform
It carries the same multiplicity of stacks (static website, web frontend, user DB, queue, analytics DB) across the same multiplicity of hardware environments (development VM, QA server, public cloud, contributor's laptop, production cluster, customer data center)
Do services and apps interact appropriately? Can I migrate smoothly and quickly?
13. Docker eliminates the matrix from Hell
The same components (static website, web frontend, background workers, user DB, analytics DB, queue) run unchanged on every target: development VM, QA server, single prod server, onsite cluster, public cloud, contributor's laptop, customer servers
14. Why Developers Care
Build once… (finally) run anywhere
A clean, safe, hygienic and portable runtime environment for your app.
No worries about missing dependencies, packages and other pain points during subsequent deployments.
Run each app in its own isolated container, so you can run various versions of libraries and other dependencies for each app without worrying.
Automate testing, integration, packaging… anything you can script.
Reduce/eliminate concerns about compatibility on different platforms, either your own or your customers'.
Cheap, zero-penalty containers to deploy services? A VM without the overhead of a VM? Instant replay and reset of image snapshots? That's the power of Docker.
15. Why DevOps Cares
Configure once… run anything
Make the entire lifecycle more efficient, consistent, and repeatable
Increase the quality of code produced by developers
Eliminate inconsistencies between development, test, production, and customer environments
Support segregation of duties
Significantly improve the speed and reliability of continuous deployment and continuous integration systems
Because containers are so lightweight, they address significant performance, cost, deployment, and portability issues normally associated with VMs
16. Why it works—separation of concerns
• Dan the Developer
• Worries about what's "inside" the container
• His code
• His libraries
• His package manager
• His apps
• His data
• All Linux servers look the same
• Oscar the Ops Guy
• Worries about what's "outside" the container
• Logging
• Remote access
• Monitoring
• Network config
• All containers start, stop, copy, attach, migrate, etc. the same way
17. More technical explanation
High level: it's a lightweight VM. Own process space; own network interface; can run stuff as root; can have its own /sbin/init (different from host) <<machine container>>
Low level: it's chroot on steroids. Can also not have its own /sbin/init; container = isolated processes; shares kernel with host; no device emulation (neither HVM nor PV) <<application container>>
Run everywhere: regardless of kernel version or host distro; physical or virtual, cloud or not; container and host architecture must match*
Run anything: if it can run on the host, it can run in the container; i.e. if it can run on a Linux kernel, it can run
18. Containers vs. VMs
VMs: Server → Host OS → Hypervisor (Type 2) → one Guest OS + Bins/Libs per VM → App A, App A', App B
Containers: Server → Host OS → Docker → Bins/Libs shared where appropriate → App A, App A', App B, App B', App B', App B'
Containers are isolated, but share the OS and, where appropriate, bins/libraries
…result is significantly faster deployment, much less overhead, easier migration, faster restart
19. Why are Docker containers lightweight?
VMs: every app, every copy of an app, and every slight modification of the app requires a new virtual server (App + Bins/Libs + Guest OS each time)
Containers:
• Original app: no OS to take up space or resources, or require restart
• Copy of app: no OS; can share bins/libs
• Modified app (AppΔ): copy-on-write capabilities allow us to save only the diffs between container A and container A'
20. What are the basics of the Docker system?
• Build: the Docker Engine turns a Dockerfile for A plus a source code repository into a Docker container image
• Push / Search / Pull: images move between hosts through an image registry
• Run: Host 1 OS (Linux) and Host 2 OS (Linux) run Containers A, B, and C from those images
21. Changes and Updates
• A host running App A wants to upgrade to A''. It requests the update and gets only the diffs
• The app delta (AppΔ) is layered on top of the base container image (App A + Bins/Libs), producing container mods A' and A'', which are pushed to the registry
• The Docker Engine on the target host pulls just the changed layers; the host is now running A''
22. Some Docker vocabulary
Docker Image
The basis of a Docker container. Represents a full application
Docker Container
The standard unit in which the application service resides and executes
Docker Engine
Creates, ships and runs Docker containers; deployable on a physical or virtual host, locally, in a datacenter, or at a cloud service provider
Registry Service (Docker Hub or Docker Trusted Registry)
Cloud- or server-based storage and distribution service for your images
22
24. Docker File System
• A logical file system built by grouping different file system primitives into branches (directories, file systems, subvolumes, snapshots)
• Each branch represents a layer in a Docker image
• Allows images to be constructed/deconstructed as needed vs. a huge monolithic image (à la traditional virtual machines)
• When a container is started, a writeable layer is added to the "top" of the file system
24
25. Copy on Write
• Super efficient:
• Sub-second instantiation times for containers
• A new container can take <1 MB of space
• A container appears to be a copy of the original image
• But it is really just a link to the original shared image
• If someone writes a change to the file system, a copy of the affected file/directory is "copied up"
25
26. What about data persistence?
• Volumes allow you to specify a directory in the container that exists outside of the Docker file system structure
• Can be used to share (and persist) data between containers
• The directory persists after the container is deleted
• Unless you explicitly delete it
• Can be created in a Dockerfile or via the CLI
26
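A short command transcript sketching the volume workflow described above; it assumes a running Docker daemon, and the volume and image names are illustrative:

```shell
# Create a named volume and mount it into a container
docker volume create appdata
docker run -d --name db -v appdata:/var/lib/mysql mysql:5.7

# The volume survives container removal...
docker rm -f db
docker volume ls          # appdata is still listed

# ...until you explicitly delete it
docker volume rm appdata

# Alternatively, declare a volume at build time in a Dockerfile:
#   VOLUME /var/lib/mysql
```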
28. Dockerfile – Linux Example
28
• Instructions on how to build a Docker image
• Looks very similar to "native" commands
• Important to optimize your Dockerfile
29. Dockerfiles
• Dockerfiles = image representations
• Simple syntax for building images
• Automate and script image creation
30. FROM
• Sets the base image for subsequent instructions
• Usage: FROM <image>
• Example: FROM ubuntu
• Needs to be the first instruction of every Dockerfile
• TIP: find images with the command: docker search
31. RUN
• Executes any commands on the current image and commits the results
• Usage: RUN <command>
• Example: RUN apt-get install -y memcached
FROM ubuntu
RUN apt-get install -y memcached
• Is equivalent to:
docker run ubuntu apt-get install -y memcached
docker commit XXX
32. docker build
• Creates an image from a Dockerfile
• From the current directory = docker build
• From stdin = docker build - < Dockerfile
• From GitHub = docker build github.com/creack/docker-firefox
• TIP: Use -t to tag your image
33. Compose
• One binary to start/manage multiple containers and volumes on a single Docker host
• Originated from Fig
• Move your lengthy docker run commands to a YAML file
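As a sketch of that idea, here is a minimal Compose file in the v1-era syntax, matching the mysql/wordpress pair used in the later example; the password value is a placeholder:

```yaml
# docker-compose.yml (v1-era syntax); values are illustrative
mysql:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
wordpress:
  image: wordpress
  links:
    - mysql:mysql
  ports:
    - "80:80"
```

With this file in place, docker-compose up -d starts both containers with one command instead of two lengthy docker run invocations.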
34. Installation
• Via pip
• Or just get the binary:
$ sudo curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
35. Use
$ docker-compose up -d
Creating vagrant_mysql_1...
Creating vagrant_wordpress_1...
$ docker-compose ps
Name                 Command                         State  Ports
--------------------------------------------------------------------------
vagrant_mysql_1      /entrypoint.sh mysqld           Up     3306/tcp
vagrant_wordpress_1  /entrypoint.sh apache2-for ...  Up     0.0.0.0:80->80/tcp
37. Machine
• One binary to create a remote Docker host and set up the TLS communication with your local docker client.
• Automates the TLS setup and the configuration of the local environment
• Can manage multiple machines in different clouds at the same time
39. Local Use
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
dev * virtualbox Running tcp://192.168.99.100:2376
$ docker-machine env dev
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/Users/sebastiengoasguen/.docker/machine/machines/dev
export DOCKER_HOST=tcp://192.168.99.100:2376
$ docker images
REPOSITORY TAG … CREATED VIRTUAL SIZE
wordpress latest … 2 weeks ago 451.4 MB
mysql latest … 2 weeks ago 282.8 MB
mysql 5.5 … 2 weeks ago 214.5 MB
40. Cloud Use
• Many drivers
• Many more waiting for merge (e.g. CloudStack)
$ ./docker-machine create -d digitalocean foobar
INFO[0000] Creating SSH key...
INFO[0001] Creating Digital Ocean droplet...
INFO[0005] Waiting for SSH...
INFO[0072] Configuring Machine...
41. Swarm
• Docker client endpoint that proxies requests to docker daemons running in a cluster.
• Cluster manager that keeps state of the cluster nodes
• Easily run as a container itself
• Multiple service discoveries for cluster nodes (docker hosted, etcd, consul, zookeeper, file based)
43. Use
• No install, run the container
• docker pull swarm
$ docker run -v /vagrant:/tmp/vagrant -p 1234:1234 -d swarm manage file://tmp/vagrant/swarm-cluster.cfg -H=0.0.0.0:1234
72acd5bc00de0b411f025ef6f297353a1869a3cc8c36d687e1f28a2d8f422a06
$ docker -H 0.0.0.0:1234 info
Containers: 0
Nodes: 3
swarm-2: 192.168.33.12:2375
└ Containers: 0
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 490 MiB
…
$ docker -H 0.0.0.0:1234 run -d -p 80:80 nginx
44. Use Machine to create Swarm
• Get a token for discovery
• Start nodes with machine using the --swarm option
$ docker run swarm create
31e61710169a7d3568502b0e9fb09d66
$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 head
...
$ docker-machine create -d digitalocean --swarm --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 worker-00
...
$ docker-machine create -d azure --swarm --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 swarm-worker-01
45. Put it all together: Build, Ship, Run Workflow
Developers IT Operations
BUILD
Development Environments
SHIP
Create & Store Images
RUN
Deploy, Manage, Scale
46. Contact Us
For any questions/clarifications please contact:
Satheesh Reddy, Sales Manager
Newt Global Consulting LLC.
satheeshr@newtglobalcorp.com
http://newtglobal.com/
Let's take a look at what build, ship, run means in a little more detail, but before that we need to level-set some Docker vocabulary and commands:
Image
The static component that represents a non-running application
Containers are derived from images
Images contain EVERYTHING an application needs to run
Should always be built via a Dockerfile (which we'll talk about in a bit)
Container
The standard unit in which the application service resides
Packages the app and its dependencies together
Isolated from other containers
One container per app / service
Docker Engine
The program that creates, ships and runs containers
Deployable on any physical or VM host, locally, in datacenters or in the cloud
Communicates with Docker Hub
Registry
The service that stores, distributes and manages container images
Receives commands from the Docker Client via the Engine
Access control with public and private repos
Each instruction in a Dockerfile creates a new layer in the image.
If we visualize our earlier Dockerfile example, you can see the change in the image that each step created (we only show the first five commands). Images are built from the bottom up, so any change made by a subsequent step is layered on top of the previous changes already made.
Image layers can be shared between different images. This means that the layers are not duplicated on your Docker host (or on the Registry when they're pushed). Depending on the underlying filesystem, each of these layers is represented by a directory on the Docker host.
You'll notice, if you ever look at a complicated Dockerfile, that authors work to put as many commands as possible into a single line by concatenating them together. This reduces the number of layers in an image.
When you do a docker run command, an additional read/write layer is added to the image. An important point: even if you start 100 containers, all that is created is 100 read/write layers, and they all point back to the read-only image on the host.
As we mentioned on the previous slide, the layers are represented on the disk as individual file system primitives. In the case of AUFS, each layer is a subdirectory holding the file system changes that layer created.
The layers are "stacked" on top of each other, and if you shell into a running container it will look like one cohesive file system.
In some cases a file or directory might exist on multiple layers. In such a case, the topmost object is what's represented in the container file system, because that object represents the last change made to the image (remember, images are built from the bottom up).
This layering construct lets you start with the bare minimum and add exactly what you want. For instance, Alpine Linux is a very stripped-down operating system, about 2.6 MB. When an image is built on that, you need to explicitly add almost anything you'd want in your final image.
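The layer-per-instruction behavior, and the concatenation trick mentioned above, can be seen in a tiny Dockerfile sketch; the package names are just examples:

```dockerfile
# Each instruction below produces its own image layer
FROM ubuntu
RUN apt-get update              # one layer
RUN apt-get install -y curl     # another layer

# Authors often concatenate commands so the same work yields a single layer:
# RUN apt-get update && apt-get install -y curl
```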
Copy on write is the technology that manages runtime changes to the container.
When you create a new container, you're not booting a full operating system. You're just creating a subdirectory on the Docker host to house any changes that are made to the running container. This is why new containers take <1 MB of space when they are started (the new RW layer is initially empty) and why containers start so quickly.
If you were to shell into the container, it would look like a full copy of the original image from which it was instantiated, but in reality you'd be looking at a read-only copy of that image.
At least until a change is made to the running container (file deletion, creation, update, etc.).
When a change is detected in the container, the affected object is copied up the layers to the topmost RW layer.
One of the tricky things about containers is that when a container is destroyed, that RW layer is removed. Any changes that were made to the container are destroyed in the process.
Clearly this is suboptimal in many cases. For instance, you might want to save off some logs from your application, or your container was running a database and you want the data to outlast the container.
The solution to this problem is something called a volume. A volume is simply a subdirectory in your container that is mapped to a directory living outside of the directory structure where your images and containers are stored on your Docker host.
For instance, let's say your application writes to /var/logs, and you want to save the logs after the container is destroyed.
You would create a new volume and tell Docker to send any data destined for /var/logs to the directory that is being managed by the volume.
From a Docker perspective we don't really care where your volumes live. They simply need to be on storage that is accessible by the Docker host operating system.
You can create volumes at build time through the Dockerfile or at run time via a command-line switch.
Docker pull pulls an image from the registry to the local host. This example shows us pulling the 1.0 version of the Catweb image from the mikegcoleman repo on Docker Hub.
Docker images lists all the images on your Docker host.
Docker run starts a new container. In this case we are instructing the Docker Engine to run the mikegcoleman/catweb image we pulled earlier. -d tells Docker to start the application in detached mode (running in the background), and -p 5000:5000 tells the Docker Engine that any requests coming into port 5000 on the host should be directed to port 5000 on this container. --name catweb specifies a name for our running container. If you do not specify a name, Docker will generate one (and they can be pretty funny).
Docker ps shows running containers. ps -a will show all containers, including ones that have been stopped.
Docker stop stops a running container (but does not delete it). You can specify the container name (catweb in our example) or the container ID (every image and container is assigned a unique ID).
Docker rm removes the stopped container. If you specify rm -f, you will force Docker to remove the container even if it's running. Again, you can specify the name or ID.
Docker rmi removes the specified image.
Docker build creates a new image from a Dockerfile. In this example we are creating an image called mikegcoleman/catweb and tagging it with a 2.0 version number. The period says to build the image from the Dockerfile in the current directory. You can explicitly specify a path to your Dockerfile if you so choose.
Docker push is the opposite of docker pull: it pushes an image up to a registry. In this case we're pushing our newly created Docker image up onto Hub.
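Put together, the lifecycle these notes describe looks like the following transcript; it assumes a running Docker daemon and access to the mikegcoleman/catweb image on Docker Hub:

```shell
docker pull mikegcoleman/catweb:1.0         # pull the 1.0 image from Docker Hub
docker images                               # list local images
docker run -d -p 5000:5000 --name catweb mikegcoleman/catweb:1.0
docker ps                                   # show running containers
docker stop catweb                          # stop (but keep) the container
docker rm catweb                            # remove the stopped container
docker rmi mikegcoleman/catweb:1.0          # remove the image
docker build -t mikegcoleman/catweb:2.0 .   # build a 2.0 image from the local Dockerfile
docker push mikegcoleman/catweb:2.0         # push the new image to the registry
```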
A Dockerfile describes how to build a Docker image (using the 'docker build' command). The commands in the file are a mix of commands you'd actually run to install an application locally (in this case we're building a Python app, and if you're familiar with Python, you'll instantly recognize much of what is up on the screen) and specific keywords that tell Docker what to do (RUN a command, COPY a file, etc.).
An important point about Dockerfiles is that they can live with your source code and be versioned by your version control system. This means Docker images are 100% reproducible. This is another area where VMs and containers differ: many times VMs are hand-built, and if you lose your golden image you're out of luck.
This Dockerfile builds a simple Flask-based Python webapp.
Let's step through the Dockerfile line by line. Note that when the Dockerfile is processed, all these commands are run on the Docker image, not your local machine.
Line 1: Build this new Docker image based on the official Alpine Linux base image
Line 5: Install Python and Pip (the Python package manager)
Line 7: Use Pip to ensure that Pip itself is the latest version
Line 11: requirements.txt holds a list of libraries the app will need, and is used by Pip to actually install those libraries. This line copies the file from your local machine into your Docker image
Line 12: Uses that file and Pip to install the requirements into the container
Line 15: Copy the application code (app.py) into the /usr/src/app directory of the image
Line 16: Copy our index.html file into the /usr/src/app/templates directory of the image
Line 19: The application communicates on port 5000, so we tell Docker to listen on that port
Line 22: When the container starts up, we fire up Python and pass it our application code to start the app
Again, these steps are pretty much EXACTLY what you would do on a traditional machine to run this application; here they are being used to create a Docker image.
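A Dockerfile reconstructed from the line-by-line notes above; the exact line positions, package commands, and paths are approximations of the slide's file, not a verbatim copy:

```dockerfile
# Line 1: base the image on the official Alpine Linux image
FROM alpine:latest

# Install Python and Pip, then make sure Pip itself is current
RUN apk add --update py-pip
RUN pip install --upgrade pip

# Copy the requirements list in and install the app's libraries
COPY requirements.txt /usr/src/app/
RUN pip install -r /usr/src/app/requirements.txt

# Copy the application code and its template into the image
COPY app.py /usr/src/app/
COPY templates/index.html /usr/src/app/templates/

# The app communicates on port 5000
EXPOSE 5000

# Fire up Python with our application code when the container starts
CMD ["python", "/usr/src/app/app.py"]
```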
So, let's put all this together and look at how it might work in a real environment.
Starting in the middle, your IT organization might provide your developers with a set of blessed base images. These could be operating systems, languages, or components (Redis, Postgres, RabbitMQ). The keys indicate that those images are digitally signed, so your developers know they actually come from your IT team (vs. being placed there by some nefarious actor hoping to introduce a security vulnerability into your organization).
Your developers download those signed images, add their code and libraries, and then build the application Docker images. These are then pushed back up to Docker Hub. The important thing here is that developers still use all the tools they are familiar with; they use their preferred integrated development environment (IDE), for instance. The only major change is that at the end of the process they create a Dockerfile and build an image.
We don't show it here, but at some point you (hopefully) QA your app, and then we move over to the ops side of the house.
Here your ops team takes that application image and puts it into production. They can use Docker Datacenter or Docker Cloud to deploy the application to whatever infrastructure makes sense: cloud, physical, virtual, whatever. The important point is that the application will run wherever they need it to, and the target server only needs the Docker Engine; no need to control library versions or ensure that all the right languages are installed. It just works.