This document provides an overview of Docker and containers. It begins with a brief introduction to the 12 Factor App methodology and then defines what Docker is, explaining that containers use Linux namespaces and cgroups to isolate processes. It describes the Docker software and ecosystem, including images, registries, the Docker CLI, Docker Compose, building images with a Dockerfile, and orchestration with tools like Kubernetes. It concludes with a live demo and links to additional resources.
2. $ whoami
Started programming at the age of 13
MSc in Computer Technologies (TU-Sofia) and Software Engineering (HFT Stuttgart)
Professional developer since 2000
PHP core developer since 2002
Spent 11 years at MySQL, Sun Microsystems, and Oracle improving the MySQL client and server side
Last 2 years spent as a freelance technical team lead / consultant
Recently became CTO of DNH Soft
Software development practices and architectures enthusiast
3. How many of you have used Docker / Linux containers?
4. What to expect from this talk?
Quick intro to 12 Factor Applications
What is Docker?
Containers from Linux POV
Description of technologies related to containers
Overview of Docker
Live Demo
5. 12 Factor App
● A methodology for building SaaS applications
● Drafted around 2011 at Heroku
1. Codebase
2. Dependencies
3. Config
4. Backing services
5. Build, release, run
6. Processes
7. Port binding
8. Concurrency
9. Disposability
10. Dev / Prod parity
11. Logs
12. Admin processes
6. Codebase / Dependencies / Config
Codebase
There should be exactly one codebase for a deployed service, with the codebase being used for many deployments.
Dependencies
All dependencies should be declared, with no implicit reliance on system tools or libraries.
Config
Configuration that varies between deployments should be stored in the environment.
7. Backing Services / Build, Release, Run / Processes
Backing Services
All backing services are treated as attached resources, attached and detached by the execution environment.
Build, release, run
The delivery pipeline should strictly consist of build, release, run. Build stage artefacts should not be available to the release and run stages. Build once, run everywhere.
Processes
Applications should be deployed as one or more stateless processes with persisted data stored on a backing service.
8. Port Binding / Concurrency / Disposability
Port Binding
Self-contained services should make themselves available to other services on specified ports.
Concurrency
Scale out via the process model.
Disposability
Maximize robustness with fast startup and graceful shutdown.
9. Dev / Prod Parity / Logs / Admin Processes
Dev / Prod Parity
All environments should be as similar as possible.
Logs
Applications should produce logs as event streams and leave it to the execution environment to aggregate them.
Admin Processes
Any needed admin tasks should be kept in source control and packaged with the application. They should run in the same environment as the application itself.
10. To begin with, what is Docker?
Docker Inc. is a company, previously known as dotCloud.
However, in the past 6 years the name has come to mean containers.
Some people say "dockerize" when they mean "containerize" (similar to the verb "to google").
Containers were not invented by Docker Inc.; the company made them available to the masses.
11. Then, what is a container?
Containerization is OS environment virtualization.
It feels like a VM but ain't one. Some people call them lightweight VMs.
"One kernel to rule them all", compared to "one hypervisor to rule them all".
Can't boot a different OS or kernel. Can't load other kernel modules.
Can boot a different distro, however.
Typically only one process / service (or one forking app) runs inside the container.
Examples of previous/other work: Solaris Zones, FreeBSD Jails.
12. Containers on Linux
Containers on Linux rely on a couple of kernel features:
Linux Namespaces, which provide isolation.
Currently existing namespaces are: cgroup, IPC, network, mount, PID, user (UIDs & GIDs), UTS.
Control Groups (cgroups), which provide means for hierarchical organization, metering and limiting of resources (memory, CPU, I/O, network) for groups (collections) of processes.
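For a quick hands-on feel (my illustration, not from the deck), namespaces can be exercised directly with util-linux's unshare, and cgroup limits applied via systemd-run on a cgroup v2 host; some-heavy-command is a placeholder:

# start a shell in fresh PID, mount and UTS namespaces (needs root)
sudo unshare --fork --pid --mount --uts /bin/bash
hostname demo               # UTS namespace: the hostname change stays local
mount -t proc proc /proc    # remount /proc so ps sees only this namespace
ps -ef                      # only processes in the new PID namespace are visible
# cgroups: cap a command at 256 MB of RAM and half a CPU core
sudo systemd-run --scope -p MemoryMax=256M -p CPUQuota=50% some-heavy-command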
13. Who is running them?
Containers are executed at the lower level by runtimes:
LXC/LXD - LXD, written in Go, uses LXC
rkt - App Container (appc) compliant, deprecated; by CoreOS, now Red Hat. Natively ACI, but also supports Docker and OCI images. Forked very recently.
runC - OCI compliant implementation in Golang by Docker Inc., a spin-off from Docker Engine since Docker 1.11
containerd - handles the higher level details, delegating the low-level work to runC
Railcar - OCI compliant implementation in Rust by Oracle
OCI has two specs, released in July '17: Image and Runtime
CRI-O - an implementation of the Kubernetes (1.5+) Container Runtime Interface (CRI) using OCI compatible runtimes
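To make the layering concrete, here is a hedged sketch of driving runC directly (assumes runc is installed; borrowing the alpine image's filesystem is just an example):

mkdir -p mycontainer/rootfs && cd mycontainer
# reuse a Docker image's filesystem as the rootfs
docker export $(docker create alpine) | tar -C rootfs -xf -
runc spec            # writes a default OCI config.json
sudo runc run demo   # runs the container described by config.json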
14. But there is more!
Containers are managed at a higher level by orchestrators.
Docker Compose (single host only) and Docker Swarm, both part of Docker Engine
Marathon, on Apache Mesos
Cattle, now obsolete, by Rancher. Rancher 2.0 runs k8s
Kubernetes (k8s). Recently won the orchestrator wars.
If you plan to use containers, k8s should be your orchestrator of choice
K8s as a service is available from all major cloud providers - AWS (beta), Azure and GKE
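For a taste of Swarm (a minimal sketch; the stack name myapp and its compose file are assumptions):

docker swarm init                                # turn this engine into a single-node swarm
docker stack deploy -c docker-compose.yml myapp  # deploy a v3 compose file as a stack
docker service ls                                # list the running services
docker service scale myapp_web=3                 # scale one service out to 3 replicas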
15. In short, what's in it for me?
Containers are lightweight, or at least lighter than VMs, both in run-time resource usage and in size
Containers are immutable
Containers can even be read-only
Containers are meant to be ephemeral
Every container contains all needed dependencies and doesn't need anything else
Implications:
Dependency hell is gone. DLL hell memories resurface?
XAMPP is dead
Linux distro software choice is dead
Less software installed means less exploit surface
16. Hosting of container images (registries)
Docker Inc. runs Docker Hub
Library of public images
Docker Store - commercially available containers and Docker plugins
Docker Hub supports automated builds triggered on a commit in GitHub / Bitbucket
Storage for your images:
● free of charge for your public images
● has a cost for your private images
Alternatives are:
● Host a registry in a container on your own VPS or on premises
○ Docker Trusted Registry (Docker EE)
○ Red Hat OpenShift CR
○ JFrog Artifactory
○ Sonatype Nexus
● Amazon Elastic Container Registry; you need the AWS SDK
● Google Container Registry; you need the Google Cloud SDK
● Azure Container Registry
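The self-hosted case can be as small as one command with the official registry image (a sketch; localhost:5000 and the image names are illustrative):

docker run -d -p 5000:5000 --name registry registry:2
docker tag alpine localhost:5000/my-alpine   # retag an image for the private registry
docker push localhost:5000/my-alpine
docker pull localhost:5000/my-alpine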
17. [Graphic: survey of Docker registry usage]
18. Docker (the software) Flavors
Supported OSes for Docker CE:
● Linux (x86-64, ARM, ARM64, ppc64le, s390x)
● macOS, comes bundled with k8s
● Windows, comes bundled with k8s
● AWS
● Azure
Supported platforms for Docker EE:
● CentOS (x86-64)
● Oracle Linux (x86-64)
● RHEL, SUSE Linux ES, Ubuntu (x86-64 / ppc64le / s390x)
● MS Windows Server 2016 (x86-64)
● AWS
● Azure
● IBM Cloud
19. Docker Compose
Originally known as Fig
An orchestrator that uses IaC
The "cluster" configuration is stored in a YAML file (./docker-compose.yml)
Features are constantly added, thus there are many compose file versions. The latest is 3.6 as of Docker 18.02
The first line in the file states the minimum version
The file is split into 3 main sections - higher level abstractions, since 2.0: services, networks, volumes
If you plan to use Docker Swarm, then you have to use version 3
Docker EE also now supports K8s deployments from docker-compose.yml (see the example below)
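For illustration, a minimal docker-compose.yml along these lines (service names and images are made up):

version: "3.6"
services:
  web:
    image: node:8-alpine        # illustrative application image
    ports:
      - "3000:3000"
    networks:
      - frontend
    volumes:
      - appdata:/srv/app/data
  redis:
    image: redis:4-alpine
    networks:
      - frontend
networks:
  frontend:
volumes:
  appdata: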
20. Docker Compose Entities
services - The containers = instances of images. With Swarm you can have multiple instances per service - scaling up and down.
volumes / mounts - Persistently stored data. Otherwise data is gone when the container gets removed.
Mounts import data from the host and are shareable
Volumes are BLOBs and are shareable too
Volumes are abstracted through plugins
networks - The actual glue between the services
DC creates a default network if you are too lazy to create one.
This network is called <projectName>_default
<projectName> is derived from the CWD; pass -p to docker-compose for something else.
Networks can be seen by other projects and they are namespaced by project name.
Network "frontend" in project P1 can be attached in project P2 as an external network under the foreign name P1_frontend (see the sketch below).
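That cross-project case could look like this in P2's compose file (a sketch; p1_frontend assumes a project P1 that defines a network named frontend):

version: "3.6"
services:
  worker:
    image: alpine          # illustrative
    networks:
      - frontend
networks:
  frontend:
    external: true         # do not create; attach to an existing network
    name: p1_frontend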
21. Docker, where is my data?
Container images are made of layers
aufs (/var/lib/docker/aufs), superseded by overlayfs, shipped with Linux kernel 4.0
cat /proc/filesystems to see what filesystems your kernel supports
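Two quick ways to poke at this (illustrative commands; alpine is just an example image):

docker history alpine            # the layers an image is built from, with sizes
grep overlay /proc/filesystems   # does this kernel support overlayfs?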
22. Docker CLI
docker pull image[:tag|@digest], aka docker image pull
● a tag is a version; a digest is a sha256 digest (like a git commit hash)
docker push image:tag, aka docker image push
docker rmi image:tag, aka docker image rm
docker build, aka docker image build
● use --no-cache to rebuild from scratch
● use -t image:tag to add a name and version
docker images, aka docker image ls
docker image inspect
docker image inspect <imageid> | jq -r '.[].RootFS' shows all layers of an image
23. More Docker CLI
docker run, aka docker container run
docker exec, aka docker container exec
docker rm, aka docker container rm
docker ps, aka docker container ls
docker stop, aka docker container stop (SIGTERM)
docker kill, aka docker container kill (SIGKILL)
docker kill `docker ps -q` to kill 'em all (you might also need to remove them)
docker inspect
● inspects networks, containers, images
● gives you tons of info in JSON format. Use jq to process it.
docker container diff
docker network ls
docker network rm
docker network prune
docker system prune
24. Building a container image
docker build
● Simple - just run the command
● Transparent - the recipe for how to build is in the Dockerfile
● Self-contained - everything is in one place: the Dockerfile, the assets
ONBUILD Strategy
● The Dockerfile is a simple "FROM baseimage"
● Opaque, as the sysadmin defines what will happen
Asset Generation Pipeline Strategy
● Run different asset generators as separate containers
● SASS, composer, etc.
● An external driver is needed, like make, gulp, or just whatever your CI provides
● Pro - smaller images
● Con - complicated because of multiple moving parts
Multistage Builds Strategy
25. Multistage builds
Build different artifacts during different stages
Opt in to what to pull from a previous stage
In short: install the compile-time deps in the first stage, compile the app, then pull only the compiled code into the next stage, which will eventually be the delivered image (see the sketch below)
Pro: No need for an external driver like make, gulp, etc.
Pro: The recipe is in one place - the Dockerfile
Con: The Dockerfile becomes longish
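A hedged two-stage sketch (a hypothetical Go app; adapt the base images to your stack):

# stage 1: full toolchain, produces the binary
FROM golang:1.10 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .
# stage 2: only the artifact travels; this is the delivered image
FROM alpine:3.7
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]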
26. Dockerfile Instructions
ARG <name>[=<default value>]
● Declares a build-time argument to the Dockerfile. Pass a value to docker build.
FROM <image>[:<tag> | @<digest>] [AS <name>]
● Declares the base image to inherit from
● FROM can use ARG
● AS is for multistage builds
RUN ( <command> | ["exe", "param1" …] )
● Execs a command in its own layer
● Setting an ENV var is allowed by prefixing the command with key=value
CMD
● The command to execute when starting the container
● One per file
● This is not for executing statements
● See also ENTRYPOINT; when CMD declares no executable, ENTRYPOINT may use it as its arguments
LABEL <key>=<value> <key>=<value> …
● For setting metadata which can be queried later
● LABEL version="1.0" vendor="com.dnhsoft"
● Use LABEL instead of MAINTAINER
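Putting those together (an illustrative sketch, not the demo app; the vendor label is reused from the slide):

ARG BASE_TAG=3.7
FROM alpine:${BASE_TAG}
LABEL version="1.0" vendor="com.dnhsoft"
RUN apk add --no-cache curl
CMD ["curl", "--version"]
# build with: docker build --build-arg BASE_TAG=3.8 -t demo:1.0 .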
27. But there is more...
EXPOSE
● Tells the Docker daemon the port will be exposed
● Doesn't expose the port automagically; to do so, use docker run -p XXXX:YYYY
ENV (key value | key=value …)
● Sets an ENV variable which is valid until the end of the Dockerfile
● The ENV will also exist during container runtime
COPY [--chown=<user>:<group>] <src>... <dest>
● Copies files and dirs into the container at <dest>
● Allows chowning to user:group
● Wildcards are possible
● If <dest> is relative then WORKDIR is used for resolving the path
● You can't use files/dirs up the tree as <src>
● Use .dockerignore if you want to skip files when using wildcards
ADD [--chown=<user>:<group>] <src>... <dest>
● Same as COPY, but also:
● Supports <src> from a URL
● Local tar.gz|bz2|xz archives are decompressed
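A sketch for this group (image, paths and URL are assumptions):

FROM node:8-alpine
ENV NODE_ENV=production PORT=3000
COPY --chown=node:node . /srv/app
# ADD fetches URLs; note only *local* tar archives are auto-extracted
ADD https://example.com/assets.tar.gz /tmp/assets.tar.gz
EXPOSE 3000
# EXPOSE alone publishes nothing; run with: docker run -p 3000:3000 …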
28. Hungry and ready for lunch?
ENTRYPOINT ["executable", "par1", "par2"]
● Turns the container into a command
● When you run a container, the command you pass is appended to the ENTRYPOINT
● http://www.johnzaccone.io/entrypoint-vs-cmd-back-to-basics/
VOLUME /path/to/dir
● Declares the intent to mount at the location
● The real mount happens with docker run -v hostdir:/path/to/dir
SHELL ["executable", "parameters"]
USER <UID>[:<GID>]
● Sets the uid:gid of subsequent commands
● Sets the uid:gid at container runtime
● Please use it; otherwise root = too many rights
WORKDIR /path/to/workdir
● Sets $(PWD)
● The parameter can be absolute or relative
● When relative, it is appended to the current value
● Very much like cd /path/to/workdir
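One more illustrative sketch (the app and paths are made up):

FROM python:3.6-slim
WORKDIR /srv/app                 # like cd; created if missing
COPY app.py .
VOLUME /srv/app/data             # intent: mount point for runtime data
USER 1000:1000                   # don't run as root
ENTRYPOINT ["python", "app.py"]
# extra args passed to docker run are appended to the ENTRYPOINT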
29. Here come the last ones before the demo
ONBUILD [INSTRUCTION]
● Schedules INSTRUCTION to be executed when building a child image. A trigger.
● Multiple ONBUILD triggers are executed in the same order
● Allows one-liner child Dockerfiles: FROM base-onbuild:1.2
STOPSIGNAL
● Sets the signal to send when stopping
● Could be a number, like 9, or a name, like SIGKILL
HEALTHCHECK [OPTIONS] CMD
● Allows Docker to check the healthiness of the container by executing CMD
● CMD should return 0 for healthy and 1 for unhealthy
● docker ps shows the status
● --interval=TIME, runs every TIME
● --timeout=TIME, the probe fails after TIME
● --retries=N, run the probe up to N times consecutively
● --start-period=TIME, wait TIME after container start before running the probe. Useful for containers with long boot times
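For example, a health probe might look like this (a sketch; the nginx base and the curl endpoint are assumptions):

FROM nginx:1.13
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=30s --timeout=3s --retries=3 --start-period=10s \
  CMD curl -f http://localhost/ || exit 1
# docker ps will then show the container as (healthy) or (unhealthy)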
After this introduction, let's move to the topic of the presentation.
Docker in the title of the presentation is not clickbait, but before I go deep into Docker I will cover some basic things about containers.
I will start with what a container is from the Linux point of view. After that I will give a brief overview of the technologies related to containers.
Later we will switch gears and see what Docker offers.
Finally, this presentation will finish with a live demo of a development setup that uses Docker under Linux.
First of all, Docker is a company. It created a product which is easy to use and brings software development and software operations to the next level.
Because Docker brought container technology to the masses, Docker equals containers. Similar to how the verb "to google" means "to search", "to dockerize" means "to containerize".
To be clear, containers were not invented by Docker, just as GNU/Linux is not the first operating system of its kind, but both became widespread because of ease of use and thus a low barrier to entry.
One gear back. What is a container? A container is an isolated environment. Compared to virtual machines it is lightweight; it is OS environment virtualization as opposed to physical resource virtualization.
A VM contains a whole operating system. It is a virtualized computer. A container might contain a full blown OS, but this is rarely the case. What a container doesn't contain is a kernel. The kernel of the host machine is used to execute the binaries of the container.
So, with VMs there is a hypervisor to manage the usage of resources. With containers the OS kernel does it. There is a real implication because of that: you can't boot a container which contains a different OS. With VMs this is perfectly possible. However, it is possible to mix and match different Linux distributions. The host could be Ubuntu Server and in the container we might have Alpine or CentOS.
Typically only one process, or in the case of an application that uses fork, only one application runs inside the container.
Here I should mention that previous work in this space exists - Solaris Zones and FreeBSD Jails.
The Linux kernel has a couple of subsystems which were not created directly with the idea of containers, but which, when combined, allow the implementation of containers.
The first feature is namespaces. Namespaces provide isolation. This is very similar to namespaces in the programming languages that provide them. For example, different network namespaces provide isolation and allow several network stacks to run in parallel without conflicts.
Control Groups is another feature which is essential for containers. It provides means for metering resource utilization and limiting it, if need be. This means that a container can have an upper limit on memory usage and on the CPU slices it will get. This feature is on par with VMs.
Now, after we saw what features lay the ground for containers, let's talk a bit about how containers are run.
Containers are not run by the kernel. We need a special process for this. This process is called a runtime. There are many runtimes out there.
LXC/LXD is the very low level. LXC comes from Linux Containers. For example, already years ago it was possible to run Vagrant VMs with LXC as a backend (instead of VirtualBox or VMware Player). Back in the days when Vagrant was the cutting edge, this was very useful, as it was possible to run multiple virtual machines on non-server machines without killing the performance.
rkt is another runtime, which is deprecated. It was developed by CoreOS. CoreOS was recently acquired by Red Hat. CoreOS was/is a product with only containers in mind, based on systemd, with fleet as the orchestrator. I will talk more about orchestrators a bit later. rkt supports both Docker and OCI images. More about images later in the talk.
runC is an OCI (Open Container Initiative) compliant implementation of a runtime, developed by Docker Inc. and implemented in Golang. Golang is the base technology for a lot of new projects which we put in the category of systems programming. runC is a spin-off from Docker Engine. Docker Engine was the monolithic product developed by Docker Inc. Because of fears in the community about vendor lock-in, there were talks about creating a whole new stack. Because of that, Docker started splitting the monolith into separate products which then build up the whole stack. runC and containerd are both such products.
containerd is a layer above runC and provides higher level abstractions.
Railcar is an OCI compliant runtime, implemented in Rust (another language for systems programming) by Oracle. I suppose Oracle Cloud uses Railcar.
So far the Open Container Initiative has released two specifications - image and runtime.
CRI-O is a runtime that is product specific. It is an implementation of the Kubernetes Container Runtime Interface to use OCI compatible runtimes. More about Kubernetes later in the talk.
As we talked about runtimes, we should also mention orchestrators.
Orchestrators are higher level products in the stack. This is where the real differentiation happens.
Let's start with Docker Compose. Docker Compose is massively used during development and can be used for production deployments of relatively simple setups, as it supports only single hosts. Docker Compose started as a product called Fig and was acquired by Docker Inc.
Docker Swarm is the orchestrator implemented by Docker Inc. for distributed and highly available setups. For ease of use, the deployments are described in a declarative way and are an extension of the configuration used by Docker Compose. Declarative deployments, also known as infrastructure as code, are currently very popular, especially for cloud development. They allow the procedures to be kept as part of the version control system. Terraform and Puppet are well known IaC products for VMs.
Marathon is the orchestrator of the Apache Mesos platform.
Cattle was the orchestrator of Rancher v1. Rancher v2 uses Kubernetes. RancherOS is a Linux-based OS for running containers.
The last and the very important one is Kubernetes. It has recently won the orchestrator war. Kubernetes as a service is now offered by all big three cloud providers - AWS (as beta), Azure and GKE. CoreOS and Rancher switched to Kubernetes. Apache Mesos supports Kubernetes. Even Docker Inc. integrated K8s, and the users of Docker Engine can choose whether to use Swarm or K8s.
So, let’s see what containers offer to the IT crowd
Containers are lightweight compared to VMs still allowing resource isolation, metering and limits. The isolation is not on hardware level, which could be problematic at times.
Containers are immutable and also read-only. In the first case the container in memory could be changed but the changes are gone if the container is restarted or killed. The container is the in memory representation of an image. A container is a process, an image is an executable as a comparison. Containers can be even read-only. Well, the file systems can be mounted read-only. This is very good from security point of view. A non-read-only container can be inspected for the changes compared to the image used to start the container. Perfect for security audits.
Containers are meant to be ephemeral although nowadays even database servers are ran as containers with the storage mounted from outside. Ephemeral containers could be killed at any time. This might happen because there is a new version of the container, or the container will be migrated to a different host (pod), or because the container misbehaves. This feature allow also blue/green deployments (rolling updates) to be easily implemented.
Every container comes with all the dependencies it needs - there is no DLL hell anymore. However, unlike real or virtual machines, it is not possible to centrally update a library once, in case of a security problem, and thereby fix every program; each affected image has to be rebuilt.
Another implication is that, for example, XAMPP is dead. There is no need for pre-bundled distros: just pull the language runtime, web server and database server of your choice - mix and match, controlled directly by the developer. In this regard the software choices of a Linux distro are no longer a problem. Recall that Debian switched from MySQL to MariaDB. Who cares what comes with Debian when one can pull a container, and one doesn’t even need apt or yum repos when the latest version is pushed as a container. A container contains only the libraries needed for the particular service, which in turn means a smaller attack surface. A container might be a few megabytes in size, or tens of megabytes, rarely over a gigabyte.
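For instance (a sketch - the tag, container name and password are placeholders): whatever Debian ships, you can run the MariaDB version you want with a single command:

    $ docker run --name db -e MYSQL_ROOT_PASSWORD=secret -d mariadb:10.3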
Containers are started from images. These images are in most cases stored in a centralized manner: images are hosted in registries. There are two versions of the registry API; currently v2 is in use.
Docker Inc. runs a registry called Docker Hub. Docker Hub can host public and private images. Private images come at a cost. Public images are either free of charge or can cost the end user something, similar to how AWS AMIs can be offered for a charge.
Docker Hub can also be used to build images, thus becoming part of a CI pipeline that waits for changes coming from GitHub and Bitbucket.
The alternative to Docker Hub is a self-hosted registry - either in the cloud or on premises.
In the cloud, all three big players offer container registries to be used with their containers-as-a-service products, but they are not limited to that usage.
Here is a graphic from a recent survey regarding the usage of Docker registries.
Docker comes in two flavours: Community Edition (CE) and Enterprise Edition (EE). CE is free of charge and supports these platforms. EE costs money, supports Kubernetes and is available on these platforms.
The easiest way to start using containers when doing software development, especially web development, is Docker Compose.
As mentioned, Docker Compose was called Fig before it was acquired by Docker Inc.
Docker Compose is an infrastructure-as-code tool. The infrastructure of the project is statically defined in a declarative file in YAML format, named docker-compose.yml.
Docker Compose constantly adds features, mostly regarding Swarm, while deprecating previous ones. For this reason there are multiple versions of the Compose file format. There are three major versions; v3 is the latest.
In v2 the file format was reorganized to create higher-level abstractions called services, networks and volumes.
If you plan to use Docker Swarm as an orchestrator, then only major version 3 is available to you.
The container instances are called services. Each service configures its container: for example mounts, volumes, ports, attached networks, and so on - see the sketch below.
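A minimal sketch of such a file, assuming a made-up application image and the stock redis image; the service names, port numbers and volume name are illustrative only:

    # docker-compose.yml
    version: "3"
    services:
      web:
        image: example/webapp:1.0    # hypothetical application image
        ports:
          - "8080:80"                # host:container port mapping
        depends_on:
          - cache
      cache:
        image: redis:4
        volumes:
          - cache-data:/data         # named volume for persistent data
    volumes:
      cache-data: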
Volumes are for persistently stored data. A volume can be either a named volume managed by Docker or a mounted host directory. There are also volume plugins, so the volume might not be local at all.
Networks are the glue between the services. There is always a default network, in case you are too lazy to create your own. The default network is named projectName_default, where projectName is derived from the current working directory.
Networks are used like VLANs to separate traffic. For example, a typical web project will have a frontend and a backend network, or even more. The database will sit on one network, connected to the application. The web clients will connect through the frontend network, where the web server and the application reside. For logging with ELK there might be yet another network, and so on.
Other projects can attach to existing networks; thus, interoperability between projects is possible (see the sketch below).
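A hedged sketch of the frontend/backend split, plus attaching to another project’s network declared as external; all names here are invented for illustration:

    # excerpt from docker-compose.yml
    services:
      app:
        image: example/app:1.0
        networks: [frontend, backend]
      db:
        image: mariadb:10.3
        networks: [backend]          # not reachable from the frontend
      web:
        image: nginx:1.15
        networks: [frontend, logging]
    networks:
      frontend:
      backend:
      logging:
        external:
          name: elk_default          # pre-existing network of another project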
Docker uses union file systems for data storage. A union file system works like a series of diffs: when a file is wanted, it is searched for through a linked list of layers.
When doing development on the local machine, interaction with the containers happens through the docker and docker-compose CLIs.
Here are some of the most commonly used CLI commands.
docker pull is used to pull an image from a remote registry. If it is a public image on Docker Hub, there is no need to log in; however, if it is a private image, or you are using a private registry (on premises or in the cloud), the first thing to do is log into the registry. An image always has a tag and a digest. In most cases it will be pulled by tag; if the tag is omitted, latest is used. For stable environments latest should be avoided - stick to a particular tag. When an image is pushed, a digest is created. This digest can also be used to pull the image. The digest can never change, so you will always pull the same image. Tags, on the other hand, can be changed, so different environments might get different images. Tags should not be abused in this way, but sometimes it happens.
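A few sketches of the above (the registry host and the digest are placeholders):

    $ docker login registry.example.com     # only needed for private registries
    $ docker pull nginx:1.15                # pull by tag
    $ docker pull nginx                     # no tag given, :latest is implied
    $ docker pull nginx@sha256:<digest>     # pull by immutable digest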
docker push pushes an image. As already stated, the digest of the newly pushed image will be shown.
docker rmi removes an image from the local machine. It won’t delete the image from the registry - deleting an image from a registry is often hard, if not impossible.
docker build creates an image using a Dockerfile as a blueprint. The Dockerfile might refer to other files too.
Here you see that there are two commands that do the same thing (for example, docker images and docker image ls). At some point the CLI was refactored so that more commands could be added easily: a level of indirection - namespacing - was introduced.
docker images shows all images that exist locally. Some of them might not exist anywhere else; this can happen if the image was never pushed to a registry, or was created on demand by docker-compose with a build directive. That usage of docker-compose is better avoided, as two different machines could end up with different images. Images are better pre-built by one person in the team and then pushed to a registry, so that every developer uses the same image.
docker image inspect shows a lot of useful information about an image, in JSON format. By using jq as a JSON filter we can easily extract particular pieces from the JSON object instead of paging through it with more / less.
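A quick sketch (the image name is just an example; inspect returns a JSON array, hence the .[0]):

    $ docker image inspect nginx:1.15 | jq '.[0].Config.Env'   # only the environment variables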
docker run starts a new container.
docker exec executes a command in an existing container.
docker rm removes a container (not the image).
docker ps shows the running containers. -a shows all containers, even the stopped ones. -q is quiet mode, which means only the IDs are shown - useful for feeding into kill.
docker stop stops a container. It can later be resumed with docker start.
docker kill stops a container with SIGKILL (9). As an example, to kill and remove all containers use the command shown below.
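The notes refer to a command on the slide; a common sketch of it is:

    $ docker kill $(docker ps -q)     # SIGKILL every running container
    $ docker rm $(docker ps -aq)      # then remove all containers, stopped ones included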
docker inspect shows information about different kinds of entities: it can be used for networks, containers and images.
docker container diff shows the changes in a container (its mounted file system) compared to the image used to start it. The changes live in their own layer. This layer could be committed as a new image, but it is strongly advised not to do so.
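A sketch, assuming a container named web exists; the output marks each path as A (added), C (changed) or D (deleted):

    $ docker container diff web
    C /var/log
    A /var/log/nginx/access.log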
docker network ls shows all existing networks. When a project is stopped, its networks continue to exist. They can be removed manually with docker network rm or docker network prune.
docker system prune cleans the system of unused resources. This can often free gigabytes of space taken by old, unused images.
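A sketch of a typical cleanup session:

    $ docker network prune      # remove networks not used by any container
    $ docker system prune -a    # also remove all images not used by at least one container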
Building an image might be a complex operation.
The docker build command is used for this, as already shown. The Dockerfile is the recipe, read and executed line by line. Every line becomes a different layer in the image, so it is advised to join multiple commands into one. Commands that only create temporary data should also be joined, so that the temporary data is not committed to the layer once the command finishes. This is typically done with apt-get update, which downloads tons of package metadata that is not needed in the final image (see the example below).
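A classic sketch of this pattern (the package name is illustrative):

    # one RUN = one layer: update, install and clean up together,
    # so the apt metadata never gets committed into a layer
    RUN apt-get update \
     && apt-get install -y --no-install-recommends curl \
     && rm -rf /var/lib/apt/lists/*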
The Dockerfile might refer to external files, for example assets to be copied into the container. These assets might be source files, scripts, images, etc.
In the end, the container image is self-contained.
There are a few strategies for building containers.
When using ONBUILD, a base image is inherited. This base image is created by some skillful person, and the derived Dockerfile is pretty simple. The problem is that the skillful person might become a single point of failure, and the knowledge is not spread. Another strategy is to run different asset generators as part of an external pipeline (for example with Make). These generators create the end files, which are then copied into the container. The resulting images are small.
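A hedged sketch of the ONBUILD strategy; the base image name is invented:

    # Dockerfile of the base image (built by the skillful person)
    FROM node:8
    WORKDIR /app
    ONBUILD COPY package.json /app/    # these run at *derived* build time
    ONBUILD RUN npm install
    ONBUILD COPY . /app

    # Dockerfile of the derived project - trivially simple
    FROM example/node-onbuild:1.0
    CMD ["node", "index.js"]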
The third strategy is multi-stage builds.
Multi-stage builds make it possible to skip the external pipeline and rely on the Dockerfile alone. They allow cherry-picking assets from a previously built stage in the same Dockerfile. So, let’s say in one stage we load the vendors/ directory with Composer (PHP); then we copy vendors/ into the next stage, but not Composer itself. When using C/C++, the whole build toolchain is installed in one stage, and in the next stage only the executable is copied from the former. In the end the build toolchain is gone, as it was never copied. Dockerfiles using this strategy tend to become long. A sketch follows below.
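A minimal sketch of the C variant (the file names are placeholders; the binary is statically linked so it can run in an empty scratch image):

    # build stage: full toolchain available
    FROM gcc:7 AS build
    COPY hello.c /src/hello.c
    RUN gcc -static -o /src/hello /src/hello.c

    # final stage: only the executable survives, the toolchain is gone
    FROM scratch
    COPY --from=build /src/hello /hello
    ENTRYPOINT ["/hello"]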