Docker is changing the way we create and deploy software. This presentation is a hands-on introduction to using Docker to build and test software, on your laptop and on your Jenkins CI server.
ExpoQA 2017: Using Docker to build and test in your laptop and Jenkins (ElasTest Project)
In this workshop the basics of container use in the development environment are presented. Then we go further by describing how to leverage containers in the CI server, using Jenkins and Pipelines.
3. Consultancy / Training
Cloud Computing
Web Technologies
Extreme Programming
Testing / Git / Jenkins
Software Architecture
Concurrent Programming
Open source elastic platform for end-to-end testing
http://codeurjc.es http://elastest.io
Advanced log management
Test orchestration
Test execution comparison
Web and Mobile testing
Security testing
IoT testing
Performance testing
4. Virtualization and Containers
● Developers want to reduce the differences between local, continuous integration and production environments
● Avoiding "It works on my machine" type of problems
Virtualization Containers
5. Virtualization and Containers
● Virtualization
– Full-fledged Virtual Machine (VirtualBox)
– Developer-friendly managed VM (Vagrant)
● Containers
– Docker
7. VirtualBox
● Developed by Oracle (previously owned by Sun Microsystems)
● Mostly open source, with several free (but closed-source) modules
● Windows, Linux and Mac versions
● Advanced desktop virtualization
– Shared folders between host and guest
– Advanced keyboard and mouse integration
– 3D graphics acceleration
– Webcam support
https://www.virtualbox.org/
9. VirtualBox
● Manual setup
– Create an empty virtual machine
– Connect it to an ISO (simulating a real CD device)
– Install a full-fledged operating system
– It is time-consuming, and it is not easy to share VMs between developers
13. Vagrant
● A command line utility to manage VMs
● Makes it very easy to download and start a new VM (with a single command)
● Allows provisioning the new VM with common provisioning tools (script, Chef, Puppet, Ansible…)
● The VM configuration is specified in a text file, allowing it to be shared in the git repository
https://www.vagrantup.com/
14. Vagrant
● How to create a new VM with Ubuntu Xenial
● Vagrant manages certificates and networking to make it easy to connect to the new VM
● By default, the working directory is shared with the VM
$ vagrant init ubuntu/xenial64
$ vagrant up
$ vagrant ssh
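The `vagrant init` command above generates a Vagrantfile in the working directory. A minimal sketch of what that file can look like once a shell provisioner is added (the provisioning commands here are an illustrative assumption, not part of the generated file):

```ruby
# Vagrantfile created by `vagrant init ubuntu/xenial64`,
# extended with a simple shell provisioner
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  # By default, the directory holding this Vagrantfile is shared as /vagrant
  # config.vm.synced_folder ".", "/vagrant"

  # Provision the VM with an inline shell script
  # (Chef, Puppet or Ansible provisioners work the same way)
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y openjdk-8-jdk
  SHELL
end
```

Because this file lives next to the project sources, it can be committed to the git repository so every developer gets the same VM.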
16. Docker
● With VMs you can have the production environment on your laptop
● But…
– VMs take minutes to start up
– VMs use (waste?) a lot of resources (memory and disk space)
17. Docker
● At first glance, containers can be considered "lightweight VMs"
– They contain an isolated environment to run apps
– They start in milliseconds
– They use only the resources they need
– A container doesn't have a full-fledged operating system, only the minimal software to execute apps
19. Docker
● Containers and VMs are very different

Virtual Machines | Containers
Heavier | Lighter
Execute several processes per Virtual Machine | Usually execute only one process per container
SSH connection | Direct execution in the container (rarely needed)
More isolated, using a hypervisor | Less isolated, because they are executed using kernel features
Can virtualize Windows over Linux | Linux containers must be executed on Linux hosts*

* More on that later
20. Docker
● To install an application on a Linux system you need all its dependencies installed
● There can be incompatibilities between applications that need different versions of the same dependency
● Docker includes in a container all the software the app needs, isolated from the rest of the system
22. Docker
● Docker containers OS support
– Linux containers
● Very mature technology
● Can be used in any* Linux distribution
– Windows containers
● Preliminary technology
● Can only be used in very recent** Windows Server versions
* Kernel version 3.10 or greater, published in June 2013
** Windows Server 2016 (Core and with Desktop Experience), Nano Server, and Windows 10 Professional and Enterprise (Anniversary Edition).
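A quick way to check whether a Linux host meets the kernel requirement above is to compare `uname -r` against 3.10. A small sketch, assuming GNU `sort -V` is available for the version comparison:

```shell
#!/bin/sh
# Check that the running kernel is at least 3.10, as Docker requires
required="3.10"
current=$(uname -r | cut -d- -f1)

# sort -V orders version strings; if the required version sorts first,
# the current kernel is the same or newer
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
  echo "Kernel $current is new enough for Docker (>= $required)"
else
  echo "Kernel $current is too old for Docker (need >= $required)"
fi
```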
23. Docker
● You can execute Linux containers on any operating system
● Docker uses virtualization (under the covers) on Mac and Windows
24. Docker
● Docker Toolbox for Mac and Windows
– Uses VirtualBox for virtualization
– Not the same development experience as on Linux
● Docker for Mac and Windows
– Uses the native virtualization technology of each operating system
– Only available in recent versions of those OSes
26. Docker concepts
● Docker Image
– Basic template for a container (like the hard disk of a VM)
– It contains the OS (Ubuntu), libs (Java) and the app (webapp.jar)
– A container is always started from an image
– If you want to start a new container from an image that is not on your system, it is automatically downloaded from the Internet
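Images like the one described above are usually built from a Dockerfile. A minimal sketch for the "Ubuntu + Java + webapp.jar" example in the slide (the package and file names are illustrative assumptions):

```dockerfile
# Image = OS (Ubuntu) + libs (Java) + app (webapp.jar), as in the slide
FROM ubuntu:16.04

# Install the Java runtime the app needs
RUN apt-get update && apt-get install -y --no-install-recommends \
        openjdk-8-jre-headless \
    && rm -rf /var/lib/apt/lists/*

# Copy the application into the image
COPY webapp.jar /opt/webapp.jar

# Command run when a container is started from this image
CMD ["java", "-jar", "/opt/webapp.jar"]
```

Building it with `docker build -t webapp .` produces an image that any container can be started from.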
27. Docker concepts
● Docker Registry
– Remote service used to store and retrieve docker images
– It can hold several versions of the same image
– All versions of the same image are located in the same repository (like in git)
– Docker Hub is a public registry managed by Docker Inc.
– You can buy private repositories in Docker Hub
– You can also operate your own private registry
29. •Docker Container
– It is the “equivalent” of a virtual machine
– A container is created from a Docker image
– When a file is written, the image is not
modified; the container is
– It can be started, paused or
stopped
Docker concepts
30. •Docker Engine
– Local service used to control Docker
– Manages images (download, create, pull,
push…)
– Manages containers (start, stop, commit…)
– It can be used with the Docker client or
through its REST API
Docker concepts
31. •Docker client
– Command line interface (CLI) tool to
control the Docker engine
– It is available when Docker is installed on a
system, connecting to the local Docker
engine
Docker concepts
33. First steps with docker
Install Docker
– Windows:
● Microsoft Windows 10 Professional or Enterprise 64-bit:
https://store.docker.com/editions/community/docker-ce-desktop-windows
● Other Windows versions: https://www.docker.com/products/docker-toolbox
– Linux:
● Ubuntu: https://store.docker.com/editions/community/docker-ce-server-ubuntu
● Fedora: https://store.docker.com/editions/community/docker-ce-server-fedora
● Debian: https://store.docker.com/editions/community/docker-ce-server-debian
● CentOS: https://store.docker.com/editions/community/docker-ce-server-centos
– Mac:
● Apple Mac OS Yosemite 10.10.3 or above:
https://store.docker.com/editions/community/docker-ce-desktop-mac
● Older Mac: https://www.docker.com/products/docker-toolbox
34. First steps with docker
Hands on…
https://github.com/docker/labs/tree/master/beginner
35. First steps with docker
Testing if docker is correctly installed
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
03f4658f8b78: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working
correctly.
...
36. First steps with docker
Running your first container
$ docker run alpine ls -l
total 48
drwxr-xr-x 2 root root 4096 Mar 2 16:20 bin
drwxr-xr-x 5 root root 360 Mar 18 09:47 dev
drwxr-xr-x 13 root root 4096 Mar 18 09:47 etc
drwxr-xr-x 2 root root 4096 Mar 2 16:20 home
drwxr-xr-x 5 root root 4096 Mar 2 16:20 lib
......
......
37. First steps with docker
Running your first container
$ docker run alpine ls -l
Command “run”
Creates a new
container and starts it
38. First steps with docker
Running your first container
$ docker run alpine ls -l
Image name
alpine is a minimal Linux system
(4.8 MB). The image is downloaded if it
is not stored on the local machine
39. First steps with docker
Running your first container
$ docker run alpine ls -l
Command “ls -l”
This command will be
executed inside the
running container
40. First steps with docker
Inspecting the downloaded images
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
alpine latest c51f86c28340 4 weeks ago 1.109 MB
hello-world latest 690ed74de00f 5 months ago 960 B
List all images stored in the system
41. First steps with docker
Executing a container
$ docker run alpine echo "hello from alpine"
hello from alpine
Execute the command “echo” inside the container
42. First steps with docker
Inspecting containers (executing)
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a6a9d46d0b2f alpine "echo 'hello from alp" 6 minutes ago Exited (0) 6 minutes ago lonely_kilby
ff0a5c3750b9 alpine "ls -l" 8 minutes ago Exited (0) 8 minutes ago elated_ramanujan
c317d0a9e3d2 hello-world "/hello" 12 minutes ago Exited (0) 12 minutes ago stupefied_mcclintock
It shows the containers in the system.
All of them have STATUS Exited. These containers are
not currently executing (but they use disk space)
43. First steps with docker
Interactive commands in containers
$ docker run -it alpine /bin/sh
/ # ls
bin dev etc home lib linuxrc media mnt proc
root run sbin sys tmp usr var
/ # uname -a
Linux 97916e8cb5dc 4.4.27-moby #1 SMP Wed Oct 26 14:01:48 UTC 2016 x86_64
Linux
/ # exit
$
To execute an interactive command you need the
option “-it”, which connects your console to the
command running in the container
44. First steps with docker
● Interactive commands in containers
– When you execute a /bin/sh command in
a container it offers an experience “similar”
to an SSH connection
– But there is no SSH server and no SSH
client
– It is just a shell executing inside the
container
45. First steps with docker
● Managing containers lifecycle
$ docker run -d seqvence/static-site
Option “-d”
Executes the container
in background
46. First steps with docker
● Managing containers lifecycle
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a7a0e504ca3e seqvence/static-site "/bin/sh -c 'cd /usr/" 28 seconds ago Up 26 seconds
Container id is
a7a0e504ca3e
This id is used to refer to
this container
STATUS is Up
47. First steps with docker
● Managing containers lifecycle
– Stop running container
– Delete files of the stopped container
$ docker stop a7a0e504ca3e
$ docker rm a7a0e504ca3e
48. Net services with docker
● Start container exposing a port
docker run --name static-site
-e AUTHOR="Your Name" -d
-p 9000:80 seqvence/static-site
49. Net services with docker
● Start container exposing a port
docker run --name static-site
-e AUTHOR="Your Name" -d
-p 9000:80 seqvence/static-site
--name static-site
Specifies a unique name
for the container
50. Net services with docker
docker run --name static-site
-e AUTHOR="Your Name" -d
-p 9000:80 seqvence/static-site
-e AUTHOR="Your Name"
Sets the environment variable
AUTHOR to the value “Your Name”
● Start container exposing a port
51. Net services with docker
docker run --name static-site
-e AUTHOR="Your Name" -d
-p 9000:80 seqvence/static-site
-d
Executes the container as a daemon (in the background)
● Start container exposing a port
52. Net services with docker
docker run --name static-site
-e AUTHOR="Your Name" -d
-p 9000:80 seqvence/static-site
-p 9000:80
Maps host port 9000 to
port 80 in the container
● Start container exposing a port
53. Net services with docker
● Use the service
– Open http://127.0.0.1:9000 in a browser on
your host to access port 80 in the container
54. Net services with docker
● Use the service
– If you are using Docker Toolbox for Mac or
Windows you can’t use the 127.0.0.1 IP
– Instead, open
http://192.168.99.100:9000/ in the browser
$ docker-machine ip default
192.168.99.100
55. Net services with docker
● Container management
– Stop and then remove the container
– Stop and remove a running container in one step
– Remove all containers
$ docker stop static-site
$ docker rm static-site
$ docker rm -f static-site
$ docker rm -f $(docker ps -a -q)
56. Managing docker images
● List images in host
Tag is like “version”. Latest is… the
latest ;)
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
seqvence/static-site latest 92a386b6e686 2 hours ago 190.5 MB
nginx latest af4b3d7d5401 3 hours ago 190.5 MB
python 2.7 1c32174fd534 14 hours ago 676.8 MB
postgres 9.4 88d845ac7a88 14 hours ago 263.6 MB
Containous/traefik latest 27b4e0c6b2fd 4 days ago 20.75 MB
...
57. Managing docker images
● Managing versions
– Download a concrete version
– Download latest version
$ docker pull ubuntu:12.04
$ docker pull ubuntu
60. Managing docker images
● Image types
– Base images
● Images without a parent image
● Usually an operating system
● Examples: Ubuntu, Debian, Alpine…
– Child images
● A base image plus some additional software
● Examples: Nginx, Apache, MySQL...
61. Managing docker images
● Official vs User images
– Official images
● Images created by trusted
companies or communities
– User images
● Any user can create an account and
upload her own images
62. Managing docker images
● Create your first image
– We will create a web application that displays
random cat pics, using Python
– Create a folder called flask-app
– Download all the files at this URL into the folder
https://github.com/docker/labs/tree/master/beginner/flask-app
63. Managing docker images
● Create your first image
– You have all the source files for the web
application
– But you need Python and Flask to execute
the app
– To execute the web application, you will
create a new image with the dependencies
(Python and Flask) and your application code
– Then you can create a new container to
execute your application
64. Managing docker images
● Dockerfile
– File used to describe a new image
– Specifies
● The base image
● Commands to execute in the image
● Files to include in the image from the
project folder
● Ports to open
● The command to execute when the container starts
66. Managing docker images
● Dockerfile
– FROM: Base image
– COPY: Copy files from the Dockerfile folder
– RUN: Execute commands in the image
– EXPOSE: Ports to publish
– CMD: Command to execute when the
container is started
https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
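As an illustration of those instructions, a Dockerfile for the flask-app could look like the sketch below (the filenames app.py, requirements.txt and templates/ are assumed from the lab repository; the exact base image in the lab may differ):

```dockerfile
# Base image: a minimal Python runtime
FROM python:2.7-alpine

# Install the dependencies (Flask) listed in requirements.txt
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt

# Include the application code and templates from the project folder
COPY app.py /usr/src/app/
COPY templates/ /usr/src/app/templates/

# Port the Flask app listens on
EXPOSE 5000

# Command to execute when the container starts
CMD ["python", "/usr/src/app/app.py"]
```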
67. Managing docker images
● Build the image
– In the folder with the Dockerfile, execute
– Executed actions
● Creates a new container from the base image
● Executes the commands and copies the app files
● Creates a new image from the result
$ docker build -t myfirstimage .
68. Managing docker images
● Run the new image
– Open http://127.0.0.1:9000/ in the browser
– Windows and Mac users with Toolbox use the docker-machine IP instead
$ docker run -p 9000:5000 myfirstimage
* Running on http://0.0.0.0:5000/
(Press CTRL+C to quit)
69. Managing docker images
● Build the image again
– Change some HTML in templates/index.html
– Create the image again
– Dockerfile steps without changes are not
re-executed (they are reused from the previous build)
– The image is created very quickly because only
the file copy is performed
$ docker build -t myfirstimage .
70. Volumes
● Volumes
– Allow sharing files between host and container
– Execute a container to show a nonexistent file
– Create a text file
$ docker run alpine cat /data/file.txt
cat: can't open '/data/file.txt': No such
file or directory
$ echo "My file" >> file.txt
71. Volumes
Volumes
● Mount a host folder inside a container folder
● The host folder’s contents replace the container’s
contents of that folder
● Containers can write files to volumes to make
them available on the host
$ sudo docker run -v $PWD:/data alpine
cat /data/file.txt
My file
72. Volumes
● Volumes
– Docker images use volumes to read files
from the host
– The official NGINX container can serve host
files over HTTP
● Serving current folder files ($PWD)
● Go to http://127.0.0.1:9000/file.txt
https://hub.docker.com/_/nginx/
$ docker run -p 9000:80 -v
$PWD:/usr/share/nginx/html:ro -d nginx
73. Volumes
● Volumes
– Docker Toolbox for Windows or
Mac only allows folders
inside the user folder to be
used as volumes
– You can use other folders
but you have to configure
shared folders in
VirtualBox
https://hub.docker.com/_/nginx/
74. •Containers main use cases
– Net service
● Executed in the background for a long time
● Used through the network
● Ex: databases, web servers...
– Command
● Executes a single command and stops
● Reads and writes host files through volumes
● Ex: Java compiler, jekyll, ffmpeg...
Docker container usage
75. •Docker for building software
– A container can have the whole environment
needed to execute a developer tool
– For example, you can have the compiler and the
test dependencies in a container
– You can clone a git repository and execute the
(dockerized) compiler without installing any
software on your host
Docker for software developers
76. •Dockerized Java Maven
– Clone a maven repo
– Compile and execute the tests
Docker for software developers
$ git clone
https://github.com/jglick/simple-maven-project-with-tests.git
$ cd simple-maven-project-with-tests
$ docker run --rm -v $PWD:/data -w /data maven mvn package
77. •Dockerized Java Maven
Docker for software developers
$ docker run --rm -v $PWD:/data -w /data maven mvn package
https://hub.docker.com/_/maven/
--rm
Removes the container when
execution finishes
78. •Dockerized Java Maven
Docker for software developers
$ docker run --rm -v $PWD:/data -w /data maven mvn package
https://hub.docker.com/_/maven/
-w
Working dir for the command
79. •Dockerized Java Maven
Docker for software developers
$ docker run --rm -v $PWD:/data -w /data maven mvn package
https://hub.docker.com/_/maven/
maven
Official Maven image
81. •Dockerized Java Maven
– The jar package is generated in the target/ folder on the host
– As the container command is executed as the root user
(by default), the generated files are owned by root.
– Change them to your user
Docker for software developers
simple-maven-project-with-tests-1.0-SNAPSHOT.jar
https://hub.docker.com/_/maven/
sudo chown -R username:group target
82. •Advantages of dockerized dev tools
– Avoids developers having different
versions of the same tools
– It is very easy to test the same code in different
versions (Java 7, Java 8...)
– Reduces tool configuration problems. You can
compile and execute a project easily
– The same tools can be executed on development
laptops and in the CI environment
Docker for software developers
84. •Docker in Continuous Integration
– If you execute dev tools in containers, it is very
easy to compile, test and package in the CI
environment
– You only have to execute the same command on
your laptop and in the CI environment
– If a tool changes, you only have to change the
command; nothing needs to be installed
Docker in CI servers
86. ● Jenkins installation
– You need Java
– Go to https://jenkins.io/
– Download the LTS release
– Generic Java package (.war)
Docker in CI servers
91. ● Create new Jenkins job
– Create a job of type “Pipeline”
– Pipeline steps:
● Clone the git repository
● Compile, test and package the Java project
● Copy the test results to Jenkins
Jenkins Job
95. node {
// Mark the code checkout 'stage'....
stage 'Checkout'
// Get some code from a GitHub repository
git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
// Mark the code build 'stage'....
stage 'Build'
// Run the maven build
sh 'docker run --rm -v $PWD:/data -w /data maven mvn package'
step([$class: 'JUnitResultArchiver',
testResults: '**/target/surefire-reports/TEST-*.xml'])
}
Pipeline
Jenkins Job
103. ● Advantages of using docker in CI
– The CI server just needs Docker installed, nothing
more
– All the tools needed by devs are containerized
– Tools are downloaded (and cached)
automatically when needed
– Several languages/stacks/dependencies can be
used on the same CI server without conflicts
– Sysadmins do not need to give developers access
to the CI server (enforcing security)
Docker in CI servers
104. ● Testing different languages with
Docker
– Testing Node apps with mocha
● https://dzone.com/articles/testing-nodejs-application-using-mocha-and-docker
– Testing C++ apps with Gtest
● https://github.com/yutakakinjyo/gtest-cmake-example
– Testing Angular apps
● https://jaxenter.com/build-and-test-angular-apps-using-docker-132371.html
Docker in CI servers
105. ● Some issues of using docker in CI
– Issue: By default, project dependencies have to
be downloaded in every build
● Solution: Use a host folder as a cache
– Issue: Old Docker images waste disk space
● Solution: Use a Docker garbage collector (you
can re-download images when needed)
– Issue: Windows tools can’t be dockerized in Linux
containers
● Solution: Use portable tools as much as
possible ;)
Docker in CI servers
107. ● Testing tools based on docker
– TestContainers
● Define testing dependencies in your JUnit test
● https://www.testcontainers.org/
– Dockunit
● Test your code in several environments
● https://www.npmjs.com/package/dockunit
– Many more...
Docker in CI servers
108. ● Conclusions
– Docker containers are changing the way we
develop, build, test and ship software
– Containers allow developers to use the same dev
tools and execute the project in the same
environment
– Containers ease the configuration and sharing of
CI servers
– If continuous integration is easier to use, more
projects will use it and more tests will be executed
Docker in CI servers