Invited talk introducing Docker to the Department for Information and Communication Services (Informations- und Kommunikationsdienste, IuK) at the University of Rostock.
Docker is a very useful tool in every data scientist's toolbox. In this talk I present motivations for using Docker and give live demos of typical tools used in data science, such as RStudio, Jupyter Notebook, and Elasticsearch.
Zero Downtime Deployment with Ansible - learn how to provision Linux servers with a web-proxy, a database and automate zero downtime deployment of a Java application to a load balanced environment.
These are the slides from a tutorial held at the Velocity Conference in Barcelona November 19th, 2014.
Git repo: https://github.com/steinim/zero-downtime-ansible
This document discusses using CommandBox and Docker to deploy real projects. It covers background on the development workflow and environments, benefits of Docker and CommandBox, code cleanup tools like CFLint and git hooks, serving apps with CommandBox, server monitoring with Prometheus, dynamic configuration, caching, session storage, logging with Elasticsearch and Kibana, load balancing with Kubernetes, data changes, scheduled tasks, and canary/blue-green deployments. The overall message is that CommandBox and related tools can provide structure and simplify transitions to help teams succeed in deploying applications.
BigQuery - Command line tools and Tips - (MOSG) by Soshi Nemoto
BigQuery =Command line tools and Tips for business use=
Mulodo Open Study Group (MOSG) @ Ho Chi Minh City, Vietnam
http://www.meetup.com/Open-Study-Group-Saigon/events/231504491/
2012 COSCUP - Build your PHP application on Heroku by ronnywang_tw
The document discusses deploying PHP applications on Heroku. It provides an overview of Heroku, including that it is a Platform-as-a-Service, was launched in 2007, uses Amazon Web Services, offers many add-ons, allows easy scaling, supports PostgreSQL, and offers some free usage. It then walks through deploying a basic "Hello World" PHP app on Heroku, including creating an app, adding code, committing and pushing to Heroku, and viewing the deployed app.
Making environment for infrastructure as code by Soshi Nemoto
The document provides instructions for setting up an environment for infrastructure as code using tools like Vagrant, Ansible, and Fabric. It details steps to install the necessary tools, create a Vagrant machine, edit configuration files to configure the Vagrant IP address and SSH keys, and then provides a test command to validate the Fabric deployment is working properly.
The document discusses building a lightweight Docker container for Perl by starting with a minimal base image like BusyBox, copying just the Perl installation and necessary shared libraries into the container, and setting Perl as the default command to avoid including unnecessary dependencies and tools from a full Linux distribution. It provides examples of Dockerfiles to build optimized Perl containers from Gentoo and by directly importing a tarball for minimal size and easy distribution.
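The approach described above could be sketched roughly as follows; the image name, paths, and Perl version here are illustrative assumptions, not taken from the slides:

```dockerfile
# Hypothetical sketch: start from a tiny base image and copy in only the
# Perl build plus the shared libraries it needs (as reported by ldd),
# instead of inheriting a full Linux distribution.
FROM busybox
COPY perl-5.20/ /opt/perl/
COPY libs/ /lib/
ENV PATH=/opt/perl/bin:$PATH
# Make Perl the default command of the container.
CMD ["perl"]
```

The same result can be achieved with even less overhead by `docker import`-ing a tarball that contains only these files, which is the second technique the slides describe.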
ApacheCon 2014 - What's New in Apache httpd 2.4 by Jim Jagielski
The document summarizes new features in Apache HTTPD version 2.4, including improved performance through the Event MPM, faster APR, and reduced memory usage. It describes new configuration options like finer timeout controls and the <If> directive. New modules like mod_lua and mod_proxy submodules are highlighted. The document also discusses how Apache has adapted to cloud computing through dynamic proxying, load balancing, and self-aware environments.
My talk from DevOpsCon Berlin 2016.
Ansible is a radically simple and lightweight provisioning framework which makes your servers and applications easier to provision and deploy. By orchestrating your application deployments you gain benefits such as documentation as code, testability, continuous integration, version control, refactoring, automation and autonomy of your deployment routines, server and application configuration. Ansible uses a language that approaches plain English, uses SSH and has no agents to install on remote systems. It is the simplest way to automate and orchestrate application deployment, configuration management and continuous delivery.
In this tutorial you will be given an introduction to Ansible and learn how to provision Linux servers with a web-proxy, a database and some other packages. Furthermore we will automate zero downtime deployment of a Java application to a load balanced environment.
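To give a flavour of the "plain English" style mentioned above, here is a minimal playbook sketch; the host group, package, and module choices are illustrative assumptions, not taken from the tutorial:

```yaml
# Hypothetical minimal Ansible playbook: install and start nginx
# as a web proxy on all hosts in the "webservers" group.
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Run with something like `ansible-playbook -i inventory site.yml`; because Ansible works over SSH, no agent needs to be installed on the target servers.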
This document discusses using the MEAN stack with Docker. It provides Dockerfiles to containerize MongoDB, a MongoDB replica set configurator, Node.js, sample applications, and MongoDB Management Service monitoring/backup agents. It also describes using Vagrant to set up a demo environment with Docker containers for a MongoDB replica set and sample app.
Streamline your development environment with dockerGiacomo Bagnoli
These days applications are getting more and more complex, and it's becoming quite difficult to keep track of all the different components an application needs in order to function (a database, a message queueing system, a web server, a document store, a search engine, you name it). How many times have we heard 'it worked on my machine'? In this talk we are going to explore Docker: what it is, how it works, and how much it can help in keeping the development environment consistent. We will talk about Dockerfiles, best practices, tools like Fig and Vagrant, and finally show an example of how it all applies to a Ruby on Rails application.
A Node.JS bag of goodies for analyzing Web Traffic by Philip Tellis
This document is a presentation about analyzing web traffic using Node.js modules. It introduces Node.js and the npm package manager. It then discusses modules for parsing HTTP logs, including parsing user agents, handling IP addresses, geolocation, and date formatting. It also covers modules for statistical analysis like fast-stats, gauss, and statsd. The presentation provides code examples for using these modules and takes questions at the end.
Access google command list from the command line by Ethan Lorance
This document discusses how to access Google services from the command line using GoogleCL, a Python application. It provides instructions on downloading and installing GoogleCL, activating services by approving access in the browser, and using functions to interact with services like uploading documents and videos, editing blogs, and more. The conclusion states that GoogleCL allows interacting with Google apps via the command line and expands what you can do with Google services.
Docker can be used as an everyday development tool. It allows building, shipping and running applications securely by using containers. Containers allow encapsulating applications from the host machine and provide resource isolation using features like cgroups and namespaces. The key Docker concepts include images, containers, volumes, and the Docker engine. Docker Compose can be used to define and run multi-container Docker applications using a YAML file.
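A multi-container application of the kind mentioned above might be defined with a compose file like this sketch (service names and images are illustrative, not taken from the talk):

```yaml
# Hypothetical docker-compose.yml: a web application plus a Redis cache.
version: "3"
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8000:8000"     # publish the app on the host
    depends_on:
      - redis           # start the cache before the app
  redis:
    image: redis:alpine
```

A single `docker-compose up` then builds, creates, and starts both containers on a shared network.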
This document discusses Selinko's use of Docker in their development, testing, and production environments. Some key points:
- Selinko is a Belgian company that provides secure IoT platforms and track-and-trace microchips.
- They use Docker for its portability, reproducibility, scalability, and other benefits aligned with the 12-factor app principles.
- In development, they use Docker Machine and Docker Compose. In testing, Jenkins. In production, CoreOS and systemd unit files to run Docker containers.
- They've learned best practices like avoiding running as root, minimizing layers, remembering that reported image sizes are virtual, and using Tini to handle signals and reap zombie processes in containers.
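Two of the practices above can be combined in a few lines of Dockerfile; this is a generic sketch, not Selinko's actual image:

```dockerfile
# Hypothetical Dockerfile fragment: run as a non-root user and use Tini
# as PID 1 so signals are forwarded and zombie processes are reaped.
FROM alpine
RUN apk add --no-cache tini \
 && adduser -D appuser
USER appuser
ENTRYPOINT ["/sbin/tini", "--"]
# Placeholder command; a real image would start the application here.
CMD ["sleep", "infinity"]
```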
ApacheConNA 2015: What's new in Apache httpd 2.4 by Jim Jagielski
The document discusses the new features of Apache HTTP Server version 2.4, including performance improvements through more efficient modules and data structures, enhanced configuration options, new modules for capabilities like Lua scripting and remote IP access, and improved proxy functionality for dynamic and cloud environments. Key areas covered are performance, configuration, new modules, and proxy features.
Converting Your Dev Environment to a Docker Stack - php[world] by Dana Luther
Heard a lot about docker but not sure where to start? Frustrated maintaining development VMs? In this presentation we will go over the simplest ways to convert your development environment over to a docker stack, including support for full acceptance testing with Selenium. We’ll then go over how to modify the stack to mimic your production/pre-production environment(s) as closely as possible, and demystify working with the containers in the stack.
Heroku enables developers to deploy applications using modern software design patterns for software-as-a-service. Applications benefit from continuous deployment with frequent code releases, easy management of dependencies, and configurability through environment variables. Heroku provides tools for managing data and services for building, running, and monitoring applications.
“warpdrive”, making Python web application deployment magically easy, by Graham Dumpleton
Ask a beginner to deploy a Python web application and they will often complain it is too hard. Although we have standards for how a Python web application should interface with a web server, the web servers for Python all work differently, each with a myriad of options, and they are difficult to set up properly.
In this talk you will be given a preview of a project called 'warpdrive', a project being developed to simplify the process of deploying a Python web application.
The 'warpdrive' project makes it easy to run your Python web application on your own system, but it can also create a Docker image for your application, providing you with an easy path to deploying it on a Docker service.
How 'warpdrive' works is also compatible with next generation Platform as a Service (PaaS) offerings such as the latest OpenShift, which has been reimplemented around Docker and Kubernetes.
See how working on and deploying your Python web application could be made so much easier using 'warpdrive'.
DCSF19 Tips and Tricks of the Docker Captains - Docker, Inc.
Brandon Mitchell, BoxBoat
Docker Captain Brandon Mitchell will help you accelerate your adoption of Docker containers by delivering tips and tricks on getting the most out of Docker. Topics include managing disk usage, preventing subnet collisions, debugging container networking, understanding image layers, getting more value out of the default volume driver, and solving the UID/GID permission issues with volumes in a way that allows images to be portable from any developer laptop and to production.
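One common way to sidestep the UID/GID mismatch on bind-mounted volumes, which may or may not be the exact technique from the talk, is to run the container with the host user's IDs:

```yaml
# Hypothetical compose fragment: run the service as the host user so files
# written to the bind mount get the right ownership on the host.
# UID/GID are supplied from the environment, e.g.:
#   UID=$(id -u) GID=$(id -g) docker compose up
services:
  app:
    image: alpine
    user: "${UID}:${GID}"
    working_dir: /work
    volumes:
      - ./src:/work
```

Because the user is injected at runtime rather than baked into the image, the same image remains portable between developer laptops and production.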
A look at some of the configuration issues that containers introduce, and how to avoid or fix them. Discusses immutable infrastructure, the difference between build-time and runtime configuration, scheduler configuration and more.
Converting Your Dev Environment to a Docker Stack - Cascadia by Dana Luther
The document discusses converting a development environment to a Docker stack. It describes some of the benefits of using Docker for consolidating different environments that may have varying PHP and MySQL versions. It provides an example docker-compose.yml file that defines services for Nginx, MySQL, and PHP-FPM containers along with networking and volume configurations. It also includes some sidebars on Docker concepts like the hierarchy of images, containers, services and stacks, as well as common Docker commands.
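The kind of compose file described above might look like this sketch; image tags, credentials, and paths are illustrative, not copied from the talk:

```yaml
# Hypothetical docker-compose.yml for an Nginx + PHP-FPM + MySQL dev stack.
version: "3"
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./:/var/www/html       # share the project code with the web server
    depends_on:
      - php
  php:
    image: php:7.4-fpm
    volumes:
      - ./:/var/www/html       # same code, executed by PHP-FPM
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative credential only
    volumes:
      - dbdata:/var/lib/mysql  # named volume so data survives restarts
volumes:
  dbdata:
```

Swapping the `php` or `mysql` image tag is then enough to mimic a different production environment.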
Running Docker in Development & Production (DevSum 2015) by Ben Hall
This document provides an overview of Docker containers and how to use Docker for development and production environments. It discusses Docker concepts like images, containers, and Dockerfiles. It also demonstrates how to build images, run containers, link containers, manage ports, and use Docker Compose. The document shows how Docker can be used to develop applications using technologies like ASP.NET, Node.js, and Go. It also covers testing, deploying to production, and optimizing containers for production.
This fabric workshop aims to create a deploy tool using Fabric that can deploy code to servers defined in roles, show existing tags, change to a different tagged version, and remove tags. The agenda includes demonstrating local tasks, remote tasks using an Ansible inventory file, and functions for mkdir, cd, and uploading files. Fabric provides a simple way to automate operations across multiple servers.
Repoinit: a mini-language for content repository initialization by Bertrand Delacretaz
This document discusses repoinit, an Apache Sling mini-language for initializing content repositories. Repoinit scripts are run at startup by SlingRepositoryInitializers to register the SlingRepository, set ACLs, and perform other initialization tasks. The talk covers the history and usage of repoinit, provides examples, and discusses best practices around parsing repoinit scripts using a parser generator rather than writing parsers by hand.
Docker can be used as an everyday work tool for developers and system administrators. It provides tools to work with containers, which enable operating-system-level virtualization. Docker images contain executable packages that include code, runtimes, and configuration files to run software. Containers run as isolated processes on the host machine, using resources from the host operating system. Common Docker commands include docker run to launch containers, docker build to build images, and docker ps to view running containers. Docker Compose allows defining and running multi-container applications using a YAML configuration file.
Explains how Docker and Nix work as deployment solutions, in what ways they are similar and different, and how they can be combined to achieve interesting results.
Using Docker in CI / Alexander Akbashev (HERE Technologies) - Ontico
RIT++ 2017, RootConf
Hall Beijing + Shanghai, June 6, 17:00
Abstract:
http://rootconf.ru/2017/abstracts/2504.html
In this talk I explain why we decided to use Docker as part of Continuous Integration: to speed up tests, improve stability, and gain better control over the environment and the libraries in use.
The talk also covers the many difficulties we ran into while migrating to Docker: fighting the growing number and size of images, uncontrolled image updates, unstable behaviour, and more.
At the end of the talk I show exactly how we monitor Docker's stability in our infrastructure, and how stable Docker is at scale (more than 100k builds per day).
Be a better developer with Docker (revision 3) by Nicola Paolucci
Be a better developer with Docker: tricks of the trade (revision 3)
The talk will teach developers how to approach their development environment setups using Docker, covering awesome tricks to make the experience smooth, fast, powerful and repeatable. The talk is logically divided into five parts:
- What is Docker
- Why Docker makes developers happier
- Workflows and techniques
- Tips and tricks
- Future developments
This document provides instructions on various Docker commands and concepts. It begins with definitions of Docker and the differences between VMs and Docker containers. It then covers topics like installing Docker, finding Docker images and versions, building images with Dockerfiles, running containers with commands like docker run, and managing images and containers.
You've heard of Fat/Uber JARs and are probably building them today. They provide much greater app portability and minimize the risk of missing dependencies. However, in a containerized world, where small code changes and re-deployments can occur frequently for high scale environments, the overhead of processing and transferring virtually duplicate content can quickly grow. In this session, we'll explore the benefits and costs of Fat JAR packaging and demonstrate various options for slimming your apps and saving those trees using popular frameworks like Wildfly Swarm, Dropwizard, Spring Boot and Eclipse Vert.x.
Docker has created enormous buzz in the last few years. Docker is an open-source software containerization platform. It provides the ability to package software into standardised units for software development. In this hands-on introductory session, I introduce the concept of containers, provide an overview of Docker, and take the participants through the steps for installing Docker. The main session involves using the Docker CLI (Command Line Interface) - all the concepts, such as images, managing containers, and getting useful work done, are illustrated step-by-step by running commands.
This document summarizes the steps to build and run a Docker container for Nginx. It describes creating a Dockerfile that installs Nginx on Ubuntu, builds the image, runs a container from the image mounting a local directory, and commits changes to create a new image version. Key steps include installing Nginx, exposing ports 80 and 443, committing a container to create a new image with added files, and using Docker commands like build, run, commit, diff and inspect.
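The Dockerfile described above might look roughly like this reconstruction; the base tag and cleanup step are assumptions, not copied from the document:

```dockerfile
# Hypothetical reconstruction: install Nginx on Ubuntu and expose
# the HTTP and HTTPS ports.
FROM ubuntu
RUN apt-get update \
 && apt-get install -y nginx \
 && rm -rf /var/lib/apt/lists/*   # keep the layer small
EXPOSE 80 443
# Run Nginx in the foreground so the container stays alive.
CMD ["nginx", "-g", "daemon off;"]
```

After `docker build -t mynginx .`, a container can be run with a local directory mounted (e.g. `docker run -p 80:80 -v "$PWD/site:/var/www/html" mynginx`), and changes made inside a running container can be inspected with `docker diff` and captured as a new image with `docker commit`.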
AWS re:Invent 2016: Amazon ECR Deep Dive on Image Optimization (CON401) by Amazon Web Services
“Are you struggling with bulky images or slow push and pull times? In this session we will walk through the anatomy of a Docker image and provide techniques you can use to optimize images for faster pushes and pulls and reduce your overall storage footprint. We will discuss Docker image building (build containers versus runtime containers to remove unnecessary software), Docker image composition (minimizing the number of layers), the Docker Remote API (optimizing how images are pushed and pulled), and CI/CD Integration (automate building, versioning, and deploying images to production). We’ll also examine the tools that ECR provides to make Docker image management easier so that you can focus on building your application. Finally, we'll hear from Pinterest about how they use ECR and Docker, some valuable experiences gained along the way, and best practices for using ECR with Apache Mesos.”
AtlasCamp 2015 Docker continuous integration trainingSteve Smith
A 2-hour training session delivered at AtlasCamp in Prague, June 9th 2015.
* Docker vs virtual machines
* Docker concepts
* Docker for testing
* Automation with Docker Compose
* Continuous integration with Bamboo Docker support
* Extracting test results from Docker containers
* Continuous deployment with deployment environments
Scaling Development Environments with DockerDocker, Inc.
This document discusses using Docker to create a scalable development environment. It outlines setting up containers for different development components like the build server, production servers, and tools. Templates are used to configure container dependencies and build processes. The goal is allowing developers to run all components locally for testing and to reproduce the production environment.
The document discusses different approaches to packaging Java applications including fat JARs, thin JARs, skinny JARs, and hollow JARs. It analyzes the packaging of sample "Hello World" applications using Spring Boot, WildFly Swarm, Eclipse Vert.x, and Dropwizard. Fat JARs package the entire application and dependencies together but can become very large, while thinner approaches package dependencies externally to reduce size and improve redeployment speeds for container-based applications.
This document provides an introduction to Docker and includes instructions for several exercises to help users learn Docker in 90 minutes. The document covers downloading and running Docker containers, creating Docker images, understanding Docker layers, exposing container ports, using Dockerfiles to build images, and sharing images in Docker repositories. The exercises guide users through hands-on experience with common Docker commands and concepts.
This document discusses Docker and provides an introduction and overview. It introduces Docker concepts like Dockerfiles, commands, linking containers, volumes, port mapping and registries. It also discusses tools that can be used with Docker like Fig, Baseimage, Boot2Docker and Flynn. The document provides examples of Dockerfiles, commands and how to build, run, link and manage containers.
This document provides instructions for pulling and running Docker containers for common applications like Nginx, MySQL, WordPress, PostgreSQL, Redis, and GitLab. It demonstrates how to pull base images, define Dockerfiles to customize images, run containers with links and ports, and mount volumes to persist data outside containers.
Running Docker in Development & Production (#ndcoslo 2015)Ben Hall
The document discusses running Docker in development and production. It covers:
- Using Docker containers to run individual services like Elasticsearch or web applications
- Creating Dockerfiles to build custom images
- Linking containers together and using environment variables for service discovery
- Scaling with Docker Compose, load balancing with Nginx, and service discovery with Consul
- Clustering containers together using Docker Swarm for high availability
1. Create a Dockerfile that defines the base image, installs Nginx and any modules, and exposes ports 80 and 443.
2. Build the image from the Dockerfile using "docker build ."
3. Run a container from the new image and publish the ports so Nginx is accessible.
Talk given at Devoxx Belgium 2018
Spring Boot is awesome. Docker is awesome. Together you can do great things. But, are you doing it the right way? We'll walk you through, in detail, the optimal way to structure Docker images for Spring Boot applications for iterative development. Structuring your Docker images correctly is really important for teams doing continuous integration and continuous delivery. Using Docker best practices, we'll show you the code and the technologies used to optimize Docker images for Spring Boot apps!
Docker Demo @ IuK Seminar
1. martin scharm
dept. for systems biology and bioinformatics
university of rostock
IuK Seminar
Rostock, 2016-05-24
2. disclaimer
most of the stuff was not made by me. follow the
links to find the actual creators.
paper: https://dx.doi.org/10.6084/m9.figshare.3397576.v1
10. FROM debian:stable
RUN apt-get install -y curl
RUN apt-get install -y moon-buggy
RUN apt-get install -y sl
images consist of read-only layers
changes result in new layers
When Docker mounts the rootfs, it starts read-only, as in a traditional Linux boot,
but then, instead of changing the file system to read-write mode, it takes advantage
of a union mount to add a read-write file system over the read-only file system.
In fact there may be multiple read-only file systems stacked on top of each other.
We think of each one of these file systems as a layer.
https://docs.docker.com/v1.6/terms/layer/
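The layer stack described above can be made visible on any local image with `docker history`; a sketch, assuming an image was built from the Dockerfile above and tagged `demo` (a hypothetical tag), and that a Docker daemon is running:

```shell
# one line per Dockerfile instruction: the base image at the bottom,
# each RUN as its own read-only layer stacked on top
docker history demo
```

the three apt-get installs above would show up as three separate layers.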
11. FROM debian:stable
RUN apt-get install -y curl
RUN apt-get install -y moon-buggy
RUN apt-get install -y sl
images consist of read-only layers
changes result in new layers
12. FROM debian:stable
RUN apt-get install -y curl
RUN apt-get install -y moon-buggy
RUN apt-get install -y sl
RUN apt-get install -y nethack-console
FROM debian:stable
RUN apt-get update && apt-get install -y --no-install-recommends curl
RUN apt-get install -y --no-install-recommends moon-buggy
RUN apt-get install -y --no-install-recommends sl
Dockerfile:
docker build
creates an image
a different Dockerfile creates a different image with
similar “dependencies”
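Since every RUN instruction produces a new layer, related install steps are often merged into a single RUN; a sketch (not from the slides) of the same package set as one layer, with the apt cache cleaned in the same step so it never ends up in the image:

```dockerfile
FROM debian:stable
# one RUN = one layer; removing the apt lists in the same step
# keeps the downloaded package indexes out of the image
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl moon-buggy sl && \
    rm -rf /var/lib/apt/lists/*
```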
13. anatomy of a dockerized app
● Dockerfile: recipe to build an image
● Image: runtime environment
● Container: instance of the app
● Volume: persistent data
● Networks: communication
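The pieces above map onto the CLI roughly like this; a command sketch, where the image name `myapp`, the paths, and the network name `appnet` are hypothetical, and a running Docker daemon is assumed:

```shell
docker build -t myapp .                  # Dockerfile -> image
docker run -d --name myapp-1 myapp       # image -> running container (the app instance)
docker run -d -v /opt/data:/data myapp   # volume: data outlives the container
docker network create appnet             # user-defined network
docker run -d --net appnet myapp         # containers on appnet can talk to each other
```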
14. docker hub
● like github for docker images
● pull – push – share your stuff
https://hub.docker.com/
16. get an image from the docker HUB
$ docker pull nginx:latest
latest: Pulling from library/nginx
3059b4820522: Pull complete
ff978d850939: Pull complete
9d1b4547bc10: Pull complete
7bb610d87cee: Pull complete
bbd672577eed: Pull complete
f4a3cc2c46e0: Pull complete
8f9345da4c7a: Pull complete
72cd8a7c892b: Pull complete
Digest: sha256:46a1b05e9ded54272e11b06e13727371a65e2ef8a87f9fb447c64e0607b90340
Status: Downloaded newer image for nginx:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
binfalse/debian-with-curl-moonbuggy-sl latest 125374f94e47 About an hour ago 149.2 MB
nginx latest 72cd8a7c892b 2 weeks ago 182.7 MB
binfalse/skype latest bec4e37e163d 5 weeks ago 565.1 MB
binfalse/deb-skype latest bec4e37e163d 5 weeks ago 565.1 MB
debian stable 82f85996fa28 6 weeks ago 125 MB
17. run the image
$ docker run --name some-nginx -p 2222:80 -v /opt/docker/web:/usr/share/nginx/html:ro -d nginx
ec0771865e5f03a3f55df3611f15f97a88e6eee2c26802f5f95784ed28116222
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec0771865e5f nginx "nginx -g 'daemon off" 25 seconds ago Up 25 seconds 443/tcp, 0.0.0.0:2222->80/tcp some-nginx
$ curl localhost:2222
...
$ docker kill some-nginx
some-nginx
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec0771865e5f nginx "nginx -g 'daemon off" 8 minutes ago Exited (137) 7 seconds ago some-nginx
$ docker rm some-nginx
some-nginx
18. create an image
$ cat Dockerfile
FROM debian:stable
RUN apt-get update && apt-get install -y --no-install-recommends curl
RUN apt-get install -y --no-install-recommends moon-buggy
RUN apt-get install -y --no-install-recommends sl
$ docker build -t binfalse/debian-with-curl-moonbuggy-sl .
Sending build context to Docker daemon 2.048 kB
Step 0 : FROM debian:stable
---> 82f85996fa28
Step 1 : RUN apt-get update && apt-get install -y --no-install-recommends curl
---> Running in 16ce78bf2cfa
Ign http://httpredir.debian.org stable InRelease
Get:1 http://httpredir.debian.org stable-updates InRelease [142 kB]
....
Processing triggers for libc-bin (2.19-18+deb8u4) ...
---> c2566a69a8e2
Removing intermediate container 16ce78bf2cfa
Step 2 : RUN apt-get install -y --no-install-recommends moon-buggy
---> Running in e485857c3881
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
moon-buggy
...
$ docker run --rm -it binfalse/debian-with-curl-moonbuggy-sl /usr/games/sl
that’s just a showcase,
not best practice!
19. remove an image
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
binfalse/debian-with-curl-moonbuggy-sl latest 711a58dd52d2 18 minutes ago 149.2 MB
nginx latest 72cd8a7c892b 2 weeks ago 182.7 MB
binfalse/skype latest bec4e37e163d 5 weeks ago 565.1 MB
binfalse/deb-skype latest bec4e37e163d 5 weeks ago 565.1 MB
debian stable 82f85996fa28 6 weeks ago 125 MB
$ docker rmi binfalse/debian-with-curl-moonbuggy-sl
Untagged: binfalse/debian-with-curl-moonbuggy-sl:latest
Deleted: 711a58dd52d207421124396061d0f505f1e223ae9803c0d6be601cd510a7c50c
Deleted: 95df58df3f4b320ecc2cff76746a9576658e26136f124992b8fa176b03678341
Deleted: c2566a69a8e2f3f351498cbe3ffe26780b100f3867ce9e2f262b33eed484b640
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
nginx latest 72cd8a7c892b 2 weeks ago 182.7 MB
binfalse/skype latest bec4e37e163d 5 weeks ago 565.1 MB
binfalse/deb-skype latest bec4e37e163d 5 weeks ago 565.1 MB
debian stable 82f85996fa28 6 weeks ago 125 MB
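Rebuilding an image leaves the replaced layers behind as untagged &lt;none&gt; entries; a sketch for cleaning them up with the standard dangling filter (assumes a running Docker daemon):

```shell
# remove images that no tag references anymore
docker rmi $(docker images -q -f dangling=true)
```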
20. #app1: wordpress + mysql
+ some extra security
MySQL
docker pull mysql:latest
docker run -e MYSQL_ROOT_PASSWORD=yourpassword
--name db -v /home/mysql/:/var/lib/mysql/
-d mysql
# optionally connect to configure the db
alias dockip="docker inspect --format '{{ .NetworkSettings.IPAddress }}'"
mysql -h$(dockip db) -uroot -pyourpassword
Wordpress
docker pull wordpress:latest
docker run --name my-wordpress --link db:mysql
-v /home/wp/:/var/www/html/ -p 80:80
-d wordpress
benefit: isolation
● host is safe if hacker breaks into wordpress
● plugins won’t be able to see db files
● mysql cannot see wp config etc
21. #app2: jail for skype
https://binfalse.de/2016/01/04/docker-jail-for-skype/
a jail for that “obfuscated malicious
binary blob with network capabilities”
$ docker run -d -p 127.0.0.1:55555:22
--name skype_container binfalse/skype
$ ssh -X -p 55555 docker@127.0.0.1
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jan 4 23:07:37 2016 from 172.17.42.1
$ skype
22. #app3: teaching
● let’s assume students are asked to write C++
code that prints this is correct to std::cout
● expected solution:
#include <iostream>
int main()
{
std::cout << "this is correct" << std::endl;
}
23. #app3: teaching
● tiny bash script to compile && execute the
students’ code: executer.sh
#!/bin/bash
# let's assume the submissions are always found in /job
EXECUTABLE=/job/program.out
SOURCE=/job/program.cpp
# compile it if it wasn't compiled yet
[ -x "$EXECUTABLE" ] || g++ -o "$EXECUTABLE" "$SOURCE"
# go for it
"$EXECUTABLE"
24. #app3: teaching
● create a Dockerfile
● create a docker image
# meta
FROM centos
MAINTAINER martin scharm
# install a c++ compiler
RUN yum install -y gcc-c++
# add the executer script
ADD executer.sh /executer.sh
# entrypoint: the image now behaves like a binary
ENTRYPOINT /executer.sh
$ docker build -t binfalse/tutors-little-helper .
Sending build context to Docker daemon 3.072 kB
Step 0 : FROM centos
---> 60e65a8e4030
...
25. #app3: teaching
● let’s say students’ submissions are in /opt/docker/student-submissions/
● check submissions using the docker image
$ find /opt/docker/student-submissions/
/opt/docker/student-submissions/1
/opt/docker/student-submissions/1/program.cpp
/opt/docker/student-submissions/2
/opt/docker/student-submissions/2/program.cpp
/opt/docker/student-submissions/3
/opt/docker/student-submissions/3/program.cpp
$ for i in /opt/docker/student-submissions/*
do
echo "checking submission "${i/*//}
docker run --rm -v $i:/job binfalse/tutors-little-helper
done
checking submission 1
this is correct
checking submission 2
this is correct
checking submission 3
this is not correct
submissions 1 & 2 seem to be correct..!?
student #3 is definitely too stupid...
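The per-submission label in the loop is just the last path component; bash can derive it with a parameter expansion instead of calling `basename` (a self-contained check, no Docker needed):

```shell
#!/usr/bin/env bash
# ${i##*/} removes the longest prefix matching "*/",
# leaving only the final path component (the submission number)
i=/opt/docker/student-submissions/2
echo "checking submission ${i##*/}"   # prints: checking submission 2
```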
26. #app3: teaching
● but what the hell is that:
$ cat /opt/docker/student-submissions/2/program.cpp
#include <iostream>
#include <fstream>
int main()
{
// do something malicious that the tutors won’t recognize
std::ifstream src("/etc/passwd");
std::ofstream dst("/tmp/newpasswd");
dst << src.rdbuf() <<
"evil:x:1001:1001:Evil User,,,:/home/evil:/bin/bash" <<
std::endl;
// pretend being harmless delivering correct result
std::cout << "this is correct" << std::endl;
}
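The jail already contained the escape attempt: the forged passwd file only ended up inside the throwaway container. Still, the run can be locked down further; a sketch using standard docker run flags (the image name is the one from the slides; /job stays writable because executer.sh compiles into it):

```shell
# --net none:   student code gets no network access
# --read-only:  container root filesystem is read-only
# --tmpfs /tmp: writable scratch space only under /tmp
docker run --rm --net none --read-only --tmpfs /tmp \
  -v "$i":/job binfalse/tutors-little-helper
```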
29. Passive Benchmarking with docker
LXC, KVM & OpenStack
Hosted @ SoftLayer
Boden Russell (brussell@us.ibm.com)
IBM Global Technology Services
Advanced Cloud Solutions & Innovation
V2.0
Supporting
statistics from
http://www.slideshare.net/BodenRussell/kvm-and-docker-lxc-benchmarking-with-openstack/
30. Cloudy Performance: Serial VM Reboot
average server reboot time (seconds): Docker 2.58, KVM 124.43
http://www.slideshare.net/BodenRussell/kvm-and-docker-lxc-benchmarking-with-openstack/
31. Guest Performance: CPU
calculate primes up to 20000 (seconds): bare metal 15.26, Docker 15.22, KVM 15.13
http://www.slideshare.net/BodenRussell/kvm-and-docker-lxc-benchmarking-with-openstack/
32. Cloudy Performance: Steady State Packing
compute node used memory over the full test duration:
Docker: delta 734 MB (49 MB per VM)
KVM: delta 4387 MB (292 MB per VM)
http://www.slideshare.net/BodenRussell/kvm-and-docker-lxc-benchmarking-with-openstack/
34. take home.
● smaller, more understandable apps – do one thing and
do it well.
● no/weakened dependency hell
● smaller & faster deployment
● +reproducibility
● don’t ignore traditional controls such as high patch level
● docker is not enterprise virtualisation, not a cloud
platform, not configuration management, not a
deployment framework, not a development environment
35. that’s it.
feel free to come around for discussions
on and off docker and/or a beer.
@binfalse
http://binfalse.de
martin@jabber.lesscomplex.org
questions? doubts? comments?
room 413
ulmencampus
54.086325,12.107683