Bjørn Nordlund discusses how Docker allows building and sharing infrastructure as easily as code. Docker uses containers based on Linux kernel features like namespaces and cgroups to package applications securely and independently from the underlying infrastructure. Docker provides a simple interface to create, start, stop, move, and share application containers. This allows more efficient utilization of server resources and easier deployment of applications.
Docker orchestration using CoreOS and Ansible - Ansible IL 2015, by Leonid Mirsky
The last couple of years have seen an increasing interest in Docker and related technologies. One of these technologies is CoreOS, a new operating system built from the ground up for running Docker containers at scale.
In this talk we will learn about CoreOS main concepts and tools. We will get our hands dirty as we work together toward a goal of running a CoreOS cluster on AWS (using Ansible) and running docker containers on it.
The talk will conclude with a discussion on the place of Ansible (and configuration management tools in general) in the "next-generation" stack.
Docker + Arm - Multi-arch builds with Docker `buildx`, by Elton Stoneman
Slides from the Docker webinar covering multi-arch images - containers which can run on Windows, Linux, Intel and Arm. Demonstrates a cross-platform build farm with and without `buildx`.
Getting instantly up and running with Docker and SymfonyAndré Rømcke
A look into how you can start using Docker today with a ready-made setup including PHP 7, nginx, Redis, Blackfire and so on; how you can extend it, integrate it into your continuous integration workflow, and set up a continuous deployment workflow using, for instance, Travis CI.
Quicklink: https://legacy.joind.in/19070
Docker Meetup: Docker Networking 1.11, by Madhu Venugopal - Michelle Antebi
In this talk, Madhu Venugopal presents the Docker networking and service discovery features shipped in 1.11, along with the new experimental VLAN network drivers introduced in the same release.
The latest releases of today’s popular Linux distributions include all the tools needed to do interesting things with Linux containers.
For the Makefile MicroVPS project, I set out to build a minimal virtual private server-like environment in a Linux container from scratch.
These are my requirements for the MicroVPS:
Minimal init sequence
Most of what happens in a rc.sysinit file is not needed (or wanted) in a container. However, to work like a virtual private server, the MicroVPS will need some kind of init system. The absolute minimum would be enough to start the network and at least one service.
Native network namespace
The MicroVPS will have a dedicated network namespace. It should be easy to configure.
Native package management
The package set installed in the container image will be managed using native package formats like deb or rpm.
Automated build
An automated repeatable build process is a must.
Fast iteration cycle
The building and testing cycle must be fast enough not to drive me insane.
Easy management
It should be easy to distribute, monitor, and run a MicroVPS container.
In this tutorial, I will show how to use the tools included with Linux to build a virtual private server in a Linux container from scratch, using GNU Make to automate the build process.
In this OWASP/Null Delhi session, I discussed the Docker attack surface. Furthermore, I demonstrated how an attacker can escape a Docker container and gain access to the host machine.
Ref: https://null.co.in/events/655-delhi-combined-null-delhi-owasp-delhi-meetup
Introduction to Project Atomic (CentOS Dojo Bangalore), by Lalatendu Mohanty
The talk was given at CentOS Dojo Bangalore on 29 April 2015.
http://wiki.centos.org/Events/Dojo/Bangalore2015
These slides contain an introduction to Project Atomic and the CentOS Atomic SIG.
Dockerizing a Symfony2 application. Why is Docker so cool? What is Docker? What are containers, and how do they work? What is the Docker ecosystem? And how do you dockerize your web application (for example, one based on the Symfony2 framework)?
Docker is the next big thing in deployment and infrastructure management. This talk gives a brief introduction to Docker objects and how they interact.
Is Apache Camel the right choice for you? Listen to experienced Camel spotters. By Bjørn Nordlund
A humorous, problem-solving presentation of Apache Camel and enterprise integration, given at JavaZone 2011.
The presentation was filmed and is available on Vimeo: http://vimeo.com/28760446
My dream application (dreamapp) is a lightning talk from JavaZone 2010 about how complex deployment with many components and lots of infrastructure makes you lose customers and users of the systems you build. Five minutes is all I am willing to invest in setting up a service or product to try it out.
I also show how, in under five minutes, you can start a Java project and build a deployable application ready for production.
NoSQL presentation at an IASA Norway meeting. The point is to choose a database solution that balances functionality, scaling and complexity to fit your needs. Nothing comes for free, but an RDBMS is not the only answer to every problem either.
DevOpsSec: Docker image patching and lifecycle management (2019), by kanedafromparis
In this presentation, we will first recall what distinguishes Docker from a VM (PID, cgroups, etc.), discuss the layer system and the difference between images and instances, and then briefly introduce Kubernetes.
Next, we will present a "standard" CI/CD process for promoting a version (development, pre-production, production) through Docker tags.
Then, we will discuss the different components that make up a Docker application (base image, tooling, libraries, code).
Once this introduction is done, we will talk about an application's lifecycle through its development and BAU (business-as-usual) phases, to highlight that security flaws found during development are quickly fixed by new releases, but not necessarily during BAU, when releases are rarer. We will discuss various solutions (JFrog Xray, Clair, ...) for automatically tracking CVEs and automating updates. Finally, we will give a brief account of our experience, the difficulties encountered, and the organizational arrangements put in place.
Although illustrated with technical implementations, this presentation is very much about organization.
Linux containers, and Docker specifically, have revolutionized the way applications are run at scale, but testing can greatly benefit from these technologies too. Containers allow running tests in isolation with a minimal performance penalty, increased speed compared to virtual-machine-based tests, and easier configuration and less complexity for integration testing. Testing with containers allows running tests in a new, clean environment for each execution, minimizing false positives and environment corruption. At the same time, it allows reusing container clusters to run development, testing and production workloads. You will learn to effectively use Jenkins with Docker and Kubernetes, a multi-host Docker clustering technology, to run your Jenkins jobs in isolated containers for each execution, at scale.
http://www.agiletestingdays.com/session/using-docker-for-testing/
From Monolith to Docker Distributed Applications, by Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed microservice architectures. But migrating an existing Java application to a distributed microservice architecture is no easy task, requiring a shift in the software development, networking, and storage to accommodate the new architecture. This presentation provides insights into the experience of the speaker and his colleagues in creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon and applicable to all types of applications, especially Java- and JVM-based ones.
Learn more at http://docker.io

Use Case Examples

Build your own PaaS
Dokku - Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen http://bit.ly/191Tgsx

Web Based Environment for Instruction
JiffyLab - web based environment for the instruction, or lightweight use of, Python and UNIX shell http://bit.ly/12oaj2K

Easy Application Deployment
Deploy Java Apps With Docker = Awesome http://bit.ly/11BCvvu
Running Drupal on Docker http://bit.ly/15MJS6B
Installing Redis on Docker http://bit.ly/16EWOKh

Create Secure Sandboxes
Docker makes creating secure sandboxes easier than ever http://bit.ly/13mZGJH

Create your own SaaS
Memcached as a Service http://bit.ly/11nL8vh

Continuous Integration and Deployment
Next Generation Continuous Integration & Deployment with dotCloud's Docker and Strider http://bit.ly/ZwTfoy

Lightweight Desktop Virtualization
Docker Desktop: Your Desktop Over SSH Running Inside Of A Docker Container http://bit.ly/14RYL6x
https://github.com/bjornno/dockerdemo
Editor's Notes
I will present some of the basic building blocks of today's cloud services, or Platform as a Service frameworks.
I will present a tool called Docker, an open source implementation similar to tools used by,
for example, Cloud Foundry or Heroku.
----
CF uses Warden.
Here I have a small webapp that runs in Cloud Foundry, an open source PaaS solution.
Actually, a Cloud Foundry instance hosted on my laptop.
I can deploy my app with a single push command, and it will get access to all services available from the platform. Right now my app consists of a load balancer and an app node.
If I want to change the infrastructure I can do it easily. I can for example add databases, message queues, or scale up by adding more memory,
or scale out by adding more app nodes like this (cf scale -i 5).
Wow, I added 50 nodes
And it all happens in seconds.
How do they do this?
----------------
cf services
They all use what we call containers –
It is similar to a virtual machine, but without all the friction and overhead of a virtual machine.
The idea is the same as cargo containers. You can put whatever you like into them, but seen from the outside it's the same: it's one unit.
You could put an OS, some files, a database, an app server, anything into it... and ship it.
You can then run it on your local machine or an integration test server, deploy it to a customer or to a public cloud provider, without changing anything.
You will always have the exact same environment
you can stack multiple containers together on the same servers.
The containers could for example be multiple instances of the same application, but for different customers, running completely isolated from each other, providing multi-tenancy.
Or it could be different applications
Or you can scale your app out by deploying multiple containers running on multiple servers
So one implementation of such a container is the Linux container, or LXC,
which lets you run multiple Linux systems within one Linux system.
-----
Fast: ~97% of bare metal
start/stop in milliseconds
Agile: container can be moved seamlessly between local, vm, bare metal with a click of a button, or scripted
Flexible: containerize a whole system with os, db, etc or just an application. Freedom
Lightweight:
On a typical physical server, with average compute resources, you can easily run:
● 10-100 virtual machines
● 100-1000 containers
Cloudy: support from various cloud management frameworks, like OpenStack
Is becoming the new “unit of deployment”
Changing how we develop, package, deploy and manage apps at all scales (test/dev to production)
Removes the friction of using virtual machines
Simplifies the workflow and provides performance benefits. It's the basis of most PaaS solutions like Heroku, Cloud Foundry, etc.
A Linux container uses features from the Linux kernel to create isolated environments on the same machine. Seen from the inside of such an environment, it looks like a virtual machine,
but from the outside, the host OS, it looks like a process.
But since LXC is pretty hard to work with directly
-----
like control groups for resource isolation (CPU, memory, I/O, network, etc.), and kernel namespaces to isolate an application's
view of the surrounding operating system: processes, users, network, filesystems, etc. And chroot to change the root directory of the container.
In effect you get an isolated environment where you can install your own Linux OS and your own applications, without the cost of creating a virtual machine.
---------
Namespaces:
Processes, network interfaces, filesystems, hostname, users
e.g. you can have multiple processes with pid=42 in different environments
Control groups (cgroups):
A kernel feature to limit and isolate resource usage (CPU, memory, disk I/O, etc.)
Chroot:
Changes the root to a directory on the filesystem for a single process; the process cannot normally access files outside this directory
Aufs:
Writable single-directory view built from stacked layers
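These kernel features can be inspected from any shell on a Linux host; here is a minimal sketch (assuming a Linux system with procfs mounted at /proc, as is standard):

```shell
# Each process has namespace handles under /proc/<pid>/ns; two processes
# in the same namespace see the same inode number in these links.
readlink /proc/self/ns/pid   # prints something like pid:[4026531836]
readlink /proc/self/ns/net   # prints something like net:[4026531992]

# cgroup membership of the current shell:
head -n 3 /proc/self/cgroup

# A container runtime combines fresh namespaces with a chroot-like root
# switch; unshare(1) exposes the same mechanism directly (usually needs
# root or a user namespace):
#   unshare --pid --fork --mount-proc ps ax   # sees only its own processes
```

This is what makes a containerized process look like a whole machine from the inside while remaining an ordinary process from the outside.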
We use Docker,
which is a tool that adds a user-friendly layer for working with Linux containers.
You get a command line interface and a REST interface.
You can create new images, commit their state, push/pull to a repository, and use a bunch of other useful features.
----------
“A docker is the person that works on the dock loading and unloading ships”
Docker has a command line interface with git-like commands for pulling down images, pushing new versions, diff, history, etc.
And it heavily uses aufs, which is a stackable unification file system: it unifies several directories and presents them as a single directory.
It is a layered filesystem where many containers can have their own filesystems, but all common files are shared or copied.
I will not go into detail about how this works, but the effect is that only the diffs need to be stored between two containers that use mostly the same files.
With Docker and Linux containers you have the building blocks to create containers that can run in the cloud.
OpenStack and many cloud providers have native support for Docker containers.
But they could also be the building blocks to create your own cloud, or even your own PaaS,
giving you more flexibility and control, and the possibility to tailor the infrastructure exactly to your needs.
So I will finish up by demonstrating some of the basics of Docker, and how you can use those basics to implement more advanced use cases.
And the benefits of working with containers are many.
As you have already seen in the first demo, where I ran more than 50 containers on my laptop,
it is really fast, with little overhead in memory, processing and size.
Containers share the same kernel as the host, and only the differences in the file systems are stored; all identical files are stored just once.
-----------
And it is fast: ~97% of bare metal,
start/stop in milliseconds.
Lightweight:
On a typical physical server, with average compute resources, you can easily run:
● 10-100 virtual machines
● 100-1000 containers
Containers use marginally more resources than the applications you run, as they share most things with the surrounding OS and other containers.
So let's finish with some hands-on with Docker.
docker run -i -t -p 80:9292 bjornno/ubuntu /bin/bash
git clone https://github.com/bjornno/dockerdemo.git
cd dockerdemo
bundle
rackup
http://localhost:8000
diff
Commit
History
Share
Normally you would not create images interactively like this, but use a Dockerfile…
Normally you would not create the images interactively like this, but instead have a Dockerfile (think of it as a Makefile for building a container) that is checked in with your application source code.
You always start with an image and add stuff to it. Here I start with a Ruby image, which is an Ubuntu with Ruby tools pre-installed.
docker build -t bjornno/app .
docker run -p 80:9292 bjornno/app
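A Dockerfile for a demo app like this one might look roughly like the sketch below. This is an illustration, not the actual file from the repo: the base image, package names and port are assumptions based on the rackup demo above.

```dockerfile
# Base image with Ruby tools (image name is an assumption)
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y ruby ruby-dev bundler

# Add the application source and install its gems
COPY . /app
WORKDIR /app
RUN bundle install

# The demo is a Rack app; rackup listens on 9292 by default
EXPOSE 9292
CMD ["rackup", "--host", "0.0.0.0", "--port", "9292"]
```

Each instruction produces a new image layer, so rebuilding after a source change only re-runs the steps from COPY onward; the earlier layers are served from the cache.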
So that was a very short intro to Docker.
I hope you now know a little more about how cloud and PaaS solutions work.
And that you could use the same tools locally for packaging your apps, testing, and deploying.
Check out Docker's homepage for more resources.
And also my git repo for this demo.
Thank you.