From Zero Docker to Hackathon Winner - Marcos Lilljedahl and Jimena Tapia (Docker, Inc.)
This is my story about how I got involved in the Docker hackathon (and won) without knowing Docker at all. I'll share the technological limitations I faced before using Docker and how I managed to solve them, along with some tips for getting started. To close, I'll talk about the Whaleprint project and some key features we would love to see in Docker today.
From Arm to Z: Building, Shipping, and Running a Multi-platform Docker Swarm ... (Docker, Inc.)
We live in a multi-platform world, and who doesn't want their project to run on all of them? The last few DockerCon events have covered the introduction of multi-platform image capabilities into the Docker registry and engine releases. Now it's time to put these features to good use building applications across architectures and running them all in a heterogeneous Docker Swarm! In this talk we'll cover the new `docker manifest` command for making multi-architecture images; how to emulate architectures in Docker containers on your own machine; and give a live demonstration of these capabilities with a Docker Swarm consisting of workers of different CPU architectures, including armhf, ppc64le, s390x, and x86_64. We'll also share some pointers for making sure your project is multi-platform ready! Three Takeaways: 1. Attendees will be introduced to manifest lists and how to create multi-arch images using the new `docker manifest` command. 2. Attendees will learn how to easily create and deploy a basic multi-arch service using multi-platform images. 3. Bonus: Attendees will learn how to run non-native Docker containers on their systems.
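A manifest list is, at heart, a platform-to-digest lookup table: every worker pulls the same tag, and the registry hands back the image built for that worker's CPU. As a rough client-side illustration (the structure mirrors the OCI image index layout, but the digests and the `resolve` helper here are invented for this sketch):

```python
# Sketch: how a client picks the right image from a manifest list.
# The dict mirrors the OCI image index format; digests are made up.
manifest_list = {
    "manifests": [
        {"digest": "sha256:aaa", "platform": {"os": "linux", "architecture": "amd64"}},
        {"digest": "sha256:bbb", "platform": {"os": "linux", "architecture": "arm"}},
        {"digest": "sha256:ccc", "platform": {"os": "linux", "architecture": "s390x"}},
    ]
}

def resolve(manifest_list, os_name, arch):
    """Return the digest whose platform matches the requesting client."""
    for entry in manifest_list["manifests"]:
        plat = entry["platform"]
        if plat["os"] == os_name and plat["architecture"] == arch:
            return entry["digest"]
    raise LookupError(f"no manifest for {os_name}/{arch}")

# An armhf Swarm worker pulling the same tag gets the arm image:
print(resolve(manifest_list, "linux", "arm"))  # sha256:bbb
```

This is why a single `docker service create` works across a mixed-architecture Swarm: the resolution happens per node at pull time.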
Using Docker to Develop, Test and Run Maven Projects - Wouter Danes (NLJUG)
Docker recently hit version 1.0 and is being picked up around the world by ops teams to ease running their applications. Docker can also play a big role in easing the development of applications. In this talk I will address how to use Docker to: create a more scalable build environment using Jenkins and Docker; integration-test your software using Maven and Docker; and package your software and run the images in different environments.
1. The document outlines the author's journey to becoming a Docker Captain, including founding their company Collabnix in 2015 and containerizing legacy Dell applications.
2. It discusses what Docker is and how it helps address the modern challenges of developing and deploying distributed, loosely coupled applications across multiple servers.
3. Docker Captains are elite community leaders and ambassadors who promote Docker through blogging, writing, speaking, tutorials, and open source contributions. The tips shared encourage getting involved in the Docker community by sharing knowledge and speaking at events.
This document summarizes a keynote about Docker's goals of making hardware programmable through containers and open standards. The keynote discusses Docker's goals of reinventing the programmer's toolbox by solving problems like runtime, packaging, composition and networking incrementally. It also discusses building better infrastructure plumbing and promoting open standards through projects like runC, Notary, the Open Container Project and more. The goal is to help organizations solve problems in unique ways through an open developer platform and standards.
DockerDay2015: Deploy Apps on IBM Bluemix (Docker-Hanoi)
Tom Tran gave a presentation on IBM Bluemix at DockerDay Vietnam 2015. He discussed:
1) What Bluemix is and its core concepts including accounts, organizations, spaces, apps, and services.
2) The different deployment options for Bluemix including public, dedicated, and local environments.
3) The development tools available for building and deploying apps to Bluemix like the web IDE, Eclipse, Visual Studio, and command line.
Efficient Parallel Testing with Docker by Laura Frank (Docker, Inc.)
Fast and efficient software testing is easy with Docker. We often
use containers to maintain parity across development, testing, and production environments, but we can also use containerization to significantly reduce time needed for testing by spinning up multiple instances of fully isolated testing environments and executing tests in parallel. This strategy also helps you maximize the utilization of infrastructure resources. The enhanced toolset provided by Docker makes this process simple and unobtrusive, and you’ll see how Docker Engine, Registry, Machine, and Compose can work together to make your tests fast.
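The scheduling half of that idea is plain fan-out. A minimal Python sketch (the suite names and the `run_suite` stub are invented; in a real setup each worker would start an isolated container, run the suite inside it, and tear it down):

```python
# Sketch: run independent test suites concurrently instead of serially.
# run_suite stands in for "start a container, execute the suite, tear down".
from concurrent.futures import ThreadPoolExecutor

def run_suite(name):
    # ...here you would launch an isolated environment and run the tests...
    return (name, "passed")

suites = ["unit", "integration", "api", "ui"]

with ThreadPoolExecutor(max_workers=len(suites)) as pool:
    results = dict(pool.map(run_suite, suites))

print(results)
```

Because every suite gets its own fully isolated environment, there is no shared database or port to serialize on, and wall-clock time approaches that of the slowest single suite.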
Reduce DevOps Friction with Docker & Jenkins by Andy Pemberton, CloudBees (Docker, Inc.)
Jenkins and Docker are two game-changing technologies: together, they have huge potential to reduce DevOps friction. Come learn about the integration points between CloudBees Jenkins Platform and Docker and how you can use them to get on the path to frictionless DevOps in your company.
The Tale of a Docker-based Continuous Delivery Pipeline by Rafe Colton (ModCl...) (Docker, Inc.)
The ModCloth Platform team has been building a Docker-based continuous delivery pipeline. This presentation discusses that project and how we build containers at ModCloth. The topics include what goes into our containers; how to optimize builds to use the Docker build cache effectively; useful development workflows (including using fig); and the key decision to treat containers as processes instead of mini-VMs. This presentation will also discuss (and demo!) the workflow we’ve adopted for building containers and how we’ve integrated container builds with our CI.
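The cache point is worth dwelling on: Docker reuses a layer only when the instruction and everything before it are unchanged, so the order of Dockerfile instructions decides how much of a build survives a source edit. A toy model of that chained-key behaviour (the hashing scheme here is illustrative, not Docker's actual cache-key algorithm):

```python
# Sketch: each layer's cache key depends on its own instruction and on
# every layer before it, so a change invalidates all subsequent layers.
import hashlib

def layer_keys(instructions):
    keys, parent = [], ""
    for inst in instructions:
        parent = hashlib.sha256((parent + inst).encode()).hexdigest()
        keys.append(parent)
    return keys

# Copying dependency manifests before the full source keeps the expensive
# dependency-install layer cached across ordinary code changes.
before = layer_keys(["FROM base", "COPY deps.txt .", "RUN install deps", "COPY src v1 ."])
after  = layer_keys(["FROM base", "COPY deps.txt .", "RUN install deps", "COPY src v2 ."])

# First three layers are cache hits; only the final COPY rebuilds.
print(sum(b == a for b, a in zip(before, after)))  # 3
```

This is why the common advice is to put the most volatile instructions (copying application source) as late in the Dockerfile as possible.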
DockerLabs is a GitHub repository holding a mix of labs and tutorials related to Docker, Kubernetes, and the cloud that will help you whether you are a beginner, a sysadmin, an IT pro, or a developer.
It works on a crowdsourcing model, where a group of Docker enthusiasts comes together to contribute towards a common goal –
“Learning by Collaborative Contributions”
"Workstation Up" - Docker Development at Flow by Mike Roth (Docker, Inc.)
Docker is an integral part of Flow's technology stack, supporting everything from a developer's local environment to Production containers in AWS.
"Workstation" has become central to a developer's toolset at Flow, giving them the ability to bring up or down a service, along with any upstream/downstream dependencies, in a single, simple command implemented as a Go CLI. For example, developers can run “workstation up --app www” and reliably have the www app running along with its dozens of transitive dependencies. It truly is reliable, requiring no additional configuration, and just continues to work.
The team has recently transitioned to Docker for Mac Beta and just love referencing containers via localhost!
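Bringing up a service "along with its transitive dependencies" reduces to a dependency-graph walk: every dependency must be started before anything that needs it. A sketch of that ordering step (the service names are invented; a real tool would then start each entry as a container):

```python
# Sketch: compute a start order via a depth-first topological sort, so
# every dependency starts before the services that depend on it.
deps = {
    "www": ["api", "assets"],
    "api": ["db", "cache"],
    "assets": [],
    "db": [],
    "cache": [],
}

def start_order(app, deps, seen=None, order=None):
    seen = set() if seen is None else seen
    order = [] if order is None else order
    if app in seen:
        return order
    seen.add(app)
    for dep in deps[app]:
        start_order(dep, deps, seen, order)
    order.append(app)  # start app only after all of its dependencies
    return order

print(start_order("www", deps))
```

The `seen` set also prevents a shared dependency (say, one database used by two services) from being started twice.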
Immutable infrastructure with Docker and EC2 (dotCloud)
This document discusses Gilt's strategy of using immutable infrastructure with Docker and EC2 to enable continuous delivery and minimize risk when deploying new software versions. Some key points made include:
- Gilt builds Docker containers for each new application version, creates a new "stack" of infrastructure to run the container, and uses incremental rollout and automated rollback to reduce risk.
- Immutable infrastructure emerges naturally with Docker since each version requires new containers and infrastructure rather than updating existing instances.
- Automating deployment, rollback, and incremental rollout across new infrastructure stacks reduces the probability, cost, and occurrence of failures when deploying new versions.
- Instant rollback is possible by moving traffic back to the previous version's infrastructure if a problem is detected with the new version.
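The rollout logic behind those bullets is simple to state: shift traffic to the new stack in steps, health-check after each step, and send everything back to the old stack on the first failure. A hedged sketch (the traffic percentages and the health check are stand-ins, not Gilt's actual tooling):

```python
# Sketch: incremental rollout to an immutable stack with automatic rollback.
def rollout(steps, healthy):
    """Shift traffic percentages to the new stack; roll back on failure.

    `healthy` stands in for real health checks against the new stack.
    """
    new_stack_traffic = 0
    for pct in steps:
        new_stack_traffic = pct
        if not healthy(pct):
            return 0, "rolled back"  # instant rollback: old stack takes 100%
    return new_stack_traffic, "complete"

# New version passes every check: traffic fully migrates.
print(rollout([10, 50, 100], healthy=lambda pct: True))      # (100, 'complete')
# New version fails at 50%: traffic snaps back to the old stack.
print(rollout([10, 50, 100], healthy=lambda pct: pct < 50))  # (0, 'rolled back')
```

Because the previous stack is never modified, "rollback" is nothing more than a traffic switch, which is what makes it instant.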
The document discusses how Docker can be used for integration testing. It describes how Docker allows running production environments locally, creating proofs of concept, and avoiding port collisions in continuous integration environments. It also outlines how Docker commands like build, run, start, stop, link, expose, and tag can fit into different steps of a build process. Finally, it introduces the docker-maven-plugin for building Docker images and integrating Docker into the Maven build lifecycle for testing.
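The port-collision problem mentioned above has a fix that works regardless of build tool: ask the OS for an unused ephemeral port and hand that to the container's published-port mapping, so concurrent CI jobs never contend for a fixed host port. A sketch (note the small race window between releasing the port and the container binding it):

```python
# Sketch: obtain a free host port for a container's published port,
# so parallel CI jobs don't collide on a hard-coded port number.
import socket

def free_port():
    """Ask the kernel for an unused TCP port by binding to port 0."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

# Two concurrent jobs can each publish their database container on a
# distinct host port, e.g. passing  -p {port}:5432  to docker run.
a, b = free_port(), free_port()
print(a, b)
```

The same trick is what plugins typically do under the hood when they support dynamic port assignment for integration tests.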
DockerCon SF 2015: Enabling Microservices @Orbitz (Docker, Inc.)
The slides from Steve Hoffman and Rick Fast's presentation at DockerCon SF 2015 -
Talk Description:
In this talk we will discuss how we enabled decomposition of one of our 250+ system components into a continuously deployed microservice cluster.
This includes building a standardized Docker server composed of various local companion services alongside the Docker daemon, including: dynamic service discovery via Consul, a log relay to a centralized Elasticsearch cluster, and forwarding/batching of Dropwizard metrics to Graphite.
Building on this we'll cover our Jenkins-driven automated pipeline for building Docker images and rolling deployments via Ansible, using static placement on existing infrastructure while prototyping dynamic placement using Docker + Apache Mesos.
Making it Easier to Contribute to Open Source Projects Using Docker Container... (Docker, Inc.)
Making it easy to contribute to open source projects using Docker containers, by lowering the system administration effort required to get started, and making it easy to "try out" new technology.
Dockerizing stashboard - Docker meetup at Twilio (dotCloud)
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
In this session, we learnt about Docker ONBUILD triggers, how these triggers work, and how to use them. In addition, we covered a basic docker-compose introduction by demonstrating how to build a mini microservices application (with 2 nodes). The session ran for 30 minutes. The code sample used in the meetup can be found at github.com/Codefresh-Examples/express-angular-mongo
For any questions please email us at contact@codefresh.io
Passionate about Docker technology and want to join our team? give us a shout at joinus@codefresh.io
Join our meetup to attend future sessions online @
meetup.com/Containers-101-online-meetup/
This document provides an introduction to Docker. It begins by introducing the presenter and agenda. It then explains that containers are not virtual machines and discusses the differences in architecture and benefits. It covers the basic Docker workflow of building, shipping, and running containers. It discusses Docker concepts like images, containers, and registries. It demonstrates basic Docker commands. It shows how to define a Dockerfile and build an image. It discusses data persistence using volumes. It covers using Docker Compose to define and run multi-container applications and Docker Swarm for clustering. It provides recommendations for getting started with Docker at different levels.
Node.js Rocks in Docker for Dev and Ops (Bret Fisher)
This document discusses best practices for building Node.js applications in Docker containers. It covers topics like using multi-stage Docker builds to avoid packaging devDependencies in production images, properly handling process shutdown in Node.js, and using Docker Compose for local development with features like auto-restart on file changes and dependency management between linked services. The overall goal is to help build production-ready Node.js images and development environments that are optimized for speed, size and security.
Basic Idea
Develop a build system that leverages Docker to implement a continuous integration/deployment (CI/CD) pipeline. A git commit must kick off packaging a Docker image and provisioning it in a VM.
Each git commit starts a build of a Docker image, which is then run and provisioned in a virtual machine. After every commit, a series of test cases is run on the code to ensure its correctness. Once all the test cases pass, the image is pushed to the Docker Hub registry, and a VM is provisioned which can then run the software directly (after pulling the image from Docker Hub).
This entire process ensures that the most recent version of the code is available to the person using the software, and it speeds up the overall process by at least two- to three-fold.
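The commit-to-VM flow described above is a linear pipeline with a hard gate at the test stage: nothing is pushed or provisioned unless the tests pass. A sketch of that control flow (the stage names and the `commit` dict are illustrative):

```python
# Sketch: a linear CI/CD pipeline where any failing stage halts the run,
# so an untested image is never pushed or provisioned.
def run_pipeline(commit, stages):
    completed = []
    for name, step in stages:
        if not step(commit):
            return completed, f"failed at {name}"
        completed.append(name)
    return completed, "deployed"

stages = [
    ("build image",  lambda c: True),            # docker build, tagged per commit
    ("run tests",    lambda c: c["tests_pass"]),  # gate: suite must pass
    ("push image",   lambda c: True),            # push to registry after tests
    ("provision VM", lambda c: True),            # VM pulls the image and runs it
]

print(run_pipeline({"tests_pass": True}, stages))
print(run_pipeline({"tests_pass": False}, stages))  # stops before push
```

The early return is the whole point of the gate: a red test run leaves the registry and the VMs untouched.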
DockerCon SF 2015: Docker in the New York Times Newsroom (Docker, Inc.)
The document discusses how the New York Times uses Docker in its newsroom. Some key points:
- Docker is used to deploy over 300 micro-applications across multiple servers for density, speed, and true versioning.
- Docker provides consistency through immutable containers and distributed applications. It allows for horizontal scaling.
- Common services like configuration, discovery, routing and scheduling are abstracted out into separate micro-applications on GitHub like Remora and Promise.
- Docker containers provide consistency while applications are heterogeneous. Future areas of focus include improving notifications, secrets management, image registry, and automated building.
Lightweight virtualization uses container technology to isolate processes and their resources through namespaces and cgroups. Docker is a container management system that provides lightweight virtualization. Baidu chose Docker for its BAE platform because containers provide better isolation than sandboxes with fewer restrictions and lower costs. Docker meets BAE's needs but was improved with additional security and resource constraints for its PaaS platform.
Docker for Mac and Windows: The Insider's Guide by Justin Cormack (Docker, Inc.)
Docker for Mac and Windows were released in beta in March, and provide lots of new features that users have been clamouring for, including: file system notifications, simpler file sharing, and no VirtualBox hassles.
During this talk, I will give the inside guide to how these products work. We will look at all the major components and how they fit together to make up the product. This includes a technical deep dive covering the hypervisors for OS X and Windows, the custom file sharing code, the networking, the embedded Alpine Linux distribution, and more.
What's New in Docker 1.12 (June 20, 2016) by Mike Goelzer & Andrea Luzzardi
Docker 1.12 introduces several new features for managing containerized applications at scale including Docker Swarm mode for native clustering and orchestration. Key features include services that allow defining and updating distributed applications, a built-in routing mesh for load balancing between nodes, and security improvements like cryptographic node identities and TLS encryption by default. The document also discusses plugins, health checks, and distributed application bundles for declaring stacks of services.
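The routing mesh mentioned above means a request can land on any node's published port and still reach a service task somewhere in the cluster. A toy model of that ingress behaviour (plain round-robin here; Swarm's actual load balancing is IPVS-based, and the node/task names are invented):

```python
# Sketch: routing-mesh ingress. Every node listens on the service's
# published port; arriving requests are spread across the service's tasks,
# wherever in the cluster those tasks actually run.
import itertools

class RoutingMesh:
    def __init__(self, tasks):
        self._next = itertools.cycle(tasks)

    def route(self, arriving_node):
        # The arriving node merely forwards; it need not run a task itself.
        return next(self._next)

mesh = RoutingMesh(tasks=["task-on-node1", "task-on-node2"])
# Requests hitting node3 (which runs no task) still get served:
print([mesh.route("node3") for _ in range(4)])
```

This is what lets an external load balancer point at any subset of Swarm nodes without knowing where tasks are scheduled.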
DockerCon EU 2015: Monitoring and Managing Dynamic Docker Environments (Docker, Inc.)
Presented by Alois Reitbauer, Chief Technical Evangelist, Ruxit
This talk provides detailed insights into how to manage large-scale production Docker environments. We will cover how to tune your containerized microservices for ideal performance, validate automated deployments with Marathon and Mesos, and tune and manage the deployment complexity of hundreds of nodes. Last but not least, we will demonstrate how easy it is to get monitoring for Docker up and running using Ruxit.
Mobycraft - Docker in 8-bit by Aditya Gupta (Docker, Inc.)
Mobycraft is a Minecraft client-side mod to manage and visualize Docker containers in Minecraft. This mod can be installed in any standard Minecraft client and allows young kids to learn Docker fundamentals in a fun way. It allowed a 13-year-old boy to apply his Minecraft modding skills to pick up Docker concepts such as Engine, Machine, Swarm, and the Remote API.
This project became a great bonding experience between a father and a son. It allowed them to engage in fun and geeky conversations, such as code reviews and tooling discussion, and thereby building memories for a lifetime.
Sailabove is a container-as-a-service offering from OVH Group that provides features like private registries, networks, volumes, load balancing, and log/metric collection. It is used internally by OVH for their code engine and microservices, and allows customers to deploy custom code in containers for processing IoT sensor data and building IoT platforms as a service with on-demand resources. Sailabove is part of OVH's focus on containers and leverages their experience with Docker and containers internally over the past few years.
DockerCon SF 2015: From Months to Minutes (Docker, Inc.)
How GE Appliances Brought Docker Into the Enterprise -
Talk Description: In a traditional enterprise IT shop, it’s common to find a plethora of aging technologies. From COBOL running on mainframes, to huge Java applications spread across both physical and virtual hardware, the enterprise can sometimes resemble a living museum of IT. For application owners, bureaucracy, lack of business priority, and complex infrastructure can slow innovation, and make it difficult to stay current.
At GE, we leveraged Docker/Mesos to create an internal application platform that brings speed, simplicity, and cutting edge deployment processes to our enterprise, empowering developers to go from concept to production in minutes, rather than months.
Monitoring Containers at New Relic by Sean Kane Docker, Inc.
New Relic went all-in with Docker very early, and has continued to stay on the forefront of the container ecosystem, both as a user of the technology and as a monitoring and analytics vendor. Today, a variety of teams utilize Docker in a variety of ways using a mix of home-grown and external OSS frameworks. The Container Fabric team is working on our next generation container platform utilizing Mesos/Marathon and a variety of other OSS tools, like Heka. We will briefly review our setup, and then discuss how we gather data that we care about from the ecosystem and inject it into the various tools we rely on for visibility and analytics. We love the functionality of what we’ve built, and we believe that you will find it useful too.
20 mins to Faking the DevOps Unicorn by Matt Williams, DatadogDocker, Inc.
Something changed in job ads over the last few years: everyone wants the DevOps Unicorn. What is that, and why did this happen? You probably have a good amount of what is in that description, but is there an easy way to fill in the rest? It turns out that it is possible to fake your way to being a DevOps Unicorn. All you need is a way to know which metrics are the most important, and for that you need a framework that applies everywhere. No really, it's easier than you think. There is some work needed on your part, but just a few minutes is enough to get started. In this 20-minute session, we will cover what changed in the market, what the framework looks like, and how to apply it to all of the containerized applications you need to monitor.
This document provides step-by-step instructions for dockerizing a WordPress installation. It describes downloading Docker, creating a Dockerfile to install Apache, MySQL, PHP, and WordPress, building a Docker image from the Dockerfile, running the image as a container, and configuring WordPress. It concludes by committing the container's changes to an image, tagging it, and pushing it to the Docker registry so others can use it.
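A minimal sketch of the kind of single-container Dockerfile the document describes (base image, package names, and paths are assumptions for illustration; a modern setup would more likely split WordPress and MySQL into separate containers):

```Dockerfile
# Hypothetical all-in-one WordPress image, in the spirit of the steps above.
FROM ubuntu:22.04

# Install Apache, MySQL, and PHP in a single layer
RUN apt-get update && apt-get install -y \
    apache2 mysql-server php php-mysql \
    && rm -rf /var/lib/apt/lists/*

# Download and unpack WordPress into the web root
ADD https://wordpress.org/latest.tar.gz /tmp/wordpress.tar.gz
RUN tar -xzf /tmp/wordpress.tar.gz -C /var/www/html --strip-components=1

EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

The document's build-run-commit-push cycle then maps onto `docker build -t my-wordpress .`, `docker run -d -p 80:80 my-wordpress`, `docker commit`, `docker tag`, and `docker push`.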
The document provides an agenda and highlights from Day 2 of a DockerCon conference. It includes details on upcoming keynote speakers from IBM and Google, as well as highlights from the previous day including top speakers, quotes, and photos from the event. Winners of the hackathon are also announced, which include projects that focus on instrumentation and logging of Docker hosts as well as facilitating open source slideshow authoring.
The document discusses hardware-backed authentication keys called YubiKeys. It describes the YubiKey's durable design with no batteries or moving parts that allows it to communicate over USB and support multiple authentication protocols like one-time passwords, smart cards, and FIDO U2F. The YubiKey 4 can support multiple configurations and protocols simultaneously. YubiKeys can be used across many applications and services for two-factor authentication, code signing, and encryption. Yubico, the company that created YubiKeys, has millions of users in 150 countries and provides resources and documentation to support integration and use of YubiKeys.
The Mushroom Cloud Effect or What Happens When Containers Fail? by Alois Mayr...Docker, Inc.
This document discusses the "mushroom cloud effect" that can occur when containers fail in highly dynamic container environments. It describes how a failure in one container due to a lack of disk space on the host led to cascading failures that affected many dependent services. The failure spread as container health checks failed and orchestration rescheduled containers, eventually exhausting disk space and preventing any new containers from running. Automated monitoring is needed to pinpoint the root cause of such cascading failures in complex systems with many interdependent containers and services.
John Engates, CTO at Rackspace, gave a keynote at DockerCon 14. He discussed how Docker allows developers to test and deploy applications in new ways that were previously not possible. He highlighted trends in mobility, big data/analytics, the internet of things, and social/context technologies. Engates also discussed how Rackspace will offer native support for Docker on their cloud platform.
Slides from Vincent Batts' Talk at DockerCon SF 2015
Description: Gain inspiration and confidence to contribute in a mutually beneficial way. To become more than just a consumer of the ecosystem, develop the project yourself and benefit from your own initiative. Whether you are looking for enterprise-ready solutions, want to make development life easier, or would like to see certain new features, making contributions to the greater community in a public spirit ensures the continued growth and health of the Docker project. Through personal stories of acceptance and concession, I will share practical tips and lessons learned as a regular open source contributor and particularly involved Docker collaborator.
DockerCon SF 2015: Networking BreakoutDocker, Inc.
This document provides an overview of Docker's new networking capabilities through libnetwork. It introduces libnetwork, which provides a pluggable driver-based networking stack for containers. Libnetwork implements the Container Network Model and provides APIs for creating and managing networks and endpoints. It supports multiple networking drivers like bridge and overlay. The goals are to make networking and services first-class objects in Docker and span networks across multiple hosts. The presentation encourages trying the new networking features in Docker experimental and contributing to libnetwork.
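The Container Network Model described above surfaces in the CLI roughly as follows (network and container names are placeholders; the overlay example assumes swarm mode or an external key-value store):

```shell
# Create a user-defined network; the driver is pluggable (bridge, overlay, ...)
docker network create -d bridge mynet

# Attach containers to it; each attached container becomes an endpoint
docker run -d --name web --network mynet nginx
docker run --rm --network mynet alpine ping -c 1 web   # discovery by name

# Overlay networks span multiple hosts
docker network create -d overlay multi-host-net
```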
Experiences with AWS immutable deploys and job processingDocker, Inc.
How Docker is used at Gilt: At Gilt we use Docker primarily as a unit of immutability and to allow a standard way of deploying all kinds of software as opposed to its container properties.
Why Gilt built Ionroller: An overview of the problems we tried to solve with Ionroller and immutable deploys, and the pitfalls we've encountered with immutable deployments since Ionroller saw adoption at Gilt. Will cover issues such as DNS traffic migration, utilisation of resources, ELBs not being warmed up properly, Elastic Beanstalk using Nginx as a proxy, etc. Our experiences with CloudFormation and CodeDeploy as an alternative to Ionroller and Elastic Beanstalk.
Jobs: How we used to do batch jobs. Solutions we considered such as Mesos and Chronos. An overview of Sundial, an in house solution we built in the last few months and hope to open source for running containerized Docker jobs on Amazon ECS and why we chose it as our preferred solution.
Tyrion Cannister Neural Styles by Dora Korpar and Siphan BouDocker, Inc.
Understanding deep learning is a real challenge, and even getting started installing software on your machine is difficult. In creating our Docker "hack", our goal was to try to make the deep learning algorithm Neural Style accessible to everyone by creating a user-friendly GUI that can be launched with one command and that optimizes the entire experience.
Introduction to Docker I Docker Workshop @ TwitterDocker, Inc.
Docker is a software platform that allows applications to run in isolated containers. Containers use layers and images to package up code and dependencies to enable portable deployment between computing infrastructures. Docker containers build upon the idea of operating-system-level virtualization to deliver software in packages called containers that include everything needed to run the application.
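The layering model mentioned above can be seen directly in a Dockerfile: each instruction that changes the filesystem produces a cached, shareable layer. A minimal illustrative example (image tag and file names are assumptions):

```Dockerfile
FROM alpine:3.19                      # base layer, shared by many images
RUN apk add --no-cache curl           # new layer containing the installed package
COPY app.sh /usr/local/bin/app.sh     # layer containing just the copied file
CMD ["app.sh"]                        # metadata only; adds no filesystem layer
```

Because layers are content-addressed and cached, pulling a second image built on `alpine:3.19` only downloads the layers that differ, which is what makes container distribution efficient.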
DockerCon14 Contributing to Docker by TianonDocker, Inc.
This document provides information on various ways to contribute to the Docker project, including contributing code via pull requests, helping with documentation, testing, triaging issues, and more. It discusses the large number of existing contributors and files in the Docker codebase. The document encourages submitting pull requests and offers tips for doing so, such as keeping PRs small and easy to review, writing tests, and discussing significant proposed changes first on IRC. It also introduces tools like Gordon that can help with code reviews and maintenance.
DockerCon EU 2015: Sparebank; a journey towards DockerDocker, Inc.
Sparebank 1 bank wanted to break up its monolithic application architecture into microservices. It first used virtual machines for development environments but found Docker provided better portability and efficiency. The bank has been adopting Docker for all of its applications and services, bringing benefits like easier deployment and management. It sees Docker as key to its future application development and infrastructure.
Getting Started Contributing to DockerDocker, Inc.
This document provides information and steps for contributing to open source projects like Docker. It discusses what Docker is, different ways to contribute including documentation, tutorials, issues, and code. The main steps outlined are to sign up for GitHub, install Docker, find an issue to work on, fork the Docker repository, make your contribution, and submit a pull request. Contributing code involves forking the repository, making changes locally, and submitting a pull request. Getting help is available through forums, chatrooms, and IRC. The goal is for many people to make small improvements through collaboration.
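The fork-and-pull-request steps outlined above look roughly like this on the command line (repository paths and branch names are illustrative; Docker's projects require a DCO sign-off, hence `-s`):

```shell
# Clone your fork of the repository you want to contribute to
git clone https://github.com/<your-username>/cli.git
cd cli

git checkout -b fix-typo-in-docs       # create a topic branch for the change
# ...edit files...
git commit -s -a -m "docs: fix typo"   # -s adds the Signed-off-by line
git push origin fix-typo-in-docs
# Then open a pull request from your fork on GitHub
```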
January OpenNTF Webinar: 4D - Domino Docker Deep DiveHoward Greenberg
This talk is for Domino admins and developers who would like to leverage containerization and want to get started navigating this jungle of technologies. Docker, Podman, Kubernetes, OpenShift, and more - we're going to explain when to use which platform and how to automate your deployments. The speakers will be:
Thomas Hampel, Director, HCL Product Management
Daniel Nashed, HCL Lifetime Ambassador
I have evidence that using git and GitHub for documentation, along with community doc techniques, can give us 300 doc changes in a month. I've bet my career on these methods and I want to share them with you.
How and Why you can and should Participate in Open Source Projects (AMIS, Sof...Lucas Jellema
For a long time I was reluctant to actively contribute to an open source project. I thought it would be rather complicated and demanding – that I didn't have the knowledge or skills for it, or at the very least that the project team wasn't waiting for me.
In December 2021, I decided to make a serious contribution to the Dapr.io project – and to finally find out how it works and whether it really is that complicated. In this session I want to tell you about my experiences: how Fork, Clone, Branch, Push (and PR) is the rhythm of contributing to an open source project, and how you perform these Git actions against GitHub repositories; how to learn how such a project functions and how to connect to it; which tools are needed and which communication channels are used. I will tell how the standards of the project – largely automatically enforced – help me become a better software engineer, with an eye for readability and testability of the code.
How the review process is quite exciting once you have offered your contribution. And how the final "merge to master" of my contribution and then the actual release (Dapr 1.6 contains my first contribution) are nice milestones.
I hope to motivate participants in this session to take the step themselves and contribute to an open source project in the form of issues or samples, documentation or code. It's valuable to the community and the specific project, and I think it's definitely a valuable experience for the contributor. I used to look up to it, and now that I've done it, it gives me confidence – and leaves me wanting more (I could still use some help with the work on Dapr.io, by the way).
The document summarizes Day 2 of DockerCon. It discusses Docker being ready for production use with solutions for building, shipping, and running containers. It highlights Docker Hub growth and improvements to quality. Business Insider's journey with Docker is presented, covering lessons learned around local development and using Puppet and Docker Hub. Future directions discussed include orchestration tools and image security.
Presentation from the CopenhagenR - useR Group Meetup at IT University of Copenhagen on Oct. 11 2016 on how to automatically deploy web applications built in R to a Cloud server (here DigitalOcean) using open source Docker with GitHub and basic Continuous Integration (here CircleCI) for automated testing and deployment.
Presenter:
Niels Ole Dam, Things in Flow
Excerpt from the invitation to the meetup:
Niels will talk about his favorite R-setup and will demonstrate how R, combined with some nice DockeR and Github tricks, can help even small teams and companies leverage the power of modern cloud computing. Niels uses R on a daily basis in his work as an independent consultant and he will share his thoughts on DockeR at the next meetup.
Subjects covered:
- How to set up and use RStudio, Docker, and Docker Compose locally and with GitHub integration.
- How to set up and use Continuous Integration (CI) with automated testing and deployment to DigitalOcean using CircleCI, reusing the same docker-compose.yml file locally and remotely.
- Tips and tricks on how to set up a good workflow.
- Introduction to all the technologies and tools used.
There are lots of clickable links in the pdf-version of the slides.
Code for the setup demonstrated can be found at:
https://github.com/thingsinflow/r-docker-workflow
An accompanying clickable flowdiagram can be found at:
http://bit.ly/R-Docker-workflow
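Reusing the same docker-compose.yml locally and in CI might look roughly like this CircleCI sketch (job layout, the test command, and the deploy script are assumptions, not the speaker's actual setup):

```yaml
# Hypothetical .circleci/config.yml
version: 2
jobs:
  build:
    machine: true                        # VM executor, so docker-compose is available
    steps:
      - checkout
      - run: docker-compose up -d        # same compose file used on the laptop
      - run: docker-compose run --rm app Rscript tests/run_tests.R
      - run: ./deploy-to-digitalocean.sh # placeholder deploy step
```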
Enjoy!
:-)
This document discusses Apache CloudStack, an open source cloud computing platform that is currently in incubation at the Apache Software Foundation (ASF). It provides information on how the ASF works, CloudStack's status and participation, the development process, and how to contribute to the project. Key points include that CloudStack 4.0 has been released, the code has moved to Apache repositories, and contributors are working on improvements for the 4.1 release including packaging, dev tools, storage architecture, and testing. The document encourages participation through discussions, documentation, code reviews, and issue tracking on CloudStack's Jira and Review Board systems.
This document summarizes Anne Gentle's presentation on treating documentation like code. Some key points include:
- Documentation should be stored and managed in a version control system like code to enable features like automatic builds, continuous integration, testing, and review processes.
- Goals of treating docs like code include improving quality, trust, workflows, ability to scale collaboration, and giving documentation ownership.
- Plans should consider users, contributors, deliverables, and business needs when setting up documentation processes and tools.
- Automating builds, publishing, and other processes through continuous integration/delivery helps improve efficiency and accuracy of documentation.
A Hands-On Introduction To Docker Containers.pdfEdith Puclla
This document provides a hands-on introduction to Docker containers. It discusses what Docker is and how it solves the "it works on my machine" problem by allowing applications to run the same way in any Docker environment. It then covers how to install Docker and basic Docker components such as the Dockerfile, images, and containers. It demonstrates basic Docker commands and discusses what's new with Docker, like extensions and WebAssembly support. Finally, it promotes getting involved in the Docker community.
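The basic command workflow such an introduction typically demonstrates (image and container names here are illustrative):

```shell
docker build -t hello-app .                       # build an image from the Dockerfile in .
docker run -d -p 8080:80 --name hello hello-app   # run it as a background container
docker ps                                         # list running containers
docker logs hello                                 # view the container's output
docker stop hello && docker rm hello              # stop and remove the container
```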
OpNovember Water Cooler Talk: The Mystery of Domino on Docker - Part 1Howard Greenberg
November Water Cooler Talk: The Mystery of Domino on Docker - Part 1
Why Use Docker for Managers, Developers, or Administrators - Christian Guedemann, Webgate
Docker Demo from a Developer Perspective - Dan Dumont, HCL
Using Docker for Admins - Roberto Boccadoro, ELD Engineering
For the video go to http://www.openntf.org/webinars
Design thinking: Building a developer experience from scratchBecky Todd
Becky Todd discusses redesigning Atlassian's developer documentation from scratch. User testing revealed that developers struggled to navigate outdated content and often failed to complete onboarding tasks. Todd then led a redesign process that improved search, navigation, and content updates through design thinking. This included community contribution, early adopters, and building a content authoring toolkit. Follow-up user testing showed developers could now complete onboarding tasks in under 30 minutes and deploy usable code.
Habitat Workshop at Velocity London 2017Mandi Walls
Mandi Walls is the Technical Community Manager for EMEA at Chef and the Habitat Community lead is Ian Henry. The document discusses how modern applications are trending toward immutability, platform agnosticism, complexity reduction, and scalability. It provides an overview of ways to work with Habitat, including using artifacts that run themselves via the supervisor, exporting to Docker, and building plans from scratch or using scaffolding.
Karthik Gaekwad presented on containers and microservices. He discussed the evolution of DevOps and how containers and microservices fit within the DevOps paradigm by allowing for collaboration between development and operations teams. He defined containers, microservices, and common containerization concepts. Gaekwad also provided examples of how organizations are using containers for standardization, continuous integration and delivery pipelines, and hosting legacy applications.
The document discusses the concept of "Docs Like Code", which treats documentation like code by storing docs in version control systems, using plain text formats, and integrating doc writing and publishing into the same workflow as software development. It provides the case study of Apache Pulsar, which uses GitHub and other tools to collaborate effectively on docs between developers, writers and users. Benefits include better doc quality and syncing with code through continuous integration/deployment of docs.
Slides from Ben Golub's (Docker CEO, @golubbe) opening day keynote at the DockerCon EU conference in Amsterdam on December 4, 2014 (http://europe.dockercon.com/)
Hey curious friend, let's play a game: how can we bring together two very different companies – an established enterprise, where traditional dev and ops have cultural differences when working together, and a DevOps-champion startup? In between lie a number of real use cases for bringing DevOps culture with Docker to Atos Worldline. In my talk I will discuss the first use cases for Docker at Atos Worldline, where we are today, learnings and benefits so far, our future technology stack, and how Docker is changing our human stack, a.k.a. how we communicate and work together.
When you treat docs like code, you multiply everyone’s efforts and streamline processes through collaboration, automation, and innovation. The benefits are real, but these efforts are complex. The ways you can leverage developer process and tools vary widely. Let’s unpack the absolute best situation for using a docs as code model.
Then, we can walk through multiple considerations that may point you in one direction or another. We can talk about version control, publishing, REST API considerations, source formats, automation, quality controls and testing, and lessons learned. Let’s study best practices that are outcome-dependent and situational, creating strategic efforts.
This document provides an overview of open source software and recommendations for companies adopting open source. It discusses how open source can accelerate projects and attract talent. It profiles companies like Adobe, Netflix, Oracle, Samsung, and Microsoft that contribute to open source despite not being commonly associated with it. The document outlines how to launch an open source project, including using an open source license, README, contribution guidelines, and code of conduct. It also discusses roles in open source projects and various open source business models. The recommendations encourage companies to publish independent components on GitHub, take releases from GitHub, and create developer websites to engage with the open source community.
Containerize Your Game Server for the Best Multiplayer Experience Docker, Inc.
Raymond Arifianto, AccelByte and
Mark Mandel, Google -
We have been deploying containerized microservices for our game backend services for a while. Now we are tackling the challenge of scaling up fleets of dedicated game servers across multiple regions, multiple data centers, and multiple providers – some on bare metal, some in the cloud. We leverage Docker containerization to deploy game servers and achieve portability, fast deployment, and predictability, enabling us to scale up to thousands of servers, on demand, without breaking a sweat.
How to Improve Your Image Builds Using Advance Docker BuildDocker, Inc.
Nicholas Dille, Haufe-Lexware + Docker Captain -
Docker continues to be the standard tool for building container images. For more than a year, Docker has shipped with BuildKit as an alternative image builder, providing advanced features for secret and cache management. These features help make image builds faster and more secure. In this session, Docker Captain Nicholas Dille will teach you how to use BuildKit's features to your advantage.
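BuildKit's secret and cache mounts look like this in a Dockerfile (the Python base image and secret id are illustrative choices, not from the talk):

```Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11-slim
COPY requirements.txt .

# Cache mount: pip's download cache persists between builds, speeding up
# rebuilds without baking the cache into an image layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

# Secret mount: the secret is available only during this RUN step and is
# never written to an image layer.
RUN --mount=type=secret,id=api_token \
    sh -c 'echo "token length: $(wc -c < /run/secrets/api_token)"'
```

Built with, e.g., `docker build --secret id=api_token,src=token.txt .` (BuildKit enabled).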
Build & Deploy Multi-Container Applications to AWSDocker, Inc.
Lukonde Mwila, Entelect -
As the cloud-native approach to development and deployment becomes more prevalent, it's an exciting time for software engineers to be equipped on how to dockerize multi-container applications and deploy them to the cloud.
In this talk, Lukonde Mwila, Software Engineer at Entelect, will cover the following topics:
- Docker Compose
- Containerizing an Nginx Server
- Containerizing a React App
- Containerizing a Node.js App
- Containerizing a MongoDB App
- Running a Multi-Container App Locally
- Creating a CI/CD Pipeline
- Adding a build stage to test containers and push images to Docker Hub
- Deploying Multi-Container App to AWS Elastic Beanstalk
Lukonde will start by giving an overview of how Docker Compose works and how it makes it very easy and straightforward to start up multiple Docker containers at the same time and automatically connect them together with some form of networking.
After that, Lukonde will take a hands-on approach to containerizing an Nginx server, a React app, a Node.js app, and a MongoDB instance to demonstrate the power of Docker Compose. He'll demonstrate the use of two Dockerfiles for an application: one production-grade and the other for local development and running tests. Lastly, he'll demonstrate creating a CI/CD pipeline in AWS to build and test our Docker images before pushing them to Docker Hub or AWS ECR, and finally deploying our multi-container application to AWS Elastic Beanstalk.
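A minimal compose file for a stack like the one described might look as follows (service names, images, ports, and build paths are assumptions, not the speaker's actual files):

```yaml
version: "3.8"
services:
  nginx:
    image: nginx:alpine
    ports: ["80:80"]
    depends_on: [frontend, api]
  frontend:
    build: ./react-app            # built from a local Dockerfile
  api:
    build: ./node-app
    environment:
      # the service name "mongo" resolves over the compose network
      - MONGO_URL=mongodb://mongo:27017/app
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```

One `docker-compose up` then builds and starts all four services on a shared network.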
Securing Your Containerized Applications with NGINXDocker, Inc.
The document summarizes Kevin Jones' presentation on securing containerized applications with NGINX. It discusses the benefits of using a reverse proxy for security, NGINX best practices for TLS configuration, and deploying NGINX in Docker containers. It also provides code examples and configurations for setting up NGINX as a reverse proxy, optimizing TLS, and using NGINX as a sidecar proxy.
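A reverse-proxy configuration in the spirit of the talk might look like this (certificate paths, server name, and the upstream container name are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols       TLSv1.2 TLSv1.3;   # disable legacy protocols

    location / {
        proxy_pass http://app:8080;        # app container on the Docker network
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```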
How To Build and Run Node Apps with Docker and ComposeDocker, Inc.
Kathleen Juell, Digital Ocean -
Containers are an essential part of today's microservice ecosystem, as they allow developers and operators to maintain standards of reliability and reproducibility in fast-paced deployment scenarios. And while there are best practices that extend across stacks in containerized environments, there are also things that make each stack distinct, starting with the application image itself.
This talk will dive into some of these particularities, both at the image and service level, while also covering general best practices for building and running Node applications with database backends using Docker and Compose.
Jessica Deen, Microsoft -
Helm 3 is here; let's go hands-on! In this demo-fueled session, I'll walk you through the differences between Helm 2 and Helm 3. I'll offer tips for a successful rollout or upgrade, go over how to easily use charts created for Helm 2 with Helm 3 (without changing your syntax), and review opportunities where you can participate in the project's future.
Distributed Deep Learning with Docker at SalesforceDocker, Inc.
Jeff Hajewski, Salesforce -
There is a wealth of information on building deep learning models with PyTorch or TensorFlow. Anyone interested in building a deep learning model is only a quick search away from a number of clear and well written tutorials that will take them from zero knowledge to having a working image classifier. But what happens when you need to deploy these models in a production setting? At Salesforce, we use TensorFlow models to help us provide customers with insights into their data, and we do this as close to real-time as possible. Designing these systems in a scalable manner requires overcoming a number of design challenges, but the core component is Docker. Docker enables us to design highly scalable systems by allowing us to focus on service interactions, rather than how our services will interact with the hardware. Docker is also at the core of our test infrastructure, allowing developers and data scientists to build and test the system in an end to end manner on their local machines. While some of this may sound complex, the core message is simplicity - Docker allows us to focus on the aspects of the system that matter, greatly simplifying our lives.
The First 10M Pulls: Building The Official Curl Image for Docker HubDocker, Inc.
James Fuller, webcomposite s.r.o. -
Curl is the venerable (yet very modern) 'swiss army knife' command line tool and library for transferring data with URLs. Recently we (the Curl team) decided to build a release for Docker Hub. This talk will outline our current development workflow with respect to the docker image and provide insights on what it takes to build a docker image for mass public consumption. We are also keen to learn from users and other developers how we might improve and enhance the official curl docker image.
Fabian Stäber, Instana -
In recent years, we saw a great paradigm shift in software engineering away from static monolithic applications towards dynamic distributed horizontally scalable architectures. Docker is one of the key technologies enabling this development. This shift poses a lot of new challenges for application monitoring, ranging from practical issues (need for automation) to technical challenges (Docker networking) to organizational topics (blurring line between software engineers and operations) to fundamental questions (define what is an application). In this talk we show how Docker changed the way we do monitoring, how modern application monitoring systems work, and what future developments we expect.
COVID-19 in Italy: How Docker is Helping the Biggest Italian IT Company Conti...Docker, Inc.
Clemente Biondo, Engineering Ingegneria Informatica -
When the COVID-19 pandemic started, Engineering Ingegneria Informatica Group (1.25 billion euros of revenue, 65 offices around the world, 12,000 employees) was forced to put their digital transformation to the test in order to maintain operational continuity. In this session, Clemente Biondo, the Tech Lead of the Information Systems Department, will share how his company is reacting to this unforeseeable scenario and how Docker-driven digital transformation paved the way for work to continue remotely. Clemente will discuss learnings from moving from colocated teams, manual approaches, email-based business processes, and a monolithic application to a mature DevOps culture characterized by a distributed autonomous workforce and a continuous deployment process that deploys backward-compatible Docker containerized microservices into hybrid multi-cloud datacenters an average of twice a day with zero downtime. He will detail how they use Docker to unify dev, test, and production environments, and as an efficient, automated mechanism for deploying applications. Lastly, Clemente shares how, in our darkest hour, he and others are working to shine their brightest light.
The document discusses how NOAA's Space Weather Prediction Center transitioned from a monolithic architecture to microservices using Docker. It describes how they started with a small verification project, then replaced their critical GOES satellite data source. This improved developers' morale and delivery speed. They encountered some security issues initially but learned from them. The transition was very successful and allowed them to quickly expand their mission to forecast aviation impacts using scientists' models packaged as Docker services.
Become a Docker Power User With Microsoft Visual Studio CodeDocker, Inc.
Brian Christner, 56k + Docker Captain -
In this session, we will unlock the full potential of using Microsoft Visual Studio Code (VS Code) and Docker Desktop to turn you into a Docker Power User. When we expand and utilize the VS Code Docker plugin, we can take our projects and Docker skills to the next level. In addition to using VS Code, we streamline our Docker Desktop development workflow with less context switching and built-in shortcuts. You will learn how to bootstrap new projects, quickly write Dockerfiles utilizing templates, build, run, and interact with containers all from VS Code.
How to Use Mirroring and Caching to Optimize your Container RegistryDocker, Inc.
Brandon Mitchell, Boxboat + Docker Captain -
How do you make your builds more performant? This talk looks at options for configuring caching and mirroring of the images you need, to save on bandwidth costs and to keep running even if something goes down upstream.
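One common setup the talk's topic suggests is a pull-through cache: point the Docker daemon at a local mirror via `daemon.json` (the mirror URL below is a placeholder):

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```

The mirror itself can be an instance of the open source registry running in proxy mode, e.g. `docker run -d -p 5000:5000 -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2`, so repeated pulls are served locally instead of from Docker Hub.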
Monolithic to Microservices + Docker = SDLC on Steroids!Docker, Inc.
Ashish Sharma, SS&C Eze -
SS&C Eze provides various products in the stock market domain. We spent the last couple of years building Eclipse, an investment suite born in the cloud. The journey so far has been very interesting. The very first version of the product was a bunch of monolithic Windows services deployed using the Octopus tool. We successfully managed to bring all the monolithic problems to the cloud and created a nightmare for ourselves. We then started applying microservices architecture principles and breaking the monolith into small services. Very soon we realized that we needed a better packaging/deployment tool. Docker looked like a magical solution to our problem. Since its adoption, it has not only solved the deployment problem for us but made a deep impact on different aspects of the SDLC. It allowed us to use heterogeneous technology stacks, simplified development environment setup, simplified our testing strategy, improved our speed of delivery, and made our developers more productive. In this talk I would like to share our experience of using Docker and its positive impact on our SDLC.
Kubernetes networking can be complex to scale due to issues like growing iptables rules, but newer solutions are helping. Pod networking uses CNI plugins like flannel or Calico to assign each pod an IP and allow communication. Service networking uses kube-proxy and iptables or IPVS for load balancing to pods. DNS is used to resolve service names to IPs. While Kubernetes networking brings flexibility, operators must learn the nuances of their specific CNI plugin and issues can arise, but the ecosystem adapts quickly to new needs and changes don't impact all workloads.
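The service-networking piece described above is driven by a small manifest; kube-proxy programs iptables or IPVS rules so traffic to the service IP is load-balanced across matching pods, and cluster DNS resolves the service name to that IP (names and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # pods carrying this label become endpoints
  ports:
    - port: 80          # port exposed on the service IP
      targetPort: 8080  # container port on the pods
```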
Andy Clemenko, StackRox -
One underutilized, and amazing, feature of the Docker image scheme is labels. Labels are a built-in way to document all aspects of the image itself. Think about all the information that the tags inside your clothing carry: if you care to look, you can find out everything about the garment. All that information can be very valuable. Now think about how we can leverage labels to carry similar information about an image. We can even use labels to carry Docker Compose or Kubernetes YAML, and include them in the CI/CD process to make things more secure and smoother. Come find out some fun techniques for leveraging labels to do amazing things.
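Labels are baked in at build time with the `LABEL` instruction (the keys below follow the common reverse-DNS / OCI annotation convention; the values and the `com.example` key are illustrative):

```Dockerfile
LABEL org.opencontainers.image.source="https://github.com/example/app" \
      org.opencontainers.image.revision="abc1234" \
      com.example.k8s-manifest="deploy/app.yaml"
```

They can then be read back with, e.g., `docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.revision" }}' myimage`, which is what makes them usable from CI/CD scripts.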
Using Docker Hub at Scale to Support Micro Focus' Delivery and Deployment ModelDocker, Inc.
Micro Focus uses Docker Hub at scale to support its software delivery and deployment model. Some key points:
- Docker Hub is used as the registry service for Micro Focus container images
- It allows for optimized, secure, reliable and cost-effective software delivery through deployments and updates of container images to customers and partners
- Micro Focus leverages features like private repositories, offline/online access, signing and scanning of images, and integration with CI/CD pipelines
- Micro Focus manages over 1,650 organizations, 450 repositories, and 18 teams on Docker Hub to control access and deliver its software
Build & Deploy Multi-Container Applications to AWSDocker, Inc.
Lukonde Mwila, Entelect
As the cloud-native approach to development and deployment becomes more prevalent, it's an exciting time for software engineers to learn how to dockerize multi-container applications and deploy them to the cloud.
In this talk, Lukonde Mwila, Software Engineer at Entelect, will cover the following topics:
- Docker Compose
- Containerizing an Nginx Server
- Containerizing a React App
- Containerizing a Node.js App
- Containerizing a MongoDB App
- Running a Multi-Container App Locally
- Creating a CI/CD Pipeline
- Adding a build stage to test containers and push images to Docker Hub
- Deploying Multi-Container App to AWS Elastic Beanstalk
Lukonde will start by giving an overview of how Docker Compose works and how it makes it easy and straightforward to start up multiple Docker containers at the same time and automatically connect them together with some form of networking.
After that, Lukonde will take a hands-on approach, containerizing an Nginx server, a React app, a Node.js app and a MongoDB instance to demonstrate the power of Docker Compose. He'll demonstrate the use of two Dockerfiles for an application, one production-grade and the other for local development and running tests. Lastly, he'll demonstrate creating a CI/CD pipeline in AWS to build and test the Docker images before pushing them to Docker Hub or AWS ECR, and finally deploying the multi-container application to AWS Elastic Beanstalk.
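A trimmed-down sketch of what such a Compose file might look like (service names, ports, and image tags are illustrative assumptions, not taken from the talk):

```yaml
version: "3.8"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"        # public entry point, proxies to the other services
    depends_on:
      - client
      - api
  client:              # React app, built from its own Dockerfile
    build: ./client
  api:                 # Node.js app
    build: ./api
    environment:
      # Compose service names double as hostnames on the shared network
      - MONGO_URL=mongodb://mongo:27017/app
  mongo:
    image: mongo:4
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```

A single `docker-compose up` then builds and starts all four containers on one network, where each service is reachable by its name.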
From Fortran on the Desktop to Kubernetes in the Cloud: A Windows Migration S...Docker, Inc.
Elton Stoneman, Docker Captain + Container Consultant and Trainer
How do you provide a SaaS offering when your product is a 10-year-old Fortran app, currently built to run on Windows 10? With Docker and Kubernetes of course - and you can do it in a week (... to prototype level at least).
In this session I'll walk through the processes and practicalities of taking an older Windows app, making it run in containers with Kubernetes, and then building a simple API wrapper to host the whole stack as a cloud-based SaaS product.
There's a lot of technology here from a real world case study, and I'll focus on:
- running Windows apps in Docker containers
- building a .NET Core API which can run in Linux or Windows containers
- running the stack in Kubernetes with Docker Desktop locally and AKS in the cloud
- configuring AKS workloads in Azure to burst out to Azure Container Instances
And there's a core theme to this session: Docker and Kubernetes are complex technologies, but they're the key to modern development. If you invest time learning them, they make projects like this simple, portable, fast and fun.
Developing with Docker for the Arm ArchitectureDocker, Inc.
This virtual meetup introduces the concepts and best practices of using Docker containers for software development for the Arm architecture across a variety of hardware systems. Using Docker Desktop on Windows or Mac, Amazon Web Services (AWS) A1 instances, and embedded Linux, we will demonstrate the latest Docker features to build, share, and run multi-architecture images with transparent support for Arm.
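The multi-architecture build-share-run workflow described above can be sketched with `docker buildx` (the image name is a placeholder, and the commands assume a Docker installation with buildx enabled and a registry you can push to):

```shell
# Create and use a builder instance that can target several platforms
docker buildx create --name multi --use

# Build for x86_64 and two Arm variants in one shot, and push the
# resulting manifest list so each platform pulls its native image
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t example/myapp:latest \
  --push .

# Inspect the manifest list to see the per-architecture entries
docker buildx imagetools inspect example/myapp:latest
```

Non-native platforms are typically built via QEMU emulation, which is what makes the "build Arm images on your x86 laptop" workflow transparent.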
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
5. • Helps developers build, ship and run applications faster
• Written in Go
• Open source!
• Under an Apache 2.0 License
• Most starred Go project on GitHub
Fun fact: In the past year, 58% of pull requests submitted to the Docker Engine
were authored by people who are neither maintainers nor Docker employees,
12% by maintainers working for other companies, and 30% by Docker
employees themselves.
Introduction to the Docker Project
6. • Each subproject, or repo, has its own set of maintainers
• Want to find them? Open the MAINTAINERS file in the docker/xyz repo
• Maintainers are responsible for:
Reviewing/Approving PRs & making design decisions
Doing day-to-day work of running project operations
• Not all Maintainers work at Docker
• New Maintainers are added from the community by existing Maintainers
Maintainers spend their time doing whatever needs to be done, not necessarily
what is the most interesting or fun
The project structure
8. • Is the fastest and easiest way to start using Docker on your laptop
• Build and run containers through a simple, yet powerful graphical
user interface (GUI)
• Mount volumes easily via file browsing
Kitematic
9. Docker Compose
• Is a tool for defining and running multi-container Docker
applications.
• Uses a Compose file (YAML) to configure your application’s
services
• With a single command, creates and starts all the services from
your configuration.
10. Docker Machine
• Automatically sets up Docker on your cloud providers and inside
your data center
• Provisions the hosts, installs Docker Engine on them and
configures the Docker client to talk to the Docker Engines.
• Allows you to set up separate environments with a few simple commands
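For example (the driver and machine name are illustrative; Docker Machine has since been deprecated, but this was the typical flow at the time of this deck):

```shell
# Provision a VirtualBox VM with Docker Engine pre-installed
docker-machine create --driver virtualbox dev

# Point the local docker client at the new host
eval "$(docker-machine env dev)"

# Containers now run on the "dev" machine, not your laptop
docker run -d nginx
```

Swapping `--driver` for a cloud driver (e.g. `amazonec2`) provisions the host at that provider instead, which is what "sets up Docker on your cloud providers" refers to.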
13. Yes, there are many ways to contribute, not just with code!
• Write documentation
• Review PRs or Triage issues
• Report bugs
• Mentor other users or contributors
• Make tutorials
• Write tests
• Organize a meetup
• Answer support questions on IRC, Docker's forums or Stack Overflow
Ways to Contribute
14. Step 1: Install the software you need (Docker, git, etc.)
Step 2: Fork the repo
Step 3: Find an issue to work on
Step 4: Work on that issue
Step 5: Create a pull request
Step 6: Participate in your PR review until a successful merge
Full guide here: https://docs.docker.com/opensource/code/
How you contribute to code
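Steps 2 through 5 look roughly like this on the command line (`<your-username>`, the issue number, and the branch name are placeholders):

```shell
# Step 2: fork docker/docker on GitHub, then clone your fork
git clone https://github.com/<your-username>/docker.git
cd docker

# Steps 3-4: create a branch named after the issue you claimed,
# make your changes, then commit with -s to add the Signed-off-by
# line the Docker project requires (DCO)
git checkout -b 12345-fix-docs-typo
git commit -s -am "Fix typo in run reference docs"

# Step 5: push the branch and open a pull request on GitHub
git push origin 12345-fix-docs-typo
```

The PR review (step 6) then happens on GitHub, where maintainers comment until the change is ready to merge.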
15. Finding and claiming those issues
If you’re just starting out, find unclaimed issues based on your experience level
and interests:
• Go to the repo you’re interested in
• Click “Issues”
• Filter for experience level: e.g., exp/beginner
• Filter for type of issue: e.g., kind/docs
• Claim the issue by commenting “#dibs”
More on issues
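Combined in GitHub's issue search box, those filters look like this (the `no:assignee` qualifier is standard GitHub search syntax, added here as an assumption about how to surface unclaimed issues):

```
is:issue is:open label:exp/beginner label:kind/docs no:assignee
```
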
20. • Hang out on IRC and take a look at comments on GitHub
• Check out our Open Source documentation at:
https://docs.docker.com/opensource
• Attend one of the Meet the Maintainers sessions
• Follow the discussion in our Docker Forums
How to learn more
22. Well almost...
• French Ben @FrenchBen - Maintainer Kitematic
• David Gageot @dgageot - Maintainer Machine
• Aanand Prasad @AanandPrasad - Maintainer Compose
Ask us anything