The first technology-driven reality competition, showcasing the incredible members of the virtualization community and their talents. Virtually Everywhere · virtualdesignmaster.com
#VirtualDesignMaster 3 Challenge 3 - Steven Viljoen · vdmchallenge
Things on Mars have been going well, since we now have multiple options for our infrastructure, but the fact remains that we are colonizing a foreign planet.
This season, on Virtual Design Master, we’ve been dealing with the aftermath of a second round of the zombie outbreak. We’ve designed the systems to build spaceships, which are evacuating what’s left of the human race from the earth. We’ve built the infrastructure to support our new civilization on the moon. We’ve also tried to prepare what’s left of the earth in case there is another round of the outbreak, by fortifying islands across the world by deploying infrastructure remotely.
OSCON London 2016 - Docker from Development to Production · Patrick Chanezon
Docker revolutionized how developers and operations teams build, ship, and run applications, enabling them to leverage the latest advancements in software development: the microservice architecture style, the immutable infrastructure deployment style, and the DevOps cultural model.
Existing software layers are not a great fit to leverage these trends. Infrastructure as a service is too low level; platform as a service is too high level; but containers as a service (CaaS) is just right. Container images are just the right level of abstraction for DevOps, allowing developers to specify all their dependencies at build time, building and testing an artifact that, when ready to ship, is the exact thing that will run in production. CaaS gives ops teams the tools to control how to run these workloads securely and efficiently, providing portability between different cloud providers and on-premises deployments.
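The "specify all their dependencies at build time" idea can be sketched with a minimal Dockerfile (the base image, versions, and file names here are illustrative assumptions, not from the talk):

```dockerfile
# Dependencies are declared once at build time, so the artifact that is
# built and tested is the exact artifact that runs in production.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

The resulting image is the unit of shipment: developers hand ops a self-contained artifact rather than an app plus an installation procedure.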
Patrick Chanezon offers a detailed overview of the latest evolutions to the Docker ecosystem enabling CaaS: standards (OCI, CNCF), infrastructure (runC, containerd, Notary), platform (Docker, Swarm), and services (Docker Cloud, Docker Datacenter). Patrick ends with a demo showing how to do in-container development of a Spring Boot application on a Mac running a preconfigured IDE in a container, provision a highly available Swarm cluster using Docker Datacenter on a cloud provider, and leverage the latest Docker tools to build, ship, and run a polyglot application architected as a set of microservices—including how to set up load balancing.
Docker introduction.
References : The Docker Book : Containerization is the new virtualization
http://www.amazon.in/Docker-Book-Containerization-new-virtualization-ebook/dp/B00LRROTI4/ref=sr_1_1?ie=UTF8&qid=1422003961&sr=8-1&keywords=docker+book
This document provides instructions for a lab on using Docker to install and run containers. The objectives are to install Docker, create images and containers, launch applications in containers, and store and access data in containers. It outlines setting up Docker on Ubuntu, pulling existing images like Fedora and running containers from them. Specific steps look at running the "hello-world" container, installing wget in a Fedora container, and persisting data. The last section provides instructions for building a Docker image to run the OwnCloud application in a container, addressing aspects like installing the application, configuring network access, and persisting data and configuration.
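A condensed sketch of those lab steps might look like the following (my own reconstruction, not the lab's exact commands; assumes an Ubuntu host with a running Docker daemon, and package names are illustrative):

```shell
sudo apt-get install -y docker.io            # install Docker on Ubuntu
docker run hello-world                       # smoke-test the installation
docker pull fedora                           # pull an existing image
docker run -it fedora yum install -y wget    # install wget inside a Fedora container
docker run -v /srv/data:/data fedora \
  sh -c 'echo kept > /data/note.txt'         # data written to /data survives on the host
```

The last command shows the persistence pattern the lab builds on: the container is ephemeral, but anything written under the mounted host volume outlives it.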
Docker New York Meetup May 2015 - The Docker Orchestration Ecosystem on Azure · Patrick Chanezon
Docker Inc. provides products and services for managing containers. The Docker ecosystem includes open source tools for building, shipping, and running applications packaged into containers. Key components include Docker Engine for building containers, Docker Hub for sharing container images, and orchestration tools like Docker Swarm and Kubernetes for deploying containers across multiple hosts. Many companies are developing technologies that work with Docker to provide additional container management capabilities.
Creating an effective developer experience on Kubernetes · Lenses.io
In this presentation, we will talk about the tools we leveraged and developed, the processes we established in our CI/CD in order to give to our developers fully isolated environments per their needs and run automated tests in a Kubernetes cluster.
Presented by Spiros Economakis, a Senior DevOps Engineer and Cloud Integration Lead at Lenses.io
Email him at: spiros@lenses.io
Follow him on Twitter: @spirosoik
If you're not familiar with Docker yet, here is your chance to catch up: a quick overview of the open source Docker Engine and its associated services delivered through the Docker Hub. Jérôme will also discuss the new features of Docker 1.0 and briefly explain how you can run and maintain Docker on Azure. In addition, an Azure team member will demonstrate how to deploy Docker to Azure. The presentation will be followed by a Q&A session!
Remix of two other open source presentations along with my own content: 40 slides auto-timed at 20 seconds each (similar to Pecha Kucha-style timing). This was delivered via the Caribbean Tech Dev forum's monthly Google Hangout in November 2015; the video can be viewed at https://www.youtube.com/watch?v=xANrsSin_-0
Docker is a containerization platform that packages applications and dependencies into containers that can run on any infrastructure. Containers are more lightweight than virtual machines and provide operating-system-level virtualization. The key Docker components are the Docker Engine (including the daemon and client), images, containers, registries, and networks. Dockerfiles define how to build images automatically by running commands. Images act as templates for containers, which are lightweight and portable environments for applications.
The document provides instructions for installing Red Hat Enterprise Linux 6 (RHEL 6) using the basic graphical installation process, including requirements for hardware, partitioning disks, setting the hostname and time zone, creating users and passwords, and selecting installation options. It outlines the steps to boot from the installation media, navigate the installation screens to configure language and keyboard settings, storage selection, networking configuration, and partitioning disks for the root, boot and swap partitions.
This document provides steps to deploy a WordPress application with a MySQL database on Kubernetes. It demonstrates creating secrets for database credentials, persistent volumes for database storage, services for external access, and deploying the WordPress and MySQL containers. Various Kubernetes objects like deployments, services, secrets and persistent volumes are defined in YAML files and applied to set up the WordPress application on Kubernetes.
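The manifests described above follow a standard shape; a hypothetical excerpt (names, image tag, and password are illustrative, not taken from the document) for the secret and the MySQL half might look like:

```yaml
# Secret holding the database password, consumed by the MySQL container.
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
stringData:
  password: change-me
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  replicas: 1
  selector:
    matchLabels: {app: wordpress, tier: mysql}
  template:
    metadata:
      labels: {app: wordpress, tier: mysql}
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef: {name: mysql-pass, key: password}
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql      # database files live on the PV, not in the container
      volumes:
      - name: mysql-data
        persistentVolumeClaim: {claimName: mysql-pv-claim}
```

Applying files like this with `kubectl apply -f` is what wires the credentials, storage, and containers together.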
OpenStack services can be containerized to provide a more lightweight and portable deployment option compared to virtual machines. The document discusses how OpenStack services like Nova, Neutron, Cinder etc. run as individual containers that share the host operating system and can be configured by modifying files in a shared directory. Logs and operations like start/stop/restart can also be managed from the host through the container IDs. Overall, containerization allows OpenStack deployments to benefit from advantages like agility, density and simplified operations.
Spaceship depots continue to come online, with launches to the Moon occurring daily. The moon bases have been stabilized, and humans are beginning to settle in.
Not surprisingly, many island nations fared better than expected during the outbreak. Their isolation could be very valuable if we face a third round of infection before the earth has been evacuated. We need to get them back on the grid as soon as possible. Japan, Madagascar, and Iceland are first on the list for building infrastructures. Local teams have managed to get some equipment, but all you'll have to start with is one repository and blank hardware. As we've learned while building the depots, travel is dangerous and difficult. You will need to create your infrastructure in a lab first, to ensure it can be quickly deployed by a local team. Once the process has been deemed successful, we will establish a satellite link to the islands to get everything we need to the local repositories.
The document is a slide deck presentation by Bret Fisher on going into production with Docker and Swarm. Some key points from the presentation include focusing first on Dockerfiles rather than complex orchestration, avoiding anti-patterns like using the "latest" tag or trapping unique data in containers, and starting with a simple 3 node Swarm cluster for high availability before scaling up further. The presentation also provides examples of full tech stacks using various open source and commercial tools for a Dockerized infrastructure.
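The "simple 3 node cluster" advice follows from Raft quorum arithmetic; this small sketch (my own illustration, not from the slides) makes the math concrete:

```python
def manager_fault_tolerance(n: int) -> int:
    """Number of manager nodes a Raft-based cluster (such as Swarm mode)
    can lose while still keeping a quorum, i.e. a strict majority."""
    quorum = n // 2 + 1
    return n - quorum

# A 3-manager Swarm keeps quorum with one manager down...
print(manager_fault_tolerance(3))  # 1
# ...while a 4th manager buys no extra fault tolerance.
print(manager_fault_tolerance(4))  # 1
print(manager_fault_tolerance(5))  # 2
```

This is why odd manager counts are the norm: even numbers add coordination cost without improving availability.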
This presentation gives a brief understanding of Docker architecture, explains what Docker is not, describes basic commands, and explains CI/CD as an application of Docker.
This document discusses security mechanisms in Docker containers, including control groups (cgroups) to limit resources, namespaces to isolate processes, and capabilities to restrict privileges. It covers secure computing modes like seccomp that sandbox system calls. Linux security modules like AppArmor and SELinux are also mentioned, along with best practices for the Docker daemon and container security overall.
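Several of those mechanisms surface directly as `docker run` flags. A hedged illustration (requires a Docker daemon; the image and limit values are my own examples, and the flags are shown together purely for illustration):

```shell
# cgroup resource limits, dropped capabilities, and blocked privilege
# escalation applied to a single container:
docker run -d --name hardened-web \
  --memory 256m --cpus 0.5 \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  nginx
```

Here `--memory`/`--cpus` map to cgroups, `--cap-drop`/`--cap-add` to Linux capabilities, and `no-new-privileges` prevents processes from gaining privileges via setuid binaries.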
This document provides an overview of Docker Swarm and how to set up and use a Docker Swarm cluster. It discusses key Swarm concepts, initializing a cluster, adding nodes, deploying services, rolling updates, draining nodes, failure scenarios, and the Raft consensus algorithm used for leader election in Swarm mode. The document walks through examples of creating a Swarm, adding nodes, deploying a service, inspecting and scaling services, rolling updates, and draining nodes. It also covers failure scenarios for nodes and managers and how the Swarm handles them.
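The lifecycle walked through in that document roughly corresponds to these commands (an illustrative transcript, not the document's exact steps; addresses, names, and the join token are placeholders, and a running Docker engine is assumed on each node):

```shell
# On the first manager:
docker swarm init --advertise-addr 10.0.0.1
# On each additional node, using the token printed by `swarm init`:
docker swarm join --token <worker-token> 10.0.0.1:2377
# Deploy, inspect, and scale a service:
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ps web
docker service scale web=5
# Rolling update, then drain a node for maintenance:
docker service update --image nginx:1.14 web
docker node update --availability drain node-2
```

Draining moves a node's tasks onto the remaining nodes, which is also effectively what the Raft-backed scheduler does automatically in the failure scenarios the document covers.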
Since its 1.12 release in July 2016, Docker Swarm Mode has matured into a clustering and scheduling tool that lets IT administrators and developers easily establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode integrates the orchestration capabilities of Docker Swarm into Docker Engine itself and gives administrators and developers the ability to add or remove container instances as computing demands change. With sophisticated but easy-to-implement features like built-in service discovery, the routing mesh, secrets, a declarative service model, service scaling, desired-state reconciliation, scheduling, filters, a multi-host networking model, load balancing, and rolling updates, Docker 17.06 is production-ready today. Join this webinar organized by Docker Izmir to get familiar with the current Swarm Mode capabilities and functionality across heterogeneous environments.
The document discusses Docker Swarm, a Docker container orchestration tool. It provides an overview of key Swarm features like cluster management, service discovery, load balancing, rolling updates and high availability. It also discusses how to deploy applications using Swarm, including accessing GPUs, the deployment workflow, and using Swarm on ARM architectures. The conclusion states that the best orchestration tool depends on one's use case and preferences as each has advantages and disadvantages.
Under the Hood with Docker Swarm Mode - Drew Erny and Nishant Totla, Docker · Docker, Inc.
Join SwarmKit maintainers Drew and Nishant as they showcase features that have made Swarm Mode even more powerful, without compromising the operational simplicity it was designed with. They will discuss the implementation of new features that streamline deployments, increase security, and reduce downtime. These substantial additions to Swarm Mode are completely transparent and straightforward to use, and users may not realize they're already benefiting from these improvements under the hood.
This document defines horror genre conventions and subgenres of horror films. Genre conventions are elements like characters and plot points that distinguish one genre from another. Horror aims to elicit a negative emotional reaction by playing on primal fears. The document outlines several horror subgenres including comedy horror, action horror, slasher films, zombie films, psychological horror, and science fiction horror. It provides examples of popular films that fall within each subgenre.
The document provides advice on starting a business by discussing common reasons why businesses fail such as ignorance, aimlessness, laziness, impatience, and greed. It emphasizes that building a business requires strategy, targeting opportunities, planning, and using the right tools. Some examples of business opportunities mentioned include taking advantage of demand for umbrellas, raincoats, and funeral services during typhoons. The entrepreneurial mindset involves seizing opportunities by assessing risks, rewards, workability, sustainability, and profitability. Turning an idea into a business requires understanding the problem solution, identifying the target market, and having a unique value proposition.
There is strong evidence to suggest that knowledge or 'thought leadership', when done well, can significantly raise an organization's profile and build and strengthen its brand, leading to stronger client relationships and new revenue-building opportunities.
The document talks about a game called Gears of War. The game is unprecedented and invites players to dream. The document's author is APST: RIERA MOSQUERA DENNIS GABRIEL.
The document describes a pattern for making a snowman-shaped cushion. It was shared by Claudia and published on the ecoartesanias.com website. The document also briefly mentions that Christmas commemorates the birth of Jesus Christ.
Nature reserves work to protect wildlife and environments by conserving natural resources. While early humans used resources like wood sustainably, modern usage rates are alarming and unsustainable unless replaced. If deforestation and resource depletion continue, shortages will result with loss of ecosystems. Conservation through sparing usage and alternative energies can lessen impacts on dwindling resources. Russia has over 100 nature reserves established to preserve 1.4% of its land and wildlife. Collective conservation efforts are needed to protect natural resources for the future.
Europe is a continent located in the northern hemisphere. It is separated from Asia by the Ural Mountains and from Africa by the Mediterranean Sea. There are 46 countries in Europe, with Russia being the largest and Vatican City being the smallest. The population of Europe is approximately 800 million people, around 65% of whom live in cities. Some of the most populated European cities are Moscow, London, Istanbul, Paris, and Madrid. There are about 40 major languages spoken in Europe, with several such as English, Spanish, Portuguese, and French also spoken outside of Europe in countries like the United States, Canada, Brazil, and parts of Africa.
Trends in the gaming-solutions market. Summer edition. · Olga Iachmeneva
We ran this survey at the end of summer 2014. We will now be repeating it, as we suspect the data has changed somewhat. Follow our newsletter! We will share the results. http://rgpgnews.gr8.com/
This document outlines Christina Sewell's graphic design assignment for a book sleeve. It includes 3 parts: 1) Research of book cover designs, 2) Designing the draft and final book sleeve with cover, spine, and back cover, and 3) Printing the final book sleeve. The book sleeve is designed for "Colour Blind" by Christina Sewell and includes praise quotes for the book on the front cover. Research includes photos of 3 book covers. The draft and final designs improve on the initial book sleeve concept.
This presentation by Andrew Aslinger discusses best practices and pitfalls of integrating Docker into Continuous Delivery Pipelines. Learn how Andrew and his team used Docker to replace Chef to simplify their development and migration processes.
The document discusses how NOAA's Space Weather Prediction Center transitioned from a monolithic architecture to microservices using Docker. It describes how they started with a small verification project, then replaced their critical GOES satellite data source. This improved developers' morale and delivery speed. They encountered some security issues initially but learned from them. The transition was very successful and allowed them to quickly expand their mission to forecast aviation impacts using scientists' models packaged as Docker services.
Velocity NYC 2017: Building Resilient Microservices with Kubernetes, Docker, ... · Ambassador Labs
1. The presentation introduces Docker, Kubernetes, and Envoy as foundational tools for building microservices. Docker allows packaging applications into portable containers, Kubernetes provides a platform to manage containers across clusters of hosts, and Envoy handles traffic routing and resilience at the application layer.
2. The presenters demonstrate how to build a simple Python web application into a Docker container image. They then deploy the containerized application to a Kubernetes cluster using Kubernetes objects like deployments and services. This allows the application to scale across multiple pods and be accessed via a stable service endpoint.
3. Finally, the presenters note that as applications become distributed across microservices, failures at the application layer (L7) become more common, which is where Envoy's traffic-routing and resilience features come in.
Docker in Production, Look No Hands! by Scott Coulton · Docker, Inc.
In this session we will talk about HealthDirect’s journey with Docker. We will follow the life cycle of a container through our CD process to its home in our swarm cluster with just a git commit thanks to configuration management. We will cover the CD process for Docker, Docker swarm, Docker networking and service discovery. The audience will leave with a solid foundation of how to build a production ready swarm cluster (A github repo with code will be given). They will also have the knowledge of how to implement a CD framework using Docker.
Dataverse can be deployed using Docker containers to improve maintainability and portability. The document discusses how Docker can isolate applications and their dependencies into portable containers. It provides an example of deploying Dataverse as a set of microservices within Docker containers. Instructions are included on building Docker images, running containers, and managing the containers and images through commands and tools like Docker Desktop, Docker Hub, and Docker Compose.
This document discusses containers and container orchestration on Azure. It begins with an introduction to containers and their advantages over virtual machines. It then covers building Dockerfiles, container commands, and hosting container registries and applications on Azure. Container orchestration with Kubernetes is discussed as a way to deploy and scale containerized applications on the cloud, providing capabilities like auto-scaling, self-healing, service discovery and load balancing. The document points to additional future content on using Azure Kubernetes Service.
This document discusses Docker, containers, and how Docker addresses challenges with complex application deployment. It provides examples of how Docker has helped companies reduce deployment times and improve infrastructure utilization. Key points covered include:
- Docker provides a platform to build, ship and run distributed applications using containers.
- Containers allow for decoupled services, fast iterative development, and scaling applications across multiple environments like development, testing, and production.
- Docker addresses the complexity of deploying applications with different dependencies and targets by using a standardized "container system" analogous to intermodal shipping containers.
- Companies using Docker have seen benefits like reducing deployment times from 9 months to 15 minutes and improving infrastructure utilization.
This document discusses Docker, containers, and containerization. It begins by explaining why containers and Docker have become popular, noting that modern applications are increasingly decoupled services that require fast, iterative development and deployment to multiple environments. It then discusses how deployment has become complex with diverse stacks, frameworks, databases and targets. Docker addresses this problem by providing a standardized way to package applications into containers that are portable and can run anywhere. The document provides examples of results organizations have seen from using Docker, such as significantly reduced deployment times and increased infrastructure efficiency. It also covers Docker concepts like images, containers, the Dockerfile and Docker Compose.
This document provides a step-by-step tutorial for creating a simple CORBA application with a C++ server and Java client. It describes installing Orbacus 4.0 beta 2 for C++ and Java, setting environment variables, specifying a "Count" object in IDL, implementing the C++ server and Java client code, and running the example. The goal is to demonstrate basic CORBA functionality like platform and language transparency through a simple but working implementation.
Tom Leach and Travis Thieman of GameChanger talk about their experiences migrating their build and deploy pipeline from being heavily based on Chef to one based around Docker.
This presentation is split into two main sections. The first section covers the motivations for why GameChanger, as a fast-growing startup, identified a need to replace its existing Chef-based deploy model with a model which reduces deploy-time risk and allows its engineering team to scale.
The second section is a high-level walkthrough of the new GameChanger deploy pipeline based around Docker.
ContainerDayVietnam2016: Dockerize a small businessDocker-Hanoi
This document discusses how Docker can transform development and deployment processes for modern applications. It outlines some of the challenges of developing and deploying applications across different environments, and how Docker addresses these challenges through containerization. The document then provides examples of how to dockerize a Rails and Python application, set up an Nginx reverse proxy with Let's Encrypt, and configure a Docker cluster for continuous integration testing.
PuppetConf 2017: What’s in the Box?!- Leveraging Puppet Enterprise & Docker- ...Puppet
“Docker, Docker, Docker.” It’s a phrase we hear often, but what are containers, what can they be used for, and why should you know more about them? In this session, Grace (Puppet) and Tricia (AppDynamics) will introduce attendees to Docker and help them build and deploy their first container with Puppet. They will leverage the docker_image_build module from the Puppet Forge and take attendees through the proper workflow for coupling Docker and Puppet together. The session will focus on how to use some of the newest Docker features, such as multi-stage build files and password stores within Docker so you can pass "secrets" to a swarm for login credentials. The goal is to provide newcomers with a working proficiency of how to get started deploying containers using Puppet as their automation tool.
Deploying deep learning models with Docker and KubernetesPetteriTeikariPhD
Short introduction for platform agnostic production deployment with some medical examples.
Alternative download: https://www.dropbox.com/s/qlml5k5h113trat/deep_cloudArchitecture.pdf?dl=0
EclipseCon 2016 - OCCIware : one Cloud API to rule them allMarc Dutoo
This document provides an overview of OCCIware, a project that aims to create a cloud consumer platform using the Open Cloud Computing Interface (OCCI) standard. It discusses the need for such a platform given the fragmented state of existing cloud solutions. OCCIware takes a model-driven engineering approach, using Eclipse modeling tools to generate an OCCI extension, designer, and runtime configuration from a domain model. The document demonstrates using these tools to model a Linked Data application and deploy its configuration to Docker. Upcoming work on OCCIware includes improving existing generators, integrating additional capabilities like simulation, and contributing back to the OCCI standard.
OCCIware Project at EclipseCon France 2016, by Marc Dutoo, Open WideOCCIware
Hear hear, dev and ops alike: ever got bitten by the fragmentation of the Cloud space at deployment time, by AWS vs. Azure, OpenShift vs. Heroku? In a word, ever dreamt of configuring your Cloud application at once, along with both its VMs and database? Well, the extensible Open Cloud Computing Interface (OCCI) REST API (see http://occi-wg.org/) allows just that, by addressing the whole XaaS spectrum.
And now, OCCI is getting powerboosted by Eclipse Modeling and formal foundations. Enter Cloud Designer and other outputs of the OCCIware project (see http://www.occiware.org): multiple visual representations, one per Cloud layer and technology. XaaS Cloud extension model validation, documentation & ops scripting generation. Simulation, decision-making comparison. Connectors that bring those models to life by getting their status from common Cloud services. Runtime middleware, deployed, monitored, administered. And tackling the very interesting challenge of modeling a meta API in EMF's metamodel, while staying true to EMF, Eclipse tools and the OCCI standard.
Featuring Eclipse Sirius, Acceleo generators, EMF at runtime. Coming soon to a new Eclipse Foundation project near you, if so you'd like.
This talk includes a demonstration of the Docker connector and of how to use Cloud Designer to configure a simple Cloud application's deployment on the Roboconf PaaS system and OpenStack infrastructure.
Using Rancher and Docker with RightScale at Industrie IT RightScale
Many early Docker users are also now looking at clustering solutions such as Rancher. Industrie IT is using Docker, Rancher, and RightScale to help clients build digital applications using continuous integration (CI) and continuous delivery (CD) practices.
The document discusses continuous deployment with Docker. It begins with introductions of the presenter Andrew Aslinger and an overview of Docker. It then discusses using Docker for continuous deployment on AWS, including building and pushing Docker images, triggering EC2 instances to pull the latest images. It covers some advanced Docker techniques and OpenWhere's experiences using Docker. It recommends Docker for continuous deployment but notes some limitations for more complex scenarios.
The document summarizes Day 2 of DockerCon. It discusses Docker being ready for production use with solutions for building, shipping, and running containers. It highlights Docker Hub growth and improvements to quality. Business Insider's journey with Docker is presented, covering lessons learned around local development and using Puppet and Docker Hub. Future directions discussed include orchestration tools and image security.
Similar to #VirtualDesignMaster 3 Challenge 4 - Dennis George (20)
#VirtualDesignMaster 3 Challenge 3 – James Brownvdmchallenge
While things on Mars have been going well, since we now have multiple options for our infrastructure, the fact remains that we are working on the colonization of a foreign planet.
#VirtualDesignMaster 3 Challenge 3 - Dennis Georgevdmchallenge
While things on Mars have been going well, since we now have multiple options for our infrastructure, the fact remains that we are working on the colonization of a foreign planet.
#VirtualDesignMaster 3 Challenge 3 - Abdullah Abdullahvdmchallenge
While things on Mars have been going well, since we now have multiple options for our infrastructure, the fact remains that we are working on the colonization of a foreign planet.
#VirtualDesignMaster 3 Challenge 2 - Steven Viljoenvdmchallenge
We’ve examined how we can rebuild infrastructure from scratch, but now let’s think outside the box, and inside the clouds. Before the zombie apocalypse began, many organizations were beginning to leverage public cloud infrastructures for a number of reasons.
#VirtualDesignMaster 3 Challenge 2 – James Brownvdmchallenge
We’ve examined how we can rebuild infrastructure from scratch, but now let’s think outside the box, and inside the clouds. Before the zombie apocalypse began, many organizations were beginning to leverage public cloud infrastructures for a number of reasons.
#VirtualDesignMaster 3 Challenge 2 - Dennis Georgevdmchallenge
We’ve examined how we can rebuild infrastructure from scratch, but now let’s think outside the box, and inside the clouds. Before the zombie apocalypse began, many organizations were beginning to leverage public cloud infrastructures for a number of reasons.
#VirtualDesignMaster 3 Challenge 2 - Abdullah Abdullahvdmchallenge
We’ve examined how we can rebuild infrastructure from scratch, but now let’s think outside the box, and inside the clouds. Before the zombie apocalypse began, many organizations were beginning to leverage public cloud infrastructures for a number of reasons.
#VirtualDesignMaster 3 Challenge 1 - Abdullah Abdullahvdmchallenge
We are now settled on Mars, and ready to build a more permanent infrastructure. Keep in mind that power, cooling, and space are extremely expensive resources on Mars.
#VirtualDesignMaster 3 Challenge 1 - Dennis Georgevdmchallenge
Millionaire philanthropists Richard M. and Elon B. have teamed up to work towards humanity’s survival after the outbreak of the virus that led to the zombie apocalypse and the evacuation of what was left of the human species from Earth.
The objective of this design is to support first-of-its-kind human colonies on Mars until a more permanent infrastructure is built, and also to build a messaging and collaboration infrastructure on top of it for developing a warning system to give a more human touch to residents on Mars.
#VirtualDesignMaster 3 Challenge 1 – James Brownvdmchallenge
We are now settled on Mars, and ready to build a more permanent infrastructure. Keep in mind that power, cooling, and space are extremely expensive resources on Mars. In order to save space, we have decided not to use a traditional Fibre Channel infrastructure, meaning there will be no dedicated Fibre Channel switches.
Project: We are now settled on Mars, but need more permanent infrastructure. Focus Area: VMware vSphere infrastructure, Compute, Storage, Network, Disaster Recovery
Fully automated load balancing: the compute load of the cluster is balanced automatically. Hosts will be grouped with respect to the location of the datacenter, and affinity rules will be enabled on the VMs to run in the needed datacenters.
#VirtualDesignMaster 3 Challenge 1 - Steven Viljoenvdmchallenge
We are now settled on Mars, and ready to build a more permanent infrastructure. Keep in mind that power, cooling, and space are extremely expensive resources on Mars. In order to save space, we have decided not to use a traditional Fibre Channel infrastructure, meaning there will be no dedicated Fibre Channel Switches. We do however have plenty of 10G Ethernet switches, with some 40G Ethernet switches.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
2. Authors
The following authors contributed to the creation of this deliverable.
Dennis George
892 Bessy Trail,
Milton, ON L9T 0A6
Canada
(905) 699 – 3151
dennisgeorg@gmail.com
Revision History
Revision Change Description Updated By Date
0.1 Document Created Dennis George 07/28/2015
3. Table of Contents
Section 1:
Overview ................................................................................... 4
Executive Summary ......................................................................................................5
Project Overview........................................................................................................5
Project Goals...........................................................................................................6
Constraints..............................................................................................................7
Risks .......................................................................................................................7
Assumptions............................................................................................................7
Section 2:
UBUNTU and APACHE2 Web Server...................................... 8
Architecture...................................................................................................................9
Overview ....................................................................................................................9
Image build..................................................................................................................10
Overview ..................................................................................................................10
Dockerfile .................................................................................................................10
Dependencies ..........................................................................................................10
Container deployment.................................................................................................11
Overview ..................................................................................................................11
Kubernetes architecture...........................................................................................11
Dependencies ..........................................................................................................12
Section 3:
CoreOS with NGINX Web Server .......................................... 13
Image build..................................................................................................................14
Overview ..................................................................................................................14
Dependencies ..........................................................................................................14
Container deployment.................................................................................................15
Overview ..................................................................................................................15
Dependencies ..........................................................................................................15
Section 4:
References.............................................................................. 16
Supplemental Information and Links........................................................................17
5. Executive Summary
Project Overview
Our millionaire philanthropist friend is seeking an infrastructure design for permanent human
colonization on Mars. Virtual Design Master, an online reality show that challenges virtualization
professionals to come up with innovative virtualization designs, has been tasked with selecting the
next Virtual Design Master to design a permanent IT infrastructure on Mars.
In challenge 1, the team designed an “on-prem” infrastructure solution to support various critical
control systems such as the Environmental system, Greenhouse control system, and productivity and
collaboration systems.
As part of the challenge 2 initiatives, the team was tasked to design a solution to support the same
requirements in a public cloud of choice and provide justification for the same.
Then for challenge 3, the team was asked to design a disaster recovery plan for some key
applications on Mars, by deploying our own infrastructure or leveraging the cloud.
Now that we’ve established ourselves across the solar system, for Challenge 4 it is time to make plans
for taking our home planet back from the zombies. Select teams of humans and robots will be sent
back to Earth to attempt to secure different areas across the globe. We will of course need working
infrastructures to help us with this as we reclaim the planet. Chances of sending back IT people are
pretty slim, so you will need to create an orchestrated system to rebuild the infrastructures. We know
this is not an easy task so we are going to start small.
Our test will be a simple web application that displays “Welcome Back to Earth!”.
Since we are evaluating the best way to accomplish orchestrating our infrastructure, you will need to
use two different orchestration tools, two different operating systems, and two different web servers.
Because we don’t know what type of infrastructure we’re going to end up with (we could end up with a
mix of equipment and operating systems), the web servers must run inside of a container. Share your
code, and build instructions so that a complete walk through is available. Make sure to include
dependencies your application may have.
After careful planning and thought, the team has produced the following design to deploy the
web applications in an automated and orchestrated fashion.
6. Project Goals
During the course of the project, the Virtual Design Master team and Dennis George identified a
number of different project goals. The following summarizes those goals and illustrates how this
infrastructure achieves them.
Goal ID Description
GO01 Automate and orchestrate infrastructure build procedures for
rebuilding web servers on Earth.
GO02 Use container technology to accommodate varying hardware
and OS availability.
GO03 Solution should accommodate at least two different
Operating Systems and two different Web Servers.
GO04 Share the code used to build and associated instructions,
including any dependencies the applications may have.
7. Constraints
During the course of the project, the team has identified a number of constraints on the automated
deployment plan.
The following table summarizes the constraints:
Constraint
ID
Description
CS01 Learn and design an automated deployment process for a
technology that one has zero experience with, in four hours!
Way to go Melissa, you made me learn Docker and container
tech in a significantly short amount of time. This is why we love
what we do!
Risks
During the course of the project, the team has identified risks to the automated deployment plan, and
the following table summarizes the risks:
Risk ID Description
RI01 My first attempt with Docker containers, as was apparent from
Challenge 3!
RI02 HTTP traffic over the “wild web”! Good thing it is just a welcome
message else the security team would flip out!
Assumptions
The team has made some assumptions for the proposed automated infrastructure deployment plan
and the following table summarizes the assumptions made:
Assumption
ID
Description
AS01 We are most likely going to deploy this infrastructure on public
cloud services that have somehow survived the apocalypse
on Earth. Besides, we are short of IT folks to send to Earth.
AS02 Firewall exceptions are expected to be configured so that the
web application can be accessed from the Internet.
9. Architecture
Overview
The primary driver for this design has been automation from the ground up, meaning that the team
has decided to leverage a vanilla Ubuntu Docker image and automate the layering of services
required to build the web server and deliver the web page, along with automated deployment of the
containers.
The design emphasizes quick, hands-off deployment of the infrastructure across the different locations
(whether public or private cloud). The end goal is to deliver the message that we are back on Earth to reclaim
what is rightfully ours!
10. Image build
Overview
For the Docker image creation process itself, the team has configured the following Dockerfile
to leverage a publicly available Ubuntu image, automatically layer it with the Apache web server and
necessary tools, and download the webpage payload from our Mars base station
(http://104.197.47.189:5000/), which happens to be a 3-node Kubernetes cluster running on
Google Compute Engine, fronted by load balancers, on Mars!
As a backup, a finished image will be stored in a private repository and kept up to date, to
mitigate the risk of depending on public repositories that are not under our control.
Dockerfile
FROM ubuntu
MAINTAINER Dennis George <dennisgeorg@gmail.com>
# The build runs as root, so sudo is not needed; install everything in one layer
RUN apt-get update && apt-get install -y apache2 curl
# Fetch the webpage payload from the Mars base station at build time
RUN curl http://104.197.47.189:5000 > /var/www/html/index.html
EXPOSE 80
# Starting apache2 via "service" in a RUN step would not persist into the
# running container, so Apache is started in the foreground at run time instead
CMD ["apachectl", "-D", "FOREGROUND"]
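For completeness, the image can be built and exercised locally before being pushed to the private repository. A minimal sketch, assuming a Docker host is available and using mars/welcome-web as a hypothetical image tag:

```shell
# Build the image from the directory containing the Dockerfile
docker build -t mars/welcome-web .

# Run it in the background, publishing Apache's port 80 on host port 8080
docker run -d --name welcome-web -p 8080:80 mars/welcome-web

# Verify the webpage payload is being served
curl http://localhost:8080/
```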
Dependencies
The following table outlines the dependencies for the automated image build process:
Dependency ID Description
WEBSRV01DEP01 Availability of the vanilla Ubuntu Docker image on publicly
available repositories.
WEBSRV01DEP02 Availability of Apache2 web server on publicly available
Ubuntu repositories.
WEBSRV01DEP03 Availability of Curl on publicly available Ubuntu repositories.
WEBSRV01DEP04 Availability of our Mars base infrastructure for the web page
payload.
WEBSRV01DEP05 Availability of public Google drive infrastructure for images
embedded in the web page (for the aesthetics!).
Container deployment
Overview
For automation of the container deployment, the team has decided to leverage a Kubernetes
controller along with Jenkins and Docker Swarm.
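As a sketch of what the Kubernetes side of this could look like, the manifest below defines a replication controller that keeps several replicas of the web server container running. The image name (registry.marsbase.local/websrv01) and the replica count are illustrative assumptions, not part of the original design.

```yaml
# Hypothetical replication controller for the Apache web server image.
apiVersion: v1
kind: ReplicationController
metadata:
  name: websrv01
spec:
  replicas: 3            # assumed replica count, for illustration only
  selector:
    app: websrv01
  template:
    metadata:
      labels:
        app: websrv01
    spec:
      containers:
      - name: websrv01
        image: registry.marsbase.local/websrv01   # placeholder registry/image
        ports:
        - containerPort: 80
```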
Kubernetes architecture
Dependencies
The following table outlines the dependencies for the automated container deployment process:
Dependency ID Description
WEBSRV01DEP06
WEBSRV01DEP07
WEBSRV01DEP08
WEBSRV01DEP09
WEBSRV01DEP10
Image build
Overview
Dependencies
The following table outlines the dependencies for the automated image build process:
Dependency ID Description
WEBSRV02DEP01 Availability of the vanilla CoreOS Docker image on publicly
available repositories.
WEBSRV02DEP02 Availability of NGINX web server on publicly available
CoreOS repositories.
WEBSRV02DEP03 Availability of Curl on publicly available CoreOS repositories.
WEBSRV02DEP04 Availability of our Mars base infrastructure for the web page
payload.
WEBSRV02DEP05 Availability of public Google drive infrastructure for images
embedded in the web page (for the aesthetics!).
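The overview text for this second image build did not survive, but the dependencies above imply a CoreOS-hosted alternative serving the same payload through NGINX. One plausible sketch of such a Dockerfile, using the official nginx image for simplicity (the base image and paths are assumptions, not the team's actual file):

```dockerfile
# Hypothetical sketch of the second web server image (not the original file).
FROM nginx
MAINTAINER Dennis George <dennisgeorg@gmail.com>
# Fetch the same web page payload from the Mars base station at build time
RUN apt-get update && apt-get install -y curl \
 && curl http://104.197.47.189:5000 > /usr/share/nginx/html/index.html
EXPOSE 80
```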
Supplemental Information and Links
These links provide further information on the concepts and recommendations discussed in this
document.
• Your First Custom Image in Docker – Thank you Melissa for planting the Docker bug in me!
• Docker User Guide – Docker 101!
• Announcing Docker Machine, Swarm, and Compose for Orchestrating Distributed Apps – A
glimpse into Docker Machine, Swarm, and Compose.
• How to Automate Docker Builds and Auto Deploy – This article describes how to create an
automated Docker build that auto-deploys Docker containers in combination with these
services.
• Automated Image Builds with Jenkins, Packer, and Kubernetes – GCE documentation
outlining the process for automation of container deployment.