Brayden Winterton gives an introduction to Docker. He explains that Docker solves the "Matrix from Hell" of inconsistent environments by using containers to package applications and their dependencies in portable, standardized units. Developers benefit from Docker because it allows them to build once and run anywhere while avoiding dependency issues. System administrators benefit because Docker provides standardized, repeatable environments that are faster and more reliable to deploy. Brayden then demonstrates Docker by running a sample application in a container and linking multiple containers together.
This document discusses Docker and provides an agenda for a Docker training. It explains that Docker provides lightweight virtualization, portability, and improved efficiency. Docker containers are isolated environments that reduce time from development to production. Docker images serve as templates for launching containers, which contain all the dependencies for an application to run. Resources for learning more about Docker are also provided.
This document discusses Docker, an open source containerization platform. It allows applications to run regardless of how they were created or their dependencies by packaging code and dependencies together. This solves issues of applications working differently in different environments. Docker uses containers, which are lightweight and isolated from each other, sharing the same operating system kernel. Containers improve resource utilization and allow running multiple applications simultaneously reliably. Docker images are used as templates to launch containers from and can be built from base images with custom code added via Dockerfiles for reproducible builds.
Flash Camp Chennai - Build automation of Flex and AIR applications - RIA RUI Society
Complete session on how to set up a continuous integration server for compiling and deploying Flex, Flash and AIR applications. The build process also includes code quality checks, code duplication checks, compiler warning reporting, TODO and FIXME list reporting, and unit testing.
Javascript Frameworks (and How to Learn Them) - All Things Open
Presented at: All Things Open RTP Meetup
Presented by: Peter Elbaum, Praxent
Abstract:
There are countless blog posts and tweets devoted to the question of whether to choose Vue, React or Angular. We spend a lot of time debating the differences between the frameworks, but we often overlook the reasons that front-end frameworks were created in the first place. This talk will address how front-end development was done before frameworks existed and discuss the main problem that frameworks solve. Through grasping the reason for frameworks, attendees of this talk will be able to accelerate the process of learning a new front-end framework. Specifically, this talk will address component-based architecture, application state management, and component interaction. We'll cover what to look for when learning a new framework and strategies for internalizing the nuances of the various framework ecosystems. Seeing the commonalities among these frameworks allows for grasping the bigger picture.
PuppetConf 2016: Keynote: Pulling the Strings to Containerize Your Life - Scott Coulton (Puppet)
Scott Coulton is a Platform Engineering Lead at Autopilot who discusses how his company used Docker and Puppet to improve their CI/CD processes and speed up deployments to production while maintaining compliance. He explains how they had development teams deploy themselves by treating infrastructure as code that is automated, built, and tested. This allowed them to break down barriers and usher in a new wave of infrastructure development. Puppet was used for configuration management to containerize systems and help spread DevOps practices to other teams.
This document contains the slides from a presentation given by Oleksandr Pastukhov in August 2016 at JUG Shenzhen. The presentation introduces Docker, including what it is for developers and administrators, the differences between containers and VMs, Docker basics, and how Docker can be used to deploy applications across different environments like development, testing, production and more. Various Docker commands are also listed and explained.
Docker is an open-source tool that allows developers to package applications into containers that can run on any infrastructure regardless of operating system. It provides an additional layer of abstraction and automation of operating system-level virtualization. Docker allows developers to build, ship, and run distributed applications, and is useful for both developers and DevOps users by making deployments more efficient, consistent, and repeatable across environments from development to production.
This document provides an overview of ONNX and ONNX Runtime. ONNX is an open format for machine learning models that allows models to be shared across different frameworks and tools. ONNX Runtime is a cross-platform open source inference engine that runs ONNX models. It supports hardware acceleration and has a modular design that allows for custom operators and execution providers to extend its capabilities. The document discusses how ONNX helps with deploying machine learning models from research to production and how ONNX Runtime performs high performance inference through optimizations and hardware acceleration.
Continuous Delivery with Jenkins declarative pipeline, XPDays-2018-12-08 - Борис Зора
When you start your journey with µServices, you should be confident in your delivery lifecycle. In case of a mistake, you should be able to navigate to the appropriate tag in version control, reproduce the bug with a test, and go through the pipeline to production within 3 hours with high confidence in quality.
We will discuss a set of tools that could help you achieve this within 3 months on your project. The talk does not include system decoupling suggestions; at the same time, if you decide to break down a monolith, it is better to do so with dev and DevOps best practices.
Software archaeology for beginners: code, community and culture - James Turnbull
Most open source projects are rightly proud of their communities, long histories (both measured in time and version control), passionate debates and occasional trolling. Newcomers to these communities often face an uphill battle, though. Not just in understanding decision making processes and community standards, but in coming to terms with often complex, contradictory, and poorly documented code bases. This talk will introduce you to the concepts and tools you need to be an expert code, culture, and community archaeologist and quickly become productive and knowledgeable in an unknown or legacy code base.
CICD Pipelines for Microservices: Lessons from the Trenches - Codefresh
You have finally split your big monolith into microservices built on top of Kubernetes! Now what? How do you validate a more complex application? And how do you make it scale? In this live talk, we look at two case studies: Expedia's journey to microservices, and Codefresh. If you try to treat microservices like monoliths you'll end up with thousands of broken pipelines that are impossible to maintain. Learn from the mistakes of the past and let us show you how we fought our way to something much better! This live talk has everything: tech tips, best practices, and yes, even the fabled business value that our bosses all seem to care so much about!
Continuous Integration with Maven for Android apps - Hugo Josefson
Why Maven can be relevant for building Android applications, and how a complete Jenkins server can be set up for building and running tests on Android applications.
Installation script for the Jenkins server is at http://github.com/hugojosefson/jenkins-with-android
Why You Should be Using Multi-stage Docker Builds in 2019 - Codefresh
This document discusses the benefits of using multi-stage Docker builds. It notes that traditional Docker builds result in large images containing build tools and files not needed for runtime. Multi-stage builds address this by allowing builders to create multiple stages, each producing a new image, to arrive at a minimal final image containing only what is needed for production. This improves build speed and produces more secure images by removing unnecessary components. Multi-stage builds can be used across many programming languages and are supported in continuous integration/deployment on platforms like Codefresh.
This document summarizes a presentation about continuous integration and continuous delivery for Magento projects. It introduces the speaker and defines continuous integration as merging code into a shared repository multiple times per day, verified by automated builds. Continuous delivery is described as producing software in short cycles to allow reliable releases at any time. The presentation provides tips for setting up Jenkins and Phing to implement continuous integration and continuous delivery workflows for Magento, including build triggers, steps, and deployment.
Continuous integration (CI) and continuous delivery (CD) are software engineering practices that involve regularly merging code changes into a shared repository and performing automated builds and tests. CI involves integrating code changes daily to find issues early, while CD ensures code can be reliably released at any time through short development cycles with automated testing, deployment, and documentation. Implementing CI/CD helps build better quality software faster and cheaper by identifying defects early, encouraging collaboration, and facilitating frequent releases.
Docker is an open platform for building and running distributed applications using lightweight containers. Containers allow applications to run reliably from one computing environment to another without conflict from other applications and dependencies. Docker provides benefits like very fast application deployment with little overhead, easy scaling, and no dependency issues - making it useful for both development and production of distributed applications.
The document discusses using Docker containers to replace virtual machines for autograding student assignments in Autolab. It outlines integrating Docker with Tango, Autolab's autograder, which would involve implementing a Docker volume management module (VMMS) and preparing host machines to run Docker. Docker containers would provide a homogeneous environment for assignments while allowing customized software and initializing faster than VMs. The implementation was tested using EC2 instances and the Datalab assessment, with results matching the existing VM-based system. Future work includes multiplexing platforms and hosting internal Docker images.
Test-driven development (TDD) involves writing unit tests before production code to ensure features are correctly implemented and prevent future bugs. Following TDD best practices like writing only enough test code to fail and then just enough production code to pass helps improve code quality and productivity. Leading open source projects emphasize the importance of testing, with some requiring over 90% test coverage. Resources like books and articles provide guidance on techniques like TDD, refactoring, and working with legacy code.
1) DevOps in real time discusses challenges of maintaining 24/7 operations for multiple online projects and teams spread across different geographies.
2) The plan is to use tools like Chef, continuous integration, monitoring, backups, and team communication to improve processes around deployment cycles, server configuration, and working with many teams.
3) Chef is highlighted as a tool to help with automatic server configuration, continuous delivery, simplifying testing, controlling monitoring and backups. Templates, autoscaling, and single naming rules are also discussed to help manage infrastructure.
The document discusses reasons for using Kotlin Multiplatform including writing code once that can run on multiple platforms, sharing business logic between server and client apps, and accessing native platform APIs from Kotlin. It notes Kotlin Multiplatform allows optional code sharing so only common code is shared. It also discusses why other cross-platform solutions may not be ideal and provides an overview of Kotlin/Native and concurrency in Kotlin.
This document discusses why and how to share code across platforms using Kotlin Multiplatform. It notes that sharing code can increase productivity by reducing duplicated code and bugs, while allowing apps to share features and business logic. However, performance may decrease and innovation could slow if too much code is shared. Kotlin Multiplatform allows writing code once that can run natively on multiple platforms, while each platform still gets optimized binaries and access to native APIs. It provides optional code sharing without limiting developers to a subset of platforms or requiring shared UIs.
Let's talk about how to build a CI/CD pipeline when working with Docker containers and Kubernetes. I'll explore a few good practices that will help you to create pipelines and deploy with zero downtime in just minutes. Whether your Kubernetes cluster is on AWS, Azure, Google, or on-premises, I'll demonstrate how to create a pipeline using managed solutions and open source projects.
This document discusses Docker and containerization. It begins with an introduction to Docker and how containers virtualize operating systems and applications. It then covers key Docker concepts like images and containers, and demonstrates basic Docker commands. The document also discusses using Docker for software development workflows, including continuous integration and deployment. Finally, it briefly introduces distributed Docker technologies like Swarm and Kubernetes for orchestrating containers across clusters.
Mohammed Zaghloul is a senior front-end developer who has experience with C#, Angular, jQuery, and Ember. He will be giving a workshop on Docker that covers: (1) a quick introduction to Docker, (2) running a HelloWorld Docker container, and (3) building a Docker image and creating a containerized web application. Docker allows developers to package applications and dependencies into standardized units called containers that can be run anywhere. It uses a client-server architecture where the Docker client communicates with the Docker daemon to build, run, and distribute containers. Important concepts include Docker images, which are templates for creating containers, and containers, which are runnable instances of images.
Docker containers have brought great opportunities to shorten the deployment process through continuous integration and delivery of applications and microservices. This applies equally to enterprise datacenters as well as the cloud.
Jari discusses the solutions and benefits of a deeply-integrated deployment pipeline using technologies such as container management platforms, Docker containers, and CI/CD solutions. He also demonstrates deployment of a CI/CD pipeline using container management, and shows how to deploy a containerized application using GitHub, Travis, Kontena, CoreOS and DigitalOcean with a continuous delivery pipeline.
Attendees learn how the deployment process can be automated and significantly sped up by using containers and modern CI/CD pipeline tools. Through the demo, attendees see how this is built and run using the mentioned CI/CD solutions, and how a continuous delivery pipeline helps streamline their processes and make deployments more agile.
Deploying Windows Containers with Draft, Helm and Kubernetes - Jessica Deen
This document discusses deploying Windows applications using Draft, Helm, and Kubernetes. It provides an overview of working with Windows containers, including requirements for matching kernel versions between build and deploy environments and having the same kernel across the pipeline. It also discusses using Helm to manage Kubernetes applications and Draft to simplify the process for developers. Specific prerequisites and demos are presented for deploying ASP.NET and .NET Core applications on Kubernetes clusters with Windows nodes.
The Beam Vision for Portability: "Write once run anywhere" - Knoldus Inc.
This session is all about a modern way to define and execute data processing pipelines with Apache Beam, an open-source unified programming model. We will talk about the Apache Beam vision and the benefits of the Beam Portability framework, which achieves the vision that a developer can use their favourite programming language with their preferred execution backend.
Agenda
1. The changing landscape of IT Infrastructure
2. Containers - An introduction
3. Container management systems
4. Kubernetes
5. Containers and DevOps
6. Future of Infrastructure Mgmt
About the talk
In this talk, you will get a review of the components and benefits of container technologies: Docker and Kubernetes. The talk focuses on making the solution platform-independent and gives an insight into Docker and Kubernetes for consistent and reliable deployment. We talk about how containers fit into and improve your DevOps ecosystem, how to get started with containerization, and a new deployment approach that uses your infrastructure resources effectively to minimize overall cost.
Kubernetes, Toolbox to fail or succeed for beginners - Demi Ben-Ari, VP R&D
Kubernetes (K8s) is a tool for managing containerized applications across multiple servers. It allows deploying and managing containerized applications without relying on virtual machines. Kubernetes can schedule containers across a cluster of nodes, provide basic health checking and restart policies, load balancing, storage orchestration and more. Some key Kubernetes concepts include pods, deployments, services, replication controllers and volumes. Kubernetes is well suited for microservices architectures as it helps manage the scaling and networking needs of distributed applications.
Presentation about Docker:
2016 Trends:
* Microservices: load balancing and orchestration
* Cloud
* Continuous integration
* Environment-less deployment
What are containers?
Why Docker?
Docker project
Docker, Inc.
Docker vs VM
Docker basics
Some statistics about Docker and some Docker use case insights
Docker compose configuration file:
http://www.mediafire.com/download/lfmfzrkgn9wzegm/docker-compose.yml
Presentation link:
https://docs.google.com/presentation/d/1x11EgUqBVLAl70p53rZ-nJoLlL6FoZd2KbvTRxyVp1g/pub?start=false&loop=false&delayms=3000
This document discusses Azure AI on-premises using Docker containers. It covers Microsoft Cognitive Services, Docker, and Azure Cognitive Services containers. The key points are:
- Microsoft Cognitive Services are AI algorithms that can be consumed via REST APIs to solve problems in areas like computer vision, natural language processing, and speech recognition.
- Docker containers allow these cognitive services to run locally on-premises for applications that cannot send data to the cloud. The containers package the services and their dependencies to run consistently on any infrastructure.
- A live demo will show how to utilize Docker containers for Azure Cognitive Services on an on-premises server to bring AI capabilities locally without needing internet access.
Intro to OpenShift, MongoDB Atlas & Live Demo - MongoDB
Get the fundamentals on working with containers in the cloud. In this session, you will learn how to run and manage containers in production. We'll level set with a quick intro to Kubernetes and OpenShift, so you understand some basic terminology. From there, it's all live demo. We’ll spin up Java, MongoDB (including Atlas, the hosted DBaas), integrate code from Github, and make some shiny JSON spatial services. Finally, we’ll cover best practices in using containers when going to production with an application, and answer all of your questions.
Extensible dev secops pipelines with Jenkins, Docker, Terraform, and a kitche... - Richard Bullington-McGuire
Have you ever needed to wrestle a legacy application onto a modern, scalable cloud platform, while increasing security test coverage? Sometimes real applications are not easily stuffed into a Docker container and deployed in a container orchestration system. In this talk, Modus Create Principal Architect Richard Bullington-McGuire will show how to compose Jenkins, Docker, Terraform, Packer, Ansible, Vagrant, Gauntlt, OpenSCAP, the CIS Benchmark for Linux, AWS CodeDeploy, Auto Scaling Groups, Application Load Balancers, and other AWS services to create a performant and scalable solution for deploying applications. A local development environment using Vagrant mirrors the cloud deployment environment to minimize surprises upon deployment.
Docker is an open platform for developers and system administrators to build, ship and run distributed applications. Using Docker, companies in Jordan have been able to build powerful system architectures that allow speeding up delivery, easing deployment processes and at the same time cutting major hosting costs.
George Khoury shares his experience at Salalem in building flexible and cost effective architectures using Docker and other tools for infrastructure orchestration. The result allows them to easily and quickly move between different cloud providers.
This document provides an overview of containers and Docker for automating DevOps processes. It begins with an introduction to containers and Docker, explaining how containers help break down silos between development and operations teams. It then covers Docker concepts like images, containers, and registries. The document discusses advantages of containers like low overhead, environment isolation, quick deployment, and reusability. It explains how containers leverage kernel features like namespaces and cgroups to provide lightweight isolation compared to virtual machines. Finally, it briefly mentions Docker ecosystem tools that integrate with DevOps processes like configuration management and continuous integration/delivery.
Docker is an open source platform that allows developers and sysadmins to build, ship, and run distributed applications anywhere. It provides portability, standardized environments, and the ability to rapidly scale applications up and down. Many enterprises are using Docker to build continuous delivery pipelines where code commits trigger automated builds and deployment of new Docker containers. This allows applications to be deployed more frequently and consistently across development, testing, and production environments.
Tampere Docker meetup - Happy 5th Birthday Docker - Sakari Hoisko
Part of the official Docker meetup events by Docker Inc.
https://events.docker.com/events/docker-bday-5/
Meetup event:
https://www.meetup.com/Docker-Tampere/events/248566945/
- Docker celebrated its 5th birthday with events worldwide including one in Cluj, Romania. Over 100 user and customer events were held.
- The Docker platform now has over 450 commercial customers, 37 billion container downloads, and 15,000 Docker-related jobs on LinkedIn.
- The event in Cluj included presentations on Docker and hands-on labs to learn Docker, as well as social activities like taking selfies with a birthday banner.
Containers are changing development and deployment using technologies like Docker and Kubernetes. Containers leverage cgroups and namespaces in Linux kernels to isolate processes and share resources without full virtual machines. Docker popularized containers by making them easy to build, run and share. Kubernetes is the most popular container orchestrator, allowing containers to run together across clusters with services for load balancing, scaling and failover. Developers can now develop in containers for consistent environments, while operations teams can deploy containerized applications with automation and roll back updates if needed.
Containers are widely used today for consolidation and cloud native applications, with emerging uses in edge computing and serverless technologies. Containers provide efficiency, repeatability, isolation, and portability. Traditional uses include consolidation to reduce infrastructure costs and improve server utilization. Cloud native applications are also well suited to containers due to their microservices architecture enabling rapid deployment and scalability. For day 2 operations, container orchestration platforms like Kubernetes are used for lifecycle management, placement, and service discovery. Container storage interfaces provide pluggable storage solutions, while observability tools help debug complex microservices interactions through application performance monitoring and distributed tracing. Open APIs and standards are important to avoid vendor lock-in and allow flexibility in building and deploying.
This document provides an overview of Docker for web developers. It defines containers and Docker, discusses the benefits of Docker like faster deployment and portability. It explains key Docker concepts like images, containers, Dockerfile for building images, Docker platform, and commands for managing images and containers. The document also describes what happens behind the scenes when a container is run, and how to install and use Docker on Linux, Windows and Mac.
The document provides an overview of Docker for web developers. It defines containers and Docker, explaining that Docker allows developers to package applications into standardized units for development, shipment and deployment. It covers Docker concepts like images, containers, Dockerfiles and registries. It also discusses how to install Docker, manage images and containers, configure networking, mount volumes, and allow communication between containers. The goal is to explain the key Docker concepts and components to help developers understand and use Docker.
Introduction to Docker and Kubernetes. Learn how this helps you to build scalable and portable applications with the cloud. It introduces the basic concepts of Docker and its differences from virtualization, then explains the need for orchestration and does some hands-on experiments with Docker.
The slides talk about Docker and container terminologies, but you will also be able to see the big picture of where and how it fits into your current project/domain.
Topics that are covered:
1. What is Docker Technology?
2. Why are Docker/containers important for your company?
3. What are its various features and use cases?
4. How to get started with Docker containers.
5. Case studies from various domains
2. About Me
Brayden Winterton
● Computer Science Major at BYU
● Head of BYU Production Services Development Team
How I use Docker:
● Development environments
● CI/CD pipeline
● Clustered Production Environments
3. Survey
● Who here has heard of Docker?
● Who has used Docker before?
● Who uses Docker as part of their daily workflow?
13. What is Docker?
Docker Engine:
● CLI
● Docker Daemon
● Runs the containers
Docker Hub:
● Sharing applications/containers
● Used in automating workflows
● Much like GitHub
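To make this concrete, here is a rough sketch of the Engine/Hub workflow (the image names, tags, and Hub account are illustrative, not from the talk):

    # run a container from an image (pulled from Docker Hub if not present locally)
    docker run -d --name web nginx
    # list the running containers
    docker ps
    # tag and push your own image to Docker Hub, much like pushing code to GitHub
    docker tag myapp examplehubuser/myapp:1.0
    docker push examplehubuser/myapp:1.0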
15. Why developers love it:
● Devs can build apps using any language and toolchain
● Build once, run anywhere
● Docker creates a clean portable runtime for the app
● No more worries about missing dependencies
● Peace of mind knowing that apps will not conflict (nor will their dependencies)
● Compatibility concerns go out the window
● Automated testing, building, packaging, etc. become much simpler
● Fast, lightweight runtime environments
16. Why sysadmins love it:
● Standardized, repeatable environments
● No more “works on my machine”
● Abstract application from OS and Infrastructure
● Flexibility in where to run the application
● Deployment driven by business priorities
● Rapid scale-up and scale-down to respond to load
● Eliminate inconsistencies between environments
● Increase reliability and speed of CI/CD systems
● TL;DR: Deploy any app, to any infrastructure
17. Want to separate concerns?
Development:
● Worries about what's inside the container:
o Code
o Libraries
o Data
o Applications
● Assumes that all environments look the same
Operations:
● Worries about what's outside the container:
o Logging
o Monitoring
o Configurations
● All containers start, stop, reload, and accept configuration the same way
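As a rough illustration of the operations side, the same lifecycle and inspection commands work for every container, whatever stack is inside (the container name is hypothetical):

    docker stop web      # stop
    docker start web     # start
    docker restart web   # reload
    docker logs web      # logging
    docker stats web     # monitoring
    docker inspect web   # configuration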
18. Want to combine concerns?
● Give developers access to existing resources
● Allow developers to deploy built containers to infrastructure
o Either using Continuous Deployment or a standardized method
● Make the developers wake up in the middle of the night to fix it
Manage everything from legacy applications and monolithic stacks to applications that are in development and microservice-oriented architectures
What is this thing that everyone keeps talking about?
Once upon a time we had simple lamp stacks, web applications were extremely simple
Typically one server, or one server image replicated for load balancing.
Sometimes databases were even found on the same machines (Ugh)
Today, with the movement to microservices and distributed services, stacks are much more complex (userDB, static stack, frontend, api, etc.)
Not only are the stacks more complex, they need to be run on more diverse hardware.
Dev environments
Production environments
QA
On premise (woo, that’s hard)
etc.
This all leads to the Matrix from hell!
Something we have all faced
What runs where? Why doesn't it run? What are its dependencies?
Can the Static frontend and the Normal Frontend run together?
This sucks: tons of time to maintain and keep straight, and it changes over time as development iterates
We have had this same problem before, in the shipping world
How can we ship Item X? What methods of shipping will work? Which ones will damage the goods? What is the cheapest?
The solution? Shipping containers!
A standard container that everyone agrees on using and supporting.
Loaded and then sealed.
It can be transferred between shipping methods without causing an issue.
Isolates the goods from other things next to it.
Docker does for the application /devops world, what the shipping container did for the shipping world.
The application/stack is packed into the container.
That container can be run on any hardware platform without concerns for how it will interact with nearby applications or dependencies.
So now our matrix from hell looks like this!
Doesn't matter what is run where; Docker ensures that it will run.
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.
Consists of two parts:
Docker engine
This is what actually runs the containers/applications
used to build containers and modify existing ones
runs in the command line
uses namespaces, cgroups, and union filesystems to isolate containers.
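For a quick illustration of the shared kernel (a hedged aside; run it yourself and the two outputs will match):
docker run ubuntu uname -r
uname -r
# same kernel version inside and outside the container: containers share the host's kernel,
# while namespaces and cgroups provide the isolation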
Docker hub:
Like GitHub for Docker containers
build off of existing containers
grab generic containers to be used in your stack (such as mysql, postgres, graphite with statsd, etc.)
OK, so this is cool and all… but
Why should I use Docker?
The reasons that developers love Docker are endless:
Let developers code their applications in whatever language using whatever toolchain they want (stack doesn’t matter)
apps are built once, and they will run in any environment: dev, local machine, VMs, bare metal, production, QA, etc. (granted that they have a 3.2+ kernel, or 2.6.32+ on RHEL)
developers can know with 100% assurance what is inside their container, no surprises
developers don't have to worry about ops missing a dependency during deployment; it comes built into the container.
Developers can rest assured that their apps will be completely isolated from everything around them, and that conflicting dependencies will not be an issue
OS, kernel version, libraries, etc. are no longer a concern for the developer; again, everything is packed in.
Testing, building, and packaging become much simpler knowing the application is portable and functioning; anything you can script, you can automate.
Runtimes (containers) are fast and lightweight; startup times are measured in seconds, not minutes.
There are also a million reasons that sysadmins/ops love Docker:
Finally! Standard, repeatable environments for everyone: QA, dev, testing, etc., across all teams (no need to maintain a separate setup for each group)
Developers may not love it, but it gets rid of one of the age-old go-to excuses. If it works anywhere, it works everywhere.
Abstract your application from your OS and infrastructure; make it self-contained. (Change infrastructure and OS as much as you like.)
This allows for flexibility in where to run the application. Bare metal, public cloud, private cloud, etc.
Deployment no longer needs to be restricted by limitations of the infrastructure. Let business needs drive your deployments.
Containers are so lightweight and start up so fast that scaling to load becomes extremely easy and fast.
Get rid of the issues of updated or changed dependencies between environments (works in stage, but fails in prod due to a library version).
Increase the speed and reliability of your CI/CD systems: build the image once, then run all tests and deployments using that already-created image; the image does not vary between steps (see the sketch below).
tl;dr: deploy any app, to any infrastructure.
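As a hedged sketch of that build-once flow (the registry, image name, and test script here are hypothetical, not from the talk):
docker build -t myregistry/myapp:$BUILD_ID .
docker run --rm myregistry/myapp:$BUILD_ID ./run_tests.sh
docker push myregistry/myapp:$BUILD_ID
# every later step (QA, staging, prod) pulls and runs this exact image, so nothing varies between steps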
Looking to separate operations and development concerns? Docker helps out a ton!
Developers only have to worry about the inside of the containers
Operations worries about outside
The only overlap happens with configuration; make sure ops and dev agree on a standard configuration mechanism (env variables, a key/value store such as etcd, etc.)
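For instance, a minimal sketch using env variables (the image name "myapp" and the variable names are assumptions for illustration):
docker run -d -e DB_HOST=db -e DB_PORT=5432 myapp
# ops chooses the values at deploy time; dev only has to agree on the variable names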
Looking to merge your team to a devops format?
Merge the concerns by allowing your development team to deploy automagically
Allow developers to deploy when their containers are built
If it works for the developer, it should work in production
Almost all issues found can be resolved by the developer updating the container and redeploying
Very minimal ops intervention in the process and in fixes
Ops can focus on creating CI/CD pipelines and providing deployment methods to the development team
No more ops deployment headaches.
You may be thinking: wait, this sounds a lot like why we use VMs! To separate and isolate applications.
The difference is: we get rid of the bloat from repeated guest OSes and from repeated bins/libs!
Why is it that these containers are lighter than VMs?
First of all, we lose the repeated overhead of having a guest OS and all of the pieces that make it up
Another huge aspect of Docker is the layered filesystem!
Each change to the filesystem creates a new cached layer.
Cached layers can be used across multiple copies of the container.
This allows for fast startup times
Modifications only add a new layer to the filesystem.
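You can see those layers for yourself with docker history (shown here against the stock nginx image; the exact layers depend on your version):
docker history nginx
# each row is one cached layer; layers are stored once and shared by every container built on them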
OK, so many of you may be asking, "How do I use this marvelous technology?" Let's take a look. First we are going to start with a basic example: a simple hello world!
docker run -it ubuntu /bin/echo "Hello World!"
the container spun up, and then ran the echo command with "Hello World!"
Don’t believe that it was the container?
docker run ubuntu /usr/bin/dpkg --get-selections | wc -l
dpkg --get-selections | wc -l
Different number of packages! Different systems!
OK, so let's say I don't want it to take over my terminal! What now?
docker run -d ubuntu /bin/echo "Hello World!"
WAIT! What was that big number that got printed out?
We daemonized the process (forked it to the background); the number returned is the container's hash (its ID).
We can use just the first few characters, just like Git.
Let's check the logs.
docker logs (container number)
See? The container did actually start and run the command we gave it!
OK, so let's be honest: the "hello world" example was cute but probably not very helpful. Who ever runs hello world in production?
Let's see something real!
How about running a static frontend through a container? Let's use nginx!
docker run -d nginx
docker ps
No need to give nginx a command; it has a default one built into the container.
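(If you're curious, you can peek at that default command with docker inspect; the exact output depends on the image version:)
docker inspect --format '{{.Config.Cmd}}' nginx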
We can see the nginx container is running! But how do we get to it? The ports haven't been mapped! Let's map those ports!
docker kill (container)
docker run -d -p 8000:80 nginx
go to localhost:8000 to get the welcome nginx page!
Woot! We have nginx running now! Docker allows us to map ports which the container has exposed to ports on the host.
the syntax is -p hostPort:containerPort
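As an aside, if you don't care which host port you get, a capital -P publishes all of the container's exposed ports to random high ports on the host, and docker port shows the mapping:
docker run -d -P nginx
docker port (container) 80
# prints something like 0.0.0.0:49153; the exact host port is assigned at run time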
So that's great and all; we have Docker running nginx now. But what if we want to change the content that nginx is displaying?
The containers we have seen up to now have been completely isolated from the host machine’s filesystem. But there is a way to make a filesystem visible to the container. Volumes!
docker run -d -p 8000:80 -v /home/bcwinter/dev/dockerDemo:/usr/share/nginx/html nginx
vim index.html (add new <h2>And hello to you!</h2>)
Volumes create persistent connections between the container’s filesystem and the host’s filesystem
Volumes can also be shared between containers (i.e., container A can mount a volume from container B), as sketched below.
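A hedged sketch of that sharing with --volumes-from (the container name "webdata" is made up for illustration):
docker run --name webdata -v /usr/share/nginx/html ubuntu true
docker run -d -p 8000:80 --volumes-from webdata nginx
# the nginx container now sees webdata's volume at the same path; the volume outlives either container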
Now, what if we wanted to load our code into the container itself, so that it isn't needed on the host machine through a volume?
Well, we can modify the image in two ways: interactively, or through a Dockerfile.
First let’s try this interactively.
docker run -it nginx /bin/bash
echo "<html><body><h1>Hello World</h1></body></html>" > /usr/share/nginx/html/index.html
exit
docker ps -l
docker commit -m="Edited the index.html" -a="Brayden Winterton" (containerid) bcwinter/nginx:v2
docker run -d -p 8000:80 bcwinter/nginx:v2 nginx -g 'daemon off;'
docker kill (container)
So that is the interactive way: when you open a container interactively, you open a writable layer; that layer can be changed and then committed, just like Git.
Some downsides: it is not repeatable, it takes several commands, and you can lose your "entrypoint" command.
But there is a nicer, easier, more repeatable way: Dockerfiles!
cd ~/dev/dockerDemo
vim index.html
vim Dockerfile
docker build -t bcwinter/nginx:v3 .
docker run -d -p 8000:80 bcwinter/nginx:v3
localhost:8000
It works! And the process is repeatable.
Also, thanks to the cached filesystem, subsequent builds only rebuild layers that have changed.
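The demo's Dockerfile isn't captured in these notes; a minimal reconstruction that matches the steps above might look like this (assuming index.html sits next to the Dockerfile):
# hypothetical Dockerfile
FROM nginx
ADD index.html /usr/share/nginx/html/index.html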
Show a more complex Dockerfile
vim exampleFile
Subsequent builds would only rebuild the ADD layers if the files had changed; other layers will not be rebuilt, as they are cached (see the sketch below)
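The exampleFile isn't included here either, but a hedged sketch of such a Dockerfile (the app, packages, and commands are made up) might be:
# hypothetical Dockerfile for a small Python app
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python python-pip
# the layers above stay cached as long as these lines don't change
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# the pip layer re-runs only when requirements.txt changes
ADD . /app
# this ADD layer is rebuilt whenever any source file changes
WORKDIR /app
EXPOSE 8000
CMD ["python", "app.py"]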
Dockerfiles are the way to go, especially for CI/CD workflows.
Now as we talked about before, most of our stacks today are no longer just a simple LAMP stack, they consist of several moving parts. What if we wanted to link a database container and a frontend container for example?
Docker makes this possible with links!
Linking containers injects environment variables into the container, as well as updating /etc/hosts so requests get routed properly.
for example:
docker run --name postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
docker run --link postgres:db ubuntu cat /etc/hosts
docker run --link postgres:db ubuntu env
As you can see, Docker created an entry for the host and created the necessary env variables.
This allows configuration to be set statically rather than coded into the application. If the application requests something from the hostname 'db', it is always going to get the db, no matter where it is. These assumptions help make the application portable and lightweight.
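The injected entries look roughly like this (the IP is assigned at run time, so yours will differ):
# in /etc/hosts inside the linked container:
172.17.0.5  db
# among the env variables (the alias "db" is uppercased to the DB_ prefix):
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
DB_PORT_5432_TCP_PORT=5432
DB_ENV_POSTGRES_PASSWORD=mysecretpassword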
Orchestration is a big topic in the Docker world. There are many solutions out there:
Kubernetes
Mesosphere
Flynn
Deis
Shipyard
etc.
One of the simplest to start out with, and my favorite for single-host deployments, is Fig.
Fig is simple yet powerful: great at dependency management, and great for managing several containers at once.
This is how I run my development environments.
fig example
show dockerfile
show fig.yml
fig up
fig run web env
fig stop
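The fig.yml from this demo isn't reproduced in these notes; a minimal one matching the setup shown earlier (the service names and db wiring are assumptions) might look like:
web:
  build: .
  ports:
    - "8000:80"
  links:
    - db
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: mysecretpassword
# fig up then builds web, starts both containers, and wires the link automatically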
Fig is a great start to learning how to orchestrate several containers.