This document summarizes how SAW, Anat Kisel's company, uses Docker across its DevOps processes: local developer environments, internal single-node deployments, AWS deployments, and continuous integration builds. Docker helped solve challenges around complex infrastructure setup and long manual processes by standardizing environments and automating deployments. Their Docker workflow includes building images, running integration tests, and pushing images to internal and AWS repositories.
Overview of kubernetes and its use as a DevOps cluster management framework.
Problems with deployment via kube-up.sh and improving kubernetes on AWS via custom cloud formation template.
Container Runtimes: Comparing and Contrasting Today's EnginesPhil Estes
A webinar presented for the {code} Community on August 30, 2017. In this talk, we looked at the sphere of modern container runtimes that start with Docker's emergence in 2013/2014 to today's additions of rkt, OCI's runc, containerd, cri-o, and Cloud Foundry's garden-runc project, many of them consolidating around the OCI standard for container runtime and image specifications.
DCSF 19 Accelerating Docker Containers with NVIDIA GPUsDocker, Inc.
Using the NVIDIA Container Runtime, many developers and enterprises have been developing, benchmarking and deploying deep learning (DL) frameworks, HPC and other GPU accelerated containers at scale for the last two years. In this talk, we will go over the architecture of the NVIDIA Container Runtime and discuss our recent close collaboration with Docker. The result of our collaboration with Docker is a seamless native integration of the runtime enabling Docker Engine 19.03 CE and the forthcoming Docker Enterprise release to run GPU accelerated containers. We will also highlight containerized NVIDIA drivers. This new feature eliminates the overhead of provisioning GPU machines and brings GPU support on container optimized operating systems, which either lack package managers for installing software or require all applications to run in containers. In this session, you will learn how GPU accelerated containers can be easily built and deployed through the use of driver containers and native support for GPUs in Docker 19.03. The session will include a demo of running a GPU accelerated deep learning container using the new CLI options in Docker 19.03 and containerized drivers. Running NVIDIA GPU accelerated containers with Docker has never been this easy!
Learn best practices in container security to make your containers seaworthy through the build, ship, and run lifecycle.
Demos temporarily living at github.com/endophage/apps (look under wordpress dir)
Intro to the CoreOS Linux distribution and how it can be used to run Docker-based workloads in the cloud.
CoreOS instances can be started in a CloudStack cloud; it makes use of cloud-init basics to
Taking Docker to Production: What You Need to Know and DecideDocker, Inc.
DevOps in the Real World is far from perfect, yet we all dream of that amazing auto-healing fully-automated CI/CD micro-service infrastructure that we'll have "someday." But until then, how can you really start using containers today, and what decisions do you need to make to get there? This session is designed for practitioners who are looking for ways to get started now with Docker and Swarm in production. This is not a Docker 101, but rather it's to help you be successful on your way to Dockerizing your production systems. Attendees will get tactics, example configs, real working infrastructure designs, and see the (sometimes messy) internals of Docker in production today.
Compare Docker deployment options in the public cloudSreenivas Makam
Compare Docker public cloud deployment options: Docker Machine, Docker Cloud, Docker Datacenter, Docker for AWS, Azure and Google Cloud, AWS ECS, Google Container Engine, and Azure Container Service.
Bitbucket Pipelines - Powered by KubernetesNathan Burrell
This talk covers how Pipelines uses Kubernetes to power its builder infrastructure and shares some tips on running Kubernetes at scale in a secure way.
This presentation was given to the Sydney Kubernetes meetup on the 3rd of August 2017.
Orchestrating Docker Containers with Google Kubernetes on OpenStackTrevor Roberts Jr.
Kubernetes, Docker, CoreOS, and OpenStack for container workload management.
No audio, but there are annotations to follow along with the workload.
A video accompanies a Microservices Meetup talk that I presented on February 18, 2015 at https://www.youtube.com/watch?v=RfyIYhOzyPY
Acknowledgements to Kelsey Hightower for the workflow that I used, and Google for the example application shown.
Platform Orchestration with Kubernetes and DockerJulian Strobl
Big companies like Google containerize their environments for easier maintenance, scaling, and reliability. This talk gives an introduction to building such an environment and maintaining applications written in distinct programming languages. Container orchestration is done with Google's Kubernetes and Docker containers; CoreOS is used for mass deployment.
Meet up presentation on Continuous Integration with Docker on Amazon Web Services (AWS). The presentation covers benefits of Docker on AWS along with advanced Docker patterns and lessons learned.
A basic introductory slide set on Kubernetes: What does Kubernetes do, what does Kubernetes not do, which terms are used (Containers, Pods, Services, Replica Sets, Deployments, etc...) and how basic interaction with a Kubernetes cluster is done.
From SCALE13 session on 2015-02-22. Overview of Docker, swarm, and demonstration of docker-machine for easily bootstrapping container environments and swarm clusters.
Kubernetes 101 - A Cluster Operating Systemmikaelbarbero
The popularity of the Kubernetes platform is continuously increasing... for good reasons! It's a wonderful modular platform made out of fundamental orthogonal bricks used to define even more useful bricks. It enables a DevOps-friendly environment where microservices and continuous delivery feel at home.
If you have not yet dug into what is usually called a Cluster Operating System, it's time to catch up! This thorough introduction to Kubernetes will cover:
* What is a Node and what is the difference between master node(s) and worker nodes.
* What is it like to run an application in Kubernetes
* What is a Pod and how it relates to containers
* How to organize resources with Labels and Namespaces
* How to scale your application with ReplicaSet
* How to expose your application to clients internal to your clusters and to external clients with Services
* What is a Volume and how it is used to attach persistent storage, configuration and secrets to pods
* How to do zero-downtime rolling updates of your application with Deployments
Orchestration tool roundup kubernetes vs. docker vs. heat vs. terra form vs...Nati Shalom
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It’s no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. However, where it usually becomes more complex is that many times an application is often built out of multiple containers. What’s more, setting up a cluster of container images can be fairly cumbersome because you need to make one container aware of another and expose intimate details that are required for them to communicate which is not trivial especially if they’re not on the same host.
These scenarios have instigated the demand for some kind of orchestrator. The list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there - from Heat to Kubernetes to TOSCA - and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
Kubernetes Architecture - beyond a black box - Part 2Hao H. Zhang
This continues the Kubernetes architecture deep dive series. (Part 1 see https://www.slideshare.net/harryzhang735/kubernetes-beyond-a-black-box-part-1)
In Part 2 I'm going to cover the following:
- Kubernetes' 3 most important design choices: Micro-service Choreography, Level-Triggered Control, Generalized Workload and Centralized Controller
- Default scheduler limitation and community's next step
- Interface to production environment
- Workload abstraction: strengths and limitations
This concludes my work and knowledge sharing about Kubernetes.
There's a new Docker release, and with it lots of changes. In this video Mano Marks, Docker Developer Relations Director, highlights some of the biggest new features.
Docker 1.9 Release Blog Post:
http://blog.docker.com/2015/11/docker-1-9-production-ready-swarm-multi-host-networking
Docker 1.9 Release Notes:
https://github.com/docker/docker/blob/master/CHANGELOG.md
Docker Swarm 1.0 Blog Post:
http://blog.docker.com/2015/11/swarm-1-0
Docker Multi Host Networking Documentation:
http://docs.docker.com/engine/userguide/networking/
Docker Swarm Documentation:
https://docs.docker.com/swarm
Docker Compose Documentation:
https://docs.docker.com/compose
Online Meetup on Multi Host Networking:
http://www.meetup.com/Docker-Online-Meetup/events/226522306/
Online Meetup on Swarm:
http://www.meetup.com/Docker-Online-Meetup/events/226520109/
Docker is an open platform for developers and system administrators to build, ship and run distributed applications. With Docker, IT organizations shrink application delivery from months to minutes, frictionlessly move workloads between data centers and the cloud and can achieve up to 20X greater efficiency in their use of computing resources. Inspired by an active community and by transparent, open source innovation, Docker containers have been downloaded more than 700 million times and Docker is used by millions of developers across thousands of the world’s most innovative organizations, including eBay, Baidu, the BBC, Goldman Sachs, Groupon, ING, Yelp, and Spotify. Docker’s rapid adoption has catalyzed an active ecosystem, resulting in more than 180,000 “Dockerized” applications, over 40 Docker-related startups and integration partnerships with AWS, Cloud Foundry, Google, IBM, Microsoft, OpenStack, Rackspace, Red Hat and VMware.
2016 - Easing Your Way Into Docker: Lessons From a Journey to Productiondevopsdaysaustin
Presentation by Steve Woodruff
The story of how SpareFoot broke up its monolithic application into micro services, deployed Docker into production, and established a "contract" between Dev and Ops.
WebSphere Application Server Liberty Profile and DockerDavid Currie
Presentation from IBM InterConnect 2015 covering a brief introduction to Docker, the relationship between IBM and Docker, and then using WebSphere Application Server Liberty Profile under Docker.
This presentation by Andrew Aslinger discusses best practices and pitfalls of integrating Docker into Continuous Delivery Pipelines. Learn how Andrew and his team used Docker to replace Chef to simplify their development and migration processes.
Slides from DockerCon SF 2015 –
Docker at Lyft: Speeding up development w/ Matthew Leventi
Talk description: Learn how Docker enables Lyft to increase developer productivity across our engineering organization. We'll go through a local development model that decreases our developer onboard time, and keeps our teams focused on delivering product goals. We'll also talk about how we use Docker to test changes to our servers and allow QA testing of our mobile clients. You'll come out of the talk with techniques and reasons for integrating docker not just in the cloud but also onto developer's laptops.
Best Practices for Running Kafka on Docker ContainersBlueData, Inc.
Docker containers provide an ideal foundation for running Kafka-as-a-Service on-premises or in the public cloud. However, using Docker containers in production environments for Big Data workloads using Kafka poses some challenges – including container management, scheduling, network configuration and security, and performance.
In this session at Kafka Summit in August 2017, Nanda Vijyaydev of BlueData shared lessons learned from implementing Kafka-as-a-Service with Docker containers.
https://kafka-summit.org/sessions/kafka-service-docker-containers
DCEU 18: Building Your Development PipelineDocker, Inc.
Oliver Pomeroy - Solution Engineer, Docker
Laura Frank Tacho - Director of Engineering, CloudBees
Enterprises often want to provide automation and standardisation on top of their container platform, using a pipeline to build and deploy their containerized applications. However this opens up new challenges… Do I have to build a new CI/CD Stack? Can I build my CI/CD pipeline with Kubernetes orchestration? What should my build agents look like? How do I integrate my pipeline into my enterprise container registry? In this session full of examples and “how-to”s, Olly and Laura will guide you through common situations and decisions related to your pipelines. We’ll cover building minimal images, scanning and signing images, and give examples on how to enforce compliance standards and best practices across your teams.
2. Agenda
- SAW Product and our DevOps Challenges
- Docker as solutions to our challenges
- Docker CI Pipeline
- Local Dev Docker Deployment
- SND: Single Docker Node Deployment
- AWS & Public Cloud
- SNB: Single Node Docker Based Build
8. SAW DevOps challenges given our architecture
- Our architecture is very advanced and complex (many components)
- In the past (before Docker and the automatic deployment we do today with Ansible) we did a lot of manual steps
- Dev env: the challenge is to encapsulate the infra complexity away from developers, allowing them independence from the shared infra
- The CI build has big challenges at its scale
10. We began to use Docker ~2.5 years ago
Today we are using Docker as a solution in 4 areas:
- Local Developer Docker Deployment
- Single Node Docker Deployment internal farms
- AWS Docker Deployment
- CI: Docker-Based Build
11. Docker Solution #1 (of 4): Local Developer Docker Deployment
- First we used it for local Docker deployment for our developers, to avoid "infra noise" for them when they work
- This made a huge difference in our R&D efficiency
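In practice a local farm like this is usually described in a Compose file so a developer can bring everything up with one command. A minimal sketch, with hypothetical service names and versions (not SAW's actual manifest):

```yaml
# docker-compose.yml - illustrative local dev farm (names/versions are assumptions)
version: "2"
services:
  redis:
    image: redis:3.2
  mongo:
    image: mongo:3.2
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: devonly    # local development only
  tomcat:
    image: dev-registry.example.com/saw/tomcat:latest   # hypothetical app image
    depends_on: [redis, mongo, postgres]
    ports:
      - "8080:8080"
```

A developer then runs `docker-compose up -d` to start the farm and `docker-compose down` to discard it, leaving the shared infrastructure untouched.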
12. Docker Solution #2 (of 4): Single Node Docker Deployment internal farms
- Second, we created a farm of Single Node Docker deployments that we use for many e2e use cases:
- Deployment on feature branches
- Deployment for bug hunts and regressions
- Deployment for PMs, Discover, etc.
13. Docker Solution #3 (of 4): AWS Docker Deployment
- Third, we began to use Docker deployment for public clouds
- Started with AWS
- Used today for internal users only
14. Docker Solution #4 (of 4): CI: Docker-Based Build
- We've implemented Docker for CI builds
- We provision dedicated Docker infrastructure services for each build
- We maintain a unified infrastructure across the development, build & deployment environments
16. Our Docker Images…
- We have 16 infra & app images, deployed as 32 container instances:
- Infra images such as: Redis, IDOL, Elasticsearch, MongoDB, PostgreSQL, etc.
- App images such as: Tomcat, gateway, lcm, platform, saw, ui, etc.
- In addition we have base images such as JDK, Consul-template, etc.
- And last, we have utility images such as Provision, Selenium, etc.
17. Pipeline to create Docker images – our flow:
Triggers → Build → Integration Test → Push to registry
18. Triggers
We have different triggers that can cause this flow to start:
SCM change:
– Change in a Dockerfile (e.g. a PostgreSQL version upgrade)
– A new container added to the build (PPO container)
– Change of the Vagrant flow
Other builds:
– SAW build
– Docker base image build
19. Build
The Docker build scripts are written in Gradle using the Docker API.
The build lifecycle:
- Build Docker images from Dockerfiles
- Create and run a container
- Run unit tests for the container (e.g. test the connection to the Tomcat port on the Tomcat container)
- Push the image to a repository (to dev at this stage)
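The slides say these steps are driven by Gradle through the Docker API; the same lifecycle can be sketched in plain shell (the image name, build context, and port are assumptions, not SAW's actual values):

```shell
#!/bin/sh
set -e
IMAGE=dev-registry.example.com/saw/tomcat:${BUILD_NUMBER:-local}

# Build the image from its Dockerfile
docker build -t "$IMAGE" ./tomcat

# Create and run a container from the freshly built image
CID=$(docker run -d -p 8080 "$IMAGE")

# Unit test: the Tomcat port must accept connections
PORT=$(docker port "$CID" 8080 | cut -d: -f2)
curl -fsS "http://localhost:${PORT}/" > /dev/null

# Push to the dev registry at this stage, then clean up
docker push "$IMAGE"
docker rm -f "$CID"
```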
20. Integration test
Running the integration tests:
Call vagrant up on VirtualBox – validation for developers:
- Pull Docker images from the registry
- Run the whole farm on that VM
- Run the tests
Call vagrant up on a managed host – validation for SND:
- Pull Docker images from the registry
- Run the whole farm on that VM
- Run the tests
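Both validation paths boil down to the same commands against a different Vagrant target; sketched here for the developer-side VirtualBox run (the registry host and test entry point are assumptions):

```shell
# Validation for a developer on VirtualBox
vagrant up                 # boot and provision the VM
vagrant ssh -c '
  docker pull dev-registry.example.com/saw/tomcat:latest   # pull the farm images
  # ...pull and start the rest of the farm...
  ./run-integration-tests.sh                               # hypothetical test runner
'
vagrant destroy -f         # tear the VM down when the tests finish
```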
21. Push
Push images to the registries:
- Call the Gradle build to push images
- HP prod registry
- AWS registry
- Storage in S3
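Pushing one image to several registries is a matter of retagging it once per registry; a sketch with placeholder registry hosts (the AWS host standing in for their S3-backed registry):

```shell
IMAGE=saw/tomcat:1.0

# Tag the locally built image once per target registry
docker tag "$IMAGE" hp-prod-registry.example.com/"$IMAGE"
docker tag "$IMAGE" aws-registry.example.com/"$IMAGE"

# Push to both; each registry stores its own copy (the AWS one backed by S3)
docker push hp-prod-registry.example.com/"$IMAGE"
docker push aws-registry.example.com/"$IMAGE"
```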
22. Docker CI Pipeline – Summary
(Flow diagram: triggered Infra and MaaS builds push to the Dev registry; a Vagrant-provisioned integration test follows; then the Infra push and MaaS push go to the Prod and AWS registries.)
24. AWS Deployment for SAW
- As we said, we have 16 images, deployed as 32 container instances (HA)
- Provisioning the infrastructure of a farm takes ~15 min
- We provision a new farm as a VPC by using Terraform
- Deploying SAW on this farm takes ~1h with Ansible, and keeps improving
- Auto-registration of farms in public DNS
25. Deployment process in AWS – flow
(Flow diagram: a Jenkins run launches a provision container; Terraform creates the VPC and all AWS resources, including the S3 storage endpoint; Ansible resources are copied to a manage host, whose Ansible playbooks (PaaS, Infra, NFS, MaaS) orchestrate the containers (pull and run), via Registrator, on instances running the Docker service for Infra and SAW.)
26. Terraform
Deploys the AWS farm resources:
- VPC
- Subnets
- Route tables
- Instances
- Security groups
- Route 53 DNS
- Registry S3 storage endpoint
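The list above maps one-to-one onto Terraform resources; a minimal sketch with illustrative CIDRs, names, and region (not SAW's actual template):

```hcl
resource "aws_vpc" "farm" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "farm" {
  vpc_id     = "${aws_vpc.farm.id}"
  cidr_block = "10.0.1.0/24"
}

resource "aws_route_table" "farm" {
  vpc_id = "${aws_vpc.farm.id}"
}

resource "aws_security_group" "docker_hosts" {
  vpc_id = "${aws_vpc.farm.id}"
}

resource "aws_instance" "docker_host" {
  ami           = "${var.docker_ami}"   # hypothetical variable
  instance_type = "m4.xlarge"
  subnet_id     = "${aws_subnet.farm.id}"
}

resource "aws_route53_record" "farm" {
  zone_id = "${var.zone_id}"            # hypothetical variable
  name    = "farm.example.com"
  type    = "A"
  ttl     = 300
  records = ["${aws_instance.docker_host.private_ip}"]
}

# S3 endpoint so the registry storage is reachable from inside the VPC
resource "aws_vpc_endpoint" "s3" {
  vpc_id       = "${aws_vpc.farm.id}"
  service_name = "com.amazonaws.us-east-1.s3"
}
```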
27. Ansible playbooks
Deployment and orchestration of the MaaS Dockers, using 4 playbooks:
– PAAS – deploy all PaaS containers on all Docker servers
(Consul, Registrator, logstash-agent, monitor-agent)
– INFRA – deploy infra containers on instances with the relevant tags
(databases, ….)
– NFS – create the NFS cluster and mount it on the relevant instances
– MAAS – deploy the MaaS containers
(create initialized data, test tenants)
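Each playbook is essentially a list of docker_container tasks; a sketch of what the PAAS playbook might look like (the host group, image tags, and options are assumptions):

```yaml
# paas.yml - illustrative: run the PaaS containers on every Docker server
- hosts: docker_servers
  become: true
  tasks:
    - name: Run the Consul agent container
      docker_container:
        name: consul
        image: consul:0.9.3
        network_mode: host
        restart_policy: always

    - name: Run the Registrator container
      docker_container:
        name: registrator
        image: gliderlabs/registrator:latest
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock
        restart_policy: always
```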
28. Deployment process in AWS Flow – finally we have a VPC ready
30. CI Build Facts
- We have 30 build servers (32 CPU, 128GB RAM, 500GB storage)
- Our CI build takes 1 hour
- We’re running over 100 builds a day
31. Motivation for Single Node Build
- Provide isolated environment for each build
- Reduce build time
- Improve build stability
- Simplify troubleshooting and reduce maintenance effort
32. Docker-based Build CI Flow
(Flow diagram stages: Git Push, Compilation, Start Server, Vagrant up, Integration Tests, Upload to Nexus.)
33. SNB: Build Server Configuration
(Diagram: one build server hosting the integration test plus infrastructure containers: Vertica, Platform, Gateway, Nginx, IDOL, MongoDB, PostgreSQL, Openfire, Redis, RabbitMQ, SMTP server, HAProxy, Kibana, Elasticsearch, Logstash, Consul, Registrator, cAdvisor.)
- Each build server is dedicated to a single build.
- The build server runs all compilation and runtime processes.
- Infrastructure processes run in Docker containers on the same server.
- Server load is regulated by the number of running threads.