2. Docker Swarm
• Native clustering for Docker
• Serves the standard Docker API
• Any tool that already communicates with a
Docker daemon can use Swarm to transparently
scale to multiple hosts.
ex. Compose, Deis, DockerUI, Shipyard, Drone, Jenkins, Docker client, …
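Because Swarm serves the standard Docker API, existing tools only need to point at the Swarm manager instead of a single daemon. A minimal sketch (the manager address `swarm-master:3375` is a hypothetical example):

```shell
# Point the standard Docker client at the Swarm manager instead of a
# local daemon; every subsequent command is scheduled cluster-wide.
export DOCKER_HOST=tcp://swarm-master:3375

docker info          # reports the cluster's nodes and aggregate resources
docker run -d nginx  # Swarm picks a node and starts the container there
```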
3. Scheduling
• Ships with a simple scheduling backend
• Allows swapping in more powerful backends, like
Mesos or Kubernetes, for large-scale production
deployments
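The built-in scheduler also lets you pick a placement strategy when the manager starts; a sketch based on the legacy Swarm CLI (the token is a placeholder):

```shell
# Built-in strategies: spread (default), binpack, random.
# binpack packs containers onto as few nodes as possible.
docker run -d -p 3375:2375 swarm manage --strategy binpack token://<cluster_id>
```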
6. Step 1 -
Create a Swarm cluster
• We need a discovery service
• Here we use the hosted one
• $ docker run --rm swarm create
6856663cdefdec325839a4b7e1de38e8
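Between creating the token and joining nodes, the legacy Swarm docs also have you start a manager against the same token; a sketch using the token above:

```shell
# Start the Swarm manager on one host, exposing it on 3375 so it does
# not collide with the local Docker daemon on 2375.
docker run -d -p 3375:2375 swarm manage token://6856663cdefdec325839a4b7e1de38e8

# The Docker client can then target the manager:
docker -H tcp://<manager_ip>:3375 info
```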
7. Step 2 -
Add Nodes
• Connect to each node
( azuredev-node1, azuredev-node2, azuredev-node3 )
• Register the Swarm agents to the discovery service
• $ docker run -d swarm join --addr=<node_ip:2375> token://
6856663cdefdec325839a4b7e1de38e8
13. List nodes in the cluster
• $ docker run --rm swarm list token://
6856663cdefdec325839a4b7e1de38e8
172.31.40.100:2375
172.31.40.101:2375
172.31.40.102:2375
14. More Discovery Services
• https://docs.docker.com/swarm/discovery/
• etcd, consul, zookeeper, static list of IPs
• Example (etcd)
$ swarm join --addr=<node_ip:2375> etcd://<etcd_ip>/<path>
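The manager is started against the same discovery URL, so with etcd both sides might look like this (addresses and path are placeholders):

```shell
# Each node registers itself under the etcd path...
swarm join --addr=<node_ip:2375> etcd://<etcd_ip>/<path>

# ...and the manager reads the same path to discover the nodes.
swarm manage -H tcp://0.0.0.0:3375 etcd://<etcd_ip>/<path>
```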
16. Constraint Filter
• Label the Docker daemon first
$ docker -d --label storage=ssd
• Set a constraint when running a container
$ docker run -d -P -e constraint:storage==ssd --name db mysql
17. Standard Constraint
• From docker info
- storagedriver
- executiondriver
- kernelversion
- operatingsystem
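These standard constraints work the same way as custom labels; for example, pinning a container to nodes whose daemon reports a given storage driver (aufs is just an example value):

```shell
# Only schedule on nodes where `docker info` reports storagedriver: aufs
docker run -d -e constraint:storagedriver==aufs --name app nginx
```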
19. Affinity Filter - Example
• Schedule two containers and place container #2
on the same node as container #1
• $ docker run -d -p 80:80 --name front nginx
87c4376856a8
• $ docker run -d --name logger -e
affinity:container==87c4376856a8 logger
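Legacy Swarm's affinity expressions also accept a container name, which avoids copying the ID; a sketch equivalent to the example above:

```shell
# Same placement, referencing the first container by name instead of ID
docker run -d --name logger -e affinity:container==front logger
```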
20. Port Filter
• $ docker run -d -p 80:80 nginx
• Selects a node where public port 80 is
available and schedules the container on it
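Running the same command twice illustrates the filter (node names reuse the cluster above and which node is picked first is illustrative):

```shell
docker run -d -p 80:80 nginx  # e.g. lands on azuredev-node1
docker run -d -p 80:80 nginx  # port 80 is now taken there, so Swarm
                              # schedules this one on another node
```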