About Docker cluster management tools
1. Base concepts of cluster management and Docker
2. Docker Swarm
3. Amazon EC2 Container Service
4. Kubernetes
5. Mesosphere
13. Swarm / scheduling strategies
1. BinPacking - ranks nodes by available CPU and RAM and returns the node that is already the most packed
2. Random
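The strategy is chosen when the Swarm manager starts; a minimal sketch (strategy names as in the Swarm docs; `<cluster_id>` is a placeholder):

```shell
# Start the manager with an explicit scheduling strategy
docker run -d -p 2376:2375 swarm manage --strategy binpack token://<cluster_id>
```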
14. Swarm / scheduling filters
1. Constraint
a. key/value pairs - support glob patterns and regular expressions
b. dockerinfo
2. Affinity
a. containers
b. images
3. Dependency
a. Shared volumes (--volumes-from)
b. Links (--link)
c. Shared network stack (--net)
4. Port
5. Health
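A sketch of how constraint and affinity filters look in practice, assuming nodes whose daemons were started with labels (label and container names here are hypothetical):

```shell
# Constraint filter: run only on nodes started with --label storage=ssd
docker run -d -e constraint:storage==ssd nginx
# Affinity filter: schedule next to an existing container named "frontend"
docker run -d -e affinity:container==frontend redis
```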
15. Swarm / service discovery
Providers:
1. token (Docker Hub hosted discovery service)
2. file
3. etcd
4. consul
5. zookeeper
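Each provider is selected by the URL scheme passed to swarm join/manage; a sketch with placeholder addresses and paths:

```shell
swarm manage token://<cluster_id>
swarm manage file:///path/to/cluster_file
swarm manage etcd://<etcd_ip>/<path>
swarm manage consul://<consul_ip>/<path>
swarm manage zk://<zk_ip1>,<zk_ip2>/<path>
```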
16. Setup Swarm cluster manually
step 1: install Docker >= 1.4.0
step 2: edit /etc/default/docker to listen on TCP
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
step 3: create certificates and configure TLS (optional)
step 4: docker pull swarm
step 5: docker run --rm swarm create
generates a unique cluster_id for use with the Docker Hub discovery service
step 6: docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>
run this command on all hosts
step 7: docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>
starts the Swarm master
step 8: export DOCKER_HOST=tcp://<swarm_ip>:<swarm_port>
step 9: use your usual docker commands :-)
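To check that the manager sees all nodes, the usual client commands work against the Swarm endpoint (a sketch; IP and port are placeholders):

```shell
export DOCKER_HOST=tcp://<swarm_ip>:<swarm_port>
docker info   # lists the nodes that joined the cluster
docker ps -a  # containers across the whole cluster
```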
17. #1 Setup cluster on AWS by Docker Machine
step 1: download Docker Machine and add it to your PATH
https://docs.docker.com/machine/#installation
step 2: run this command to create the Swarm master
docker-machine create -d amazonec2 --swarm --swarm-master \
  --swarm-discovery=token://<generated_cluster_id> \
  --amazonec2-access-key=***** \
  --amazonec2-ami=ami-823686f5 \
  --amazonec2-instance-type=t2.micro \
  --amazonec2-region=eu-west-1 \
  --amazonec2-root-size=10 \
  --amazonec2-secret-key=***** \
  --amazonec2-security-group=my \
  --amazonec2-vpc-id=default \
  swarm-master
18. #2 Setup cluster on AWS by Docker Machine
step 3: run the same command as in step 2, but without the --swarm-master flag, to create a Swarm slave
docker-machine create -d amazonec2 --swarm \
  --swarm-discovery=token://<generated_cluster_id> \
  ….
  swarm-slave-01
step 4: export DOCKER_HOST=tcp://<swarm_ip>:<swarm_port>
step 5: use your usual docker commands or Docker Compose :-)
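Instead of exporting DOCKER_HOST by hand, Docker Machine can emit the variables for the Swarm master itself (a sketch using the machine name from step 2):

```shell
eval "$(docker-machine env --swarm swarm-master)"
docker info
```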
19. Swarm / conclusion
+ standard Docker API
+ extremely easy to get started
- many features are not implemented “yet”
(multi-master, multi-host network, failover)
DOCKER MACHINE + SWARM + COMPOSE
20. Amazon EC2 Container Service (preview)
ECS is available in the US East (N. Virginia) and US West (Oregon) regions during the preview.
21. ECS key concepts
Cluster - a logical grouping of container instances.
Container Instance - an EC2 instance that is running the ECS agent and has been registered into a cluster.
Task Definition - a JSON description of an application: lists of containers grouped together.
Task - an instantiation of a task definition running on a container instance.
22. #1 Setup ECS cluster
step 1: create an IAM role that allows EC2 instances to use the ECS service
step 2: install awscli > 1.7
step 3: set the region in ~/.aws/config
[default]
output = json
region = us-east-1
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:CreateCluster",
        "ecs:RegisterContainerInstance",
        "ecs:DeregisterContainerInstance",
        "ecs:DiscoverPollEndpoint",
        "ecs:Submit*",
        "ecs:Poll"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
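A sketch of attaching this policy to the role with the AWS CLI (role and policy names are hypothetical; the policy above is assumed saved as ecs-policy.json):

```shell
aws iam put-role-policy --role-name ecsInstanceRole \
  --policy-name ecsInstancePolicy \
  --policy-document file://ecs-policy.json
```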
23. #2 Setup ECS cluster
step 4: run this command to create a cluster (your account is limited to 2 clusters):
aws ecs create-cluster --cluster-name MyCluster
step 5: create a user data script init_script.sh
#!/bin/bash
echo ECS_CLUSTER=MyCluster >> /etc/ecs/ecs.config
step 6: create 3 EC2 instances in the cluster
aws ec2 run-instances --image-id ami-801544e8 --count 3 --instance-type t2.micro \
  --key-name <public_key> --security-groups <sec_group> \
  --user-data file://init_script.sh --iam-instance-profile Name=<IAM role name>
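For the nginx failover demo on the next slide, a task has to be running first; a hedged sketch of registering and running a minimal nginx task definition (the container definition values are assumptions):

```shell
# Register a minimal task definition (values are illustrative)
aws ecs register-task-definition --family nginx --container-definitions '[
  {"name": "nginx", "image": "nginx", "cpu": 10, "memory": 128,
   "essential": true,
   "portMappings": [{"containerPort": 80, "hostPort": 80}]}
]'
# Run one copy of it somewhere in the cluster
aws ecs run-task --cluster MyCluster --task-definition nginx --count 1
```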
26. If we stop this EC2 instance, the task with the nginx container will be rescheduled (failover) to another host in the cluster!
27. EC2 Container Service / conclusion
+ With ECS we don’t need to administer master nodes; high availability of ECS is the responsibility of AWS engineers.
- I have not found a way to integrate with ELB, Auto Scaling and other Amazon services (maybe it’s under development now).
29. Kubernetes (k8s) key concepts
Node - a worker machine in Kubernetes (previously known as Minion).
Pod - the smallest unit: a colocated group of Docker containers.
Label - a key/value tag.
Replication controller - ensures that a specified number of pod "replicas" are running at any one time.
Service - provides a single, stable name and address for a set of pods; services act as basic load balancers.
31. Kubernetes / monitoring - Heapster
Heapster enables monitoring of clusters using cAdvisor.
32. #1 Kubernetes / Setup on AWS
step 1: install the AWS CLI and download Kubernetes
step 2: check your AWS credentials in ~/.aws/credentials
step 3: add env vars:
export PATH=$PATH:<path_to_untar_k8s_directory>/platforms/<os>/<platform>
export PATH=$PATH:<path_to_untar_k8s_directory>/cluster
export KUBERNETES_PROVIDER=aws
step 4: create a ‘kubernetes’ IAM role with EC2FullAccess
33. #2 Kubernetes / Setup on AWS
step 5: bring up the cluster (it takes about 5 minutes): kube-up.sh
- the script will provision a new VPC, 1 master and 4 nodes (minions) in us-west-2 (Oregon)
- it creates a keypair called "kubernetes" and reuses an IAM role also called "kubernetes"
- it creates an S3 bucket ‘kubernetes-staging-***’ and uploads the Salt provisioning scripts
- it creates CAFile, CertFile and KeyFile on your local computer
At the end of the script execution you will see the URL of the k8s master.
34. #3 Kubernetes / Setup on AWS
step 6: export KUBERNETES_MASTER=https://<generated_url_from_step_5>
Now the cluster is ready and we can manipulate it with kubectl.
You can find examples of replication controllers and services in the Kubernetes git repo:
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples
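Typical first commands against the new cluster (a sketch; the controller file name is hypothetical and would come from the examples directory above):

```shell
kubectl get nodes                        # list the minions
kubectl create -f nginx-controller.yaml  # create a replication controller
kubectl get pods
kubectl get services
```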
35. Kubernetes / conclusion
In my opinion, Kubernetes is the most progressive and feature-rich cluster management tool available today.
+ pluggable architecture (in the future you can easily replace Docker with another container engine)
+ self-healing (auto-restart, auto-replication)
+ Google Container Engine (Alpha) is powered by Kubernetes
+ supports integration with many cloud providers
+ declarative templates for all resources (JSON or YAML)