
Consuming Cinder from Docker


Slides from OpenStack Days East - Using Cinder as a Docker Volume Backend



  1. Consuming Cinder from Docker. John Griffith, SolidFire/NetApp, August 23, 2016
  2. About me
     • Contributing to OpenStack a while (longer than some, less than others)
     • Pretty passionate about core Cinder and things like the reference driver
     • Worked at a little storage startup called SolidFire, recently bought by NetApp
     • Have an AWESOME job where I get to work on OSS and always try new things
     • Can be opinionated and blunt at times… sorry about that
  3. Maybe you showed up because you heard rumors that I'd talk about Docker?
  4. OpenStack Cinder: Block Storage as a Service
     • An abstraction layer that provides a pool of block resources
     • Use backend storage from different vendors; you don't have to care or know what's actually serving up the blocks
     • Scale out… just keep plugging in back-ends
     • The scheduler can figure out where to place volumes for you
  5. It's like having an infinite number of disks that you can hot-plug in and out of your Instances (Photo Credit: Roger Smith)
  6. Only really need a few things
     • Create/Delete
     • Attach/Detach
     • Snapshot
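The minimal lifecycle on this slide can be sketched as a tiny in-memory stand-in for a block backend. This is purely illustrative (the class name and dict layout are my own, not Cinder's API): it just shows that create/delete, attach/detach and snapshot are all a basic consumer needs.

```python
import uuid

class FakeBlockBackend:
    """In-memory stand-in for a block-storage backend, illustrating the
    minimal volume lifecycle: create/delete, attach/detach, snapshot.
    (A real Cinder driver would talk to an actual storage array.)"""

    def __init__(self):
        self.volumes = {}    # id -> {"name", "size", "attached_to"}
        self.snapshots = {}  # id -> {"volume_id"}

    def create(self, name, size_gb):
        vol_id = str(uuid.uuid4())
        self.volumes[vol_id] = {"name": name, "size": size_gb, "attached_to": None}
        return vol_id

    def delete(self, vol_id):
        del self.volumes[vol_id]

    def attach(self, vol_id, host):
        # Record which host the volume is exported to.
        self.volumes[vol_id]["attached_to"] = host

    def detach(self, vol_id):
        self.volumes[vol_id]["attached_to"] = None

    def snapshot(self, vol_id):
        snap_id = str(uuid.uuid4())
        self.snapshots[snap_id] = {"volume_id": vol_id}
        return snap_id
```

Everything else Cinder offers (replication, backups, migration, …) layers on top of these primitives.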
  7. Of course there's more for those that want it
     • Replication
     • Consistency Groups (CGs)
     • Backups
     • Migration
     • Imports/Exports
     • Transfer ownership
     • Extend
     • … (This is me trying not to be opinionated and blunt… moving on)
  8. I was told this would be a Docker talk!!
  9. Yes, I'm going to talk about Docker
     • Docker is the best geek bait EVER!!!
     • "OpenStack in Containers"
     • "Container Orchestration in OpenStack"
     • "OpenStack on Kubernetes"
  10. Unicorns for EVERYONE
     • All sorts of interesting ideas and plans on the horizon
     • Let's bypass some of the hype and just talk about some cool stuff you can do today
     • Try to remember a little bit of the past while we're at it
  11. History repeats itself in tech
     • $NEW_THING is like pets vs. cattle
     • $NEW_THING needs better networking
     • $NEW_THING needs persistent storage
     • $NEW_THING is a different development paradigm
     • $NEW_THING is about small ephemeral services

     $ echo $NEW_THING
     OpenStack
     $ export NEW_THING=Containers
  12. Just like we heard in OpenStack, containers need networking and storage options
     • Volume plugin capability was introduced in Docker 1.8
     • It continues to mature
     • The list of vendors racing to provide a plugin is growing rapidly
     • Nobody wants to be late to the party, especially those that were late to Cinder
  13. Docker Volume Plugins: general things to know
     • Docker provides a simple Volume API
     • INCLUDES PROVISIONING!!!!!
     • The driver runs as a daemon, on the same node as the Docker Engine
     • The most common transport right now is a simple UNIX domain socket
     • JSON-RPC over HTTP POST
     • Works with Swarm, Engine and Compose
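The protocol on this slide is easy to picture: the Docker Engine POSTs JSON bodies to well-known paths on the plugin's socket, and the plugin answers with JSON where an empty "Err" means success. A minimal sketch of such a dispatcher (storage calls stubbed with a dict; a real driver would call Cinder here, and would serve this over the UNIX socket rather than as a plain function):

```python
import json

# Fake volume store standing in for the real backend.
_volumes = {}

def handle(path, body_json):
    """Dispatch one Docker volume-plugin request, returning a JSON reply.
    Paths follow the Docker volume plugin API (/Plugin.Activate,
    /VolumeDriver.Create, ...)."""
    req = json.loads(body_json)
    if path == "/Plugin.Activate":
        # Handshake: tell Docker which plugin APIs we implement.
        resp = {"Implements": ["VolumeDriver"]}
    elif path == "/VolumeDriver.Create":
        _volumes[req["Name"]] = {"Name": req["Name"], "Mountpoint": ""}
        resp = {"Err": ""}
    elif path == "/VolumeDriver.Get":
        vol = _volumes.get(req["Name"])
        resp = {"Volume": vol, "Err": ""} if vol else {"Err": "not found"}
    elif path == "/VolumeDriver.Remove":
        _volumes.pop(req["Name"], None)
        resp = {"Err": ""}
    else:
        resp = {"Err": "unsupported endpoint: " + path}
    return json.dumps(resp)
```

Note that Create really does provision: the plugin is expected to make the volume exist, not just mount something pre-existing.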
  14. So I wrote a Cinder plugin
     • Written in Go
     • Focused on JUST Cinder
     • Vendor neutral/independent
     • Open source
     • Contributors and feedback gladly welcomed
     • Anticipating/hoping for Cinder community support
  15. Can't I already do this? Yep, you can. There's cool stuff out there already. Adoption is the greatest compliment.
  16. Don't get me wrong
     • Some of those existing plugins that wrap up Cinder are pretty cool
     • Some offer additional benefits
     • Some might fit your use case better
     • Some you may have already invested in, with relationships with the contributing vendors
     • Do your thing, that's AWESOME
     • Don't hate, we're all in this together
  17. Brace yourselves, it's about to get terrifying… well, for a few people at least
  18. These plugins aren't under an umbrella
     • Docker plugins are NOT in a Docker repo
     • The Cinder Docker plugin isn't in an OpenStack repo
  19. Some OpenStack folks just had a stroke
     For now: https://github.com/j-griffith/cinder-docker-driver
     Licensed under the "unlicense"
     Potential for inclusion under OpenStack some day? Or maybe Docker?
  20. So how does this work?
     • It's not "much" different from how we do things with OpenStack/Nova
     • Create a volume
     • Attach a volume
     • It's all the same stuff we've been doing for years; we just change the consumer
     • Cinder really doesn't care what you're doing on the other side
     • By the way, we're talking Docker, but it doesn't have to be Docker either
  21. I have to give a shout out to Docker 1.12
     • Docker 1.12 was a HUGE step forward
     • The Swarm advancements are my favorite
     • I can deploy a Swarm cluster wicked fast
     • Swarm in OpenStack or a public cloud is stupid easy
  22. Recipe for a tasty Swarm cluster with persistent data
     Start with some peanut butter:
     • A basic OpenStack cloud: compute, networking and storage
     Mix in a bit of chocolate:
     • Docker 1.12
     Top it off with some frosting:
     • The Cinder Docker driver
  23. We'll use docker-machine with the OpenStack driver, because we can, and it works pretty well (Our OpenStack Cloud)
  24. We have our ingredients; here are the basic steps
     1. Use docker-machine to create 3 Nova instances and set up Docker
     2. Create a Swarm cluster
     3. Install, configure and start the cinder-docker-driver
     4. Deploy a Swarm service that creates and uses a Cinder volume
  25. Create our nodes: use env vars instead of args
     We'll use docker-machine and the built-in OpenStack driver for this.
     There are a LOT of arguments required on the CLI, so let's start by creating an env file rather than typing everything in:

     export OS_FLAVOR_ID=2
     export OS_DOMAIN_NAME=$OS_USER_DOMAIN_NAME
     export OS_IMAGE_ID=d5c276bc-cb70-42c4-9291-96f40a03a74c
     export OS_SSH_USER=ubuntu
     export OS_KEYPAIR_NAME=jdg
     export OS_PRIVATE_KEY_FILE=$HOME/.ssh/id_rsa
     export OS_TENANT_ID=$OS_PROJECT_ID
  26. Create our nodes
     This just does our "nova boot …" for us, creating the instances based on the env vars.
     It does a few additional things for us too, though:
     • Installs Docker
     • Configures and sets up certs for Docker
     • Verifies Docker is up and running
     • Creates a node entry in the docker-machine nodes db

     ➜ docker-machine create -d openstack swarm-1
     ➜ docker-machine create -d openstack swarm-2
     ➜ docker-machine create -d openstack swarm-3
  27. We can view our nodes using docker-machine

     ➜ docker-machine ls
     NAME      ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER    ERRORS
     swarm-1   -        openstack   Running   tcp://172.16.140.157:2376           v1.12.0
     swarm-2   -        openstack   Running   tcp://172.16.140.159:2376           v1.12.0
     swarm-3   -        openstack   Running   tcp://172.16.140.161:2376           v1.12.0
  28. Set a node up as a Swarm manager

     ➜ eval $(docker-machine env swarm-1)
     ➜ SWARM1_IP=$(docker-machine ip swarm-1)
     ➜ docker swarm init --advertise-addr $SWARM1_IP --listen-addr $SWARM1_IP:2377
     Swarm initialized: current node (5oi3h06yci5mvsau6czcbbxqu) is now a manager.

     To add a worker to this swarm, run the following command:

         docker swarm join --token SWMTKN-1-33zfeg2ppr9043o4itdn2cznwn7yuy7na1fqg2aduoemihw93o-3znh32dbpmb5goc8l1ia286it 172.16.140.157:2377

     To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
  29. Set our other two instances up as workers

     ➜ eval $(docker-machine env swarm-2)
     ➜ docker swarm join --token SWMTKN-1-33zfeg2ppr9043o4itdn2cznwn7yuy7na1fqg2aduoemihw93o-3znh32dbpmb5goc8l1ia286it 172.16.140.157:2377
     This node joined a swarm as a worker.

     ➜ eval $(docker-machine env swarm-3)
     ➜ docker swarm join --token SWMTKN-1-33zfeg2ppr9043o4itdn2cznwn7yuy7na1fqg2aduoemihw93o-3znh32dbpmb5goc8l1ia286it 172.16.140.157:2377
     This node joined a swarm as a worker.
  30. That's it, you now have a Swarm cluster!!!!
  31. Now it's time to install the Cinder driver on each node
     • Install the driver
     • Copy over a config file
     • Start the daemon
     • Restart Docker
  32. You can just use docker-machine from your laptop…

     ➜ for each in $(docker-machine ls -q); do docker-machine ssh $each "curl -sSL https://raw.githubusercontent.com/j-griffith/cinder-docker-driver/master/install.sh | sh -"; done
     ➜ for each in $(docker-machine ls -q); do docker-machine scp cdd.config.json $each:~/config.json; done
     ➜ for each in $(docker-machine ls -q); do docker-machine ssh $each "sudo cinder-docker-driver --config config.json > cdd.log 2>&1 &"; done
     ➜ for each in $(docker-machine ls -q); do docker-machine ssh $each "sudo service docker restart"; done
  33. About that install…
     The config file is just OpenStack creds.
     Creating a service file for the driver has made its way pretty far up the TODO list.

     {
       "Endpoint": "http://172.16.140.243:5000/v2.0",
       "Username": "jdg",
       "Password": "ABC123",
       "TenantID": "3dce5dd10b414ac1b942aba8ce8558e7"
     }
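Since the config is just a small JSON file of OpenStack creds, a driver can fail fast on a bad one instead of erroring at the first Cinder call. A minimal sketch of that idea (the function name and the hard requirement on all four keys are my assumptions, not the actual cinder-docker-driver behavior):

```python
import json

# Keys shown in the example config on the slide.
REQUIRED = ("Endpoint", "Username", "Password", "TenantID")

def load_config(text):
    """Parse a cinder-docker-driver style JSON config and reject it early
    if any expected credential field is missing or empty."""
    cfg = json.loads(text)
    missing = [k for k in REQUIRED if not cfg.get(k)]
    if missing:
        raise ValueError("config missing: " + ", ".join(missing))
    return cfg
```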
  34. Now you can do cool things
     Let's build the simple counter application:
     • A Redis container with a Cinder volume
     • A web front end to take user input
     We'll run this as a Swarm service, so we can do things like scale it, drain nodes and move the containers uninterrupted, all while persisting our Redis data.
  35. Create a Docker network first so the Swarm nodes have a layer to communicate on…

     ➜ eval $(docker-machine env swarm-1)
     ➜ docker network create demo-net
     bd45fad9911005ce2ff8e311a2738681d179589d8d06989a136e8020bc5a8155
  36. Launch our services, starting with the Redis service…

     ➜ eval $(docker-machine env swarm-1)
     ➜ docker service create --name redis --network demo-net --mount type=volume,src=counter-demo,dst=/data,volume-driver=cinder -p 6379:6379 redis

     This will:
     • Pull the Redis image if it's not available
     • Get/create the volume on the Cinder backend
     • Attach the volume to the Swarm node
     • Partition, format and mount the volume
     • Link the volume to the Redis container's /data directory
     • Start the Redis container
  37. You can see the attached volume on the Swarm node…

     ubuntu@swarm-3:~$ ls /dev/disk/by-path/
     ip-10.10.9.1:3260-iscsi-iqn.2010-01.com.solidfire:ayyb.uuid-59e99b3b-c7d6-45c2-924c-
     virtio-pci-0000:00:04.0  virtio-pci-0000:00:04.0-part1

     Same as we do for Nova compute nodes… nothing really different except we add the file system.
  38. Now, the web service/frontend…

     ➜ docker service create --name web --network demo-net -p 80:80 jgriffith/webbase

     This will:
     • Connect to our Redis container (regardless of which Swarm node it's on)
     • Expose port 80 on all of the Swarm nodes (access from any Swarm node IP)
     • Count input/clicks and store them in the Redis DB
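The counter app's logic is tiny: each click from the web front end becomes a Redis INCR, and because /data sits on a Cinder volume, the count survives container moves. A sketch of that logic, with a dict standing in for Redis so it runs anywhere (with the real redis-py client this would just be `r.incr("hits")`; the key name "hits" is my assumption, not the demo's actual key):

```python
class FakeRedis:
    """Dict-backed stand-in for a Redis connection, supporting just INCR."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        # Redis INCR: create at 0 if absent, then add 1, returning the new value.
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

def handle_click(db):
    """What the web frontend does per user click: bump the shared counter."""
    return db.incr("hits")
```

Swapping FakeRedis for a real client pointed at the `redis` service name on `demo-net` is the whole difference between this sketch and the demo.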
