Puppet and Apache CloudStack

Puppet is ideal for abstracting away the configuration of machines. In the time since Puppet arrived on the scene, IaaS has started to creep into the mainstream. Now, instead of just managing the configuration within a machine, the machine state itself can be configured, and even broken out to manage the configuration of all the deployed instances in a datacenter. We'll explore using Apache CloudStack to do so, and touch on other applicable platforms as well.

David Nalley
Committer/PMC member, Apache CloudStack
David is a recovering sysadmin who spent a year in operations before starting to work on cloudy things. He's currently employed by Citrix in the Open Source Business Office to spend his time working on Apache CloudStack. In addition to CloudStack he's been involved in a number of other open source projects, including Zenoss and the Fedora Project.

Slide notes (what gets defined when provisioning an instance):
  • Machine names
  • A human readable description
  • Disk image
  • Amount of CPU/RAM
  • Firewall Rules
  • Stuff for config mgmt.
  • The only new item here is 'networks'; previously a default network was assigned.
  • Transcript

    • 1. Infrastructure as code with Puppet and Apache CloudStack David Nalley ke4qqq@apache.org @ke4qqq
    • 2. #whoami • Apache Software Foundation Member • Apache CloudStack PMC Member • Recovering Sysadmin • Fedora Project Contributor • Zenoss contributor • Employed by Citrix in the Open Source Business Office
    • 3. Setting the stage Apache CloudStack is... ● an open source IaaS platform ● proven in production at massive scale ● awesome
    • 4. Gorgeous UI
    • 5. API ● Native: http://cloudstack.apache.org/docs/api ● EC2
    • 6. IaaS removes one constraint No longer waiting days or weeks to get a VM provisioned
    • 7. but introduces another... Now you have to get a machine configured in a timely manner.
    • 8. Self service ● UI ● API ● Some external tool
    • 9. People provision stuff... ● Not ops folks ● Often not familiar with environmental intricacies ● Don't care
    • 10. Baseline can be important....
    • 11. Classification Problem: we dynamically spin up 1-500 VMs at any given time. How do we decide which configurations apply?
    • 12. Classification The wrong way - dedicated images for each purpose
    • 13. Classification: editing nodes.pp
        node 'foo-356.cloud.com' { include httpd }
    • 14. Classification: globbing
        node 'mysql*' { include mysqld }
    • 15. Classification: everything is default
        # the default node is the bare word 'default', not a quoted hostname
        node default { include httpd }
    • 16. Classification: External Node Classifier
    • 18. Classification: Facts
        class base {
          # 'fact' is a placeholder for the name of your custom fact
          case $::fact {
            'httpd':     { include httpd }
            'otherrole': { include nginx }
          }
        }
    • 19. Classification - One Solution ● During instance provisioning define metadata. ● Custom fact for that metadata ● Case statement based on that fact
    • 20. Example Metadata role=webserver location=datacenter1 environment=production
    • 21. Corresponding manifest
        class base {
          case $::fact {
            'webserver': { include httpd }
            'database':  { include postgresql }
          }
        }
    • 23. Links, et al. ● Fact: http://s.apache.org/acs_userdata ● Blog with details: http://s.apache.org/acs_userdata2
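      Pulling slides 19 through 23 together: a minimal sketch of metadata-driven classification. The fact name below ($::role) is an assumption modeled on the metadata keys from slide 20; the fact linked above is the authoritative implementation.
        class base {
          # $::role is assumed to be populated by a custom fact that reads
          # the instance's CloudStack user data (see s.apache.org/acs_userdata)
          case $::role {
            'webserver': { include httpd }
            'database':  { include postgresql }
            default:     { notify { "Unclassified role '${::role}' on ${::hostname}": } }
          }
        }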
    • 24. Video - go watch it ● I only have 45 minutes, so I can't delve into everything; you should watch the video, it's great. ● http://youtu.be/c8YWctfOpwo
    • 26. And then there was a knife plugin The folks at Edmunds.com wrote a knife plugin for CloudStack. The plugin can define an application stack, potentially hundreds of interrelated nodes, and provision it with a single knife command. https://github.com/cloudstack-extras/knife-cloudstack
    • 27. Deploying a machine with knife
        knife cs server create
    • 28.   {
              "name": "hadoop_cluster_a",
              "description": "A small hadoop cluster with hbase",
              "version": "1.0",
              "environment": "production",
              "servers": [
                {
                  "name": "zookeeper-a, zookeeper-b, zookeeper-c",
                  "description": "Zookeeper nodes",
                  "template": "rhel-5.6-base",
                  "service": "small",
                  "port_rules": "2181",
                  "run_list": "role[cluster_a], role[zookeeper_server]",
                  "actions": [
                    { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] }
                  ]
                },
                {
                  "name": "hadoop-master",
                  "description": "Hadoop master node",
                  "template": "rhel-5.6-base",
                  "service": "large",
                  "networks": "app-net, storage-net",
                  "port_rules": "50070, 50030, 60010",
                  "run_list": "role[cluster_a], role[hadoop_master], role[hbase_master]"
                },
                {
                  "name": "hadoop-worker-a hadoop-worker-b hadoop-worker-c",
                  "description": "Hadoop worker nodes",
                  "template": "rhel-5.6-base",
                  "service": "medium",
                  "port_rules": "50075, 50060, 60030",
                  "run_list": "role[cluster_a], role[hadoop_worker], role[hbase_regionserver]",
                  "actions": [
                    { "knife_ssh": ["role:hadoop_master", "sudo chef-client"] },
                    { "http_request": "http://${hadoop-master}:50070/index.jsp" }
                  ]
                }
              ]
            }
    • 30.   {
              "name": "hadoop_cluster_a",
              "description": "A small hadoop cluster with hbase",
              "version": "1.0",
              "environment": "production",
    • 33. "servers": [ { "name": "zookeeper-a, zookeeper-b, zookeeper-c", "description": "Zookeeper nodes", "template": "rhel-5.6-base", "service": "small", "port_rules": "2181", "run_list": "role[cluster_a], role[zookeeper_server]", "actions": [ { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] } ] },
    • 34. "servers": [ { "name": "zookeeper-a, zookeeper-b, zookeeper-c", "description": "Zookeeper nodes", "template": "rhel-5.6-base", "service": "small", "port_rules": "2181", "run_list": "role[cluster_a], role[zookeeper_server]", "actions": [ { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] } ] },
    • 35. "servers": [ { "name": "zookeeper-a, zookeeper-b, zookeeper-c", "description": "Zookeeper nodes", "template": "rhel-5.6-base", "service": "small", "port_rules": "2181", "run_list": "role[cluster_a], role[zookeeper_server]", "actions": [ { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] } ] },
    • 36. "servers": [ { "name": "zookeeper-a, zookeeper-b, zookeeper-c", "description": "Zookeeper nodes", "template": "rhel-5.6-base", "service": "small", "port_rules": "2181", "run_list": "role[cluster_a], role[zookeeper_server]", "actions": [ { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] } ] },
    • 37. "servers": [ { "name": "zookeeper-a, zookeeper-b, zookeeper-c", "description": "Zookeeper nodes", "template": "rhel-5.6-base", "service": "small", "port_rules": "2181", "run_list": "role[cluster_a], role[zookeeper_server]", "actions": [ { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] } ] },
    • 38. "servers": [ { "name": "zookeeper-a, zookeeper-b, zookeeper-c", "description": "Zookeeper nodes", "template": "rhel-5.6-base", "service": "small", "port_rules": "2181", "run_list": "role[cluster_a], role[zookeeper_server]", "actions": [ { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] } ] },
    • 39. "servers": [ { "name": "zookeeper-a, zookeeper-b, zookeeper-c", "description": "Zookeeper nodes", "template": "rhel-5.6-base", "service": "small", "port_rules": "2181", "run_list": "role[cluster_a], role[zookeeper_server]", "actions": [ { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] } ] },
    • 42.   {
              "name": "hadoop-master",
              "description": "Hadoop master node",
              "template": "rhel-5.6-base",
              "service": "large",
              "networks": "app-net, storage-net",
              "port_rules": "50070, 50030, 60010",
              "run_list": "role[cluster_a], role[hadoop_master], role[hbase_master]"
            },
    • 45.   {
              "name": "hadoop-worker-a hadoop-worker-b hadoop-worker-c",
              "description": "Hadoop worker nodes",
              "template": "rhel-5.6-base",
              "service": "medium",
              "port_rules": "50075, 50060, 60030",
              "run_list": "role[cluster_a], role[hadoop_worker], role[hbase_regionserver]",
              "actions": [
                { "knife_ssh": ["role:hadoop_master", "sudo chef-client"] },
                { "http_request": "http://${hadoop-master}:50070/index.jsp" }
              ]
            }
    • 47. Deploy that Hadoop cluster with
        knife cs stack create hadoop_cluster_a
    • 48. I was jealous....
    • 49. Then at FOSDEM 2012 ● A CloudStack user showed me Puppet types and resources for OpenNebula. ● https://puppetlabs.com/blog/puppetizing-opennebula/ ● They indicated they wanted this awesomeness for CloudStack....
    • 50. Why? ● They wanted each of their application stacks defined in Puppet, so that not only the software configuration on a machine but the machines themselves would be managed by Puppet. ● Automated deployment of test environments that are exactly the same. ● It moves beyond machine configuration to entire infrastructure configuration.
    • 51. What we are used to ● Puppet _defines_ the configuration within the machine
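      For contrast, the familiar in-machine pattern looks like this (a generic example, not from the slides):
        package { 'httpd':
          ensure => installed,
        }
        service { 'httpd':
          ensure  => running,
          enable  => true,
          require => Package['httpd'],  # install before managing the service
        }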
    • 52. What we want ● Puppet _defines_ the machine. ● Puppet _defines_ a collection of machines. ● Puppet _defines_ the machines, networks, and the rest of the infrastructure.
    • 53. Then at PuppetConf ● There were Google Compute Engine types and resources for Puppet. ● Dan Bode gave a presentation showing off the work he had done; that presentation is worth seeing: ● http://www.slideshare.net/bodepd/google-compute-presentati
    • 54. And then for Christmas ● Puppet types and providers arrived, courtesy of Dan Bode ● https://github.com/bodepd/cloudstack_resources
    • 55. How does this work?
        cloudstack_instance { 'foo1':
          ensure  => present,
          flavor  => 'Small Instance',
          zone    => 'FMT-ACS-001',
          image   => 'CentOS 5.6(64-bit) no GUI (XenServer)',
          network => 'puppetlabs-network',
          # domain
          # account
          # hostname
        }
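      Teardown presumably works through the same property; a minimal sketch, assuming the type's ensure attribute also accepts absent, as is conventional for Puppet types (not shown on the slides):
        cloudstack_instance { 'foo1':
          ensure => absent,  # assumption: the provider supports deprovisioning
        }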
    • 56. Setting defaults
        Cloudstack_instance {
          image   => 'CentOS 6.3',
          flavor  => 'M1.medium',
          zone    => 'San Jose',
          network => 'davids_net',
          keypair => 'david_keys',
        }
        # a resource declaration needs a title; the slide omitted one ('foo2' added here)
        cloudstack_instance { 'foo2':
          ensure => $::ensure,
          group  => 'role=db',
        }
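      Resource defaults here are ordinary Puppet semantics: the capitalized Cloudstack_instance block sets defaults for the scope, and any parameter given on an individual resource overrides them. A sketch (the instance name 'foo3' and the larger flavor are illustrative, not from the slides):
        cloudstack_instance { 'foo3':
          ensure => present,
          flavor => 'M1.large',  # overrides the 'M1.medium' default above
          group  => 'role=db',
        }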
    • 57. A simple stack
        class my_web_stack {
          cloudstack_instance { 'foo4':
            ensure => present,
            group  => 'role=apache',
          }
          cloudstack_instance { 'foo5':
            ensure => present,
            group  => 'role=db',
          }
        }
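      If the database instance should exist before the web tier, Puppet's standard metaparameters apply to these resources as well; a sketch using require (the ordering is illustrative, not from the slides):
        class my_web_stack {
          cloudstack_instance { 'foo5':
            ensure => present,
            group  => 'role=db',
          }
          cloudstack_instance { 'foo4':
            ensure  => present,
            group   => 'role=apache',
            require => Cloudstack_instance['foo5'],  # provision the db instance first
          }
        }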
    • 58. Questions
    • 59. Contact ● Project – http://cloudstack.apache.org – #cloudstack on irc.freenode.net ● Me – ke4qqq on irc.freenode.net – ke4qqq@apache.org
