Infrastructure as code with
Puppet and Apache CloudStack
David Nalley
ke4qqq@apache.org
@ke4qqq
#whoami
• Apache Software Foundation Member
• Apache CloudStack PMC Member
• Recovering Sysadmin
• Fedora Project Contributor
• Zenoss contributor
• Employed by Citrix in the Open Source Business Office
Setting the stage
Apache CloudStack is...
● an open source IaaS platform
● proven in production at massive scale
● awesome
Gorgeous UI
API
● Native: http://cloudstack.apache.org/docs/api
● EC2
IaaS removes one constraint
No longer waiting days or weeks to get a VM provisioned
but introduces another...
Now you have to get a machine configured in a timely
manner.
Self service
● UI
● API
● Some external tool
People provision stuff...
Not ops folks
Often not familiar with environmental intricacies
Don't care
Baseline can be important....
Classification
Problem: We spin up, dynamically, 1-500 VMs at any given time - how do
we decide which configurations apply?
Classification
The wrong way - dedicated images for each purpose
Classification
editing nodes.pp
node 'foo-356.cloud.com' {
  include httpd
}
Classification
globbing (Puppet matches node names exactly or by regex, so the glob
becomes a regex node definition)
node /^mysql/ {
  include mysqld
}
Classification
Everything is default
node default {
  include httpd
}
Classification
External Node Classifier
An ENC is an external executable that the puppet master invokes with a
node's name; it answers with that node's classes (and parameters) as YAML.
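A minimal ENC sketch, for concreteness - the script name and the
name-to-class mapping are hypothetical, but the exec terminus and the
YAML contract are standard Puppet:

#!/bin/sh
# Minimal ENC sketch. Puppet runs this with the node name as $1 and
# expects YAML on stdout. Enable it in puppet.conf on the master:
#   node_terminus  = exec
#   external_nodes = /usr/local/bin/cloudstack_enc.sh
# The mapping below is illustrative only.
case "$1" in
  web-*) printf 'classes:\n  - httpd\n' ;;
  db-*)  printf 'classes:\n  - postgresql\n' ;;
  *)     printf 'classes:\n  - base\n' ;;
esac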
Classification
Facts
class base {
  case $::fact {
    'httpd': {
      include httpd
    }
    'otherrole': {
      include nginx
    }
  }
}
Classification - One Solution
● During instance provisioning define metadata.
● Custom fact for that metadata (sketched below)
● Case statement based on that fact
Example Metadata
role=webserver
location=datacenter1
environment=production
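A sketch of the custom-fact side (see the linked fact for the real
thing). This assumes the instance's user-data was set to exactly those
key=value lines, and that the CloudStack virtual router - the instance's
DHCP server - serves user-data at /latest/user-data:

#!/bin/sh
# External fact sketch: install as an executable in /etc/facter/facts.d/
# (Facter 1.7+). It prints the user-data's key=value lines, which Facter
# then exposes as facts such as $::role and $::location.
# The lease-file path varies by distro - an assumption here.
ROUTER=$(awk '/dhcp-server-identifier/ { gsub(/;/,""); print $3; exit }' \
  /var/lib/dhclient/dhclient*.leases 2>/dev/null)
[ -n "$ROUTER" ] && curl -s "http://${ROUTER}/latest/user-data"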
Corresponding manifest
class base {
  # $::role is the custom fact carrying the "role" metadata value
  case $::role {
    'webserver': {
      include httpd
    }
    'database': {
      include postgresql
    }
  }
}
Links, et al.
● Fact:
http://s.apache.org/acs_userdata
● Blog with details:
http://s.apache.org/acs_userdata2
Video - go watch it
● I only have 45 minutes, so I can't delve
into everything; you should watch the
video - it's great.
● http://youtu.be/c8YWctfOpwo
And then there was a knife-plugin
The folks at Edmunds.com wrote a knife plugin for
CloudStack.
The knife plugin had the ability to define an application stack,
potentially hundreds of interrelated nodes, and provision it all with a
single knife command.
https://github.com/cloudstack-extras/knife-cloudstack
Deploying a machine with knife
~ knife cs server create
Defining an application stack
{
  "name": "hadoop_cluster_a",
  "description": "A small hadoop cluster with hbase",
  "version": "1.0",
  "environment": "production",
  "servers": [
    {
      "name": "zookeeper-a, zookeeper-b, zookeeper-c",
      "description": "Zookeeper nodes",
      "template": "rhel-5.6-base",
      "service": "small",
      "port_rules": "2181",
      "run_list": "role[cluster_a], role[zookeeper_server]",
      "actions": [
        { "knife_ssh": ["role:zookeeper_server", "sudo chef-client"] }
      ]
    },
    {
      "name": "hadoop-master",
      "description": "Hadoop master node",
      "template": "rhel-5.6-base",
      "service": "large",
      "networks": "app-net, storage-net",
      "port_rules": "50070, 50030, 60010",
      "run_list": "role[cluster_a], role[hadoop_master], role[hbase_master]"
    },
    {
      "name": "hadoop-worker-a hadoop-worker-b hadoop-worker-c",
      "description": "Hadoop worker nodes",
      "template": "rhel-5.6-base",
      "service": "medium",
      "port_rules": "50075, 50060, 60030",
      "run_list": "role[cluster_a], role[hadoop_worker], role[hbase_regionserver]",
      "actions": [
        { "knife_ssh": ["role:hadoop_master", "sudo chef-client"] },
        { "http_request": "http://${hadoop-master}:50070/index.jsp" }
      ]
    }
  ]
}
The stack definition captures the machine names, a human-readable
description, the disk image (template), the amount of CPU/RAM (service),
firewall rules (port_rules), and the config-management hookup (run_list,
actions). The only new element here is "networks"; previously a default
network was assigned.
Deploy that Hadoop cluster with
knife cs stack create hadoop_cluster_a
I was jealous....
Then at FOSDEM 2012
● A CloudStack user showed me Puppet types and resources
for OpenNebula.
● https://puppetlabs.com/blog/puppetizing-opennebula/
● They indicated they wanted this awesomeness for
CloudStack....
Why?
● They wanted to define each of their application
stacks in Puppet, so that not only the software
configuration on each machine, but the machines
themselves, would be configured by Puppet.
● Automated deployment of test environments that are
exactly the same
● Really gets beyond machine configuration to entire
infrastructure configuration
What we are used to
● Puppet _defines_ the configuration
within the machine
What we want
● Puppet _defines_ the machine.
● Puppet _defines_ a collection of
machines.
● Puppet _defines_ the machines,
networks, and the rest of the
infrastructure.
Then at PuppetConf
● There were Google Compute
Engine types and resources for
Puppet.
● Dan Bode gave a presentation showing off the work he had
done... that presentation is worth seeing...
● http://www.slideshare.net/bodepd/google-compute-presentati
And then for Christmas
● Puppet types and providers arrived - courtesy of Dan
Bode
● https://github.com/bodepd/cloudstack_resources
How does this work?
cloudstack_instance { 'foo1':
  ensure  => present,
  flavor  => 'Small Instance',
  zone    => 'FMT-ACS-001',
  image   => 'CentOS 5.6(64-bit) no GUI (XenServer)',
  network => 'puppetlabs-network',
  # domain
  # account
  # hostname
}
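Since cloudstack_instance is an ordinary Puppet type, the usual resource
tooling should apply - a sketch, assuming the provider implements
instance listing and ensure => absent:

# Inspect the instances the provider can see:
puppet resource cloudstack_instance
# Tear one down ad hoc:
puppet resource cloudstack_instance foo1 ensure=absent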
Setting defaults
# A capitalized, untitled resource reference sets defaults
# for every cloudstack_instance in scope.
Cloudstack_instance {
  image   => 'CentOS 6.3',
  flavor  => 'M1.medium',
  zone    => 'San Jose',
  network => 'davids_net',
  keypair => 'david_keys',
}
cloudstack_instance { 'foo3':  # 'foo3' is a hypothetical title; the
                               # original slide omitted one
  ensure => $::ensure,
  group  => 'role=db',
}
A simple stack
class my_web_stack {
  cloudstack_instance { 'foo4':
    ensure => present,
    group  => 'role=apache',
  }
  cloudstack_instance { 'foo5':
    ensure => present,
    group  => 'role=db',
  }
}
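A usage sketch - run from a machine that has the cloudstack_resources
module installed and CloudStack API credentials configured (how the
credentials are supplied is module-specific and not shown here):

# Apply the stack; Puppet calls the CloudStack API to converge it.
puppet apply -e 'include my_web_stack'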
Questions
Contact
● Project
– http://cloudstack.apache.org
– #cloudstack on irc.freenode.net
● Me
– ke4qqq on irc.freenode.net
– ke4qqq@apache.org
Puppet is ideal for abstracting away the configuration of machines. In the time since Puppet arrived on the scene, IaaS has started to creep into the mainstream. Now, instead of just managing the configuration in the machine, the machine state itself can be configured, and even broken out to manage the configuration of all the deployed instances in a datacenter. We'll explore using Apache CloudStack to do so, and talk about other applicable platforms as well.

David Nalley
Committer/PMC member, Apache CloudStack
David is a recovering sysadmin who spent a year in operations before starting to work on cloudy things. He's currently employed by Citrix in the Open Source Business Office to spend his time working on Apache CloudStack. In addition to CloudStack he's been involved in a number of other open source projects, including Zenoss and the Fedora Project.
