fog or: How I Learned to Stop Worrying and Love the Cloud

Wesley Beary, Cloud Architect at Engine Yard
geemus (Wesley Beary)
web: github.com/geemus
twitter: @geemus
employer and sponsor
web: engineyard.com
twitter: @engineyard
CLOUD - API driven, on demand services
core: compute, dns, storage
also: kvs, load balance, ...
What?
on demand - only pay for what you actually use

flexible - add and remove resources in minutes (instead of weeks)

repeatable - code, test, repeat

resilient - build better systems with transient resources
Why Worry?
option overload - which provider/service should I use

expertise - each service has yet another knowledge silo

tools - vastly different API, quality, maintenance, etc

standards - slow progress and differing interpretations
Ruby cloud services
web: github.com/geemus/fog
twitter: @fog
Why?
portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...

powerful - compute, dns, storage, collections, models, mocks, requests, ...

established - 92k downloads, 1112 followers, 141 forks, 67 contributors, me, ...

Fog.mock! - faster, cheaper, simulated cloud behavior
Who?
libraries - carrierwave, chef, deckard, gaff, gemcutter, ...

products - DevStructure, Engine Yard, iSwifter, OpenFeint, RowFeeder, ...
Interactive Bit!
cloud
fog
What?
That’s great and all but I don’t have a use case...

uptime - because who wants a busted web site?
Setup
  geymus ~ ⌘ gem install fog
              or
geymus ~ ⌘ sudo gem install fog
Get Connected
credentials = {
  :provider           => 'Rackspace',
  :rackspace_api_key  => RACKSPACE_API_KEY,
  :rackspace_username => RACKSPACE_USERNAME
}

# setup a connection to the service
compute = Fog::Compute.new(credentials)
Boot that Server
server_data = compute.create_server(
  1,
  49
).body['server']

until compute.get_server_details(
  server_data['id']
).body['server']['status'] == 'ACTIVE'
end

commands = [
  %{'mkdir .ssh'},
  %{'echo #{File.read('~/.ssh/id_rsa.pub')} >> ~/.ssh/authorized_keys'},
  %{passwd -l root},
]

Net::SSH.start(
  server_data['addresses'].first,
  'root',
  :password => server_data['password']
) do |ssh|
  commands.each do |command|
    ssh.open_channel do |ssh_channel|
      ssh_channel.request_pty
      ssh_channel.exec(%{bash -lc '#{command}'})
      ssh.loop
    end
  end
end
Worry!
arguments - what goes where, what does it mean?

portability - most of this will only work on Rackspace

disservice - back to square one, but with tools in hand
Bootstrap
server_attributes = {
  :image_id         => '49',
  :private_key_path => PRIVATE_KEY_PATH,
  :public_key_path  => PUBLIC_KEY_PATH
}

# boot server and setup ssh keys
server = compute.servers.bootstrap(server_attributes)
Servers?
compute.servers # list servers, same as #all

compute.servers.get(1234567890) # server by id

compute.servers.reload # update to latest

compute.servers.new(attributes) # local model

compute.servers.create(attributes) # remote model
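Because the collection returns model objects, the usual lifecycle reads naturally; a minimal sketch, assuming the Rackspace connection above (attribute values are placeholders, and exact attribute names vary a bit by provider):

```ruby
# create a server, wait for it to come up, then throw it away
server = compute.servers.create(
  :name      => 'uptime-check', # placeholder values
  :flavor_id => 1,
  :image_id  => '49'
)
server.wait_for { ready? } # poll the provider until the server reports ready
server.reload              # refresh attributes from the provider
server.destroy             # clean up when finished
```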
ping
# ping target 10 times
ssh_results = server.ssh("ping -c 10 #{target}")
stdout = ssh_results.first.stdout

# parse result, last line is summary
# round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms
stats = stdout.split("\n").last.split(' ')[-2]
min, avg, max, stddev = stats.split('/')

NOTE: most complex code was string parsing!?!
cleanup
# shutdown the server
server.destroy

# return the data as a hash
{
  :min    => min,
  :avg    => avg,
  :max    => max,
  :stddev => stddev
}
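Put together, the whole uptime check fits in one small method; a minimal sketch assembling the pieces above (the credential and key-path constants are assumed to be defined as in the earlier slides):

```ruby
require 'fog'

# boot a throwaway server, ping a target from it, return the timing summary
def ping_from_cloud(target)
  compute = Fog::Compute.new(
    :provider           => 'Rackspace',
    :rackspace_api_key  => RACKSPACE_API_KEY,
    :rackspace_username => RACKSPACE_USERNAME
  )

  server = compute.servers.bootstrap(
    :image_id         => '49',
    :private_key_path => PRIVATE_KEY_PATH,
    :public_key_path  => PUBLIC_KEY_PATH
  )

  stdout = server.ssh("ping -c 10 #{target}").first.stdout
  stats  = stdout.split("\n").last.split(' ')[-2]
  min, avg, max, stddev = stats.split('/')

  { :min => min, :avg => avg, :max => max, :stddev => stddev }
ensure
  server.destroy if server # always shut the server down, even on failure
end
```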
Next!
-server_data = compute.create_server(
+compute.import_key_pair(
+  'id_rsa.pub',
+  File.read('~/.ssh/id_rsa.pub')
+)
+
+compute.authorize_security_group_ingress(
+  'CidrIp'      => '0.0.0.0/0',
+  'FromPort'    => 22,
+  'IpProtocol'  => 'tcp',
+  'GroupName'   => 'default',
+  'ToPort'      => 22
+)
+
+server_data = compute.run_instances(
+  'ami-1a837773',
   1,
-  49
-).body['server']
+  1,
+  'InstanceType'  => 'm1.small',
+  'KeyName'       => 'id_rsa.pub',
+  'SecurityGroup' => 'default'
+).body['instancesSet'].first

-until compute.get_server_details(
-  server_data['id']
-).body['server']['status'] == 'ACTIVE'
+until compute.describe_instances(
+  'instance-id' => server_data['instanceId']
+).body['reservationSet'].first['instancesSet'].first['instanceState']['name'] == 'running'
 end

+sleep(300)
+
 Net::SSH.start(
-  server_data['addresses'].first,
-  'root',
-  :password => server_data['password']
+  server_data['ipAddress'],
+  'ubuntu',
+  :key_data => [File.read('~/.ssh/id_rsa')]
 ) do |ssh|
   commands = [
     %{'mkdir .ssh'},
geopinging v1
# specify a different provider
credentials = {
  :provider              => 'AWS',
  :aws_access_key_id     => AWS_ACCESS_KEY_ID,
  :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
}

server_attributes = {
  :image_id         => 'ami-1a837773',
  :private_key_path => PRIVATE_KEY_PATH,
  :public_key_path  => PUBLIC_KEY_PATH,
  :username         => 'ubuntu'
}
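With only the credentials and server attributes swapped out, the rest of the earlier Rackspace code runs unchanged; a minimal sketch:

```ruby
# same calls as before, now pointed at AWS
compute = Fog::Compute.new(credentials)
server  = compute.servers.bootstrap(server_attributes)
```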
geopinging v2
# specify a different aws region
# ['ap-southeast-1', 'eu-west-1', 'us-west-1']
credentials.merge!({
  :region => 'eu-west-1'
})
geopinging v...
portable - AWS, Bluebox, Brightbox, Rackspace, Slicehost, Terremark, ...

lather, rinse, repeat
How?
That is awesome, but   how did you...
exploring
geymus ~ ⌘ fog

  To run as 'default', add the following to ~/.fog

:default:
  :aws_access_key_id:     INTENTIONALLY_LEFT_BLANK
  :aws_secret_access_key: INTENTIONALLY_LEFT_BLANK
  :public_key_path:       INTENTIONALLY_LEFT_BLANK
  :private_key_path:      INTENTIONALLY_LEFT_BLANK
  :rackspace_api_key:     INTENTIONALLY_LEFT_BLANK
  :rackspace_username:    INTENTIONALLY_LEFT_BLANK
  ...
sign posts
geymus ~ ⌘ fog
  Welcome to fog interactive!
  :default credentials provide AWS and Rackspace
>> providers
[AWS, Rackspace]
>> Rackspace.collections
[:directories, :files, :flavors, :images, :servers]
>> Rackspace[:compute]
#<Fog::Rackspace::Compute ...>
>> Rackspace[:compute].requests
[:confirm_resized_server, ..., :update_server]
what are those?
provider   => [AWS, Rackspace, Zerigo, ...]

service    => [Compute, DNS, Storage, ...]

collection => [flavors, images, servers, ...]

model      => [flavor, image, server, ...]

request    => [describe_instances, run_instances, ...]
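In code, each level hangs off the one above it; a minimal sketch walking the hierarchy with an AWS connection (the instance id is a placeholder):

```ruby
compute = Fog::Compute.new(credentials) # provider + service
servers = compute.servers               # collection
server  = servers.get('i-00000000')     # model, looked up by id (placeholder)
compute.describe_instances              # raw request, returns an Excon::Response
```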
requests?
>> Rackspace[:compute].list_servers
#<Excon::Response:0x________
@body = {
  "servers" => []
},
@headers = {
  "X-PURGE-KEY"=>"/______/servers",
  ...,
  "Connection"=>"keep-alive"
},
@status=200>
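Outside the interactive console the same request is available on the connection object; a minimal sketch, assuming the Rackspace compute connection from the earlier slides:

```ruby
response = compute.list_servers                # returns an Excon::Response
response.status                                # => 200
response.body['servers'].map { |s| s['name'] } # work with the parsed hash directly
```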
sanity check
>> Rackspace.servers.select {|server| server.ready?}
 <Fog::Rackspace::Compute::Servers
  filters={}
  []
 >
>> AWS.servers.select {|server| server.ready?}
 <Fog::AWS::Compute::Servers
  []
 >
>> exit
finding images
>> Rackspace.images.table([:id, :name])
 +------+--------------------------+
 | id   | name                     |
 +------+--------------------------+
 | 49   | Ubuntu 10.04 LTS (lucid) |
 +------+--------------------------+
 ...
>> AWS.images # I use alestic.com listing
...
exploring...
It takes forever!

It’s so expensive!

A warm welcome for Fog.mock!
Mocks!
geymus ~ ⌘ FOG_MOCK=true fog
  or
require 'fog'
Fog.mock!
simulation
Most functions just work!

Unimplemented mocks? Errors keep you on track.

Tests run against both, so it is either consistent or a bug.
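A minimal sketch of the mocked workflow (credential values can be anything once Fog.mock! is on, which is part of the appeal):

```ruby
require 'fog'

Fog.mock! # every request after this is simulated in memory

compute = Fog::Compute.new(
  :provider              => 'AWS',
  :aws_access_key_id     => 'fake',
  :aws_secret_access_key => 'fake'
)

compute.servers # => empty collection, no real API calls made
```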
Back to Business
I have a bunch of data, now what?

storage - aggregating cloud data
Get Connected
credentials = {
  :provider              => 'AWS',
  :aws_access_key_id     => AWS_ACCESS_KEY_ID,
  :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
}

# setup a connection to the service
storage = Fog::Storage.new(credentials)
directories
# create a directory
directory = storage.directories.create(
  :key    => directory_name,
  :public => true
)
files
# store the file
file = directory.files.create(
  :body   => File.open(path),
  :key    => name,
  :public => true
)

# return the public url for the file
file.public_url
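For the uptime example, the results hash can simply be serialized and stored; a minimal sketch, assuming the storage connection above and a `results` hash like the one returned from the ping run:

```ruby
require 'json'

# write one timestamped result per check so they can be aggregated later
file = directory.files.create(
  :body   => results.to_json,
  :key    => "pings/#{Time.now.to_i}.json",
  :public => true
)

puts file.public_url # shareable link to the raw result
```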
geostorage
# specify a different provider
credentials = {
  :provider           => 'Rackspace',
  :rackspace_api_key  => RACKSPACE_API_KEY,
  :rackspace_username => RACKSPACE_USERNAME
}
cleanup
geymus ~ ⌘ fog
...
>> directory = AWS.directories.get(DIRECTORY_NAME)
...
>> directory.files.each {|file| file.destroy}
...
>> directory.destroy
...
>> exit
geoaggregating
portable - AWS, Google, Local, Rackspace

lather, rinse, repeat
Phase 3: Profit
I’ve got the data, but how do I freemium?

dns - make your cloud (premium) accessible
Get Connected
credentials = {
  :provider     => 'Zerigo',
  :zerigo_email => ZERIGO_EMAIL,
  :zerigo_token => ZERIGO_TOKEN
}

# setup a connection to the service
dns = Fog::DNS.new(credentials)
zones
# create a zone
zone = dns.zones.create(
  :domain => domain_name,
  :email  => "admin@#{domain_name}"
)
records
# create a record
record = zone.records.create(
  :ip   => '1.2.3.4',
  :name => "#{customer_name}.#{domain_name}",
  :type => 'A'
)
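Wired to the compute example, each new premium customer can get a subdomain pointing at a freshly booted server; a minimal sketch, assuming the `dns` and `compute` connections above (`customer_name` and `domain_name` are placeholders, and `public_ip_address` assumes the AWS server model; other providers expose addresses slightly differently):

```ruby
# boot a server for the customer, then point their subdomain at it
server = compute.servers.bootstrap(server_attributes)

zone = dns.zones.create(
  :domain => domain_name,
  :email  => "admin@#{domain_name}"
)

zone.records.create(
  :ip   => server.public_ip_address,
  :name => "#{customer_name}.#{domain_name}",
  :type => 'A'
)
```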
cleanup
geymus ~ ⌘ fog
...
>> zone = Zerigo.zones.get(ZONE_ID)
...
>> zone.records.each {|record| record.destroy}
...
>> zone.destroy
...
>> exit
geofreemiuming
portable - AWS, Linode, Slicehost, Zerigo

lather, rinse, repeat
Congratulations!
todo - copy/paste, push, deploy!

budgeting - find ways to spend your pile of money

geemus - likes coffee, bourbon, games, etc

retire - at your earliest convenience
Love!
knowledge - expertise encoded in ruby

empowering - show the cloud who is boss

exciting - this is some cutting edge stuff!
Homework: Easy
follow @fog to hear about releases

follow github.com/geemus/fog to hear nitty gritty

proudly display stickers wherever hackers are found

ask geemus your remaining questions

play games with geemus
Homework: Normal
report issues at github.com/geemus/fog/issues

irc #ruby-fog on freenode

discuss at groups.google.com/group/ruby-fog

write blog posts

give lightning talks
Homework: Hard
help make fog.io the cloud services resource for ruby

send pull requests fixing issues or adding features

proudly wear contributor-only grey shirt wherever hackers are found
Homework: Expert
help maintain the cloud services you depend on

become a collaborator by keeping informed and involved

proudly wear commit-only black shirt wherever hackers are found
Thanks! Questions?
(see also: README)

examples - http://gist.github.com/729992
  slides - http://slidesha.re/hR8sP9
    repo - http://github.com/geemus/fog
    bugs - http://github.com/geemus/fog/issues

@geemus - questions, comments, suggestions
1 of 165

Recommended

Cloud meets Fog & Puppet A Story of Version Controlled Infrastructure by
Cloud meets Fog & Puppet A Story of Version Controlled InfrastructureCloud meets Fog & Puppet A Story of Version Controlled Infrastructure
Cloud meets Fog & Puppet A Story of Version Controlled InfrastructureHabeeb Rahman
3.3K views36 slides
Ansible fest Presentation slides by
Ansible fest Presentation slidesAnsible fest Presentation slides
Ansible fest Presentation slidesAaron Carey
463 views23 slides
DevOps with Fabric by
DevOps with FabricDevOps with Fabric
DevOps with FabricSimone Federici
786 views39 slides
Using Ansible for Deploying to Cloud Environments by
Using Ansible for Deploying to Cloud EnvironmentsUsing Ansible for Deploying to Cloud Environments
Using Ansible for Deploying to Cloud Environmentsahamilton55
4.2K views44 slides
Ansible - Swiss Army Knife Orchestration by
Ansible - Swiss Army Knife OrchestrationAnsible - Swiss Army Knife Orchestration
Ansible - Swiss Army Knife Orchestrationbcoca
31.7K views26 slides
Fabric workshop(1) - (MOSG) by
Fabric workshop(1) - (MOSG)Fabric workshop(1) - (MOSG)
Fabric workshop(1) - (MOSG)Soshi Nemoto
533 views29 slides

More Related Content

What's hot

Capistrano, Puppet, and Chef by
Capistrano, Puppet, and ChefCapistrano, Puppet, and Chef
Capistrano, Puppet, and ChefDavid Benjamin
10.1K views41 slides
A Fabric/Puppet Build/Deploy System by
A Fabric/Puppet Build/Deploy SystemA Fabric/Puppet Build/Deploy System
A Fabric/Puppet Build/Deploy Systemadrian_nye
9.2K views28 slides
Introduction to Ansible by
Introduction to AnsibleIntroduction to Ansible
Introduction to AnsibleKnoldus Inc.
24.2K views18 slides
docker build with Ansible by
docker build with Ansibledocker build with Ansible
docker build with AnsibleBas Meijer
2.1K views13 slides
Using Ansible Dynamic Inventory with Amazon EC2 by
Using Ansible Dynamic Inventory with Amazon EC2Using Ansible Dynamic Inventory with Amazon EC2
Using Ansible Dynamic Inventory with Amazon EC2Brian Schott
18.6K views29 slides
Ansible with AWS by
Ansible with AWSAnsible with AWS
Ansible with AWSAllan Denot
2.3K views24 slides

What's hot(20)

Capistrano, Puppet, and Chef by David Benjamin
Capistrano, Puppet, and ChefCapistrano, Puppet, and Chef
Capistrano, Puppet, and Chef
David Benjamin10.1K views
A Fabric/Puppet Build/Deploy System by adrian_nye
A Fabric/Puppet Build/Deploy SystemA Fabric/Puppet Build/Deploy System
A Fabric/Puppet Build/Deploy System
adrian_nye9.2K views
Introduction to Ansible by Knoldus Inc.
Introduction to AnsibleIntroduction to Ansible
Introduction to Ansible
Knoldus Inc.24.2K views
docker build with Ansible by Bas Meijer
docker build with Ansibledocker build with Ansible
docker build with Ansible
Bas Meijer2.1K views
Using Ansible Dynamic Inventory with Amazon EC2 by Brian Schott
Using Ansible Dynamic Inventory with Amazon EC2Using Ansible Dynamic Inventory with Amazon EC2
Using Ansible Dynamic Inventory with Amazon EC2
Brian Schott18.6K views
Ansible with AWS by Allan Denot
Ansible with AWSAnsible with AWS
Ansible with AWS
Allan Denot2.3K views
Managing Your Cisco Datacenter Network with Ansible by fmaccioni
Managing Your Cisco Datacenter Network with AnsibleManaging Your Cisco Datacenter Network with Ansible
Managing Your Cisco Datacenter Network with Ansible
fmaccioni6.4K views
Ansible is the simplest way to automate. MoldCamp, 2015 by Alex S
Ansible is the simplest way to automate. MoldCamp, 2015Ansible is the simplest way to automate. MoldCamp, 2015
Ansible is the simplest way to automate. MoldCamp, 2015
Alex S4.4K views
IT Automation with Ansible by Rayed Alrashed
IT Automation with AnsibleIT Automation with Ansible
IT Automation with Ansible
Rayed Alrashed15.7K views
A quick intro to Ansible by Dan Vaida
A quick intro to AnsibleA quick intro to Ansible
A quick intro to Ansible
Dan Vaida503 views
Ansible 2.0 - How to use Ansible to automate your applications in AWS. by Idan Tohami
Ansible 2.0 - How to use Ansible to automate your applications in AWS.Ansible 2.0 - How to use Ansible to automate your applications in AWS.
Ansible 2.0 - How to use Ansible to automate your applications in AWS.
Idan Tohami5.2K views
Ansible roles done right by Dan Vaida
Ansible roles done rightAnsible roles done right
Ansible roles done right
Dan Vaida1.7K views
CoreOS in a Nutshell by CoreOS
CoreOS in a NutshellCoreOS in a Nutshell
CoreOS in a Nutshell
CoreOS644 views
CoreOS + Kubernetes @ All Things Open 2015 by Brandon Philips
CoreOS + Kubernetes @ All Things Open 2015CoreOS + Kubernetes @ All Things Open 2015
CoreOS + Kubernetes @ All Things Open 2015
Brandon Philips409 views
Deployment with Fabric by andymccurdy
Deployment with FabricDeployment with Fabric
Deployment with Fabric
andymccurdy977 views

Similar to fog or: How I Learned to Stop Worrying and Love the Cloud

How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine Yard by
How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine YardHow I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine Yard
How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine YardSV Ruby on Rails Meetup
556 views123 slides
Cloud Meetup - Automation in the Cloud by
Cloud Meetup - Automation in the CloudCloud Meetup - Automation in the Cloud
Cloud Meetup - Automation in the Cloudpetriojala123
78 views28 slides
Rhebok, High Performance Rack Handler / Rubykaigi 2015 by
Rhebok, High Performance Rack Handler / Rubykaigi 2015Rhebok, High Performance Rack Handler / Rubykaigi 2015
Rhebok, High Performance Rack Handler / Rubykaigi 2015Masahiro Nagano
76.1K views67 slides
Cutting through the fog of cloud by
Cutting through the fog of cloudCutting through the fog of cloud
Cutting through the fog of cloudKyle Rames
1.5K views94 slides
Writing robust Node.js applications by
Writing robust Node.js applicationsWriting robust Node.js applications
Writing robust Node.js applicationsTom Croucher
15.6K views82 slides
Future Decoded - Node.js per sviluppatori .NET by
Future Decoded - Node.js per sviluppatori .NETFuture Decoded - Node.js per sviluppatori .NET
Future Decoded - Node.js per sviluppatori .NETGianluca Carucci
235 views56 slides

Similar to fog or: How I Learned to Stop Worrying and Love the Cloud(20)

How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine Yard by SV Ruby on Rails Meetup
How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine YardHow I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine Yard
How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine Yard
Cloud Meetup - Automation in the Cloud by petriojala123
Cloud Meetup - Automation in the CloudCloud Meetup - Automation in the Cloud
Cloud Meetup - Automation in the Cloud
petriojala12378 views
Rhebok, High Performance Rack Handler / Rubykaigi 2015 by Masahiro Nagano
Rhebok, High Performance Rack Handler / Rubykaigi 2015Rhebok, High Performance Rack Handler / Rubykaigi 2015
Rhebok, High Performance Rack Handler / Rubykaigi 2015
Masahiro Nagano76.1K views
Cutting through the fog of cloud by Kyle Rames
Cutting through the fog of cloudCutting through the fog of cloud
Cutting through the fog of cloud
Kyle Rames1.5K views
Writing robust Node.js applications by Tom Croucher
Writing robust Node.js applicationsWriting robust Node.js applications
Writing robust Node.js applications
Tom Croucher15.6K views
Future Decoded - Node.js per sviluppatori .NET by Gianluca Carucci
Future Decoded - Node.js per sviluppatori .NETFuture Decoded - Node.js per sviluppatori .NET
Future Decoded - Node.js per sviluppatori .NET
Gianluca Carucci235 views
Stack kicker devopsdays-london-2013 by Simon McCartney
Stack kicker devopsdays-london-2013Stack kicker devopsdays-london-2013
Stack kicker devopsdays-london-2013
Simon McCartney1.2K views
Apache MXNet Distributed Training Explained In Depth by Viacheslav Kovalevsky... by Big Data Spain
Apache MXNet Distributed Training Explained In Depth by Viacheslav Kovalevsky...Apache MXNet Distributed Training Explained In Depth by Viacheslav Kovalevsky...
Apache MXNet Distributed Training Explained In Depth by Viacheslav Kovalevsky...
Big Data Spain1.4K views
Railsconf2011 deployment tips_for_slideshare by tomcopeland
Railsconf2011 deployment tips_for_slideshareRailsconf2011 deployment tips_for_slideshare
Railsconf2011 deployment tips_for_slideshare
tomcopeland1.4K views
NGINX Can Do That? Test Drive Your Config File! by Jeff Anderson
NGINX Can Do That? Test Drive Your Config File!NGINX Can Do That? Test Drive Your Config File!
NGINX Can Do That? Test Drive Your Config File!
Jeff Anderson2.8K views
Reusable, composable, battle-tested Terraform modules by Yevgeniy Brikman
Reusable, composable, battle-tested Terraform modulesReusable, composable, battle-tested Terraform modules
Reusable, composable, battle-tested Terraform modules
Yevgeniy Brikman28.4K views
Using Sinatra to Build REST APIs in Ruby by LaunchAny
Using Sinatra to Build REST APIs in RubyUsing Sinatra to Build REST APIs in Ruby
Using Sinatra to Build REST APIs in Ruby
LaunchAny9.5K views
Burn down the silos! Helping dev and ops gel on high availability websites by Lindsay Holmwood
Burn down the silos! Helping dev and ops gel on high availability websitesBurn down the silos! Helping dev and ops gel on high availability websites
Burn down the silos! Helping dev and ops gel on high availability websites
Lindsay Holmwood1.6K views
Puppet Camp Seattle 2014: Puppet: Cloud Infrastructure as Code by Puppet
Puppet Camp Seattle 2014: Puppet: Cloud Infrastructure as CodePuppet Camp Seattle 2014: Puppet: Cloud Infrastructure as Code
Puppet Camp Seattle 2014: Puppet: Cloud Infrastructure as Code
Puppet742 views
Facebook的缓存系统 by yiditushe
Facebook的缓存系统Facebook的缓存系统
Facebook的缓存系统
yiditushe1.1K views
Presentation iv implementasi 802x eap tls peap mscha pv2 by Hell19
Presentation iv implementasi  802x eap tls peap mscha pv2Presentation iv implementasi  802x eap tls peap mscha pv2
Presentation iv implementasi 802x eap tls peap mscha pv2
Hell19607 views

Recently uploaded

Migrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlue by
Migrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlueMigrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlue
Migrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlueShapeBlue
218 views20 slides
"Surviving highload with Node.js", Andrii Shumada by
"Surviving highload with Node.js", Andrii Shumada "Surviving highload with Node.js", Andrii Shumada
"Surviving highload with Node.js", Andrii Shumada Fwdays
56 views29 slides
Keynote Talk: Open Source is Not Dead - Charles Schulz - Vates by
Keynote Talk: Open Source is Not Dead - Charles Schulz - VatesKeynote Talk: Open Source is Not Dead - Charles Schulz - Vates
Keynote Talk: Open Source is Not Dead - Charles Schulz - VatesShapeBlue
252 views15 slides
Cencora Executive Symposium by
Cencora Executive SymposiumCencora Executive Symposium
Cencora Executive Symposiummarketingcommunicati21
159 views14 slides
Kyo - Functional Scala 2023.pdf by
Kyo - Functional Scala 2023.pdfKyo - Functional Scala 2023.pdf
Kyo - Functional Scala 2023.pdfFlavio W. Brasil
457 views92 slides
The Role of Patterns in the Era of Large Language Models by
The Role of Patterns in the Era of Large Language ModelsThe Role of Patterns in the Era of Large Language Models
The Role of Patterns in the Era of Large Language ModelsYunyao Li
85 views65 slides

Recently uploaded(20)

Migrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlue by ShapeBlue
Migrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlueMigrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlue
Migrating VMware Infra to KVM Using CloudStack - Nicolas Vazquez - ShapeBlue
ShapeBlue218 views
"Surviving highload with Node.js", Andrii Shumada by Fwdays
"Surviving highload with Node.js", Andrii Shumada "Surviving highload with Node.js", Andrii Shumada
"Surviving highload with Node.js", Andrii Shumada
Fwdays56 views
Keynote Talk: Open Source is Not Dead - Charles Schulz - Vates by ShapeBlue
Keynote Talk: Open Source is Not Dead - Charles Schulz - VatesKeynote Talk: Open Source is Not Dead - Charles Schulz - Vates
Keynote Talk: Open Source is Not Dead - Charles Schulz - Vates
ShapeBlue252 views
The Role of Patterns in the Era of Large Language Models by Yunyao Li
The Role of Patterns in the Era of Large Language ModelsThe Role of Patterns in the Era of Large Language Models
The Role of Patterns in the Era of Large Language Models
Yunyao Li85 views
2FA and OAuth2 in CloudStack - Andrija Panić - ShapeBlue by ShapeBlue
2FA and OAuth2 in CloudStack - Andrija Panić - ShapeBlue2FA and OAuth2 in CloudStack - Andrija Panić - ShapeBlue
2FA and OAuth2 in CloudStack - Andrija Panić - ShapeBlue
ShapeBlue147 views
Business Analyst Series 2023 - Week 4 Session 7 by DianaGray10
Business Analyst Series 2023 -  Week 4 Session 7Business Analyst Series 2023 -  Week 4 Session 7
Business Analyst Series 2023 - Week 4 Session 7
DianaGray10139 views
Centralized Logging Feature in CloudStack using ELK and Grafana - Kiran Chava... by ShapeBlue
Centralized Logging Feature in CloudStack using ELK and Grafana - Kiran Chava...Centralized Logging Feature in CloudStack using ELK and Grafana - Kiran Chava...
Centralized Logging Feature in CloudStack using ELK and Grafana - Kiran Chava...
ShapeBlue145 views
NTGapps NTG LowCode Platform by Mustafa Kuğu
NTGapps NTG LowCode Platform NTGapps NTG LowCode Platform
NTGapps NTG LowCode Platform
Mustafa Kuğu423 views
Future of AR - Facebook Presentation by Rob McCarty
Future of AR - Facebook PresentationFuture of AR - Facebook Presentation
Future of AR - Facebook Presentation
Rob McCarty64 views
VNF Integration and Support in CloudStack - Wei Zhou - ShapeBlue by ShapeBlue
VNF Integration and Support in CloudStack - Wei Zhou - ShapeBlueVNF Integration and Support in CloudStack - Wei Zhou - ShapeBlue
VNF Integration and Support in CloudStack - Wei Zhou - ShapeBlue
ShapeBlue203 views
The Power of Heat Decarbonisation Plans in the Built Environment by IES VE
The Power of Heat Decarbonisation Plans in the Built EnvironmentThe Power of Heat Decarbonisation Plans in the Built Environment
The Power of Heat Decarbonisation Plans in the Built Environment
IES VE79 views
Extending KVM Host HA for Non-NFS Storage - Alex Ivanov - StorPool by ShapeBlue
Extending KVM Host HA for Non-NFS Storage -  Alex Ivanov - StorPoolExtending KVM Host HA for Non-NFS Storage -  Alex Ivanov - StorPool
Extending KVM Host HA for Non-NFS Storage - Alex Ivanov - StorPool
ShapeBlue123 views
CloudStack and GitOps at Enterprise Scale - Alex Dometrius, Rene Glover - AT&T by ShapeBlue
CloudStack and GitOps at Enterprise Scale - Alex Dometrius, Rene Glover - AT&TCloudStack and GitOps at Enterprise Scale - Alex Dometrius, Rene Glover - AT&T
CloudStack and GitOps at Enterprise Scale - Alex Dometrius, Rene Glover - AT&T
ShapeBlue152 views
Hypervisor Agnostic DRS in CloudStack - Brief overview & demo - Vishesh Jinda... by ShapeBlue
Hypervisor Agnostic DRS in CloudStack - Brief overview & demo - Vishesh Jinda...Hypervisor Agnostic DRS in CloudStack - Brief overview & demo - Vishesh Jinda...
Hypervisor Agnostic DRS in CloudStack - Brief overview & demo - Vishesh Jinda...
ShapeBlue161 views
KVM Security Groups Under the Hood - Wido den Hollander - Your.Online by ShapeBlue
KVM Security Groups Under the Hood - Wido den Hollander - Your.OnlineKVM Security Groups Under the Hood - Wido den Hollander - Your.Online
KVM Security Groups Under the Hood - Wido den Hollander - Your.Online
ShapeBlue221 views
DRBD Deep Dive - Philipp Reisner - LINBIT by ShapeBlue
DRBD Deep Dive - Philipp Reisner - LINBITDRBD Deep Dive - Philipp Reisner - LINBIT
DRBD Deep Dive - Philipp Reisner - LINBIT
ShapeBlue180 views
State of the Union - Rohit Yadav - Apache CloudStack by ShapeBlue
State of the Union - Rohit Yadav - Apache CloudStackState of the Union - Rohit Yadav - Apache CloudStack
State of the Union - Rohit Yadav - Apache CloudStack
ShapeBlue297 views

fog or: How I Learned to Stop Worrying and Love the Cloud

  • 1. OR: fog HOW I LEARNED TO STOP WORRYING AND LOVE THE CLOUD
  • 2. geemus (Wesley Beary) web: github.com/geemus twitter: @geemus
  • 3. employer and sponsor web: engineyard.com twitter: @engineyard
  • 4. API driven on demand services CLOUD core: compute, dns, storage also: kvs, load balance, ...
  • 6. What? on demand - only pay for what you actually use
  • 7. What? on demand - only pay for what you actually use flexible - add and remove resources in minutes (instead of weeks)
  • 8. What? on demand - only pay for what you actually use flexible - add and remove resources in minutes (instead of weeks) repeatable - code, test, repeat
  • 9. What? on demand - only pay for what you actually use flexible - add and remove resources in minutes (instead of weeks) repeatable - code, test, repeat resilient - build better systems with transient resources
  • 11. Why Worry? option overload - which provider/service should I use
  • 12. Why Worry? option overload - which provider/service should I use expertise - each service has yet another knowledge silo
  • 13. Why Worry? option overload - which provider/service should I use expertise - each service has yet another knowledge silo tools - vastly different API, quality, maintenance, etc
  • 14. Why Worry? option overload - which provider/service should I use expertise - each service has yet another knowledge silo tools - vastly different API, quality, maintenance, etc standards - slow progress and differing interpretations
  • 15. Ruby cloud services web: github.com/geemus/fog twitter: @fog
  • 16. Why?
  • 17. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...
  • 18. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ... powerful - compute, dns, storage, collections, models, mocks, requests, ...
  • 19. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ... powerful - compute, dns, storage, collections, models, mocks, requests, ... established - 92k downloads, 1112 followers, 141 forks, 67 contributors, me, ...
  • 20. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ... powerful - compute, dns, storage, collections, models, mocks, requests, ... established - 92k downloads, 1112 followers, 141 forks, 67 contributors, me, ... Fog.mock! - faster, cheaper, simulated cloud behavior
  • 21. Who?
  • 22. Who? libraries - carrierwave, chef, deckard, gaff, gemcutter, ... products - DevStructure, Engine Yard, iSwifter, OpenFeint, RowFeeder, ...
  • 25. Interactive Bit! cloud fog
  • 26. What?
  • 27. What? That’s great and all but I don’t have a use case...
  • 28. What? That’s great and all but I don’t have a use case... uptime - because who wants a busted web site?
  • 29. Setup geymus ~ ⌘ gem install fog or geymus ~ ⌘ sudo gem install fog
  • 30. Get Connected 7 # setup a connection to the service 8 compute = Fog::Compute.new(credentials)
  • 31. Get Connected 1 credentials = { 2   :provider           => 'Rackspace', 3   :rackspace_api_key  => RACKSPACE_API_KEY, 4   :rackspace_username => RACKSPACE_USERNAME 5 } 6 7 # setup a connection to the service 8 compute = Fog::Compute.new(credentials)
  • 32. Boot that Server  1 server_data = compute.create_server(  2   1,  3   49  4 ).body['server']  5
  • 33. Boot that Server  1 server_data = compute.create_server(  2   1,  3   49  4 ).body['server']  5  6 until compute.get_server_details(  7   server_data['id']  8 ).body['server']['status'] == 'ACTIVE'  9 end 10
  • 34. Boot that Server  1 server_data = compute.create_server(  2   1,  3   49  4 ).body['server']  5  6 until compute.get_server_details(  7   server_data['id']  8 ).body['server']['status'] == 'ACTIVE'  9 end 10 11 commands = [ 12   %{'mkdir .ssh'}, 13   %{'echo #{File.read('~/.ssh/id_rsa.pub')} >> ~/.ssh/authorized_keys'}, 14   %{passwd -l root}, 15 ] 16 17 Net::SSH.start( 18   server_data['addresses'].first, 19   'root', 20   :password => server_data['password'] 21 ) do |ssh| 22   commands.each do |command| 23     ssh.open_channel do |ssh_channel| 24       ssh_channel.request_pty 25       ssh_channel.exec(%{bash -lc '#{command}'}) 26       ssh.loop 27     end 28   end 29 end
  • 36. Worry! arguments - what goes where, what does it mean?
  • 37. Worry! arguments - what goes where, what does it mean? portability - most of this will only work on Rackspace
  • 38. Worry! arguments - what goes where, what does it mean? portability - most of this will only work on Rackspace disservice - back to square one, but with tools in hand
  • 39. Bootstrap 7 # boot server and setup ssh keys 8 server = compute.servers.bootstrap(server_attributes)
  • 40. Bootstrap 1 server_attributes = { 2   :image_id         => '49', 3   :private_key_path => PRIVATE_KEY_PATH, 4   :public_key_path  => PUBLIC_KEY_PATH 5 } 6 7 # boot server and setup ssh keys 8 server = compute.servers.bootstrap(server_attributes)
  • 42. Servers? 1 compute.servers # list servers, same as #all
  • 43. Servers? 1 compute.servers # list servers, same as #all 2 3 compute.servers.get(1234567890) # server by id
  • 44. Servers? 1 compute.servers # list servers, same as #all 2 3 compute.servers.get(1234567890) # server by id 4 5 compute.servers.reload # update to latest
  • 45. Servers? 1 compute.servers # list servers, same as #all 2 3 compute.servers.get(1234567890) # server by id 4 5 compute.servers.reload # update to latest 6 7 compute.servers.new(attributes) # local model
  • 46. Servers? 1 compute.servers # list servers, same as #all 2 3 compute.servers.get(1234567890) # server by id 4 5 compute.servers.reload # update to latest 6 7 compute.servers.new(attributes) # local model 8 9 compute.servers.create(attributes) # remote model
  • 47. ping
  • 48. ping 1 # ping target 10 times 2 ssh_results = server.ssh("ping -c 10 #{target}")
  • 49. ping 1 # ping target 10 times 2 ssh_results = server.ssh("ping -c 10 #{target}") 3 stdout = ssh_results.first.stdout
  • 50. ping 1 # ping target 10 times 2 ssh_results = server.ssh("ping -c 10 #{target}") 3 stdout = ssh_results.first.stdout 4 5 # parse result, last line is summary 6 # round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms
  • 51. ping 1 # ping target 10 times 2 ssh_results = server.ssh("ping -c 10 #{target}") 3 stdout = ssh_results.first.stdout 4 5 # parse result, last line is summary 6 # round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms 7 stats = stdout.split("/n").last.split(' ')[-2] 8 min, avg, max, stddev = stats.split('/')
  • 52. ping 1 # ping target 10 times 2 ssh_results = server.ssh("ping -c 10 #{target}") 3 stdout = ssh_results.first.stdout 4 5 # parse result, last line is summary 6 # round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms 7 stats = stdout.split("/n").last.split(' ')[-2] 8 min, avg, max, stddev = stats.split('/') NOTE: most complex code was string parsing!?!
  • 54. cleanup  1 # shutdown the server  2 server.destroy
  • 55. cleanup  1 # shutdown the server  2 server.destroy  3  4 # return the data as a hash  5 {  6   :min    => min,  7   :avg    => avg,  8   :max    => max,  9   :stddev => stddev 10 }
  • 56. Next!
  • 57. Next!  1 -server_data = compute.create_server(  2 +compute.import_key_pair(  3 +  'id_rsa.pub',  4 +  File.read('~/.ssh/id_rsa.pub')  5 +)  6 +  7 +compute.authorize_security_group_ingress(  8 +  'CidrIp'      => '0.0.0.0/0',  9 +  'FromPort'    => 22, 10 +  'IpProtocol'  => 'tcp', 11 +  'GroupName'   => 'default', 12 +  'ToPort'      => 22 13 +) 14 + 15 +server_data = compute.run_instances( 16 +  'ami-1a837773', 17    1, 18 -  49 19 -).body['server'] 20 +  1, 21 +  'InstanceType'  => 'm1.small', 22 +  'KeyName'       => 'id_rsa.pub', 23 +  'SecurityGroup' => 'default' 24 +).body['instancesSet'].first 25   26 -until compute.get_server_details( 27 -  server_data['id'] 28 -).body['server']['status'] == 'ACTIVE' 29 +until compute.describe_instances( 30 +  'instance-id' => server_data['instanceId'] 31 +).body['reservationSet'].first['instancesSet'].first['instanceState']['name'] == 'running' 32  end 33   34 +sleep(300) 35 + 36  Net::SSH.start( 37 -  server_data['addresses'].first, 38 -  'root', 39 -  :password => server_data['password'] 40 +  server_data['ipAddress'], 41 +  'ubuntu', 42 +  :key_data => [File.read('~/.ssh/id_rsa')] 43  ) do |ssh| 44    commands = [ 45      %{'mkdir .ssh'},
  • 58. Next!  1 -server_data = compute.create_server(  2 +compute.import_key_pair(  3 +  'id_rsa.pub',  4 +  File.read('~/.ssh/id_rsa.pub')  5 +)  6 +  7 +compute.authorize_security_group_ingress(  8 +  'CidrIp'      => '0.0.0.0/0',  9 +  'FromPort'    => 22, 10 +  'IpProtocol'  => 'tcp', 11 +  'GroupName'   => 'default', 12 +  'ToPort'      => 22 13 +) 14 + 15 +server_data = compute.run_instances( 16 +  'ami-1a837773', 17    1, 18 -  49 19 -).body['server'] 20 +  1, 21 +  'InstanceType'  => 'm1.small', 22 +  'KeyName'       => 'id_rsa.pub', 23 +  'SecurityGroup' => 'default' 24 +).body['instancesSet'].first 25   26 -until compute.get_server_details( 27 -  server_data['id'] 28 -).body['server']['status'] == 'ACTIVE' 29 +until compute.describe_instances( 30 +  'instance-id' => server_data['instanceId'] 31 +).body['reservationSet'].first['instancesSet'].first['instanceState']['name'] == 'running' 32  end 33   34 +sleep(300) 35 + 36  Net::SSH.start( 37 -  server_data['addresses'].first, 38 -  'root', 39 -  :password => server_data['password'] 40 +  server_data['ipAddress'], 41 +  'ubuntu', 42 +  :key_data => [File.read('~/.ssh/id_rsa')] 43  ) do |ssh| 44    commands = [ 45      %{'mkdir .ssh'},
  • 59. geopinging v1  1 # specify a different provider  2 credentials = {  3   :provider  => 'AWS',  4   :aws_access_key_id  => AWS_ACCESS_KEY_ID,  5   :aws_secret_access_key => AWS_SECRET_ACCESS_KEY  6 }  7  8 server_attributes = {  9   :image_id         => 'ami-1a837773', 10   :private_key_path => PRIVATE_KEY_PATH, 11   :public_key_path  => PUBLIC_KEY_PATH, 12   :username         => 'ubuntu' 13 }
  • 60. geopinging v2 1 # specify a different aws region 2 # ['ap-southeast-1', 'eu-west-1', 'us-west-1] 3 credentials.merge!({ 4   :region => 'eu-west-1' 5 })
  • 61. geopinging v... portable - AWS, Bluebox, Brightbox, Rackspace, Slicehost, Terremark, ...
  • 62. geopinging v... portable - AWS, Bluebox, Brightbox, Rackspace, Slicehost, Terremark, ... lather, rinse, repeat
  • 63. How? That is awesome, but how did you...
  • 64. exploring geymus ~ ⌘ fog   To run as 'default', add the following to ~/.fog :default:   :aws_access_key_id:     INTENTIONALLY_LEFT_BLANK   :aws_secret_access_key: INTENTIONALLY_LEFT_BLANK   :public_key_path:       INTENTIONALLY_LEFT_BLANK   :private_key_path:      INTENTIONALLY_LEFT_BLANK   :rackspace_api_key:     INTENTIONALLY_LEFT_BLANK   :rackspace_username:    INTENTIONALLY_LEFT_BLANK ...
  • 67. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace
  • 68. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers
  • 69. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers [AWS, Rackspace]
  • 70. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers [AWS, Rackspace] >> Rackspace.collections
  • 71. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers [AWS, Rackspace] >> Rackspace.collections [:directories, :files, :flavors, :images, :servers]
  • 72. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers [AWS, Rackspace] >> Rackspace.collections [:directories, :files, :flavors, :images, :servers] >> Rackspace[:compute]
  • 73. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers [AWS, Rackspace] >> Rackspace.collections [:directories, :files, :flavors, :images, :servers] >> Rackspace[:compute] #<Fog::Rackspace::Compute ...>
  • 74. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers [AWS, Rackspace] >> Rackspace.collections [:directories, :files, :flavors, :images, :servers] >> Rackspace[:compute] #<Fog::Rackspace::Compute ...> >> Rackspace[:compute].requests
  • 75. sign posts geymus ~ ⌘ fog Welcome to fog interactive! :default credentials provide AWS and Rackspace >> providers [AWS, Rackspace] >> Rackspace.collections [:directories, :files, :flavors, :images, :servers] >> Rackspace[:compute] #<Fog::Rackspace::Compute ...> >> Rackspace[:compute].requests [:confirm_resized_server, ..., :update_server]
  • 77. what are those? provider => [AWS, Rackspace, Zerigo, ...]
  • 78. what are those? provider => [AWS, Rackspace, Zerigo, ...] service => [Compute, DNS, Storage, ...]
  • 79. what are those? provider => [AWS, Rackspace, Zerigo, ...] service => [Compute, DNS, Storage, ...] collection => [flavors, images, servers, ...]
  • 80. what are those? provider => [AWS, Rackspace, Zerigo, ...] service => [Compute, DNS, Storage, ...] collection => [flavors, images, servers, ...] model => [flavor, image, server, ...]
  • 81. what are those? provider => [AWS, Rackspace, Zerigo, ...] service => [Compute, DNS, Storage, ...] collection => [flavors, images, servers, ...] model => [flavor, image, server, ...] request => [describe_instances, run_instances, ...]
  • 86. requests? >> Rackspace[:compute].list_servers #<Excon::Response:0x________ @body = { "servers" => [] }, @headers = { "X-PURGE-KEY"=>"/______/servers", ..., "Connection"=>"keep-alive" },
  • 87. requests? >> Rackspace[:compute].list_servers #<Excon::Response:0x________ @body = { "servers" => [] }, @headers = { "X-PURGE-KEY"=>"/______/servers", ..., "Connection"=>"keep-alive" }, @status=200>
  • 89. sanity check >> Rackspace.servers.select {|server| server.ready?}
  • 90. sanity check >> Rackspace.servers.select {|server| server.ready?} <Fog::Rackspace::Compute::Servers filters={} [] >
  • 91. sanity check >> Rackspace.servers.select {|server| server.ready?} <Fog::Rackspace::Compute::Servers filters={} [] > >> AWS.servers.select {|server| server.ready?}
  • 92. sanity check >> Rackspace.servers.select {|server| server.ready?} <Fog::Rackspace::Compute::Servers filters={} [] > >> AWS.servers.select {|server| server.ready?} <Fog::AWS::Compute::Servers [] >
  • 93. sanity check >> Rackspace.servers.select {|server| server.ready?} <Fog::Rackspace::Compute::Servers filters={} [] > >> AWS.servers.select {|server| server.ready?} <Fog::AWS::Compute::Servers [] > >> exit
  • 96. finding images >> Rackspace.images.table([:id, :name]) +--------- +------------------------------------------+ | id | name | +--------- +------------------------------------------+ | 49 | Ubuntu 10.04 LTS (lucid) | +--------- +------------------------------------------+ ...
  • 97. finding images >> Rackspace.images.table([:id, :name]) +--------- +------------------------------------------+ | id | name | +--------- +------------------------------------------+ | 49 | Ubuntu 10.04 LTS (lucid) | +--------- +------------------------------------------+ ... >> AWS.images # I use alestic.com listing
  • 98. finding images >> Rackspace.images.table([:id, :name]) +--------- +------------------------------------------+ | id | name | +--------- +------------------------------------------+ | 49 | Ubuntu 10.04 LTS (lucid) | +--------- +------------------------------------------+ ... >> AWS.images # I use alestic.com listing ...
  • 100. exploring... It takes forever!
  • 101. exploring... It takes forever! It’s so expensive!
  • 102. exploring... It takes forever! It’s so expensive! A warm welcome for Fog.mock!
  • 103. Mocks! geymus ~ ⌘ FOG_MOCK=true fog or require ‘fog’ Fog.mock!
  • 106. simulation Most functions just work! Unimplemented mocks? Errors keep you on track.
  • 107. simulation Most functions just work! Unimplemented mocks? Errors keep you on track. Tests run against both, so it is either consistent or a bug.
  • 108. Back to Business I have a bunch of data now what?
  • 109. Back to Business I have a bunch of data now what? storage - aggregating cloud data
  • 110. Get Connected 7 # setup a connection to the service 8 storage = Fog::Storage.new(credentials)
  • 111. Get Connected 1 credentials = { 2   :provider  => 'AWS', 3   :aws_access_key_id  => AWS_ACCESS_KEY_ID, 4   :aws_secret_access_key => AWS_SECRET_ACCESS_KEY 5 } 6 7 # setup a connection to the service 8 storage = Fog::Storage.new(credentials)
  • 112. directories
    # create a directory
    directory = storage.directories.create(
      :key    => directory_name,
      :public => true
    )
  • 113-115. files
    # store the file
    file = directory.files.create(
      :body   => File.open(path),
      :key    => name,
      :public => true
    )

    # return the public url for the file
    file.public_url
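Getting the file back later is symmetric; a sketch, where get returns nil if the key does not exist:

    file = directory.files.get(name)
    file.body          # the stored contents
    file.public_url    # same public url as above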
  • 116. geostorage
    # specify a different provider
    credentials = {
      :provider           => 'Rackspace',
      :rackspace_api_key  => RACKSPACE_API_KEY,
      :rackspace_username => RACKSPACE_USERNAME
    }
  • 119-122. cleanup
    geymus ~ ⌘ fog
    ...
    >> directory = AWS.directories.get(DIRECTORY_NAME)
    ...
    >> directory.files.each {|file| file.destroy}
    ...
    >> directory.destroy
    ...
    >> exit
  • 123. geoaggregating portable - AWS, Google, Local, Rackspace
  • 124. geoaggregating portable - AWS, Google, Local, Rackspace lather, rinse, repeat
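"Lather, rinse, repeat" can be taken literally: the storage API is the same for every provider, so aggregating data into a second location is one loop. A sketch, where aws_storage and rackspace_storage are two Fog::Storage connections set up as above and DIRECTORY_NAME is whatever key you used:

    source = aws_storage.directories.get(DIRECTORY_NAME)
    target = rackspace_storage.directories.create(:key => DIRECTORY_NAME)

    # copy each file from the AWS bucket into the Rackspace container
    source.files.each do |file|
      target.files.create(
        :key  => file.key,
        :body => file.body
      )
    end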
  • 125. Phase 3: Profit I’ve got the data, but how do I freemium?
  • 126. Phase 3: Profit I’ve got the data, but how do I freemium? dns - make your cloud (premium) accessible
  • 127-128. Get Connected
    credentials = {
      :provider     => 'Zerigo',
      :zerigo_email => ZERIGO_EMAIL,
      :zerigo_token => ZERIGO_TOKEN
    }

    # setup a connection to the service
    dns = Fog::DNS.new(credentials)
  • 129. zones
    # create a zone
    zone = dns.zones.create(
      :domain => domain_name,
      :email  => "admin@#{domain_name}"
    )
  • 130. records
    # create a record
    record = zone.records.create(
      :ip   => '1.2.3.4',
      :name => "#{customer_name}.#{domain_name}",
      :type => 'A'
    )
  • 131. cleanup
    geymus ~ ⌘ fog
    ...
    >> zone = Zerigo.zones.get(ZONE_ID)
    ...
    >> zone.records.each {|record| record.destroy}
    ...
    >> zone.destroy
    ...
    >> exit
  • 132. geofreemiuming portable - AWS, Linode, Slicehost, Zerigo
  • 133. geofreemiuming portable - AWS, Linode, Slicehost, Zerigo lather, rinse, repeat
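Per-customer records follow the same pattern, so the freemium flow is just a loop; a sketch, where customers and domain_name are placeholders for your own data and each customer is assumed to have name and ip:

    customers.each do |customer|
      zone.records.create(
        :ip   => customer.ip,
        :name => "#{customer.name}.#{domain_name}",
        :type => 'A'
      )
    end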
  • 135. Congratulations! todo - copy/paste, push, deploy!
  • 136. Congratulations! todo - copy/paste, push, deploy! budgeting - find ways to spend your pile of money
  • 137. Congratulations! todo - copy/paste, push, deploy! budgeting - find ways to spend your pile of money geemus - likes coffee, bourbon, games, etc
  • 138. Congratulations! todo - copy/paste, push, deploy! budgeting - find ways to spend your pile of money geemus - likes coffee, bourbon, games, etc retire - at your earliest convenience
  • 139. Love!
  • 140. Love! knowledge - suffering encoded in ruby
  • 141. Love! knowledge - expertise encoded in ruby
  • 142. Love! knowledge - expertise encoded in ruby empowering - show the cloud who is boss
  • 143. Love! knowledge - expertise encoded in ruby empowering - show the cloud who is boss exciting - this is some cutting edge stuff!
  • 145. Homework: Easy follow @fog to hear about releases
  • 146. Homework: Easy follow @fog to hear about releases follow github.com/geemus/fog to hear nitty gritty
  • 147. Homework: Easy follow @fog to hear about releases follow github.com/geemus/fog to hear nitty gritty proudly display stickers wherever hackers are found
  • 148. Homework: Easy follow @fog to hear about releases follow github.com/geemus/fog to hear nitty gritty proudly display stickers wherever hackers are found ask geemus your remaining questions
  • 149. Homework: Easy follow @fog to hear about releases follow github.com/geemus/fog to hear nitty gritty proudly display stickers wherever hackers are found ask geemus your remaining questions play games with geemus
  • 151. Homework: Normal report issues at github.com/geemus/fog/issues
  • 152. Homework: Normal report issues at github.com/geemus/fog/issues irc #ruby-fog on freenode
  • 153. Homework: Normal report issues at github.com/geemus/fog/issues irc #ruby-fog on freenode discuss groups.google.com/group/ruby-fog
  • 154. Homework: Normal report issues at github.com/geemus/fog/issues irc #ruby-fog on freenode discuss groups.google.com/group/ruby-fog write blog posts
  • 155. Homework: Normal report issues at github.com/geemus/fog/issues irc #ruby-fog on freenode discuss groups.google.com/group/ruby-fog write blog posts give lightning talks
  • 157. Homework: Hard help make fog.io the cloud services resource for ruby
  • 158. Homework: Hard help make fog.io the cloud services resource for ruby send pull requests fixing issues or adding features
  • 159. Homework: Hard help make fog.io the cloud services resource for ruby send pull requests fixing issues or adding features proudly wear contributor-only grey shirt wherever hackers are found
  • 161. Homework: Expert help maintain the cloud services you depend on
  • 162. Homework: Expert help maintain the cloud services you depend on become a collaborator by keeping informed and involved
  • 163. Homework: Expert help maintain the cloud services you depend on become a collaborator by keeping informed and involved proudly wear commit-only black shirt wherever hackers are found
  • 164. Thanks! @geemus - questions, comments, suggestions
  • 165. Thanks! Questions? (see also: README)
    examples - http://gist.github.com/729992
    slides   - http://slidesha.re/hR8sP9
    repo     - http://github.com/geemus/fog
    bugs     - http://github.com/geemus/fog/issues
    @geemus  - questions, comments, suggestions
