fog or: How I Learned to Stop Worrying and Love the Cloud

Learn how to easily get started on cloud computing with fog. If you can control your infrastructure choices, you’ll make better choices in development and get what you need in production. You'll get an overview of fog and concrete examples to give you a head start on your provisioning workflow.

    1. fog OR: HOW I LEARNED TO STOP WORRYING AND LOVE THE CLOUD
    2. geemus (Wesley Beary)
       web: github.com/geemus
       twitter: @geemus
    3. employer and sponsor
       web: engineyard.com
       twitter: @engineyard
    4. CLOUD: API driven, on demand services
       core: compute, dns, storage
       also: kvs, load balance, ...
    5-8. What?
       on demand - pay for only what you actually use
       flexible - add and remove resources in minutes (instead of weeks)
       repeatable - code, test, repeat
       resilient - build better systems with transient resources
    9-12. But, but...
       option overload - which provider should I use?
       intimidating - each service requires separate expertise
       tools - vastly different APIs built on varying levels of quality
    13. fog: Ruby cloud computing
       web: github.com/geemus/fog
       twitter: @fog
    14. Why?
       portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...
       powerful - compute, storage, collections, models, mocks, requests, ...
       established - 50k downloads, 860 followers, 100 forks, 40 contributors, me, ...
       Fog.mock! - faster, cheaper, simulated cloud behavior
    15. Who?
       libraries - carrierwave, chef, deckard, gaff, gemcutter, ...
       products - DevStructure, Engine Yard, iSwifter, OpenFeint, RowFeeder, ...
    16-17. What?
       That’s great and all but I don’t have a use case...
       uptime - because who wants a busted web site?
    18. Setup
       geymus ~ ⌘ gem install fog
         or
       geymus ~ ⌘ sudo gem install fog
    19-20. Get Connected
       credentials = {
         :provider           => 'Rackspace',
         :rackspace_api_key  => RACKSPACE_API_KEY,
         :rackspace_username => RACKSPACE_USERNAME
       }

       # setup a connection to the service
       compute = Fog::Compute.new(credentials)
    21-22. Bootstrap
       server_attributes = {
         :image_id         => 49,
         :private_key_path => PRIVATE_KEY_PATH,
         :public_key_path  => PUBLIC_KEY_PATH
       }

       # boot server and setup ssh keys
       server = compute.servers.bootstrap(server_attributes)
    23-28. Servers?
       compute.servers                      # list servers, same as #all
       compute.servers.get(1234567890)      # server by id
       compute.servers.reload               # update to latest
       compute.servers.new(attributes)      # local model
       compute.servers.create(attributes)   # remote model
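       A rough illustration (not from the deck) of how those collection calls combine, assuming the compute connection from slides 19-20; id, state, and ready? are standard fog server attributes:
          # print a terse status line per server
          compute.servers.each do |server|
            puts "#{server.id}  #{server.state}"
          end

          # collections behave like Enumerable, so filtering reads naturally
          ready = compute.servers.select { |server| server.ready? }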
    29-31. bootstrap?
       convenience method for servers
       create - creates a server matching attributes
       wait_for { ready? } - waits until server responds
       ssh - sets up ssh keys and disables password authentication
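       Roughly what that convenience wraps, written out by hand; a sketch of the three steps named above rather than fog's actual implementation (server_attributes from slide 22):
          server = compute.servers.create(server_attributes)
          server.wait_for { ready? }   # block until the server responds
          server.ssh('uptime')         # run a command once ssh keys are in place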
    32-37. ping
       # ping target 10 times
       ssh_results = server.ssh("ping -c 10 #{target}")
       stdout = ssh_results.first.stdout

       # parse result, last line is summary
       # round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms
       stats = stdout.split("\n").last.split(' ')[-2]
       min, avg, max, stddev = stats.split('/')

       NOTE: most complex code was string parsing!?!
    38-40. cleanup
       # shutdown the server
       server.destroy

       # return the data as a hash
       {
         :min    => min,
         :avg    => avg,
         :max    => max,
         :stddev => stddev
       }
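       Pulled together, the whole flow fits in one small method; a sketch only, and the helper name ping_stats_from is hypothetical (not from the deck):
          # boot a server, ping the target from it, tear it down, return the stats
          def ping_stats_from(credentials, server_attributes, target)
            compute = Fog::Compute.new(credentials)
            server  = compute.servers.bootstrap(server_attributes)
            stdout  = server.ssh("ping -c 10 #{target}").first.stdout
            stats   = stdout.split("\n").last.split(' ')[-2]
            min, avg, max, stddev = stats.split('/')
            { :min => min, :avg => avg, :max => max, :stddev => stddev }
          ensure
            server.destroy if server
          end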
    41. geopinging v1
       # specify a different provider
       credentials = {
         :provider              => 'AWS',
         :aws_access_key_id     => AWS_ACCESS_KEY_ID,
         :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
       }

       server_attributes = {
         :image_id         => 'ami-1a837773',
         :private_key_path => PRIVATE_KEY_PATH,
         :public_key_path  => PUBLIC_KEY_PATH,
         :username         => 'ubuntu'
       }
    42. geopinging v2
       # specify a different aws region
       # [ap-southeast-1, eu-west-1, us-west-1]
       credentials.merge!(:region => 'eu-west-1')
    43-44. geopinging v...
       portable - AWS, Bluebox, Brightbox, Rackspace, Slicehost, Terremark, ...
       lather, rinse, repeat
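       The lather, rinse, repeat step might look something like the loop below; the locations hash, its credential/attribute variables, and target are assumptions for illustration, and ping_stats_from is the hypothetical helper sketched earlier:
          # location name => [credentials, server_attributes]
          locations = {
            'us-east'   => [aws_credentials, aws_server_attributes],
            'eu-west'   => [aws_credentials.merge(:region => 'eu-west-1'), aws_server_attributes],
            'rackspace' => [rackspace_credentials, rackspace_server_attributes]
          }

          results = {}
          locations.each do |location, (credentials, server_attributes)|
            results[location] = ping_stats_from(credentials, server_attributes, target)
          end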
    45. How?
       That is awesome, but how did you know...
    46. exploring
       geymus ~ ⌘ fog
       To run as default, add the following to ~/.fog
       :default:
         :aws_access_key_id:     INTENTIONALLY_LEFT_BLANK
         :aws_secret_access_key: INTENTIONALLY_LEFT_BLANK
         :public_key_path:       INTENTIONALLY_LEFT_BLANK
         :private_key_path:      INTENTIONALLY_LEFT_BLANK
         :rackspace_api_key:     INTENTIONALLY_LEFT_BLANK
         :rackspace_username:    INTENTIONALLY_LEFT_BLANK
         ...
    47-53. sign posts
       geymus ~ ⌘ fog
       Welcome to fog interactive!
       :default credentials provide AWS and Rackspace
       >> providers
       [AWS, Rackspace]
       >> Rackspace.collections
       [:directories, :files, :flavors, :images, :servers]
       >> Rackspace[:compute]
       #<Fog::Rackspace::Compute ...>
       >> Rackspace[:compute].requests
       [:confirm_resized_server, ..., :update_server]
    54-65. requests?
       >> Rackspace[:compute].list_servers
       #<Excon::Response:0x________
         @body = {
           "servers" => []
         },
         @headers = {
           "X-PURGE-KEY" => "/______/servers",
           ...,
           "Connection"  => "keep-alive"
         },
         @status = 200>
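       Raw requests return Excon::Response objects, so the parsed body is just a hash; a small sketch (the 'servers' key comes from the response above):
          response = Rackspace[:compute].list_servers
          response.status             # => 200
          response.body['servers']    # => [] until something is booted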
    66-69. sanity check
       >> Rackspace.servers.select {|server| server.ready?}
       <Fog::Rackspace::Compute::Servers filters={} [] >
       >> AWS.servers.select {|server| server.ready?}
       <Fog::AWS::Compute::Servers [] >
       >> exit
    70-74. finding images
       >> Rackspace.images.table([:id, :name])
       +---------+------------------------------------------+
       | id      | name                                     |
       +---------+------------------------------------------+
       | 49      | Ubuntu 10.04 LTS (lucid)                 |
       +---------+------------------------------------------+
       ...
       >> AWS.images # I use alestic.com listing
       ...
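       Tables are handy for eyeballing; programmatically you might filter the same collection, as in this sketch (the Ubuntu match is just an example):
          lucid = Rackspace.images.find { |image| image.name =~ /Ubuntu 10.04/ }
          lucid.id   # => 49, per the listing above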
    75-78. exploring...
       It takes forever!
       It’s so expensive!
       A warm welcome for Fog.mock!
    79. Mocks!
       geymus ~ ⌘ FOG_MOCK=true fog
         or
       require 'fog'
       Fog.mock!
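       Once mocked, the same calls from earlier run in memory; a sketch under the assumption that placeholder credentials are acceptable in mock mode for your provider:
          require 'fog'
          Fog.mock!

          compute = Fog::Compute.new(
            :provider              => 'AWS',
            :aws_access_key_id     => 'fake',
            :aws_secret_access_key => 'fake'
          )
          server = compute.servers.create(:image_id => 'ami-1a837773')
          server.wait_for { ready? }   # mocked servers become ready without real boot time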
    80-83. simulation
       Most functions just work!
       Unimplemented mocks? Errors keep you on track.
       Tests run against both, so it is either consistent or a bug.
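       One way to run the same test either simulated or live, keyed off the FOG_MOCK variable the fog CLI already uses; this harness is a sketch, not part of fog:
          require 'fog'
          require 'test/unit'

          Fog.mock! if ENV['FOG_MOCK']   # flip one switch to go from simulated to real

          class ComputeTest < Test::Unit::TestCase
            def test_listing_servers
              compute = Fog::Compute.new(
                :provider              => 'AWS',
                :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'] || 'fake',
                :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'] || 'fake'
              )
              assert_kind_of Array, compute.servers.to_a
            end
          end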
    84-85. Back to Business
       I have a bunch of data, now what?
       storage - aggregating cloud data
    86-87. Get Connected
       credentials = {
         :provider              => 'AWS',
         :aws_access_key_id     => AWS_ACCESS_KEY_ID,
         :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
       }

       # setup a connection to the service
       storage = Fog::Storage.new(credentials)
    88. directories
       # create a directory
       directory = storage.directories.create(
         :key    => directory_name,
         :public => true
       )
    89-91. files
       # store the file
       file = directory.files.create(
         :body   => File.open(path),
         :key    => name,
         :public => true
       )

       # return the public url for the file
       file.public_url
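       For the geoping data specifically, one might serialize the results hash and pull it back later; a sketch where the json require, the results variable, and the 'pings.json' key are assumptions:
          require 'json'

          # store the aggregated ping results as JSON
          directory.files.create(
            :body   => results.to_json,
            :key    => 'pings.json',
            :public => true
          )

          # fetch it back later by key
          directory.files.get('pings.json').body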
    92. geostorage
       # specify a different provider
       credentials = {
         :provider           => 'Rackspace',
         :rackspace_api_key  => RACKSPACE_API_KEY,
         :rackspace_username => RACKSPACE_USERNAME
       }
    93-98. cleanup
       geymus ~ ⌘ fog
       ...
       >> directory = AWS.directories.get(DIRECTORY_NAME)
       ...
       >> directory.files.each {|file| file.destroy}
       ...
       >> directory.destroy
       ...
       >> exit
    99-100. Phase 3: Profit
       I’ve got the data, but how do I freemium?
       dns - make your cloud (premium) accessible
    101-102. Get Connected
       credentials = {
         :provider     => 'Zerigo',
         :zerigo_email => ZERIGO_EMAIL,
         :zerigo_token => ZERIGO_TOKEN
       }

       # setup a connection to the service
       dns = Fog::DNS.new(credentials)
    103. zones
       # create a zone
       zone = dns.zones.create(
         :domain => domain_name,
         :email  => "admin@#{domain_name}"
       )
    104. records
       # create a record
       record = zone.records.create(
         :ip   => '1.2.3.4',
         :name => "#{customer_name}.#{domain_name}",
         :type => 'A'
       )
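       Tying zones and records together per customer; provision_customer and its arguments are hypothetical names for illustration, building only on the record attributes shown above:
          # point customer_name.domain at whichever server hosts their premium instance
          def provision_customer(zone, customer_name, ip_address)
            zone.records.create(
              :ip   => ip_address,
              :name => "#{customer_name}.#{zone.domain}",
              :type => 'A'
            )
          end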
    105. cleanup
       geymus ~ ⌘ fog
       ...
       >> zone = Zerigo.zones.get(ZONE_ID)
       ...
       >> zone.records.each {|record| record.destroy}
       ...
       >> zone.destroy
       ...
       >> exit
    106-109. Roadmap
       compute - consolidating and refining the shared api
       completion - fill in remaining requests and mocks for core services
       errors - more consistent error raising and handling
       documentation - blog posts, examples, fog.io
    110-111. Thanks! Questions?
       (see also: README)
       examples - http://gist.github.com/729992
       slides - http://slidesha.re/hR8sP9
       repo - http://github.com/geemus/fog
       bugs - http://github.com/geemus/fog/issues
       irc - irc://irc.freenode.net/ruby-fog
       mailing list - http://groups.google.com/group/ruby-fog
       twitter - @fog and @geemus