fog or: How I Learned to Stop Worrying and Love the Cloud (OpenStack Edition)

Cloud computing scared the crap out of me - the quirks and nightmares of provisioning compute, DNS, storage, and more across AWS, Terremark, Rackspace, and the rest - I mean, where do you even start?

Since I couldn't find a good answer, I undertook the (probably insane) task of creating one. fog gives you a place to start by creating abstractions that work across many different providers, greatly reducing the barrier to entry (and the cost of switching later). The abstractions are built on top of solid wrappers for each API, so if the high-level stuff doesn't cut it you can dig in and get the job done. On top of that, mocks are available to simulate what clouds will do during development and testing (saving you time and money).
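For a flavor of what that looks like, here is a minimal sketch (the provider choice, the environment variable names, and the credentials are placeholders of mine, not something prescribed by fog):

  require 'fog'

  # Fog.mock!  # uncomment to simulate the cloud instead of talking to it

  compute = Fog::Compute.new(
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  )

  # the same high-level collections work on any supported provider
  compute.servers.each { |server| puts server.id }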

You'll get a whirlwind tour from basic through advanced usage as we create the building blocks of a highly distributed (multi-cloud) system with some simple Ruby scripts that work nearly verbatim from provider to provider. Get your feet wet working with cloud resources, or just make it easier on yourself as your usage gets more complex; either way, fog makes it easy to get what you need from the cloud.

The OpenStack Edition adds my concerns about OpenStack API development, covering both issues that have already been fixed and issues we haven't run into yet. Hopefully this consumer perspective can help shed light on some rough spots.

    1. fog OR: HOW I LEARNED TO STOP WORRYING AND LOVE THE CLOUD
    2. geemus (Wesley Beary) - web: github.com/geemus - twitter: @geemus
    3. employer and sponsor - web: engineyard.com - twitter: @engineyard
    4. CLOUD: API driven on demand services - core: compute, dns, storage - also: kvs, load balance, ...
    5. What?
    6. What? on demand - only pay for what you actually use
    7. What? on demand - only pay for what you actually use; flexible - add and remove resources in minutes (instead of weeks)
    8. What? on demand - only pay for what you actually use; flexible - add and remove resources in minutes (instead of weeks); repeatable - code, test, repeat
    9. What? on demand - only pay for what you actually use; flexible - add and remove resources in minutes (instead of weeks); repeatable - code, test, repeat; resilient - build better systems with transient resources
    10. Why Worry?
    11. Why Worry? option overload - which provider/service should I use?
    12. Why Worry? option overload - which provider/service should I use?; expertise - each service has yet another knowledge silo
    13. Why Worry? option overload - which provider/service should I use?; expertise - each service has yet another knowledge silo; tools - vastly different API, quality, maintenance, etc.
    14. Why Worry? option overload - which provider/service should I use?; expertise - each service has yet another knowledge silo; tools - vastly different API, quality, maintenance, etc.; standards - slow progress and differing interpretations
    15. fog: Ruby cloud services - web: github.com/geemus/fog - twitter: @fog
    16. Why?
    17. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...
    18. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...; powerful - compute, dns, storage, collections, models, mocks, requests, ...
    19. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...; powerful - compute, dns, storage, collections, models, mocks, requests, ...; established - 250k downloads, 1564 followers, 266 forks, 127 contributors, me, ...
    20. Why? portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...; powerful - compute, dns, storage, collections, models, mocks, requests, ...; established - 250k downloads, 1564 followers, 266 forks, 127 contributors, me, ...; Fog.mock! - faster, cheaper, simulated cloud behavior
    21. Who?
    22. Who? libraries - carrierwave, chef, gemcutter, puppet, ...; products - AppFog, Engine Yard, Heroku, ...
    23. Interactive Bit!
    24. Interactive Bit! cloud
    25. Interactive Bit! cloud, fog
    26. What?
    27. What? That’s great and all but I don’t have a use case...
    28. What? That’s great and all but I don’t have a use case... uptime - because who wants a busted web site?
    29. Setup:
          geymus ~ ⌘ gem install fog
        or
          geymus ~ ⌘ sudo gem install fog
    30. Get Connected:
          # setup a connection to the service
          compute = Fog::Compute.new(credentials)
    31. Get Connected (adding the credentials):
          credentials = {
            :provider           => 'Rackspace',
            :rackspace_api_key  => RACKSPACE_API_KEY,
            :rackspace_username => RACKSPACE_USERNAME
          }

          # setup a connection to the service
          compute = Fog::Compute.new(credentials)
    32. Boot that Server:
          server_data = compute.create_server(
            1,
            49
          ).body['server']
    33. Boot that Server (continued):
          until compute.get_server_details(
            server_data['id']
          ).body['server']['status'] == 'ACTIVE'
          end
    34. Boot that Server (continued):
          commands = [
            %{mkdir .ssh},
            %{echo #{File.read('~/.ssh/id_rsa.pub')} >> ~/.ssh/authorized_keys},
            %{passwd -l root},
          ]

          Net::SSH.start(
            server_data['addresses'].first,
            'root',
            :password => server_data['password']
          ) do |ssh|
            commands.each do |command|
              ssh.open_channel do |ssh_channel|
                ssh_channel.request_pty
                ssh_channel.exec(%{bash -lc '#{command}'})
                ssh.loop
              end
            end
          end
    35. Worry!
    36. Worry! arguments - what goes where, what does it mean?
    37. Worry! arguments - what goes where, what does it mean?; portability - most of this will only work on Rackspace
    38. Worry! arguments - what goes where, what does it mean?; portability - most of this will only work on Rackspace; disservice - back to square one, but with tools in hand
    39. Bootstrap:
          # boot server and setup ssh keys
          server = compute.servers.bootstrap(server_attributes)
    40. Bootstrap (adding the attributes):
          server_attributes = {
            :image_id         => 49,
            :private_key_path => PRIVATE_KEY_PATH,
            :public_key_path  => PUBLIC_KEY_PATH
          }

          # boot server and setup ssh keys
          server = compute.servers.bootstrap(server_attributes)
    41. Servers?
    42. Servers?
          compute.servers                    # list servers, same as #all
    43. Servers? (continued)
          compute.servers.get(1234567890)    # server by id
    44. Servers? (continued)
          compute.servers.reload             # update to latest
    45. Servers? (continued)
          compute.servers.new(attributes)    # local model
    46. Servers? (continued)
          compute.servers.create(attributes) # remote model
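    Putting those collection calls together, a minimal sketch of a create-wait-use-destroy round trip (the attribute value is a placeholder; wait_for and ready? are standard fog model methods):

          # assumes the Rackspace connection and image 49 (Ubuntu 10.04) from the earlier slides
          server = compute.servers.create(:image_id => 49)
          server.wait_for { ready? }          # block until the provider reports the server ready
          puts "#{server.id} is #{server.state}"
          server.destroy                      # clean up when finished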
    47. ping
    48. ping:
          # ping target 10 times
          ssh_results = server.ssh("ping -c 10 #{target}")
    49. ping (continued):
          stdout = ssh_results.first.stdout
    50. ping (continued):
          # parse result, last line is summary
          # round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms
    51. ping (continued):
          stats = stdout.split("\n").last.split(' ')[-2]
          min, avg, max, stddev = stats.split('/')
    52. ping - NOTE: most complex code was string parsing!?!
    53. cleanup
    54. cleanup:
          # shutdown the server
          server.destroy
    55. cleanup (continued):
          # return the data as a hash
          {
            :min    => min,
            :avg    => avg,
            :max    => max,
            :stddev => stddev
          }
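    Pulled together, the flow from the last few slides might look something like this sketch (the method name and hard-coded attributes are mine, not from the talk):

          # hypothetical helper wrapping the bootstrap / ping / cleanup flow above
          def ping_from_cloud(compute, target)
            server = compute.servers.bootstrap(
              :image_id         => 49,          # Ubuntu 10.04 on Rackspace, per the slides
              :private_key_path => PRIVATE_KEY_PATH,
              :public_key_path  => PUBLIC_KEY_PATH
            )

            stdout = server.ssh("ping -c 10 #{target}").first.stdout

            # last line: round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms
            min, avg, max, stddev = stdout.split("\n").last.split(' ')[-2].split('/')

            { :min => min, :avg => avg, :max => max, :stddev => stddev }
          ensure
            server.destroy if server            # always clean up the transient server
          end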
    56. Next!
    57. Next! (the same script, switched from Rackspace requests to AWS requests):
          -server_data = compute.create_server(
          +compute.import_key_pair(
          +  'id_rsa.pub',
          +  File.read('~/.ssh/id_rsa.pub')
          +)
          +
          +compute.authorize_security_group_ingress(
          +  'CidrIp'     => '0.0.0.0/0',
          +  'FromPort'   => 22,
          +  'IpProtocol' => 'tcp',
          +  'GroupName'  => 'default',
          +  'ToPort'     => 22
          +)
          +
          +server_data = compute.run_instances(
          +  'ami-1a837773',
             1,
          -  49
          -).body['server']
          +  1,
          +  'InstanceType'  => 'm1.small',
          +  'KeyName'       => 'id_rsa.pub',
          +  'SecurityGroup' => 'default'
          +).body['instancesSet'].first

          -until compute.get_server_details(
          -  server_data['id']
          -).body['server']['status'] == 'ACTIVE'
          +until compute.describe_instances(
          +  'instance-id' => server_data['instanceId']
          +).body['reservationSet'].first['instancesSet'].first['instanceState']['name'] == 'running'
           end

          +sleep(300)
          +
           Net::SSH.start(
          -  server_data['addresses'].first,
          -  'root',
          -  :password => server_data['password']
          +  server_data['ipAddress'],
          +  'ubuntu',
          +  :key_data => [File.read('~/.ssh/id_rsa')]
           ) do |ssh|
             commands = [
               %{mkdir .ssh},
    58. Next! (same diff as the previous slide)
    59. geopinging v1:
          # specify a different provider
          credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => AWS_ACCESS_KEY_ID,
            :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
          }

          server_attributes = {
            :image_id         => 'ami-1a837773',
            :private_key_path => PRIVATE_KEY_PATH,
            :public_key_path  => PUBLIC_KEY_PATH,
            :username         => 'ubuntu'
          }
    60. geopinging v2:
          # specify a different aws region
          # ['ap-southeast-1', 'eu-west-1', 'us-west-1']
          credentials.merge!({
            :region => 'eu-west-1'
          })
    61. geopinging v... portable - AWS, Bluebox, Brightbox, Rackspace, Slicehost, Terremark, ...
    62. geopinging v... portable - AWS, Bluebox, Brightbox, Rackspace, Slicehost, Terremark, ...; lather, rinse, repeat
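    As a sketch of the "lather, rinse, repeat" step, one way to fan the same ping job out across several credential sets (the ping_from_cloud helper is the hypothetical one sketched earlier, the specific regions are only examples, and this glosses over per-provider attributes like image ids and usernames):

          # one credentials hash per place we want to ping from (values are placeholders)
          credential_sets = [
            { :provider => 'AWS', :aws_access_key_id => AWS_ACCESS_KEY_ID,
              :aws_secret_access_key => AWS_SECRET_ACCESS_KEY, :region => 'eu-west-1' },
            { :provider => 'AWS', :aws_access_key_id => AWS_ACCESS_KEY_ID,
              :aws_secret_access_key => AWS_SECRET_ACCESS_KEY, :region => 'us-west-1' },
            { :provider => 'Rackspace', :rackspace_api_key => RACKSPACE_API_KEY,
              :rackspace_username => RACKSPACE_USERNAME }
          ]

          results = credential_sets.map do |credentials|
            compute = Fog::Compute.new(credentials)
            ping_from_cloud(compute, 'example.com')   # hypothetical helper from earlier
          end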
    63. How? That is awesome, but how did you...
    64. exploring:
          geymus ~ ⌘ fog
            To run as default, add the following to ~/.fog:
            default:
              :aws_access_key_id:     INTENTIONALLY_LEFT_BLANK
              :aws_secret_access_key: INTENTIONALLY_LEFT_BLANK
              :public_key_path:       INTENTIONALLY_LEFT_BLANK
              :private_key_path:      INTENTIONALLY_LEFT_BLANK
              :rackspace_api_key:     INTENTIONALLY_LEFT_BLANK
              :rackspace_username:    INTENTIONALLY_LEFT_BLANK
            ...
    65. sign posts
    66. sign posts:
          geymus ~ ⌘ fog
    67. sign posts (continued):
          Welcome to fog interactive!
          :default credentials provide AWS and Rackspace
    68. sign posts (continued):
          >> providers
    69. sign posts (continued):
          [AWS, Rackspace]
    70. sign posts (continued):
          >> Rackspace.collections
    71. sign posts (continued):
          [:directories, :files, :flavors, :images, :servers]
    72. sign posts (continued):
          >> Compute[:rackspace]
    73. sign posts (continued):
          #<Fog::Compute::Rackspace ...>
    74. sign posts (continued):
          >> Compute[:rackspace].requests
    75. sign posts (continued):
          [:confirm_resized_server, ..., :update_server]
    76. what are those?
    77. what are those? provider => [AWS, Rackspace, Zerigo, ...]
    78. what are those? provider => [AWS, Rackspace, Zerigo, ...]; service => [Compute, DNS, Storage, ...]
    79. what are those? provider => [AWS, Rackspace, Zerigo, ...]; service => [Compute, DNS, Storage, ...]; collection => [flavors, images, servers, ...]
    80. what are those? provider => [AWS, Rackspace, Zerigo, ...]; service => [Compute, DNS, Storage, ...]; collection => [flavors, images, servers, ...]; model => [flavor, image, server, ...]
    81. what are those? provider => [AWS, Rackspace, Zerigo, ...]; service => [Compute, DNS, Storage, ...]; collection => [flavors, images, servers, ...]; model => [flavor, image, server, ...]; request => [describe_instances, run_instances, ...]
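    Mapping that vocabulary onto code, a quick sketch (assuming :default credentials in ~/.fog, as in the interactive session above):

          connection = Fog::Compute[:rackspace]   # provider + service => a connection
          servers    = connection.servers         # collection
          server     = servers.get(1234567890)    # model (nil if that id does not exist)
          response   = connection.list_servers    # raw request, returns an Excon::Response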
    82. requests?
    83. requests?:
          >> Compute[:rackspace].list_servers
    84. requests? (continued):
          #<Excon::Response:0x________
    85. requests? (continued):
          @body = { "servers" => [] },
    86. requests? (continued):
          @headers = { "X-PURGE-KEY" => "/______/servers", ..., "Connection" => "keep-alive" },
    87. requests? (continued):
          @status=200>
    88. sanity check
    89. sanity check:
          >> Compute[:rackspace].servers.select {|s| s.ready?}
    90. sanity check (continued):
          <Fog::Compute::Rackspace::Servers filters={} [] >
    91. sanity check (continued):
          >> Compute[:aws].servers.select {|s| s.ready?}
    92. sanity check (continued):
          <Fog::Compute::AWS::Servers [] >
    93. sanity check (continued):
          >> exit
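    The same sanity check works outside the interactive shell; a sketch assuming the :default credentials from ~/.fog:

          require 'fog'

          [:aws, :rackspace].each do |provider|
            ready = Fog::Compute[provider].servers.select { |server| server.ready? }
            puts "#{provider}: #{ready.size} server(s) ready"
          end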
    94. finding images
    95. finding images:
          >> Compute[:rackspace].images.table([:id, :name])
    96. finding images (continued):
          +---------+------------------------------------------+
          | id      | name                                     |
          +---------+------------------------------------------+
          | 49      | Ubuntu 10.04 LTS (lucid)                 |
          +---------+------------------------------------------+
          ...
    97. finding images (continued):
          >> Compute[:aws].images # I use alestic.com listing
    98. finding images (continued):
          ...
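    If you would rather find an image programmatically than read a table, a small sketch (the name pattern is just an example):

          lucid = Fog::Compute[:rackspace].images.find { |image| image.name =~ /Ubuntu 10\.04/ }
          puts lucid.id if lucid   # => 49 at the time of the talk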
    99. exploring...
    100. exploring... It takes forever!
    101. exploring... It takes forever! It’s so expensive!
    102. exploring... It takes forever! It’s so expensive! A warm welcome for Fog.mock!
    103. Mocks!
          geymus ~ ⌘ FOG_MOCK=true fog
        or
          require 'fog'
          Fog.mock!
    104. simulation
    105. simulation: Most functions just work!
    106. simulation: Most functions just work! Unimplemented mocks? Errors keep you on track.
    107. simulation: Most functions just work! Unimplemented mocks? Errors keep you on track. Tests run against both, so it is either consistent or a bug.
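    A sketch of what that feels like in a test (the credential values and the ami id are placeholders; in mock mode nothing is ever sent to a real cloud):

          require 'fog'

          Fog.mock!                             # simulate the cloud: no servers booted, no bill

          compute = Fog::Compute.new(
            :provider              => 'AWS',
            :aws_access_key_id     => 'fake',
            :aws_secret_access_key => 'also fake'
          )

          server = compute.servers.create(:image_id => 'ami-00000000')  # placeholder ami
          server.wait_for { ready? }            # mocked servers report ready almost immediately
          puts compute.servers.size             # => 1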
    108. Elsewhere/Previously
    109. Elsewhere/Previously storage - aggregating cloud data
    110. Elsewhere/Previously storage - aggregating cloud data; dns - make your cloud (premium) accessible
    111. Elsewhere/Previously storage - aggregating cloud data; dns - make your cloud (premium) accessible; congrats - deploy, retire, etc
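    Those services follow the same pattern as Compute; a minimal sketch of each, assuming AWS credentials as in the earlier examples (the bucket and domain names are placeholders):

          storage   = Fog::Storage.new(credentials)
          directory = storage.directories.create(:key => 'fog-demo-bucket')   # placeholder name
          directory.files.create(:key => 'hello.txt', :body => 'hello, cloud')

          dns  = Fog::DNS.new(credentials)
          zone = dns.zones.create(:domain => 'example.com')                   # placeholder domain
          zone.records.create(:name => 'www.example.com', :type => 'A', :value => '1.2.3.4')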
    112. Love!
    113. Love! knowledge - suffering encoded in ruby
    114. Love! knowledge - expertise encoded in ruby
    115. Love! knowledge - expertise encoded in ruby; empowering - show the cloud who is boss
    116. Love! knowledge - expertise encoded in ruby; empowering - show the cloud who is boss; exciting - this is some cutting edge stuff!
    117. OpenStack
    118. OpenStack relevance? - we share, so nothing to worry about, right?
    119. OpenStack relevance? - we share, so nothing to worry about, right?; overruled! - I am terrified
    120. Consumers?
    121. Consumers? absenteeism - unpaid, high barrier, nebulous payoff
    122. Consumers? absenteeism - unpaid, high barrier, nebulous payoff; sandbox - greatly reduces barrier to participation
    123. Identity
    124. Identity roll your own! - initially auth was left to providers
    125. Identity roll your own! - initially auth was left to providers; keystone - thankfully it seems we have standardized
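    From the consumer side, a Keystone-backed OpenStack endpoint plugs into the same pattern; a sketch, assuming fog's OpenStack provider and placeholder endpoint/credentials:

          compute = Fog::Compute.new(
            :provider           => 'OpenStack',
            :openstack_auth_url => 'http://keystone.example.com:5000/v2.0/tokens',  # placeholder endpoint
            :openstack_username => OPENSTACK_USERNAME,
            :openstack_api_key  => OPENSTACK_PASSWORD
          )

          compute.servers.each { |server| puts server.name }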
    126. “Compatibility”
    127. “Compatibility” AWS EC2 - moving target, not a standard, kill it please
    128. “Compatibility” AWS EC2 - moving target, not a standard, kill it please; Rackspace Servers - better, sort of
    129. “Compatibility” AWS EC2 - moving target, not a standard, kill it please; Rackspace Servers - better, sort of; diff? - slightly different is harder than vastly different
    130. Coherence
    131. Coherence Code Talks - defining API is great, but code beats specs
    132. Coherence Code Talks - defining API is great, but code beats specs; Style Guide - specs miss stuff, so focus on general rules
    133. Coherence Code Talks - defining API is great, but code beats specs; Style Guide - specs miss stuff, so focus on general rules; diff? - slightly different is harder than vastly different
    134. Documentation
    135. Documentation Timely - I need more and better docs, yesterday
    136. Documentation Timely - I need more and better docs, yesterday; Maintained - I need docs to reflect current reality
    137. Documentation Timely - I need more and better docs, yesterday; Maintained - I need docs to reflect current reality; Versioned - I need docs that reflect particular past realities
    138. Standard?
    139. Standard? headers - apache vs nginx vs etc
    140. Standard? headers - apache vs nginx vs etc; performance - wide range of meaning for eventually
    141. Extensions
    142. Extensions differentiation - desirable to producers, not consumers
    143. Extensions differentiation - desirable to producers, not consumers; open source - there is already a clear path to extending
    144. Versioning
    145. Versioning legacy - for other providers, clients need only support latest
    146. Versioning legacy - for other providers, clients need only support latest; future - clients need to support infinitely many versions/extensions
    147. Homework
    148. Homework clients - engage with consumers for feedback
    149. Homework clients - engage with consumers for feedback; dogfood - use it yourselves to find the gaps
    150. Thanks! @geemus - questions, comments, suggestions
    151. Thanks! Questions? (see also: README)
          tutorial - http://github.com/geemus/learn_fog
          examples - http://gist.github.com/729992
          slides   - http://slidesha.re/hR8sP9
          repo     - http://github.com/geemus/fog
          bugs     - http://github.com/geemus/fog/issues
          @geemus  - questions, comments, suggestions
