Cloud Foundry Open Tour China (english)

  1. developer's perspective – mark lucovsky, vp of engineering, cloud foundry
  2. agenda
     • cloud foundry – PaaS
     • sample app:
       • polyglot in action: node, redis, json, ruby, html5, jQuery
       • multi-tier
       • horizontally scalable
       • vmc manifest
       • etc.
  3. cloud foundry
  4. cloud foundry: open paas
     • active open source project, liberal license
     • infrastructure-neutral core, runs on any IaaS/infra
     • extensible runtime/framework and services architecture
       • node, ruby, java, scala, erlang, etc.
       • postgres, neo4j, mongodb, redis, mysql, rabbitmq
     • clouds: from raw infrastructure to fully managed (AppFog)
     • VMware's delivery forms
       • raw bits and deployment tools on GitHub
       • Micro Cloud Foundry
       • cloudfoundry.com
  5. key abstractions
     • applications
     • instances
     • services
     • vmc – cli (based almost 1:1 on the control api)
  6. hello world: classic

     $ cat hw.c
     #include <stdio.h>
     main() {
         printf("Hello World\n");
     }
     $ cc hw.c; ./a.out
  7. hello world of the cloud

     $ cat hw.rb
     require 'rubygems'
     require 'sinatra'

     $hits = 0
     get '/' do
       $hits = $hits + 1
       "Hello World - #{$hits}"
     end

     $ vmc push hw
  8. cc hw.c vs. vmc push hw
  9. hello world of the cloud: scale it up

     $ vmc instances hw 10

     get '/' do
       $hits = $hits + 1
       "Hello World - #{$hits}"
     end

     # above code is broken for > 1 instance
     # move hit counter to redis, a hi-perf K/V store
     $ vmc create-service redis --bind hw

     get '/' do
       $hits = $redis.incr('hits')
       "Hello World - #{$hits}"
     end
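     The slide shows $redis.incr('hits') but not how $redis gets wired up. A minimal
     sketch of that wiring, assuming the bound redis credentials arrive via the classic
     Cloud Foundry VCAP_SERVICES environment variable (the name matching and fallback
     defaults here are illustrative, not from the deck):

     # hw.rb - hit counter backed by the bound redis service (sketch)
     require 'rubygems'
     require 'sinatra'
     require 'redis'
     require 'json'

     configure do
       # VCAP_SERVICES is a JSON map of bound services; pick out the redis entry.
       # entry names (e.g. "redis-2.2") vary by cloud, so match on "redis"
       svcs = JSON.parse(ENV['VCAP_SERVICES'] || '{}')
       redis_entries = svcs.map { |name, list| name =~ /redis/i ? list : nil }.compact.flatten
       creds = redis_entries.empty? ? {} : redis_entries.first['credentials']
       $redis = Redis.new(:host     => creds['hostname'] || '127.0.0.1',
                          :port     => creds['port']     || 6379,
                          :password => creds['password'])
     end

     get '/' do
       "Hello World - #{$redis.incr('hits')}"
     end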
  10. vmc command line tooling

      Create app, update app, control app
        vmc push [appname] [--path] [--url] [--instances N] [--mem] [--no-start]
        vmc update <appname> [--path PATH]
        vmc stop <appname>
        vmc start <appname>
        vmc target [url]

      Update app settings, get app information
        vmc mem <appname> [memsize]
        vmc map <appname> <url>
        vmc instances <appname> <num | delta>
        vmc {crashes, crashlogs, logs} <appname>
        vmc files <appname> [path]

      Deal with services, users, and information
        vmc create-service <service> [--name servicename] [--bind appname]
        vmc bind-service <servicename> <appname>
        vmc unbind-service <servicename> <appname>
        vmc delete-service <servicename>

        vmc user, vmc passwd, vmc login, vmc logout, vmc add-user
        vmc services, vmc apps, vmc info
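      Tying the reference together, a typical session with these commands might look
      like this (the app and service names are illustrative; the target shown is the
      public cloudfoundry.com endpoint):

      $ vmc target api.cloudfoundry.com
      $ vmc login
      $ vmc push hw --instances 2 --mem 128M
      $ vmc create-service redis --name hw-redis --bind hw
      $ vmc instances hw 10
      $ vmc logs hw
      $ vmc apps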
  11. sample app
  12. (image-only slide)
  13. stac2: load generation system (architecture diagram)
      • frontend – 2 x 128mb, ruby 1.8.7, sinatra, haml templates, jQuery/jQuery UI, json-p, 100% JS-based UI
      • api server – 16 x 128mb*, node.JS 0.6.8
      • http worker – 16 x 128mb*, node.JS 0.6.8
      • vmc worker – 96 x 128mb, ruby 1.8.7, sinatra
      • frontend talks to the api server over http/json; work is dispatched to the workers via redis rpush/blpop; reports are emailed over smtp
      • * api server and http worker share the same node.JS process/instance
  14. deployment instructions

      $ cd ~/stac2
      $ vmc push
  15. how is this possible?

      $ cd ~/stac2; cat manifest.yml
      applications:
        ./nabh:
          instances: 16
          mem: 128M
          runtime: node06
          url: ${name}.${target-base}
          services:
            nab-redis:
              type: :redis
        ./nabv:
          instances: 96
          mem: 128M
          runtime: ruby18
          url: ${name}.${target-base}
          services:
            nab-redis:
              type: :redis
        ./stac2:
          instances: 2
          mem: 128M
          runtime: ruby18
          url: ${name}.${target-base}
  16. design tidbits
      • producer/consumer pattern using rpush/blpop
      • node.JS: multi-server and high-performance async i/o
      • caldecott – aka vmc tunnel for debugging
      • redis sorted sets for stats collection
      • redis expiring keys for rate calculation
  17. producer/consumer
      • core design pattern
      • found at the heart of many complex apps

      classic mode:
      - thread pools
      - semaphore/mutex, completion ports, etc.
      - scalability limited to visibility of the work queue

      producer → work queue → consumer

      cloud foundry mode:
      - instance pools
      - redis rpush/blpop, rabbit queues, etc.
      - full horizontal scalability, cloud scale
  18. producer/consumer: code

      // producer
      function commit_item(queue, item) {
        // push the work item onto the proper queue
        redis.rpush(queue, item, function(err, data) {
          // optionally trim the queue, throwing away
          // data as needed to ensure the queue does
          // not grow unbounded
          if (!err && data > queueTrim) {
            redis.ltrim(queue, 0, queueTrim - 1);
          }
        });
      }

      // consumer
      function worker() {
        // blocking wait for work items
        blpop_redis.blpop(queue, 0, function(err, data) {
          // data[0] == queue, data[1] == item
          if (!err) {
            doWork(data[1]);
          }
          process.nextTick(worker);
        });
      }
  19. node.JS multi-server: http API server

      // the api server handles two key load generation apis
      // /http – for http load, /vmc for Cloud Foundry API load
      var routes = {"/http": httpCmd, "/vmc": vmcCmd};

      // http api server booted by app.js, passing redis client
      // and Cloud Foundry instance
      function boot(redis_client, cfinstance) {
        var redis = redis_client;

        function onRequest(request, response) {
          var u = url.parse(request.url);
          var path = u.pathname;
          if (routes[path] && typeof routes[path] == 'function') {
            routes[path](request, response);
          } else {
            response.writeHead(404, {'Content-Type': 'text/plain'});
            response.write('404 Not Found');
            response.end();
          }
        }
        server = http.createServer(onRequest).listen(cfinstance['port']);
      }
  20. node.JS multi-server: blpop server

      var blpop_redis = null;
      var status_redis = null;
      var cfinstance = null;

      // blpop server handles work requests for http traffic
      // that are placed on the queue by the http API server
      // another blpop server sits in the ruby/sinatra VMC server
      function boot(r1, r2, cfi) {
        // multiple redis clients due to concurrency constraints
        blpop_redis = r1;
        status_redis = r2;
        cfinstance = cfi;
        worker();
      }

      // this is the blpop server loop
      function worker() {
        blpop_redis.blpop(queue, 0, function(err, data) {
          if (!err) {
            doWork(data[1]);
          }
          process.nextTick(worker);
        });
      }
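      The comment above notes that another blpop consumer lives in the ruby/sinatra
      vmc worker. A minimal sketch of what that loop could look like on the ruby side
      (the queue name, connection details, and do_work are illustrative placeholders,
      not the actual stac2 code):

      # ruby-side blpop consumer (sketch) – mirrors the node.JS worker loop above
      require 'rubygems'
      require 'redis'
      require 'json'

      QUEUE = 'vmc::staging::workqueue'   # illustrative queue name
      redis = Redis.new(:host => '127.0.0.1', :port => 6379)

      # placeholder for the real work: drive the Cloud Foundry API per work item
      def do_work(item)
        cmd = JSON.parse(item)   # assumes work items are serialized as JSON
        # ... run the requested vmc action described by cmd ...
      end

      loop do
        # blpop blocks until an item is available; timeout 0 means wait forever
        # and it returns [queue_name, item]
        queue, item = redis.blpop(QUEUE, 0)
        do_work(item) if item
      end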
  21. caldecott: aka vmc tunnel

      # create a caldecott tunnel to the redis server
      $ vmc tunnel nab-redis redis-cli
      Binding Service [nab-redis]: OK
      …
      Launching 'redis-cli -h localhost -p 10000 -a ...'

      # enumerate the keys used by stac2
      redis> keys vmc::staging::*
      1) "vmc::staging::actions::time_50"
      2) "vmc::staging::active_workers"
      …

      # enumerate actions that took less than 50ms
      redis> zrange vmc::staging::actions::time_50 0 -1 withscores
      1) "delete_app"
      2) "1"
      3) "login"
      4) "58676"
      5) "info"
      6) "80390"

      # see how many work items we dumped due to the concurrency constraint
      redis> get vmc::staging::wastegate
      "7829"
  22. redis sorted sets for stats collection

      # log action into a sorted set; net result is the set contains
      # actions and the number of times each action was executed
      # count total action count, and also per elapsed-time bucket
      def logAction(action, elapsedTimeBucket)
        # actionKey is the set for all counts
        # etKey is the set for a particular time bucket, e.g. _1s, _50ms
        actionKey = "vmc::#{@cloud}::actions::action_set"
        etKey = "vmc::#{@cloud}::actions::times#{elapsedTimeBucket}"
        @redis.zincrby actionKey, 1, action
        @redis.zincrby etKey, 1, action
      end

      # enumerate actions and their associated count
      redis> zrange vmc::staging::actions::action_set 0 -1 withscores
      1) "login"
      2) "212092"
      3) "info"
      4) "212093"

      # enumerate actions that took between 400ms and 1s
      redis> zrange vmc::staging::actions::time_400_1s 0 -1 withscores
      1) "create-app"
      2) "14"
      3) "bind-service"
      4) "75"
  23. redis incrby and expire for rate calcs

      # to calculate rates (e.g., 4,000 requests per second)
      # we use plain old redis.incrby. the trick is that the
      # key contains the current 1s timestamp as its suffix.
      # all activity that happens within this 1s period accumulates
      # in that key. by setting an expire on the key, the key is
      # automatically deleted 10s after the last write
      def logActionRate(cloud)
        tv = Time.now.tv_sec
        one_s_key = "vmc::#{cloud}::rate_1s::#{tv}"

        # increment the bucket and set expire; the key
        # will eventually expire Ns after the last write
        @redis.incrby one_s_key, 1
        @redis.expire one_s_key, 10
      end

      # return the current rate by looking at the bucket for the previous
      # one-second period. by looking further back and averaging, we
      # can smooth the rate calc
      def actionRate(cloud)
        tv = Time.now.tv_sec - 1
        one_s_key = "vmc::#{cloud}::rate_1s::#{tv}"
        @redis.get one_s_key
      end
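      The last comment mentions smoothing the rate by looking further back and
      averaging. A minimal sketch of that variant, sitting next to actionRate above
      (the window length and helper name are illustrative, not from the deck):

      # average the per-second buckets over the last N seconds (sketch)
      def smoothedActionRate(cloud, window = 5)
        now  = Time.now.tv_sec
        keys = (1..window).map { |i| "vmc::#{cloud}::rate_1s::#{now - i}" }
        # mget returns nil for buckets that never got a hit or have already
        # expired; nil.to_i is 0, so those count as zero activity
        counts = @redis.mget(*keys).map { |v| v.to_i }
        counts.inject(0) { |sum, c| sum + c } / window.to_f
      end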
  24. (image-only slide)
  25. www.cloudfoundry.com/jobs
