This document provides an overview of Cloud Foundry, an open source Platform as a Service (PaaS). It introduces key Cloud Foundry concepts (applications, instances, services, and the vmc command-line tool), then walks through STAC2, a sample polyglot application deployed to Cloud Foundry and built with Node.js, Ruby, Redis, and other technologies. Specific aspects of the STAC2 architecture are explained, such as its producer/consumer pattern built on Redis and its horizontally scalable Node.js servers.
4. cloud foundry: open paas
• active open source project, liberal license
• infrastructure neutral core, runs on any IaaS/Infra
• extensible runtime/framework, services architecture
• node, ruby, java, scala, erlang, etc.
• postgres, neo4j, mongodb, redis, mysql, rabbitmq
• clouds: from raw infrastructure to fully managed (AppFog)
• VMware’s delivery forms
• raw bits and deployment tools on GitHub
• Micro Cloud Foundry
• cloudfoundry.com
4 developer perspective v2.0
5. key abstractions
• applications
• instances
• services
• vmc – cli (based almost 1:1 on control api)
7. hello world of the cloud
$ cat hw.rb
require 'rubygems'
require 'sinatra'

$hits = 0
get '/' do
  $hits = $hits + 1
  "Hello World - #{$hits}"
end

$ vmc push hw
8. cc hw.c
$ cc hw.c
$ vmc push hw
9. hello world of the cloud: scale it up
$ vmc instances hw 10

get '/' do
  $hits = $hits + 1
  "Hello World - #{$hits}"
end

# above code is broken for > 1 instance
# move hit counter to redis, hi-perf K/V store
$ vmc create-service redis --bind hw

get '/' do
  $hits = $redis.incr('hits')
  "Hello World - #{$hits}"
end
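The snippet above assumes `$redis` is already connected. On classic Cloud Foundry, the connection details for a bound service arrive in the `VCAP_SERVICES` environment variable as JSON; a minimal sketch of locating the redis credentials (the helper name `redis_credentials` is ours, not part of the deck):

```ruby
require 'json'

# Find the credentials of the first bound redis service in
# VCAP_SERVICES. The layout ("redis-x.y" => [{"credentials" => {...}}])
# follows the classic Cloud Foundry v1 format.
def redis_credentials(env = ENV)
  services = JSON.parse(env['VCAP_SERVICES'] || '{}')
  services.each do |label, instances|
    next unless label.start_with?('redis')
    return instances.first['credentials']
  end
  nil
end

# $redis could then be built along these lines:
#   creds  = redis_credentials
#   $redis = Redis.new(:host     => creds['hostname'],
#                      :port     => creds['port'],
#                      :password => creds['password'])
```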
16. design tidbits
• producer/consumer pattern using rpush/blpop
• node.JS: multi-server and high performance async i/o
• caldecott – aka vmc tunnel for debugging
• redis sorted sets for stats collection
• redis expiring keys for rate calculation
17. producer/consumer
• core design pattern
• found at the heart of many complex apps
classic mode:
- thread pools
- semaphore/mutex, completion ports, etc.
- scalability limited to visibility of the work queue
[diagram: producer → work → work queue → work → consumer]
cloud foundry mode:
- instance pools
- redis rpush/blpop, rabbit queues, etc.
- full horizontal scalability, cloud scale
18. producer/consumer: code
// producer
function commit_item(queue, item) {
  // push the work item onto the proper queue
  redis.rpush(queue, item, function(err, data) {
    // optionally trim the queue, throwing away
    // data as needed to ensure the queue does
    // not grow unbounded
    if (!err && data > queueTrim) {
      redis.ltrim(queue, 0, queueTrim - 1);
    }
  });
}

// consumer
function worker() {
  // blocking wait for work items
  blpop_redis.blpop(queue, 0, function(err, data) {
    // data[0] == queue, data[1] == item
    if (!err) {
      doWork(data[1]);
    }
    process.nextTick(worker);
  });
}
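A subtle point in commit_item: RPUSH appends at the tail, while LTRIM 0..(queueTrim-1) keeps only the head of the list, so once the queue is full it is the newly pushed items that get dropped. A plain-Ruby stand-in for the two Redis list calls (no server needed; FakeRedisList is ours) shows the effect:

```ruby
# Stand-in for the two Redis list operations used in commit_item:
# RPUSH appends at the tail and returns the new length;
# LTRIM start..stop keeps only that slice of the list.
class FakeRedisList
  attr_reader :items
  def initialize; @items = []; end

  def rpush(item)
    @items << item
    @items.length
  end

  def ltrim(start, stop)
    @items = @items[start..stop] || []
  end
end

QUEUE_TRIM = 3

def commit_item(queue, item)
  len = queue.rpush(item)
  # once the list exceeds QUEUE_TRIM, keep only the head --
  # i.e., the most recently pushed work is the part discarded
  queue.ltrim(0, QUEUE_TRIM - 1) if len > QUEUE_TRIM
end
```

Pushing a, b, c, d, e through commit_item leaves the list holding a, b, c: the oldest work survives and the overflow is dumped.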
19. node.JS multi-server: http API server
// the api server handles two key load generation apis
// /http - for http load, /vmc for Cloud Foundry API load
var routes = {"/http": httpCmd, "/vmc": vmcCmd};

// http api server booted by app.js, passing redis client
// and Cloud Foundry instance
function boot(redis_client, cfinstance) {
  var redis = redis_client;

  function onRequest(request, response) {
    var u = url.parse(request.url);
    var path = u.pathname;
    if (routes[path] && typeof routes[path] == 'function') {
      routes[path](request, response);
    } else {
      response.writeHead(404, {'Content-Type': 'text/plain'});
      response.write('404 Not Found');
      response.end();
    }
  }

  server = http.createServer(onRequest).listen(cfinstance['port']);
}
20. node.JS multi-server: blpop server
var blpop_redis = null;
var status_redis = null;
var cfinstance = null;

// blpop server handles work requests for http traffic
// that are placed on the queue by the http API server
// another blpop server sits in the ruby/sinatra VMC server
function boot(r1, r2, cfi) {
  // multiple redis clients due to concurrency constraints
  blpop_redis = r1;
  status_redis = r2;
  cfinstance = cfi;
  worker();
}

// this is the blpop server loop
function worker() {
  blpop_redis.blpop(queue, 0, function(err, data) {
    if (!err) {
      doWork(data[1]);
    }
    process.nextTick(worker);
  });
}
21. caldecott: aka vmc tunnel
# create a caldecott tunnel to the redis server
$ vmc tunnel nab-redis redis-cli
Binding Service [nab-redis]: OK
…
Launching 'redis-cli -h localhost -p 10000 -a ...'

# enumerate the keys used by stac2
redis> keys vmc::staging::*
1) "vmc::staging::actions::time_50"
2) "vmc::staging::active_workers"
…

# enumerate actions that took less than 50ms
redis> zrange vmc::staging::actions::time_50 0 -1 withscores
1) "delete_app"
2) "1"
3) "login"
4) "58676"
5) "info"
6) "80390"

# see how many work items we dumped due to concurrency constraint
redis> get vmc::staging::wastegate
"7829"
22. redis sorted sets for stats collection
# log action into a sorted set, net result is set contains
# actions and the number of times the action was executed
# count total action count, and also per elapsed time bucket
def logAction(action, elapsedTimeBucket)
  # actionKey is the set for all counts
  # etKey is the set for a particular time bucket e.g., _1s, _50ms
  actionKey = "vmc::#{@cloud}::actions::action_set"
  etKey = "vmc::#{@cloud}::actions::times#{elapsedTimeBucket}"
  @redis.zincrby actionKey, 1, action
  @redis.zincrby etKey, 1, action
end

# enumerate actions and their associated count
redis> zrange vmc::staging::actions::action_set 0 -1 withscores
1) "login"
2) "212092"
3) "info"
4) "212093"

# enumerate actions that took between 400ms and 1s
redis> zrange vmc::staging::actions::time_400_1s 0 -1 withscores
1) "create-app"
2) "14"
3) "bind-service"
4) "75"
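logAction takes the elapsed-time bucket as a parameter; the caller has to map a measured latency onto a bucket suffix first. A sketch of that mapping, with thresholds and labels that are purely illustrative, modeled on the key suffixes visible in the redis-cli transcript (time_50, time_400_1s); STAC2's actual boundaries may differ:

```ruby
# Map an elapsed time in seconds to a bucket suffix for the
# per-bucket sorted set. The labels below are assumptions
# modeled on the keys shown in the transcript, not STAC2's code.
def elapsed_time_bucket(elapsed_s)
  if    elapsed_s < 0.050 then '_50'       # under 50ms
  elsif elapsed_s < 0.400 then '_50_400'   # 50ms to 400ms
  elsif elapsed_s < 1.0   then '_400_1s'   # 400ms to 1s
  else                         '_1s'       # 1s and up
  end
end
```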
23. redis incrby and expire for rate calcs
# to calculate rates (e.g., 4,000 requests per second)
# we use plain old redis.incrby. the trick is that the
# key contains the current 1sec timestamp as its suffix value
# all activity that happens within this 1s period accumulates
# in that key. by setting an expire on the key, the key is
# automatically deleted 10s after the last write
def logActionRate(cloud)
  tv = Time.now.tv_sec
  one_s_key = "vmc::#{cloud}::rate_1s::#{tv}"
  # increment the bucket and set expire; key
  # will eventually expire Ns after the last write
  @redis.incrby one_s_key, 1
  @redis.expire one_s_key, 10
end

# return current rate by looking at the bucket for the previous
# one second period. by looking further back and averaging, we
# can smooth the rate calc
def actionRate(cloud)
  tv = Time.now.tv_sec - 1
  one_s_key = "vmc::#{cloud}::rate_1s::#{tv}"
  @redis.get one_s_key
end
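The closing comment suggests looking further back and averaging to smooth the rate. A sketch of that idea, fetching the previous n one-second buckets with MGET (a real Redis MGET returns nil for expired keys, which counts as zero here); smoothed_action_rate and the stub client are our names, not STAC2's:

```ruby
# Average the previous `n` one-second buckets to smooth the rate.
# Buckets that have already expired come back nil and count as 0.
def smoothed_action_rate(redis, cloud, n = 5, now = Time.now.tv_sec)
  keys = (1..n).map { |i| "vmc::#{cloud}::rate_1s::#{now - i}" }
  counts = redis.mget(*keys).map { |v| v.to_i }  # nil.to_i == 0
  counts.reduce(:+) / n.to_f
end

# Minimal stub exposing just the MGET behavior relied on above;
# a real redis client would be passed in instead.
class StubRedis
  def initialize(data); @data = data; end
  def mget(*keys); keys.map { |k| @data[k] }; end
end
```

With buckets of 4000 and 3000 hits in the last two seconds and nothing older, a 5-second window reports a smoothed rate of 1400/s instead of a spiky instantaneous read.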