Concurrent Ruby Application Servers

  1. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 Concurrent Ruby Application Servers
  2. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 http://godfat.org/slide/2012-12-07-concurrent.pdf
  3. Concurrency? What We Have? App Servers? Me? Q?
  4. Concurrency? What We Have? App Servers? Me? Q?
  5. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 • Programmer at Cardinal Blue
  6. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 • Programmer at Cardinal Blue • Using Ruby since 2006
  7. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 • Programmer at Cardinal Blue • Using Ruby since 2006 • Interested in PL and FP (e.g. Haskell)
  8. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 • Programmer at Cardinal Blue • Using Ruby since 2006 • Interested in PL and FP (e.g. Haskell) Also concurrency recently
  9. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 • Programmer at Cardinal Blue • Using Ruby since 2006 • Interested in PL and FP (e.g. Haskell) • https://github.com/godfat Also concurrency recently
  10. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 • Programmer at Cardinal Blue • Using Ruby since 2006 • Interested in PL and FP (e.g. Haskell) • https://github.com/godfat • https://twitter.com/godfat Also concurrency recently
  11. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 PicCollage
  12. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 PicCollage in 7 days • ~3.1k requests per minute • Average response time: ~135 ms • Average concurrent requests per process: ~7 • Total processes: 18 • Above data observed from NewRelic • App server: Zbatery with EventMachine and thread pool on Heroku Cedar stack
  13. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 Recent Gems • jellyfish - Pico web framework for building API-centric web applications • rib - Ruby-Interactive-ruBy -- Yet another interactive Ruby shell • rest-core - Modular Ruby clients interface for REST APIs • rest-more - Various REST clients such as Facebook and Twitter built with rest-core
  14. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012 Special Thanks: ihower, ET Blue and Cardinal Blue
  15. Concurrency? What We Have? App Servers? Me? Q?
  16. Caveats: No disk I/O, no pipelined requests, no chunked encoding, no web sockets, no … (to make things simpler for now)
  17. Caution: It's not faster for a user
  18. 10 moms can't produce 1 child in 1 month
  19. 10 moms can produce 10 children in 10 months
  20. Resource matters
  21. We might not always have enough processors. Resource matters
  22. We might not always have enough processors. Resource matters. We need multitasking
  23. Imagine 15 children have to be context switched amongst 10 moms
  24. Multitasking is not free
  25. If your program context switches, then it actually runs slower. Multitasking is not free
  26. If your program context switches, then it actually runs slower. Multitasking is not free. But it's more fair for users
  27. It's more fair if they are all served equally in terms of waiting time
  28. I just want a drink!
  29. To illustrate the overall running time... [chart: Sequential; per-user timeline of Waiting, Processing and Context Switching]
  30. [charts: Sequential vs. Concurrent; per-user timelines of Waiting, Processing and Context Switching]
  31. [charts: Sequential vs. Concurrent, with the total waiting and processing time marked]
  32. Scalable != Fast. Scalable == Less complaining
  33. Rails is not fast. Rails might be scalable. Scalable != Fast. Scalable == Less complaining
  34. So, when do we want concurrency?
  35. So, when do we want concurrency? When context switching cost is much cheaper than a single task
  36. So, when do we want concurrency? When context switching cost is much cheaper than a single task. Or if you have many more cores than clients (= no need for context switching)
  37. If context switching cost is at about 1 second
  38. If context switching cost is at about 1 second, it's much cheaper than 10 months, so it might be worth it
  39. [charts: per-user timelines, sequential vs. concurrent] But if context switching cost is at about 5 months...
  40. Do kernel threads context switch fast?
  41. Do kernel threads context switch fast? Do user threads context switch fast?
  42. Do kernel threads context switch fast? Do user threads context switch fast? Do fibers context switch fast?
  43. A ton of different concurrency models were then invented
  44. A ton of different concurrency models were then invented. Each of them has different strengths to deal with different tasks
  45. A ton of different concurrency models were then invented. Each of them has different strengths to deal with different tasks. Each also comes with trade-offs between ease of programming and efficiency
  46. So, how many different types of tasks are there?
  47. Two patterns: Linear data dependency of composite tasks. [diagram: go to restaurant; order food, order drink; eat, drink] order_food -> eat; order_tea -> drink
  48. Two patterns of composite tasks: Linear data dependency and Mixed data dependency. [diagrams: go to restaurant; order food, order drink; eat, drink / go to restaurant; order food, order spices; mix; eat] order_food -> eat; order_tea -> drink. [order_food, order_spices] -> add_spices -> eat
  49. Linear data dependency: prepare to share a photo; to facebook, to flickr; save, save. Mixed data dependency: prepare to merge photos; from facebook, from flickr; merge; save.
  50. One single task could be CPU or I/O bound. Linear data dependency: prepare to share a photo; to facebook, to flickr; save, save. [each step labeled CPU or I/O]
  51. One single task could be CPU or I/O bound. Linear data dependency: prepare to share a photo; to facebook, to flickr; save, save. Mixed data dependency: prepare to merge photos; from facebook, from flickr; merge; save. [each step labeled CPU or I/O]
  52. Concurrency? What We Have? App Servers? Me? Q?
  53. Separation of Concerns: • Performance
  54. Separation of Concerns: • Performance • Ease of programming
  55. Separation of Concerns: • Performance (implementation) • Ease of programming (interface)
  56. Advantages if interface is orthogonal to its implementation: • Change implementation with ease
  57. Advantages if interface is orthogonal to its implementation: • Change implementation with ease • Don't need to know impl detail in order to use this interface
  58. interface / implementation
  59. interface / implementation [diagram: interfaces (blocking, callback) mapped to implementations (threads for CPU, reactor for I/O, and fibers)]
  60. Interface for Linear data dependency
  61. Callback [interface/implementation diagram]
      order_food{ |food|
        eat(food)
      }
      order_tea{ |tea|
        drink(tea)
      }
  62. If we could control side-effects and do some static analysis... [interface/implementation diagram]
      eat(order_food)
      drink(order_tea)
  63. If we could control side-effects and do some static analysis... Not going to happen in Ruby, though. [interface/implementation diagram]
      eat(order_food)
      drink(order_tea)
  64. Interface for Mixed data dependency
  65. If callback is the only thing we have...
  66. We don't want to do this: [interface/implementation diagram]
      # order_food is blocking order_spices
      order_food{ |food|
        order_spices{ |spices|
          eat(add_spices(spices, food))
        }
      }
  67. Tedious, but performs better: [interface/implementation diagram]
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        start_eating(food, spices) if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        start_eating(food, spices) if food && spices
      }
      ##
      def start_eating food, spices
        superfood = add_spices(spices, food)
        eat(superfood)
      end
  68. Ideally, we could do this with futures: [interface/implementation diagram]
      food = order_food
      spices = order_spices
      superfood = add_spices(spices, food)
      eat(superfood)
      # or one liner
      eat(add_spices(order_spices, order_food))
  69. but how?
  70. Implementation
  71. Implementation: Forget about data dependency for now
  72. Implementation: 2 main choices [interface/implementation diagram]
  73. If order_food is I/O bound, with a thread: [interface/implementation diagram]
      def order_food
        Thread.new{
          food = order_food_blocking
          yield(food)
        }
      end
  74. If order_food is I/O bound, with a reactor: [interface/implementation diagram]
      def order_food
        make_request('order_food'){ |food|
          yield(food)
        }
      end
  75. If order_food is I/O bound, with a very simple reactor (https://github.com/godfat/ruby-server-exp/blob/master/sample/reactor.rb): [interface/implementation diagram]
      def order_food
        buf = []
        reactor = Thread.current[:reactor]
        sock = TCPSocket.new('example.com', 80)
        request = "GET / HTTP/1.0\r\n\r\n"
        reactor.write sock, request do
          reactor.read sock do |response|
            if response
              buf << response
            else
              yield(buf.join)
            end
          end
        end
      end
  76. If order_food is CPU bound, with a thread: [interface/implementation diagram]
      def order_food
        Thread.new{
          food = order_food_blocking
          yield(food)
        }
      end
  77. If order_food is I/O bound, with a thread: [interface/implementation diagram]
      def order_food
        Thread.new{
          food = order_food_blocking
          yield(food)
        }
      end
  78. You don't have to care whether it's CPU bound or I/O bound with a thread
  79. And you won't want to process a CPU bound task inside a reactor, blocking other clients.
  80. Summary: • Threads for CPU bound tasks • Reactor for I/O bound tasks [interface/implementation diagram]
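     As a concrete sketch of that split (not from the original slides): with EventMachine, CPU bound work can be pushed onto its built-in thread pool via EM.defer, so the reactor thread stays free for I/O. expensive_image_resize is a hypothetical CPU bound method.
        require 'eventmachine'

        EM.run do
          # CPU bound work runs on EM's built-in thread pool, so the reactor
          # thread keeps multiplexing I/O for other clients.
          operation = proc { expensive_image_resize }            # hypothetical CPU bound call
          callback  = proc { |result| puts "resized: #{result}"; EM.stop }
          EM.defer(operation, callback)
        end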
  81. Back to mixed data dependency [interface/implementation diagram]
  82. If we could have some other interface than callbacks
  83. Threads: we can do it with threads easily [interface/implementation diagram]
      food, spices = nil
      t0 = Thread.new{ food = order_food }
      t1 = Thread.new{ spices = order_spices }
      t0.join
      t1.join
      superfood = add_spices(spices, food)
      eat(superfood)
  84. What if we still want callbacks, since then we can pick either threads or reactors as the implementation detail? [interface/implementation diagram]
  85. Instead of writing this, we could use threads or fibers to remove the need for defining another callback (i.e. start_eating): [interface/implementation diagram]
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        start_eating(food, spices) if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        start_eating(food, spices) if food && spices
      }
      ##
      def start_eating food, spices
        superfood = add_spices(spices, food)
        eat(superfood)
      end
  86. (the same callback version, shown again) [interface/implementation diagram]
  87. Threads: turn the thread callbacks back to a synchronized style [interface/implementation diagram]
      condv = ConditionVariable.new
      mutex = Mutex.new
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        condv.signal if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        condv.signal if food && spices
      }
      ##
      mutex.synchronize{ condv.wait(mutex) }
      superfood = add_spices(spices, food)
      eat(superfood)
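     A slightly more defensive variant of that ConditionVariable pattern (my sketch, not from the slides): guarding the shared state with the mutex and re-checking the condition around the wait avoids losing a signal that fires before the main thread starts waiting. order_food and order_spices are the same hypothetical asynchronous calls used throughout the slides.
        condv = ConditionVariable.new
        mutex = Mutex.new
        food = spices = nil

        order_food do |arrived_food|
          mutex.synchronize do
            food = arrived_food
            condv.signal if food && spices
          end
        end

        order_spices do |arrived_spices|
          mutex.synchronize do
            spices = arrived_spices
            condv.signal if food && spices
          end
        end

        # Re-check the condition, so an early signal (or spurious wakeup) is harmless.
        mutex.synchronize do
          condv.wait(mutex) until food && spices
        end
        eat(add_spices(spices, food))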
  88. Fibers: turn the reactor callbacks back to a synchronized style [interface/implementation diagram]
      fiber = Fiber.current
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        fiber.resume if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        fiber.resume if food && spices
      }
      ##
      Fiber.yield
      superfood = add_spices(spices, food)
      eat(superfood)
  89. Threads vs Fibers
  90. Threads if your request is wrapped inside a thread (e.g. thread pool strategy); fibers if your request is wrapped inside a fiber (e.g. reactor + fibers)
  91. We're using eventmachine + thread pool with thread synchronization. We used to run fibers, but it didn't work well with other libraries; e.g. activerecord's connection pool didn't respect fibers, only threads
  92. Also, using fibers we run the risk of blocking the event loop in some way we don't know about. So using threads is easier, if you consider thread-safety easier than fiber-safety plus the potential risk of blocking the reactor
  93. And we can even go one step further...
  94. ...into the futures!
  95. This is also a demonstration that some interfaces are only available to some implementations:
      food = order_food
      spices = order_spices
      superfood = add_spices(spices, food)
      eat(superfood)
      # or one liner
      eat(add_spices(order_spices, order_food))
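     A minimal sketch (not from the slides) of how such a future could be built on plain threads; order_food_blocking and order_spices_blocking stand in for the blocking calls from the earlier examples.
        # A thread-backed future: start the work now, block only when the value is needed.
        class SimpleFuture
          def initialize(&work)
            @thread = Thread.new(&work)
          end

          def value
            @thread.value  # joins the thread and returns the block's result
          end
        end

        food   = SimpleFuture.new { order_food_blocking }    # hypothetical blocking call
        spices = SimpleFuture.new { order_spices_blocking }  # hypothetical blocking call
        eat(add_spices(spices.value, food.value))            # both orders ran concurrently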
  96. Who got futures? • rest-core for HTTP futures • celluloid for general futures • also check celluloid-io for replacing eventmachine
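     With celluloid, a general future might look roughly like this (a sketch against the Celluloid API of that era; order_spices_blocking and order_food_blocking are hypothetical blocking calls).
        require 'celluloid'

        future = Celluloid::Future.new { order_spices_blocking }  # runs on Celluloid's pool
        food   = order_food_blocking                              # do other work meanwhile
        eat(add_spices(future.value, food))                       # value blocks until ready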
  97. A more complex (real world) example: • update friend list from facebook • get photo list from facebook • download 3 photos from the list • detect the dimension of the 3 photos • merge above photos • upload to facebook. This example shows a mix of linear and mixed data dependency
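     Expressed with the thread-backed SimpleFuture sketched above, that pipeline might look roughly like this; every method name here is a hypothetical placeholder, not a real API.
        friends_f = SimpleFuture.new { update_friend_list }        # independent of the rest (I/O)
        photos    = get_photo_list                                  # needed before downloading (I/O)

        downloads = photos.first(3).map { |p| SimpleFuture.new { download(p) } }
        images    = downloads.map(&:value)                          # mixed dependency: join all three
        sizes     = images.map { |image| detect_dimension(image) }  # CPU bound
        merged    = merge(images, sizes)                            # CPU bound
        upload_to_facebook(merged)                                  # I/O bound
        friends_f.value                                             # make sure the update finished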
  98. Concurrency? What We Have? App Servers? Me? Q?
  99. Again: we don't talk about chunked encoding, web sockets and so on for now; simply plain old HTTP 1.0
  100. Network concurrency / Application concurrency
  101. Socket I/O bound tasks would be ideal for an event loop to process efficiently (nginx, eventmachine, libev, nodejs, etc.); however, CPU bound tasks should be handled by a real hardware core (e.g. a kernel process/thread)
  102. We can abstract the http server (reverse proxy) easily, since it only needs to do one thing and do it well (unix philosophy); that is, using an event loop to buffer the requests
  103. However, different applications do different things; one strategy might work well for one application but not for another
  104. We could have a universal concurrency model that does averagely well, but not perfectly for, say, *your* application
  105. That is why Rainbows provides all possible concurrency models for you to choose from
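     For example, picking a model in Rainbows is a configuration choice; a minimal config sketch might look like this (model and numbers are arbitrary examples).
        # config/rainbows.rb, e.g. started with: rainbows -c config/rainbows.rb
        worker_processes 4          # Unicorn-style pre-forked workers
        Rainbows! do
          use :EventMachine         # or :ThreadPool, :ThreadSpawn, :FiberSpawn, ...
          worker_connections 100    # concurrent clients per worker
        end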
  106. What if we want to make external requests to the outside world (e.g. facebook)? It's I/O bound, and could be the most significant bottleneck, much slower than your favorite database
  107. Before we jump into the details... let's see some popular concurrent ruby application servers
  108. Thin, Puma, Unicorn family
  109. Default Thin: eventmachine (event loop) for buffering requests; no application concurrency; you can run a thin cluster for application concurrency
  110. Threaded Thin: eventmachine (event loop) for buffering requests; thread pool to serve requests; you can of course run a cluster for this
  111. Puma: zbatery + ThreadPool = puma
  112. Unicorn: no network concurrency; worker process application concurrency
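     For reference, that worker-process model is what a minimal Unicorn config expresses (a sketch; the number is an arbitrary example).
        # config/unicorn.rb
        worker_processes 4   # application concurrency: pre-forked workers, one request at a time each
        preload_app true     # load the app once, then fork, so workers share memory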
  113. Rainbows: another concurrency model + unicorn
  114. Zbatery: rainbows with a single unicorn (no fork), saving memory
  115. Zbatery + EventMachine = default Thin
  116. Rainbows + EventMachine = clustered default Thin
  117. Zbatery + EventMachine + TryDefer (thread pool) = threaded Thin
  118. Each model has its strengths for dealing with different tasks
  119. Remember? Threads for CPU operations, reactor for I/O operations [interface/implementation diagram]
  120. What if we want to resize images, encode videos? It's of course CPU bound, and should be handled by a real core/CPU
  121. What if we want to do both? What if we first request facebook, and then encode a video, or vice versa? Or we need to request facebook, encode videos, and request facebook again?
  122. The reactor could be used for http concurrency and also for making external requests
  123. Ultimate solution (for what I can think of right now)
  124. USE EVERYTHING
  125. Rainbows + EventMachine + Thread pool + Futures!
  126. And how do we do that in a web application? We'll need to do the above example in a concurrent way, i.e.
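     One way this could be sketched inside a web action on such a stack (Rainbows + EventMachine + thread pool), reusing the SimpleFuture sketch from earlier; all helper methods here are hypothetical placeholders.
        # A controller/endpoint action sketch: I/O goes to futures, CPU stays in this worker thread.
        def create
          photo_f  = SimpleFuture.new { download_photo(params[:url]) }  # I/O bound, off this thread
          friends  = fetch_friend_list                                  # I/O bound
          merged   = merge_photos(photo_f.value, friends)               # CPU bound, runs here
          SimpleFuture.new { upload_to_facebook(merged) }.value         # I/O bound
          'ok'
        end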
  127. To compare all the application servers above
  128. [comparison table] Servers: Thin (default), Thin (threaded), Puma, Unicorn, Rainbows, Zbatery, Passenger, Goliath. Rows: Network, Application, Interface. Network/Application values as listed on the slide: EventMachine, EventMachine, Thread pool, Worker processes, Worker processes + depends on configurations, Depends on configurations, I/O threads, libev, EventMachine, N/A, Thread pool, Thread pool, Worker processes, Process pool, Process/thread pool, N/A. Interface: Rack for all except Goliath (Wrapped Rack).
  129. Concurrency? What We Have? App Servers? Me? Q?
