Concurrent Ruby Application Servers


    Concurrent Ruby Application Servers: Presentation Transcript

    • Concurrent Ruby Application Servers. Lin Jen-Shin (godfat) @ Rubyconf.tw/2012
    • http://godfat.org/slide/2012-12-07-concurrent.pdf
    • Me? Concurrency? What We Have? App Servers? Q?
    • About me:
      • Programmer at Cardinal Blue
      • Use Ruby since 2006
      • Interested in PL and FP (e.g. Haskell), and concurrency recently
      • https://github.com/godfat
      • https://twitter.com/godfat
    • PicCollage
    • PicCollage in 7 days:
      • ~3.1k requests per minute
      • Average response time: ~135 ms
      • Average concurrent requests per process: ~7
      • Total processes: 18
      • App server: Zbatery with EventMachine and thread pool on Heroku Cedar stack
      • Above data observed from NewRelic
    • Recent Gems:
      • jellyfish - Pico web framework for building API-centric web applications
      • rib - Ruby-Interactive-ruBy -- Yet another interactive Ruby shell
      • rest-core - Modular Ruby clients interface for REST APIs
      • rest-more - Various REST clients such as Facebook and Twitter built with rest-core
    • Special Thanks: ihower, ET Blue and Cardinal Blue
    • Me? Concurrency? What We Have? App Servers? Q?
    • Caveats: no disk I/O, no pipelined requests, no chunked encoding, no web sockets, no … (to make things simpler for now)
    • Caution: it's not faster for a user
    • 10 moms can't produce 1 child in 1 month
    • 10 moms can produce 10 children in 10 months
    • Resource matters: we might not always have enough processors, so we need multitasking
    • Imagine 15 children have to be context switched amongst 10 moms
    • Multitasking is not free. If your program context switches, then it actually runs slower. But it's more fair for users
    • It's more fair if they are all served equally in terms of waiting time
    • I just want a drink!
    • [Charts: per-user timelines (users #00 to #05) comparing Sequential vs Concurrent scheduling; legend: Waiting, Processing, Context Switching. Concurrent scheduling evens out waiting time across users, while total waiting plus processing time grows with context switching.]
    • Scalable != Fast
      Scalable == Less complaining
    • Rails is not fast
      Rails might be scalable
    • So, when do we want concurrency? When context switching cost is much cheaper than a single task. Or if you have many more cores than clients (= no need for context switching)
    • If context switching cost is about 1 second, it's much cheaper than 10 months, so it might be worth it
    • But if context switching cost is about 5 months... [chart: per-user timelines showing waiting dominated by context switching]
    • Do kernel threads context switch fast?
      Do user threads context switch fast?
      Do fibers context switch fast?
    • A ton of different concurrency models were then invented. Each of them has different strengths to deal with different tasks. There are also trade-offs, trading ease of programming against efficiency
    • So, how many different types of tasks are there?
    • Two patterns of composite tasks:
      Linear data dependency: go to restaurant, order food, order drink, eat, drink.
        order_food -> eat
        order_tea -> drink
      Mixed data dependency: go to restaurant, order food, order spices, mix, eat.
        [order_food, order_spices] -> add_spices -> eat
    • Linear data dependency: prepare to share a photo, to facebook, to flickr, save, save.
      Mixed data dependency: prepare to merge photos, from facebook, from flickr, merge, save.
    • One single task could be CPU or I/O bound:
      Linear: prepare (CPU), to facebook / to flickr (I/O), save / save (CPU)
      Mixed: prepare (CPU), from facebook / from flickr (I/O), merge (CPU), save (CPU)
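The two dependency patterns above can be sketched in plain Ruby. The restaurant task names are placeholders invented for illustration, not real APIs:

```ruby
# Placeholder tasks; each just returns a value immediately.
def order_food;   "noodles"; end
def order_tea;    "tea";     end
def order_spices; "chili";   end
def add_spices(spices, food); "#{food} with #{spices}"; end
def eat(dish); "ate #{dish}";  end
def drink(cup); "drank #{cup}"; end

# Linear data dependency: each result feeds exactly one next step.
eat(order_food)     # order_food -> eat
drink(order_tea)    # order_tea  -> drink

# Mixed data dependency: eat needs BOTH order_food and order_spices.
eat(add_spices(order_spices, order_food))
```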
    • Me? Concurrency? What We Have? App Servers? Q?
    • Separation of Concerns:
      • Performance (implementation)
      • Ease of programming (interface)
    • Advantages if the interface is orthogonal to its implementation:
      • Change implementation with ease
      • No need to know implementation details in order to use the interface
    • Interface vs implementation:
      CPU: blocking (interface) -> threads (implementation)
      I/O: callback (interface) -> reactor, fibers (implementation)
    • Interface for linear data dependency
    • Callback:
      order_food{ |food| eat(food) }
      order_tea{ |tea| drink(tea) }
    • If we could control side-effects and do some static analysis...
      eat(order_food)
      drink(order_tea)
      (Not going to happen on Ruby though)
    • Interface for mixed data dependency
    • If callbacks are the only thing we have...
    • We don't want to do this:
      # order_food is blocking order_spices
      order_food{ |food|
        order_spices{ |spices|
          eat(add_spices(spices, food))
        }
      }
    • Tedious, but performs better:
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        start_eating(food, spices) if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        start_eating(food, spices) if food && spices
      }
      ##
      def start_eating food, spices
        superfood = add_spices(spices, food)
        eat(superfood)
      end
    • Ideally, we could do this with futures:
      food = order_food
      spices = order_spices
      superfood = add_spices(spices, food)
      eat(superfood)
      # or one liner
      eat(add_spices(order_spices, order_food))
    • but how?
    • Implementation: forget about data dependency for now
    • Implementation: 2 main choices
      CPU: blocking -> threads
      I/O: callback -> reactor, fibers
    • If order_food is I/O bound, with a thread:
      def order_food
        Thread.new{
          food = order_food_blocking
          yield(food)
        }
      end
    • If order_food is I/O bound, with a reactor:
      def order_food
        make_request(order_food){ |food|
          yield(food)
        }
      end
    • If order_food is I/O bound, with a very simple reactor:
      def order_food
        buf = []
        reactor = Thread.current[:reactor]
        sock = TCPSocket.new('example.com', 80)
        request = "GET / HTTP/1.0\r\n\r\n"
        reactor.write sock, request do
          reactor.read sock do |response|
            if response
              buf << response
            else
              yield(buf.join)
            end
          end
        end
      end
      https://github.com/godfat/ruby-server-exp/blob/master/sample/reactor.rb
    • If order_food is CPU bound, with a thread:
      def order_food
        Thread.new{
          food = order_food_blocking
          yield(food)
        }
      end
    • If order_food is I/O bound, with a thread:
      def order_food
        Thread.new{
          food = order_food_blocking
          yield(food)
        }
      end
    • You don't have to care whether it's CPU bound or I/O bound with a thread
    • And you won't want to process a CPU bound task inside a reactor, blocking other clients
    • Summary:
      • Threads for CPU bound tasks
      • Reactor for I/O bound tasks
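The "with a thread" version from the slides above can be made runnable with a simulated blocking task. Here order_food_blocking is a stand-in (a sleep), not a real API:

```ruby
# Simulated blocking task, standing in for real I/O or CPU work.
def order_food_blocking
  sleep 0.05
  "noodles"
end

# Wrap the blocking call in a thread; deliver the result via callback.
def order_food
  Thread.new {
    food = order_food_blocking
    yield(food)
  }
end

# The caller doesn't care whether the task was CPU or I/O bound:
thread = order_food { |food| puts "eat #{food}" }
thread.join   # wait here only so the demo script doesn't exit early
```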
    • Back to mixed data dependency
    • If we could have some other interface than callbacks...
    • Threads: we can do it with threads easily
      food, spices = nil
      t0 = Thread.new{ food = order_food }
      t1 = Thread.new{ spices = order_spices }
      t0.join
      t1.join
      superfood = add_spices(spices, food)
      eat(superfood)
    • What if we still want callbacks, since then we can pick either threads or reactors as the implementation detail?
    • We could use threads or fibers to remove the need for defining another callback (i.e. start_eating). Instead of writing this:
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        start_eating(food, spices) if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        start_eating(food, spices) if food && spices
      }
      ##
      def start_eating food, spices
        superfood = add_spices(spices, food)
        eat(superfood)
      end
    • Threads: turn thread callbacks back to synchronized style
      condv = ConditionVariable.new
      mutex = Mutex.new
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        condv.signal if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        condv.signal if food && spices
      }
      ##
      mutex.synchronize{ condv.wait(mutex) }
      superfood = add_spices(spices, food)
      eat(superfood)
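A caveat on the ConditionVariable approach: if both callbacks fire before the main thread reaches condv.wait, the signal is lost and the wait blocks forever. A Queue sidesteps that race. This is a runnable variant invented for illustration; the order_* lambdas simulate the asynchronous tasks:

```ruby
require 'thread'

done = Queue.new
food = spices = nil

# Simulated async tasks: each runs its callback on a separate thread.
order_food   = ->(cb) { Thread.new { cb.call("noodles") } }
order_spices = ->(cb) { Thread.new { cb.call("chili") } }

order_food.call(  ->(f) { food   = f; done << true if food && spices })
order_spices.call(->(s) { spices = s; done << true if food && spices })

# Queue#pop blocks until a value arrives; a push made before we get
# here is buffered, so no wakeup can be lost.
done.pop
superfood = "#{food} with #{spices}"
```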
    • Fibers: turn reactor callbacks to synchronized style
      fiber = Fiber.current
      food, spices = nil
      order_food{ |arrived_food|
        food = arrived_food
        fiber.resume if food && spices
      }
      order_spices{ |arrived_spices|
        spices = arrived_spices
        fiber.resume if food && spices
      }
      ##
      Fiber.yield
      superfood = add_spices(spices, food)
      eat(superfood)
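A self-contained sketch of that fiber trick: the request runs inside a fiber, callbacks registered with a reactor resume it, and Fiber.yield suspends until both results arrive. The reactor here is faked with a plain array of pending callbacks; the real pattern assumes an event loop such as EventMachine:

```ruby
require 'fiber'   # Fiber.current on older Rubies

pending = []      # stands in for a reactor's event queue
result  = nil

request = Fiber.new do
  food = spices = nil
  fiber = Fiber.current
  # Register callbacks; the "reactor" below will fire them later.
  pending << -> { food   = "noodles"; fiber.resume if food && spices }
  pending << -> { spices = "chili";   fiber.resume if food && spices }
  Fiber.yield     # suspend until the second callback resumes us
  result = "#{food} with #{spices}"
end

request.resume        # run the request up to Fiber.yield
pending.each(&:call)  # the "reactor" delivers both events
```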
    • Threads vs Fibers
    • Threads if your request is wrapped inside a thread (e.g. thread pool strategy); fibers if your request is wrapped inside a fiber (e.g. reactor + fibers)
    • We're using EventMachine + thread pool with thread synchronization. We used to run fibers, but it didn't work well with other libraries, e.g. ActiveRecord's connection pool didn't respect fibers, only threads
    • Also, with fibers we were running a risk of blocking the event loop in some way we don't know. So using threads is easier, if you consider thread-safety easier than fiber-safety plus the potential risk of blocking the reactor
    • And we can even go one step further...
    • ...into the futures!
    • This is also a demonstration that some interfaces are only available to some implementations:
      food = order_food
      spices = order_spices
      superfood = add_spices(spices, food)
      eat(superfood)
      # or one liner
      eat(add_spices(order_spices, order_food))
    • Who got futures?
      • rest-core for HTTP futures
      • celluloid for general futures
      • also check celluloid-io for replacing eventmachine
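A future can be hand-rolled on top of threads in a few lines; Thread#value joins the thread and returns its block's result. This is a simplified sketch of the idea only, not rest-core's or celluloid's actual implementation:

```ruby
# A tiny explicit future: start the work now, claim the value later.
def future(&task)
  thread = Thread.new(&task)
  -> { thread.value }   # calling the lambda blocks until the result is ready
end

food   = future { sleep 0.05; "noodles" }  # both tasks run concurrently
spices = future { sleep 0.05; "chili" }

# Claiming the values waits only as long as the slowest task.
superfood = "#{food.call} with #{spices.call}"
```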
    • A more complex (real world) example:
      • update friend list from facebook
      • get photo list from facebook
      • download 3 photos from the list
      • detect the dimension of the 3 photos
      • merge above photos
      • upload to facebook
      (this example shows a mix of linear and mixed data dependency)
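The steps above can be sketched with thread-backed futures; linear steps chain on a single future's value, while the merge step needs all photo futures at once. All the facebook calls are faked with placeholders invented for this sketch:

```ruby
# Thread#value acts as the future's claim operation.
def future(&task); Thread.new(&task); end

# Fake remote calls standing in for facebook I/O:
friends_f    = future { %w[alice bob] }   # update friend list
photo_list_f = future { %w[p1 p2 p3] }    # get photo list

# Linear: downloads depend on the photo list arriving first.
photos_f = photo_list_f.value.map { |p| future { "img:#{p}" } }

# Linear per photo: each dimension depends on one download.
dims_f = photos_f.map { |ph| future { ph.value.length } }

# Mixed: merging needs ALL downloaded photos.
merged   = photos_f.map(&:value).join("+")
upload_f = future { "uploaded #{merged}" }  # upload to facebook
```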
    • Me? Concurrency? What We Have? App Servers? Q?
    • Again: we don't talk about chunked encoding and web sockets or so for now; simply plain old HTTP 1.0
    • Network concurrency vs application concurrency
    • Socket I/O bound tasks would be ideal for an event loop to process them efficiently (e.g. nginx, eventmachine, libev, nodejs, etc.). However, CPU bound tasks should be handled in a real hardware core (e.g. kernel process/thread)
    • We can abstract the HTTP server (reverse proxy) easily, since it only needs to do one thing and do it well (unix philosophy): that is, buffer the requests using an event loop
    • However, different applications do different things; one strategy might work well for one application but not the other
    • We could have a universal concurrency model which does averagely well, but not perfectly for, say, *your* application
    • That is why Rainbows provides all possible concurrency models for you to choose from
    • What if we want to make external requests to the outside world? e.g. facebook. It's I/O bound, and could be the most significant bottleneck, much slower than your favorite database
    • Before we jump into the details... let's see some popular concurrent Ruby application servers
    • Thin, Puma, Unicorn family
    • Default Thin: eventmachine (event loop) for buffering requests; no application concurrency. You can run a thin cluster for application concurrency
    • Threaded Thin: eventmachine (event loop) for buffering requests; thread pool to serve requests. You can of course run a cluster for this
    • Puma: Zbatery + ThreadPool = Puma
    • Unicorn: no network concurrency; worker processes for application concurrency
    • Rainbows: another concurrency model + unicorn
    • Zbatery: Rainbows with a single unicorn (no fork), saving memory
    • Zbatery + EventMachine = default Thin
    • Rainbows + EventMachine = cluster default Thin
    • Zbatery + EventMachine + TryDefer (thread pool) = threaded Thin
    • Each model has its strength to deal with different tasks
    • Remember? Threads for CPU operations, reactor for I/O operations
    • What if we want to resize images, encode videos? (of course that's CPU bound, and should be handled in a real core/CPU)
    • What if we want to do both? What if we first request facebook, and then encode video, or vice versa? Or we need to encode videos and request facebook again?
    • The reactor could be used for HTTP concurrency and also for making external requests
    • Ultimate solution (for what I can think of right now):
    • USE EVERYTHING
    • Rainbows + EventMachine + Thread pool + Futures!
    • And how do we do that in a web application? We'll need to do the above example in a concurrent way, i.e.
    • To compare all theapplication servers above
    • Server          | Network                                      | Interface    | Application
      Thin (default)  | EventMachine                                 | Rack         | N/A
      Thin (threaded) | EventMachine                                 | Rack         | Thread pool
      Puma            | Thread pool                                  | Rack         | Thread pool
      Unicorn         | Worker processes                             | Rack         | Worker processes
      Rainbows        | Worker processes + depends on configuration  | Rack         | Depends on configuration
      Zbatery         | Depends on configuration                     | Rack         | Depends on configuration
      Passenger       | I/O threads (libev)                          | Rack         | Process/thread pool
      Goliath         | EventMachine                                 | Wrapped Rack | N/A
    • Me? Concurrency? What We Have? App Servers? Q?