Operating Systems - Queuing Systems

  1. Operating Systems, CMPSCI 377: Queuing Systems. Emery Berger, University of Massachusetts Amherst.
  2. Queuing Systems & Servers: queuing systems, a high-level model of concurrent applications; and Flux, a language for building servers.
  3. Queuing Networks: A model of tasks or services. Each node consists of a queue (the line) and a server.
  7-9. Queuing Networks: Each node is characterized by its arrival rate (λ), the waiting time spent in the queue, and the service time at the server.
  11. Stable Systems: A stable queuing system is one in which the arrival rate equals the departure rate.
  14. Stable Systems: What happens if λ exceeds the departure rate? The queue grows without bound and the system is unstable, as the sketch below illustrates.
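The instability case is easy to see numerically. Below is a minimal Python sketch (with made-up rates, not taken from the slides) that steps a single queue forward in time: when arrivals outpace the server, the backlog grows every step; when they do not, it stays bounded.

    # Minimal discrete-time sketch of queue growth. Each step, 'arrival_rate'
    # items join the queue and the server removes up to 'service_rate' items.
    def simulate(arrival_rate, service_rate, steps=10):
        queue_len = 0.0
        history = []
        for _ in range(steps):
            queue_len += arrival_rate                       # work entering the node
            queue_len = max(0.0, queue_len - service_rate)  # work the server finishes
            history.append(queue_len)
        return history

    print(simulate(arrival_rate=6, service_rate=5))  # unstable: backlog grows each step
    print(simulate(arrival_rate=4, service_rate=5))  # stable: backlog stays at zero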
  18. Networks of Queues: A system can be built from connected servers. Latency is the time for one item to get all the way through; throughput is the service rate. Example: three servers in series at 5/sec, 5/sec, and 5/sec. What is the throughput?
  19. Networks of Queues: With numerous connected servers, say 5/sec, 1/sec, and 5/sec in series, the throughput is 1/sec: the server with the lowest throughput is the bottleneck (see the sketch below).
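For servers in series, end-to-end throughput is the minimum stage rate, and the latency of an otherwise idle pipeline is the sum of the per-stage service times. The short Python sketch below just encodes that arithmetic for the 5/sec, 1/sec, 5/sec example.

    # Stage service rates from the slide: 5/sec -> 1/sec -> 5/sec.
    rates = [5.0, 1.0, 5.0]                    # jobs per second at each server

    throughput = min(rates)                    # the slowest stage limits the pipeline
    latency = sum(1.0 / r for r in rates)      # one job through an idle pipeline

    print(f"throughput = {throughput}/sec")    # 1.0/sec: the 1/sec server is the bottleneck
    print(f"latency    = {latency:.2f} sec")   # 0.2 + 1.0 + 0.2 = 1.4 sec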
  20. Little's Law: Applies to any "black box" server. Queue length (N) = arrival rate (λ) × average waiting time (T); that is, N = λT.
  21. Applications of Little's Law: Compute the waiting time to get into a restaurant, bar, etc. If N = 20 people are in front of you and λ = departure rate = 1 per 5 minutes, how long will you wait in line? (N = λT; worked out below.)
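Rearranging N = λT gives T = N / λ: with 20 people ahead of you and one party departing every 5 minutes, the expected wait is 100 minutes. A one-line check in Python:

    # Little's Law: N = lambda * T, so T = N / lambda.
    N = 20                # people ahead of you in line
    lam = 1.0 / 5.0       # departures (and, in steady state, arrivals) per minute

    T = N / lam
    print(f"expected wait: {T:.0f} minutes")   # 100 minutes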
  22. Applications of Little's Law: What service time is required? The arrival rate is one job per 500 ms and the average queue length is 10. What is T, the average latency? (N = λT.)
  23. Applications of Little's Law: The same question with an average queue length of 5; again T = N / λ gives the average latency (both cases worked below).
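For these two slides, one job every 500 ms means λ = 2 jobs/sec, so for the system to remain stable the average service time can be at most 500 ms. Little's Law then turns the observed queue length into the average latency. A small Python check of both cases:

    # One job arrives every 500 ms => lambda = 2 jobs/sec.
    lam = 1.0 / 0.5            # jobs per second

    for N in (10, 5):          # average queue lengths from slides 22 and 23
        T = N / lam            # average latency (time in the system), by Little's Law
        print(f"N = {N:2d}  ->  average latency T = {T:.1f} s")
    # N = 10 -> 5.0 s; N = 5 -> 2.5 s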
  24. Motivating Example: Image Server. A client requests an image at a desired quality and size, e.g. http://server/Easter-bunny/200x100/75. The server stores images in RAW form; if the requested version is not found, it compresses the RAW image to JPG, caches the result, and sends it to the client.
  25. Problem: Concurrency. One could write sequential code, but more clients mean higher latency, and a bigger server (multicores, multiprocessors) goes unused. One approach is threads, but threads limit reuse, risk deadlock, burden the programmer, complicate debugging, and mix program logic with concurrency control.
  26. The Flux Programming Language: High-performance, deadlock-free concurrent programming with sequential components. Flux = Components + Flow + Atomicity: components are unmodified C, C++ (or Java); a flow is an implicitly parallel path through components; atomicity provides high-level mutual exclusion. The compiler generates a deadlock-free, runtime-independent server (threads, thread pools, events, ...), plus path profiling and a discrete event simulator.
  27. Flux Outline: Intro to Flux, building a server (components, flows, atomicity); performance results (server performance, performance prediction with queuing network models).
  28. Flux Server "Main": Source nodes originate flows; each runs conceptually in a separate thread, inside an implicit infinite loop. Here the source initiates one flow per image request: source Listen → Image; and each Image flow runs ReadRequest → Compress → Write → Complete (sketched below).
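The following is not Flux but a rough Python sketch of what the source node amounts to: an accept loop that starts one flow per request. All component names are hypothetical stand-ins for those on the slide, and the bodies are stubs.

    # Python sketch (not Flux) of a source node: Listen loops forever and
    # launches one Image flow per accepted request. All helpers are stubs.
    import socket
    import threading

    def read_request(conn):
        return conn.recv(1024)            # stub: read the raw HTTP request

    def compress(request):
        return b"...jpeg bytes..."        # stub: stand-in for libjpeg compression

    def write(conn, data):
        conn.sendall(data)

    def complete(conn):
        conn.close()

    def image_flow(conn):                 # ReadRequest -> Compress -> Write -> Complete
        data = compress(read_request(conn))
        write(conn, data)
        complete(conn)

    def listen(port=8080):
        srv = socket.socket()
        srv.bind(("", port))
        srv.listen()
        while True:                       # the source node's implicit infinite loop
            conn, _ = srv.accept()
            threading.Thread(target=image_flow, args=(conn,)).start()

    # listen()  # start the accept loop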
  29. Flux Image Server: The basic image server requires HTTP parsing (http), socket handling (socket), and JPEG compression (libjpeg), all UNIX-style C libraries. An abstract node is a flow across nodes, which may themselves be concrete or abstract: Image = ReadRequest → Compress → Write → Complete; here ReadRequest and Complete come from http, Compress from libjpeg, and Write from socket.
  30. Control Flow: Flow is directed via user-supplied predicate types: a type test is applied to a node's output (note there are no variables; dispatch is on the output's "type"). Here the server caches frequently requested images: typedef hit TestInCache; Handler:[_, _, hit] = ; Handler:[_, _, _] = ReadFromDisk → Compress → StoreInCache; so a cache hit needs no further work, while a miss reads from disk, compresses, and stores the result (sketched below).
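Again not Flux: the Python sketch below mimics the same dispatch, testing the cache-check result with a hit predicate and taking the empty path on a hit or the disk path on a miss. The cache, the predicate, and the helpers are all hypothetical.

    # Python sketch (not Flux) of predicate-based dispatch on a cache lookup.
    cache = {}

    def test_in_cache(key):               # plays the role of TestInCache / the 'hit' type
        return key in cache

    def read_from_disk(key):
        return f"raw bytes for {key}"     # stub

    def compress(raw):
        return f"jpeg({raw})"             # stub

    def handler(key):
        if test_in_cache(key):            # Handler[_, _, hit] = ;   (nothing to do)
            return cache[key]
        raw = read_from_disk(key)         # Handler[_, _, _] = ReadFromDisk -> Compress -> StoreInCache
        jpg = compress(raw)
        cache[key] = jpg                  # StoreInCache
        return jpg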
  31. Supporting Concurrency: Many clients mean concurrent flows, and the cache must be kept consistent. Atomicity constraints express this: constraints with the same name imply mutual exclusion, and they can be applied to individual nodes or to a whole flow (abstract node). For the image server: atomic CheckCache: {cacheLock}; atomic StoreInCache: {cacheLock}; atomic Complete: {cacheLock};
  32. More Atomicity: Reader/writer constraints allow multiple readers or a single writer (exclusive access is the default): atomic ReadList: {listAccess?}; atomic AddToList: {listAccess!}; Per-session constraints use a user-supplied function (roughly a hash on the source), added to the flow, that chooses from an array of locks: atomic AddHasChunk: {chunks(session)}; (sketched below).
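A rough Python sketch of the per-session form: a fixed array of locks indexed by a hash of the session, so flows from the same session serialize while different sessions usually run in parallel. Python's standard library has no reader/writer lock, so the listAccess?/listAccess! form is only noted in a comment. All names here are hypothetical.

    # Python sketch (not Flux) of a per-session constraint like chunks(session).
    # Reader/writer constraints (listAccess? / listAccess!) would map to a
    # shared/exclusive lock, which is omitted here.
    import threading

    NUM_LOCKS = 16
    chunk_locks = [threading.Lock() for _ in range(NUM_LOCKS)]
    sessions = {}                                  # session_id -> set of chunk ids

    def add_has_chunk(session_id, chunk):          # hypothetical AddHasChunk
        lock = chunk_locks[hash(session_id) % NUM_LOCKS]
        with lock:                                 # same session -> same lock
            sessions.setdefault(session_id, set()).add(chunk)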
  33. Preventing Deadlock: Naïve execution can deadlock if, say, atomic A: {z, y}; and atomic B: {y, z}; acquire their locks in the order written. The fix is to establish a canonical lock order (a partial order), here alphabetic by name, so both acquire y before z.
  34. Preventing Deadlock, II: This is harder with abstract nodes. Given A = B; C = D; with atomic A: {z}; atomic B: {y}; atomic C: {y, z};, node A also ends up holding B's constraint. The solution is to elevate constraints to the enclosing abstract node and iterate to a fixed point, yielding atomic A: {y, z}; atomic B: {y}; atomic C: {y, z}; (see the lock-ordering sketch below).
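The canonical-ordering idea is easy to show outside Flux. In the Python sketch below, every node acquires whatever named locks it needs in sorted order, so flows asking for {z, y} and {y, z} can never wait on each other in a cycle. The lock names and the helper are illustrative only.

    # Deadlock avoidance by canonical ordering: acquire named locks in sorted
    # (alphabetical) order, so no two flows ever hold-and-wait in a cycle.
    import threading

    locks = {"y": threading.Lock(), "z": threading.Lock()}

    def run_with_constraints(names, work):
        ordered = sorted(names)                  # canonical order: always y before z
        for n in ordered:
            locks[n].acquire()
        try:
            work()
        finally:
            for n in reversed(ordered):
                locks[n].release()

    # Both flows declare their constraints in different orders, but both
    # actually acquire y then z, so they cannot deadlock.
    run_with_constraints({"z", "y"}, lambda: print("node A"))
    run_with_constraints({"y", "z"}, lambda: print("node B"))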
  35. Handling Errors: What if the requested image doesn't exist? An error is a negative return value from a component (remember, nodes are oblivious to Flux). The solution is error handlers, which divert the flow to an alternate path on error (a possible extension would match on specific error paths): handle error ReadInFromDisk → FourOhFour; (sketched below).
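In Python the negative-return-value convention maps most naturally onto an exception; the sketch below routes a failed disk read to a 404 path, standing in for handle error ReadInFromDisk → FourOhFour. All names are hypothetical.

    # Python sketch (not Flux) of an error handler diverting a failed read.
    def read_in_from_disk(path):
        with open(path, "rb") as f:       # raises OSError if the image doesn't exist
            return f.read()

    def compress(raw):
        return raw                        # stub for libjpeg compression

    def four_oh_four():
        return (404, b"Not Found")

    def handle_request(path):
        try:
            raw = read_in_from_disk(path)
        except OSError:                   # plays the role of the negative return value
            return four_oh_four()         # handle error ReadInFromDisk -> FourOhFour
        return (200, compress(raw))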
  36. Almost Complete Flux Image Server:
      source Listen → Image;
      Image = ReadRequest → CheckCache → Handler → Write → Complete;
      Handler[_, _, hit] = ;
      Handler[_, _, _] = ReadFromDisk → Compress → StoreInCache;
      atomic CheckCache: {cacheLock};
      atomic StoreInCache: {cacheLock};
      atomic Complete: {cacheLock};
      handle error ReadInFromDisk → FourOhFour;
      This is a concise, readable expression of the server logic; with no threads to manage, it simplifies programming and debugging.
  37. Flux Outline: Intro to Flux, building a server (components, flow; atomicity, deadlock avoidance); performance results (server performance, performance prediction); future work.
  38. Flux Results: Four servers: an image server (+ libjpeg) [23 lines of Flux], a multi-player online game [54], BitTorrent (written by 2 undergrads in 1 week) [84], and a web server (+ PHP) [36]. Evaluation: a variant of the SPECweb99 benchmark, run on three runtimes: thread (one per connection), thread pool (fixed maximum number of threads), and event-driven (helper threads for blocking calls); compared against Capriccio [SOSP03] and SEDA [SOSP01].
  39. Web Server (performance graph).
  40. Performance Prediction: observed parameters.
  42. The End.
