Operating Systems - Queuing Systems
 

    Presentation Transcript

    • Operating Systems, CMPSCI 377: Queuing Systems. Emery Berger, University of Massachusetts Amherst. UNIVERSITY OF MASSACHUSETTS AMHERST • Department of Computer Science
    • Queuing Systems & Servers: Queuing systems are a high-level model of concurrent applications; Flux is a language for building servers.
    • Queuing Networks: A model of tasks or services. Each node includes a queue (the line) and a server; the key quantities are the arrival rate (λ), the waiting time, and the service time.
    • Stable Systems: In a stable queuing system, the arrival rate equals the departure rate. What happens if λ > departure rate? The queue grows without bound.
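The instability the slide asks about can be sketched numerically. This is a minimal fluid approximation, not from the slides; the rates are illustrative:

```python
def queue_length(arrival_rate, departure_rate, seconds):
    """Fluid approximation: net queue growth per second is
    arrivals minus departures, never dropping below zero."""
    backlog = 0.0
    for _ in range(seconds):
        backlog = max(0.0, backlog + arrival_rate - departure_rate)
    return backlog

# Stable: arrival rate <= departure rate, so the queue stays bounded.
print(queue_length(5, 5, 60))   # 0.0
# Unstable: arrival rate > departure rate, so backlog grows without bound.
print(queue_length(6, 5, 60))   # 60.0
```

With arrivals one per second faster than departures, the backlog climbs by one request every second for as long as the overload lasts.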
    • Networks of Queues: Can build a system from connected servers. Latency = time for one request to get through; throughput = service rate. With stages at 5/sec, 5/sec, 5/sec, what is the throughput?
    • Networks of Queues: With stages at 5/sec, 1/sec, 5/sec, what is the throughput? The lowest-throughput stage is the bottleneck, so the system runs at 1/sec.
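The bottleneck rule can be stated in a few lines. A sketch, using the slide's stage rates of 5/sec, 1/sec, 5/sec:

```python
def pipeline_throughput(stage_rates):
    """Steady-state throughput of a chain of servers:
    the slowest stage limits the whole pipeline."""
    return min(stage_rates)

def pipeline_latency(stage_rates):
    """Unloaded latency: one request visits every stage in turn,
    spending 1/rate seconds at each."""
    return sum(1.0 / r for r in stage_rates)

print(pipeline_throughput([5, 1, 5]))  # 1 request/sec: the 1/sec stage wins
print(pipeline_latency([5, 1, 5]))     # 0.2 + 1.0 + 0.2 = 1.4 seconds
```

Note the two metrics are independent: replacing the 1/sec stage improves throughput fivefold but only trims one second off the unloaded latency.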
    • Little’s Law: applies to any “black box” server. Queue length (N) = arrival rate (λ) × average waiting time (T), i.e., N = λT.
    • Applications of Little’s Law: Compute the waiting time to get into a restaurant, bar, etc. If N = 20 people are in front of you and λ = departure rate = 1 per 5 min., how long will you wait in line? By N = λT, T = N / λ = 20 × 5 min. = 100 min.
    • Applications of Little’s Law: Required service time? With arrival rate λ = one job per 500 ms (2 jobs/sec), the service time must be at most 500 ms for the system to stay stable. If the average queue length N = 10, what is the average latency T? By N = λT, T = N / λ = 10 / 2 = 5 sec.
    • Applications of Little’s Law: Same arrival rate, but average queue length N = 5: T = N / λ = 5 / 2 = 2.5 sec.
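Both worked examples are the same rearrangement of Little's Law, T = N / λ. A minimal sketch; the rates and queue lengths are the slides':

```python
def average_latency(queue_length, arrival_rate):
    """Little's Law, N = lambda * T, solved for T."""
    return queue_length / arrival_rate

lam = 1 / 0.5                      # one job every 500 ms = 2 jobs/sec
print(average_latency(10, lam))    # 5.0 seconds (queue length 10)
print(average_latency(5, lam))     # 2.5 seconds (queue length 5)

# The restaurant example works the same way: 20 people ahead of you,
# one party leaving every 5 minutes.
print(average_latency(20, 1 / 5))  # 100.0 minutes
```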
    • Motivating Example: Image Server. A client requests an image at a desired quality and size, e.g., http://server/Easter-bunny/200x100/75. The server stores images in RAW form, compresses them to JPG, caches requests, and sends the result to the client.
    • Problem: Concurrency. Could write sequential code, but more clients mean higher latency, and bigger servers (multicores, multiprocessors) go unused. One approach: threads. But threads limit reuse, risk deadlock, burden the programmer, complicate debugging, and mix program logic with concurrency control.
    • The Flux Programming Language: High-performance, deadlock-free concurrent programming with sequential components. Flux = Components + Flow + Atomicity. Components are unmodified C or C++ (or Java); a flow is an implicitly parallel path through components; atomicity provides high-level mutual exclusion. The compiler generates a deadlock-free, runtime-independent server (threads, thread pools, events, …), path profiling, and a discrete event simulator.
    • Flux Outline: Intro to Flux, building a server: components, flows, atomicity. Performance results: server performance, performance prediction (QNMs).
    • Flux Server “Main”: Source nodes originate flows. Conceptually each source runs in a separate thread, executing inside an implicit infinite loop; here it initiates a flow (ReadRequest → Compress → Write → Complete) for each image request: source Listen → Image;
    • Flux Image Server: A basic image server requires HTTP parsing (http), socket handling (socket), and JPEG compression (libjpeg), all UNIX-style C libraries. An abstract node is a flow across nodes (nodes are concrete or abstract): Image = ReadRequest → Compress → Write → Complete;
    • Control Flow: Direct flow via user-supplied predicate types. A type test is applied to a node's output; note there are no variables, so dispatch is on the output's “type”. Here, frequently requested images are cached: typedef hit TestInCache; Handler:[_,_,hit] = ; Handler:[_,_,_] = ReadFromDisk → Compress → StoreInCache;
    • Supporting Concurrency: Many clients mean concurrent flows, and the cache must be kept consistent. Atomicity constraints: the same constraint name implies mutual exclusion, and constraints apply to individual nodes or to a whole flow (abstract node), e.g.: atomic CheckCache: {}; atomic StoreInCache: {}; atomic Complete: {};
    • More Atomicity: Reader/writer constraints: multiple readers or a single writer (the default): atomic ReadList: {listAccess?}; atomic AddToList: {listAccess!}; Per-session constraints: a user-supplied function (≈ a hash on the source), added to the flow, chooses from an array of locks: atomic AddHasChunk: {chunks(session)};
    • Preventing Deadlock: Naïve execution can deadlock: atomic A: {z,y}; atomic B: {y,z}; Solution: establish a canonical lock order (a partial order, here alphabetic by name), so both acquire y before z.
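The canonical-order rule can be sketched in plain Python. The lock names y and z are the slide's; everything else is an illustration of the idea, not Flux's actual runtime:

```python
import threading

# One lock per constraint name, shared by all flows.
locks = {name: threading.Lock() for name in ("y", "z")}

def run_with_constraints(names, body):
    """Acquire every named lock in canonical (alphabetic) order,
    run the body, then release in reverse order."""
    ordered = sorted(names)
    for n in ordered:
        locks[n].acquire()
    try:
        return body()
    finally:
        for n in reversed(ordered):
            locks[n].release()

# A declares {z, y} and B declares {y, z}, but both acquire y before z,
# so no cycle of waiting (and hence no deadlock) is possible.
print(run_with_constraints({"z", "y"}, lambda: "A ran"))
print(run_with_constraints({"y", "z"}, lambda: "B ran"))
```

Because every flow agrees on the same total order, a flow can only ever wait for locks that come later in the order than the ones it already holds, which rules out cyclic waits.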
    • Preventing Deadlock, II: Harder with abstract nodes. Given A = B; C = D; with atomic A: {z}; atomic B: {y}; atomic C: {y,z}; a flow through A acquires z, then y inside B, while C acquires y and z together. Solution: elevate constraints into the enclosing abstract nodes and iterate to a fixed point: atomic A: {y,z}; atomic B: {y}; atomic C: {y,z};
    • Handling Errors: What if the requested image doesn’t exist? An error is a negative return value from a component (remember, nodes are oblivious to Flux). Solution: error handlers direct the flow to alternate paths on error (a possible extension: matching on error paths): handle error ReadInFromDisk → FourOhFour;
    • Almost Complete Flux Image Server: source Listen → Image; Image = ReadRequest → CheckCache → Handler → Write → Complete; Handler[_,_,hit] = ; Handler[_,_,_] = ReadFromDisk → Compress → StoreInCache; atomic CheckCache: {cacheLock}; atomic StoreInCache: {cacheLock}; atomic Complete: {cacheLock}; handle error ReadInFromDisk → FourOhFour; A concise, readable expression of server logic, with no threads: this simplifies programming and debugging.
    • Flux Outline: Intro to Flux, building a server: components, flow; atomicity, deadlock avoidance. Performance results: server performance, performance prediction. Future work.
    • Flux Results: Four servers: an image server (+ libjpeg) [23 lines of Flux], a multi-player online game [54], BitTorrent (2 undergrads, 1 week!) [84], and a web server (+ PHP) [36]. Evaluation: the benchmark is a variant of SPECweb99, with three different runtimes: thread (one per connection), thread pool (fixed maximum number of threads), and event-driven (helper threads for blocking calls). Compared to Capriccio [SOSP03] and SEDA [SOSP01].
    • Web Server (benchmark results chart)
    • Performance Prediction (chart of observed parameters)
    • The End